#' Plot of the Relative Abundance of the Pollen Types
#'
#' Generates a barplot based on the relative abundance (as a percentage) in the air of the pollen types with respect to the total amounts.
#'
#' @param data A \code{data.frame} object including the general database to which the calculation of the pollen season must be applied. This \code{data.frame} must include a first column in \code{Date} format and the rest of the columns in \code{numeric} format, one pollen type per column.
#' @param n.types A \code{numeric} (\code{integer}) value specifying the number of the most abundant pollen types that must be represented in the plot of the relative abundance. More detailed information about the selection of the considered pollen types may be consulted in \strong{Details}. The \code{n.types} argument will be \code{15} types by default.
#' @param y.start,y.end A \code{numeric} (\code{integer}) value specifying the period selected to calculate the relative abundances of the pollen types (start year - end year). If \code{y.start} and \code{y.end} are not specified (\code{NULL}), the entire database will be used. The \code{y.start} and \code{y.end} arguments will be \code{NULL} by default.
#' @param interpolation A \code{logical} value. If \code{FALSE} no interpolation of the pollen data is applied. If \code{TRUE} an interpolation of the pollen series will be applied to complete the gaps with no data before the calculation of the pollen season. The \code{interpolation} argument will be \code{TRUE} by default. More detailed information about the interpolation method may be consulted in \strong{Details}.
#' @param int.method A \code{character} string specifying the method selected to apply the interpolation in order to complete the pollen series. The implemented methods that may be used are: \code{"lineal"}, \code{"movingmean"}, \code{"spline"} or \code{"tseries"}.
#' The \code{int.method} argument will be \code{"lineal"} by default.
#' @param col.bar A \code{character} string specifying the color of the bars in the graph showing the relative abundances of the pollen types. The \code{col.bar} argument will be \code{"#E69F00"} by default, but any color may be selected.
#' @param type.plot A \code{character} string specifying the type of plot selected to show the relative abundance of the pollen types. The implemented types that may be used are: \code{"static"}, which generates a static \strong{ggplot} object, and \code{"dynamic"}, which generates a dynamic \strong{plotly} object.
#' @param result A \code{character} string specifying the output of the function. The implemented outputs that may be obtained are: \code{"plot"} and \code{"table"}. The \code{result} argument will be \code{"plot"} by default.
#' @param export.plot A \code{logical} value specifying whether a plot saved in the working directory is required or not. If \code{FALSE} graphical results will only be displayed in the active graphics window. If \code{TRUE} graphical results will be displayed in the active graphics window and a \emph{pdf} or \emph{png} file (according to the \code{export.format} argument) will also be saved within the \emph{plot_AeRobiology} directory automatically created in the working directory. This argument is applicable only for \code{"static"} plots. The \code{export.plot} argument will be \code{FALSE} by default.
#' @param export.format A \code{character} string specifying the format selected to save the plot showing the relative abundance of the pollen types. The implemented formats that may be used are: \code{"pdf"} and \code{"png"}. This argument is applicable only for \code{"static"} plots. The \code{export.format} argument will be \code{"pdf"} by default.
#' @param exclude A \code{character} vector with the names of the pollen types to be excluded from the plot.
#' @param ...
#' Other additional arguments may be used to customize the exportation of the plots using \code{"pdf"} or \code{"png"} files, and therefore arguments from the \code{\link[grDevices]{pdf}} and \code{\link[grDevices]{png}} functions (\pkg{grDevices} package) may be implemented. For example, for \emph{pdf} files the user may customize the arguments \code{width}, \code{height}, \code{family}, \code{title}, \code{fonts}, \code{paper}, \code{bg}, \code{fg}, \code{pointsize}, etc.; and for \emph{png} files the user may customize the arguments \code{width}, \code{height}, \code{units}, \code{pointsize}, \code{bg}, \code{res}, etc.
#' @details This function calculates the relative abundance of the pollen types in the air from a database and displays a barplot with the percentage representation of the main pollen types, as in the graph reported by \emph{Rojo et al. (2016)}. This plot will be generated only for the number of most abundant pollen types specified by the user with the \code{n.types} argument.\cr
#' \cr
#' Pollen time series frequently have gaps with no data, and this fact can be a problem for the calculation of specific methods for defining the pollen season, even producing incorrect results. For this reason, by default a linear interpolation is carried out to complete these gaps before defining the pollen season (\code{interpolation = TRUE}). Additionally, the user may select other interpolation methods using the \code{int.method} argument: \code{"lineal"}, \code{"movingmean"}, \code{"spline"} or \code{"tseries"}. For more information see the \code{\link{interpollen}} function.
#' @return This function returns different results:\cr
#' \cr
#' \code{plot} in the active graphics window displaying the plot of the relative abundance generated by the user when \code{result = "plot"}.
#' This \code{plot} may be assigned to an object with the assignment operator.\cr
#' \cr
#' \code{data.frame} including the yearly average pollen amounts for each pollen type used to generate the plot of the relative abundance when \code{result = "table"}. This \code{data.frame} will be included in an object named \code{annual.sum.data}.\cr
#' \cr
#' If \code{export.plot = FALSE} graphical results will only be displayed in the active graphics window as a \strong{ggplot} graph. Additional characteristics may be incorporated into the plot by \code{\link[ggplot2]{ggplot}} syntax (see \pkg{ggplot2} package).\cr
#' \cr
#' If \code{export.plot = TRUE} and \code{export.format = "pdf"} a \emph{pdf} file with the plot will be saved within the \emph{plot_AeRobiology} directory created in the working directory. This option is applicable only for \code{"static"} plots. Additional characteristics may be incorporated into the exportation as a \emph{pdf} file (see \pkg{grDevices} package).\cr
#' \cr
#' If \code{export.plot = TRUE} and \code{export.format = "png"} a \emph{png} file with the plot will be saved within the \emph{plot_AeRobiology} directory created in the working directory. This option is applicable only for \code{"static"} plots. Additional characteristics may be incorporated into the exportation as a \emph{png} file (see \pkg{grDevices} package).\cr
#' \cr
#' If \code{type.plot = "dynamic"} graphical results will be displayed in the active Viewer window as a \strong{plotly} graph. Additional characteristics may be incorporated into the plot by \code{\link[plotly]{plotly}} syntax (see \pkg{plotly} package).
#' @references Rojo, J., Rapp, A., Lara, B., Sabariego, S., Fernandez-Gonzalez, F. and Perez-Badia, R., 2016. Characterisation of the airborne pollen spectrum in Guadalajara (central Spain) and estimation of the potential allergy risk. \emph{Environmental Monitoring and Assessment}, 188(3), p.130.
#' @seealso \code{\link{interpollen}}
#' @examples data("munich_pollen")
#' @examples iplot_abundance (munich_pollen, interpolation = FALSE, export.plot = FALSE)
#' @importFrom utils data
#' @importFrom lubridate is.POSIXt
#' @importFrom dplyr group_by summarise_all
#' @importFrom plotly ggplotly
#' @importFrom ggplot2 aes coord_flip element_blank element_text geom_bar geom_errorbar ggplot labs position_dodge theme theme_classic
#' @importFrom graphics plot
#' @importFrom grDevices dev.off pdf png
#' @importFrom stats sd
#' @importFrom tidyr %>%
#' @export
iplot_abundance <- function (data, n.types = 15, y.start = NULL, y.end = NULL, interpolation = TRUE, int.method = "lineal", col.bar = "#E69F00", type.plot = "static", result = "plot", export.plot = FALSE, export.format = "pdf", exclude = NULL, ...){

  ############################################# CHECK THE ARGUMENTS #############################

  if(export.plot == TRUE){ifelse(!dir.exists(file.path("plot_AeRobiology")), dir.create(file.path("plot_AeRobiology")), FALSE)}

  data <- data.frame(data)

  if(class(data) != "data.frame") stop ("Please include a data.frame: first column with date, and the rest with pollen types")
  if(class(n.types) != "numeric") stop ("Please include only numeric values for 'n.types' argument indicating the number of pollen types which will be displayed")
  if(class(y.start) != "numeric" & !is.null(y.start)) stop ("Please include only numeric values for y.start argument indicating the start year considered")
  if(class(y.end) != "numeric" & !is.null(y.end)) stop ("Please include only numeric values for 'y.end' argument indicating the end year considered")
  if(class(interpolation) != "logical") stop ("Please include only logical values for interpolation argument")
  if(int.method != "lineal" & int.method != "movingmean" & int.method != "spline" & int.method != "tseries") stop ("Please int.method only accept values: 'lineal', 'movingmean', 'spline' or 'tseries'")
  if(class(col.bar) != "character") stop
("Please include only character values indicating the color selected in the bars for generating the plot") if(type.plot != "static" & type.plot != "dynamic") stop ("Please type.plot only accept values: 'static' or 'dynamic'") if(result != "plot" & result != "table") stop ("Please result only accept values: 'plot' or 'table'") if(class(export.plot) != "logical") stop ("Please include only logical values for export.plot argument") if(export.format != "pdf" & export.format != "png") stop ("Please export.format only accept values: 'pdf' or 'png'") if(class(exclude) != "character" & !is.null(exclude)) stop ("Please include only character values for exclude argument indicating the pollen type to be excluded") if(class(data[,1])[1]!="Date" & !is.POSIXt(data[,1])) {stop("Please the first column of your data must be the date in 'Date' format")} data[,1]<-as.Date(data[,1]) if(interpolation == TRUE){data <- interpollen(data, method = int.method, plot = FALSE)} ############################################# SELECT ABUNDANT TYPES ############################# #annual.sum.data<-data.frame() data <- data.frame(date = data[ ,1], year = as.numeric(strftime(data[ ,1], "%Y")), data[ ,-1]) #types <- ddply(data[ ,-1], "year", function(x) colSums(x[-1], na.rm = T)) [-1] %>% # apply(., 2, function(x) mean(x, na.rm = T)) %>% .[order(., decreasing = TRUE)] %>% # names(.) %>% # .[1:n.types] types <- data.frame(data[ ,-1] %>% group_by(year) %>% summarise_all(sum, na.rm = TRUE))[-1] %>% apply(., 2, function(x) mean(x, na.rm = T)) %>% .[order(., decreasing = TRUE)] %>% names(.) 
%>% .[1:n.types] #data <- data[ ,which(colnames(data) %in% c("date", "year", types))] ############################################# SELECT PERIOD ############################# seasons <- unique(as.numeric(strftime(data[, 1], "%Y"))) if(is.null(y.start)){y.start <- min(seasons)}; if(is.null(y.end)){y.end <- max(seasons)} data <- data[which(as.numeric(strftime(data[ ,1], "%Y")) >= y.start & as.numeric(strftime(data[ ,1], "%Y")) <= y.end), ] ############################################################################## #sum.data <- ddply(data, c("year"), function(x) colSums(x[ ,-which(colnames(x) %in% c("date", "year"))], na.rm = T)) sum.data <- data.frame(data[ ,-1] %>% group_by(year) %>% summarise_all(sum, na.rm = TRUE)) sum.data$Total <- apply(sum.data[ ,-which(colnames(sum.data) %in% c("year"))], 1, sum, na.rm = T) perc.df <- sum.data[ ,-which(colnames(sum.data) == "year")] for (r in 1:nrow(perc.df)){ for(c in c(1:ncol(perc.df))){ perc.df[r,c] <- perc.df[r,c]*100/perc.df$Total[r] } } perc.df <- perc.df[which(colnames(perc.df) %in% types)] mean.perc <- data.frame(types = colnames(perc.df), mean = apply(perc.df, 2, FUN = function(x) mean(x, na.rm=T)), sd = apply(perc.df, 2, FUN = function(x) sd(x, na.rm=T))) mean.perc$types <- factor(mean.perc$types, levels = mean.perc$types[order(mean.perc$mean)]) if (!is.null(exclude)){ mean.perc<-mean.perc[!(as.character(mean.perc$types)%in%exclude),]} plot.abundance <- ggplot(mean.perc, aes(x = types, y = mean)) + geom_bar(stat = "identity", position = position_dodge(), color = "black", fill = col.bar) + geom_errorbar(aes(ymin = mean - sd, ymax = mean + sd), width=.2,position=position_dodge(.9))+ coord_flip()+ theme_classic()+ labs(y="Relative abundance (%)", title = paste0("Relative Abundance in the Air (", y.start, "-", y.end, ")"), size = 14)+ theme(axis.title.y = element_blank(), axis.title.x = element_text(size=10, face="bold"), axis.text.y = element_text(size=10, face="bold.italic"), axis.text = element_text(size=10), 
title=element_text(size=10, face="bold"), plot.title = element_text(hjust = 0.5, size = 16)) if(export.plot == TRUE & type.plot == "static" & export.format == "png") { png(paste0("plot_AeRobiology/abundance_plot", y.start, "-", y.end, ".png"), ...) plot(plot.abundance) dev.off() png(paste0("plot_AeRobiology/credits.png")) dev.off() } if(export.plot == TRUE & type.plot == "static" & export.format == "pdf") { pdf(paste0("plot_AeRobiology/abundance_plot", y.start, "-", y.end, ".pdf"), ...) plot(plot.abundance) dev.off() } if (result == "plot" & type.plot == "static") {return(plot.abundance)} if (result == "plot" & type.plot == "dynamic") {return(ggplotly(plot.abundance))} if (result == "table") {return(sum.data)} }
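As a quick usage sketch of the function above (assuming the AeRobiology package is installed and using its bundled `munich_pollen` dataset, as in the roxygen examples), both output modes can be captured in objects; the `n.types = 5` choice here is just an illustrative value:

```r
library(AeRobiology)
data("munich_pollen")

# Relative abundances of the 5 most abundant types as a table
# (interpolation skipped, as in the package's own example)
abund <- iplot_abundance(munich_pollen, n.types = 5,
                         interpolation = FALSE, result = "table")

# The same call with result = "plot" returns a ggplot object that can be
# customized further with ggplot2 syntax, e.g. p + ggplot2::theme_minimal()
p <- iplot_abundance(munich_pollen, n.types = 5,
                     interpolation = FALSE, result = "plot")
```

Because the function returns the ggplot object rather than printing it, the plot only renders when `p` is printed or autoprinted at the console.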
#' Phenological Plot
#'
#' Generates a boxplot based on phenological parameters (start-dates and end-dates) that are calculated by the estimation of the main parameters of the pollen season.
#'
#' @param data A \code{data.frame} object including the general database to which the calculation of the pollen season must be applied in order to generate the phenological plot based on the start-dates and end-dates. This \code{data.frame} must include a first column in \code{Date} format and the rest of the columns in \code{numeric} format, one pollen type per column.
#' @param method A \code{character} string specifying the method applied to calculate the pollen season and its main parameters. The implemented methods that can be used are: \code{"percentage"}, \code{"logistic"}, \code{"moving"}, \code{"clinical"} or \code{"grains"}. More detailed information about the different methods for defining the pollen season may be consulted in the \code{\link{calculate_ps}} function.
#' @param n.types A \code{numeric} (\code{integer}) value specifying the number of the most abundant pollen types that must be represented in the plot. More detailed information about the selection of the considered pollen types may be consulted in \strong{Details}. The \code{n.types} argument will be \code{15} types by default.
#' @param th.day A \code{numeric} value used to calculate the number of days on which this level is exceeded for each year and each pollen type. This value will be obtained in the results of the function. The \code{th.day} argument will be \code{100} by default.
#' @param perc A \code{numeric} value ranging \code{0-100}. This argument is valid only for \code{method = "percentage"}. This value represents the percentage of the total annual pollen included in the pollen season, removing \code{(100-perc)/2}\% of the total pollen before and after the pollen season. The \code{perc} argument will be \code{95} by default.
#' @param def.season A \code{character} string specifying the method for selecting the best annual period to calculate the pollen season. The pollen season may occur within the natural year or may span two years, which determines the best annual period considered. The implemented options that can be used are: \code{"natural"}, \code{"interannual"} or \code{"peak"}. The \code{def.season} argument will be \code{"natural"} by default. More detailed information about the different methods for selecting the best annual period to calculate the pollen season may be consulted in the \code{\link{calculate_ps}} function.
#' @param reduction A \code{logical} value. This argument is valid only for the \code{"logistic"} method. If \code{FALSE} no reduction of the pollen data is applied. If \code{TRUE} a reduction of the peaks above a certain level (\code{red.level} argument) will be carried out before the definition of the pollen season. The \code{reduction} argument will be \code{FALSE} by default. More detailed information about the reduction process may be consulted in the \code{\link{calculate_ps}} function.
#' @param red.level A \code{numeric} value ranging \code{0-1} specifying the percentile used as the level to reduce the peaks of the pollen series before the definition of the pollen season. This argument is valid only for the \code{"logistic"} method. The \code{red.level} argument will be \code{0.90} by default, specifying the 90th percentile.
#' @param derivative A \code{numeric} (\code{integer}) value (\code{4}, \code{5} or \code{6}) specifying the derivative that will be applied to calculate the asymptotes which determine the pollen season using the \code{"logistic"} method. This argument is valid only for the \code{"logistic"} method. The \code{derivative} argument will be \code{5} by default.
#' @param man A \code{numeric} (\code{integer}) value specifying the order of the moving average applied to calculate the pollen season using the \code{"moving"} method. This argument is valid only for the \code{"moving"} method. The \code{man} argument will be \code{11} by default. #' @param th.ma A \code{numeric} value specifying the threshold used for the \code{"moving"} method for defining the beginning and the end of the pollen season. This argument is valid only for the \code{"moving"} method. The \code{th.ma} argument will be \code{5} by default. #' @param n.clinical A \code{numeric} (\code{integer}) value specifying the number of days which must exceed a given threshold (\code{th.pollen} argument) for defining the beginning and the end of the pollen season. This argument is valid only for the \code{"clinical"} method. The \code{n.clinical} argument will be \code{5} by default. #' @param window.clinical A \code{numeric} (\code{integer}) value specifying the time window during which the conditions must be evaluated for defining the beginning and the end of the pollen season using the \code{"clinical"} method. This argument is valid only for the \code{"clinical"} method. The \code{window.clinical} argument will be \code{7} by default. #' @param window.grains A \code{numeric} (\code{integer}) value specifying the time window during which the conditions must be evaluated for defining the beginning and the end of the pollen season using the \code{"grains"} method. This argument is valid only for the \code{"grains"} method. The \code{window.grains} argument will be \code{5} by default. #' @param th.pollen A \code{numeric} value specifying the threshold that must be exceeded during a given number of days (\code{n.clinical} or \code{window.grains} arguments) for defining the beginning and the end of the pollen season using the \code{"clinical"} or \code{"grains"} methods. This argument is valid only for the \code{"clinical"} or \code{"grains"} methods. 
The \code{th.pollen} argument will be \code{10} by default. #' @param th.sum A \code{numeric} value specifying the pollen threshold that must be exceeded by the sum of daily pollen during a given number of days (\code{n.clinical} argument) exceeding a given daily threshold (\code{th.pollen} argument) for defining the beginning and the end of the pollen season using the \code{"clinical"} method. This argument is valid only for the \code{"clinical"} method. The \code{th.sum} argument will be \code{100} by default. #' @param type A \code{character} string specifying the parameters considered according to a specific pollen type for calculating the pollen season using the \code{"clinical"} method. The implemented pollen types that may be used are: \code{"birch"}, \code{"grasses"}, \code{"cypress"}, \code{"olive"} or \code{"ragweed"}. As result for selecting any of these pollen types the parameters \code{n.clinical}, \code{window.clinical}, \code{th.pollen} and \code{th.sum} will be automatically adjusted for the \code{"clinical"} method. If no pollen types are specified (\code{type = "none"}), these parameters will be considered by the user. This argument is valid only for the \code{"clinical"} method. The \code{type} argument will be \code{"none"} by default. #' @param interpolation A \code{logical} value. If \code{FALSE} the interpolation of the pollen data is not applicable. If \code{TRUE} an interpolation of the pollen series will be applied to complete the gaps with no data before the calculation of the pollen season. The \code{interpolation} argument will be \code{TRUE} by default. A more detailed information about the interpolation method may be consulted in \strong{Details}. #' @param int.method A \code{character} string specifying the method selected to apply the interpolation method in order to complete the pollen series. The implemented methods that may be used are: \code{"lineal"}, \code{"movingmean"}, \code{"spline"} or \code{"tseries"}. 
The \code{int.method} argument will be \code{"lineal"} by default. #' @param type.plot A \code{character} string specifying the type of plot selected to show the phenological plot. The implemented types that may be used are: \code{"static"} generates a static \strong{ggplot} object and \code{"dynamic"} generates a dynamic \strong{plotly} object. #' @param export.plot A \code{logical} value specifying if a phenological plot saved in the working directory will be required or not. If \code{FALSE} graphical results will only be displayed in the active graphics window. If \code{TRUE} graphical results will be displayed in the active graphics window and also a \emph{pdf} or \emph{png} file (according to the \code{export.format} argument) will be saved within the \emph{plot_AeRobiology} directory automatically created in the working directory. This argument is applicable only for \code{"static"} plots. The \code{export.plot} will be \code{FALSE} by default. #' @param export.format A \code{character} string specifying the format selected to save the phenological plot. The implemented formats that may be used are: \code{"pdf"} and \code{"png"}. This argument is applicable only for \code{"static"} plots. The \code{export.format} will be \code{"pdf"} by default. #' @param ... Other additional arguments may be used to customize the exportation of the plots using \emph{pdf} or \emph{png} files and therefore arguments from \code{\link[grDevices]{pdf}} and \code{\link[grDevices]{png}} functions (\pkg{grDevices} package) may be implemented. 
#' For example, for \emph{pdf} files the user may customize the arguments \code{width}, \code{height}, \code{family}, \code{title}, \code{fonts}, \code{paper}, \code{bg}, \code{fg}, \code{pointsize}, etc.; and for \emph{png} files the user may customize the arguments \code{width}, \code{height}, \code{units}, \code{pointsize}, \code{bg}, \code{res}, etc.
#'
#' @details This function calculates the pollen season using five different methods, which are described in the \code{\link{calculate_ps}} function. After calculating the start-date and end-date for each pollen type and each year, a phenological plot will be generated using the boxplot approach, where the x axis represents the time (day of the year) and the y axis includes the considered pollen types. The phenological plot will be generated only for the number of most abundant pollen types specified by the user with the \code{n.types} argument. The implemented methods for defining the pollen season include the most commonly used methodologies (\emph{Nilsson and Persson, 1981}; \emph{Andersen, 1991}; \emph{Galan et al., 2001}; \emph{Ribeiro et al., 2007}; \emph{Cunha et al., 2015}; \emph{Pfaar et al., 2017}) and a newly implemented method (see the \code{\link{calculate_ps}} function).\cr
#' Pollen time series frequently have gaps with no data, and this fact can be a problem for the calculation of specific methods for defining the pollen season, even producing incorrect results. For this reason, by default a linear interpolation is carried out to complete these gaps before defining the pollen season (\code{interpolation = TRUE}). Additionally, the user may select other interpolation methods using the \code{int.method} argument: \code{"lineal"}, \code{"movingmean"}, \code{"spline"} or \code{"tseries"}. For more information see the \code{\link{interpollen}} function.
#'
#' @return This function returns different results:\cr
#' If \code{export.plot = FALSE} graphical results will only be displayed in the active graphics window as a \strong{ggplot} graph. Additional characteristics may be incorporated into the plot by \code{\link[ggplot2]{ggplot}} syntax (see \pkg{ggplot2} package).\cr
#' If \code{export.plot = TRUE} and \code{export.format = "pdf"} a \emph{pdf} file of the phenological plot will be saved within the \emph{plot_AeRobiology} directory created in the working directory. This option is applicable only for \code{"static"} plots. Additional characteristics may be incorporated into the exportation as a \emph{pdf} file (see \pkg{grDevices} package).\cr
#' If \code{export.plot = TRUE} and \code{export.format = "png"} a \emph{png} file of the phenological plot will be saved within the \emph{plot_AeRobiology} directory created in the working directory. This option is applicable only for \code{"static"} plots. Additional characteristics may be incorporated into the exportation as a \emph{png} file (see \pkg{grDevices} package).\cr
#' If \code{type.plot = "dynamic"} graphical results will be displayed in the active Viewer window as a \strong{plotly} graph. Additional characteristics may be incorporated into the plot by \code{\link[plotly]{plotly}} syntax (see \pkg{plotly} package).
#' @references Andersen, T.B., 1991. A model to predict the beginning of the pollen season. \emph{Grana}, 30(1), pp.269-275.
#' @references Cunha, M., Ribeiro, H., Costa, P. and Abreu, I., 2015. A comparative study of vineyard phenology and pollen metrics extracted from airborne pollen time series. \emph{Aerobiologia}, 31(1), pp.45-56.
#' @references Galan, C., Garcia-Mozo, H., Carinanos, P., Alcazar, P. and Dominguez-Vilches, E., 2001. The role of temperature in the onset of the \emph{Olea europaea} L. pollen season in southwestern Spain. \emph{International Journal of Biometeorology}, 45(1), pp.8-12.
#' @references Nilsson, S. and Persson, S., 1981.
#' Tree pollen spectra in the Stockholm region (Sweden), 1973-1980. \emph{Grana}, 20(3), pp.179-182.
#' @references Pfaar, O., Bastl, K., Berger, U., Buters, J., Calderon, M.A., Clot, B., Darsow, U., Demoly, P., Durham, S.R., Galan, C., Gehrig, R., Gerth van Wijk, R., Jacobsen, L., Klimek, L., Sofiev, M., Thibaudon, M. and Bergmann, K.C., 2017. Defining pollen exposure times for clinical trials of allergen immunotherapy for pollen-induced rhinoconjunctivitis - an EAACI position paper. \emph{Allergy}, 72(5), pp.713-722.
#' @references Ribeiro, H., Cunha, M. and Abreu, I., 2007. Definition of main pollen season using logistic model. \emph{Annals of Agricultural and Environmental Medicine}, 14(2), pp.259-264.
#' @seealso \code{\link{calculate_ps}}, \code{\link{interpollen}}
#' @examples data("munich_pollen")
#' @examples iplot_pheno (munich_pollen, interpolation = FALSE)
#' @importFrom plotly layout
#' @importFrom ggplot2 aes coord_flip element_text geom_boxplot ggplot labs position_dodge scale_fill_manual scale_y_continuous theme theme_bw
#' @importFrom graphics plot
#' @importFrom grDevices dev.off pdf png
#' @importFrom lubridate is.POSIXt
#' @importFrom plotly ggplotly
#' @importFrom dplyr group_by summarise_all
#' @importFrom stats na.omit
#' @importFrom utils data
#' @importFrom tidyr %>%
#' @export
iplot_pheno <- function(data, method = "percentage", n.types = 15, th.day = 100, perc = 95, def.season = "natural", reduction = FALSE, red.level = 0.90, derivative = 5, man = 11, th.ma = 5, n.clinical = 5, window.clinical = 7, window.grains = 5, th.pollen = 10, th.sum = 100, type = "none", interpolation = TRUE, int.method = "lineal", type.plot = "static", export.plot = FALSE, export.format = "pdf",...){

  ############################################# CHECK THE ARGUMENTS #############################

  if(export.plot == TRUE){ifelse(!dir.exists(file.path("plot_AeRobiology")), dir.create(file.path("plot_AeRobiology")), FALSE)}

  data <- data.frame(data)

  if(class(data) !=
"data.frame") stop ("Please include a data.frame: first column with date, and the rest with pollen types") if(method != "percentage" & method != "logistic" & method != "moving" & method != "clinical" & method != "grains") stop ("Please method only accept values: 'percentage', 'logistic', 'moving', 'clinical' or 'grains'") if(class(n.types) != "numeric") stop ("Please include only numeric values for 'n.types' argument indicating the number of pollen types which will be displayed") if(class(th.day) != "numeric" | th.day < 0) stop ("Please include only numeric values >= 0 for th.day argument") if(class(perc) != "numeric" | perc < 0 | perc > 100) stop ("Please include only numeric values between 0-100 for perc argument") if(def.season != "natural" & def.season != "interannual" & def.season != "peak") stop ("Please def.season only accept values: 'natural', 'interannual' or 'peak'") if(class(reduction) != "logical") stop ("Please include only logical values for reduction argument") if(class(red.level) != "numeric" | red.level < 0 | red.level > 1) stop ("Please include only numeric values between 0-1 for red.level argument") if(derivative != 4 & derivative != 5 & derivative != 6) stop ("Please derivative only accept values: 4, 5 or 6") if(class(man) != "numeric" | man < 0) stop ("Please include only numeric values > 0 for man argument") if(class(th.ma) != "numeric" | th.ma < 0) stop ("Please include only numeric values > 0 for th.ma argument") if(class(n.clinical) != "numeric" | n.clinical < 0) stop ("Please include only numeric values >= 0 for n.clinical argument") if(class(window.clinical) != "numeric" | window.clinical < 0) stop ("Please include only numeric values >= 0 for window.clinical argument") if(class(window.grains) != "numeric" | window.grains < 0) stop ("Please include only numeric values >= 0 for window.grains argument") if(class(th.pollen) != "numeric" | th.pollen < 0) stop ("Please include only numeric values >= 0 for th.pollen argument") if(class(th.sum) 
!= "numeric" | th.sum < 0) stop ("Please include only numeric values >= 0 for th.sum argument") if(type != "none" & type != "birch" & type != "grasses" & type != "cypress" & type != "olive" & type != "ragweed") stop ("Please def.season only accept values: 'none', 'birch', 'grasses', 'cypress', 'olive' or 'ragweed'") if(class(interpolation) != "logical") stop ("Please include only logical values for interpolation argument") if(int.method != "lineal" & int.method != "movingmean" & int.method != "spline" & int.method != "tseries") stop ("Please int.method only accept values: 'lineal', 'movingmean', 'spline' or 'tseries'") if(type.plot != "static" & type.plot != "dynamic") stop ("Please type.plot only accept values: 'static' or 'dynamic'") if(class(export.plot) != "logical") stop ("Please include only logical values for export.plot argument") if(export.format != "pdf" & export.format != "png") stop ("Please export.format only accept values: 'pdf' or 'png'") if(class(data[,1])[1]!="Date" & !is.POSIXt(data[,1])) {stop("Please the first column of your data must be the date in 'Date' format")} data[,1]<-as.Date(data[,1]) options(warn = -1) ############################################# SELECT ABUNDANT TYPES ############################# data <- data.frame(date = data[ ,1], year = as.numeric(strftime(data[ ,1], "%Y")), data[ ,-1]) #types <- ddply(data[ ,-1], "year", function(x) colSums(x[-1], na.rm = T)) [-1] %>% #apply(., 2, function(x) mean(x, na.rm = T)) %>% .[order(., decreasing = TRUE)] %>% #names(.) %>% #.[1:n.types] types <- data.frame(data[ ,-1] %>% group_by(year) %>% summarise_all(sum, na.rm = TRUE))[-1] %>% apply(., 2, function(x) mean(x, na.rm = T)) %>% .[order(., decreasing = TRUE)] %>% names(.) 
%>% .[1:n.types] data <- data[ ,which(colnames(data) %in% c("date", types))] ############################################# CALCULATION OF THE POLLEN SEASON ############################# list.ps <- calculate_ps(data = data, method = method, th.day = th.day, perc = perc, def.season = def.season, reduction = reduction, red.level = red.level, derivative = derivative, man = man, th.ma = th.ma, n.clinical = n.clinical, window.clinical = window.clinical, window.grains = window.grains, th.pollen = th.pollen, th.sum = th.sum, type = type, interpolation = interpolation, int.method = int.method, result = "list", plot = FALSE, export.plot = FALSE, export.result = FALSE) season.data <- list.ps type.name <- colnames(data)[-1] dat.plot <- data.frame(); mean.st <- data.frame() for(t in 1:(length(type.name))){ dat.type <- season.data[[type.name[t]]] dat.plot <- rbind(dat.plot, data.frame(phen = as.numeric(c(rep(0, nrow(dat.type)), rep(1, nrow(dat.type)))), j.days = c(dat.type$st.jd, dat.type$en.jd), type = type.name[t])) # %>% na.omit(.) 
mean.st <- rbind(mean.st, data.frame(type = type.name[t], mean.st = mean(dat.type$st.jd, na.rm = TRUE))) } mean.st$mean.st[is.nan(mean.st$mean.st)] <- NA; mean.st <- na.omit(mean.st)#; mean.st <- mean.st[order(mean.st$mean.st), 1] dat.plot$type <- factor(dat.plot$type, levels = as.character(mean.st[order(mean.st$mean.st), 1]), ordered = T) dat.plot$phen[dat.plot$phen == 0] <- "Start-date"; dat.plot$phen[dat.plot$phen == 1] <- "End-date" dat.plot$phen <- factor(dat.plot$phen, levels = c("Start-date", "End-date"), ordered = T) pheno.plot <- ggplot(data = na.omit(dat.plot), aes(y = j.days, x = type)) + #stat_boxplot(geom = "errorbar", width = 0.2)+ geom_boxplot(aes(fill = phen), width = 1, varwidth = TRUE, position = position_dodge(width = 0))+ labs(y = "Day of the year", x = "", title = "Phenological parameters", fill = "Phenophase")+ scale_fill_manual(labels = c("Start-date", "End-date"), values = c("tan1", "lightskyblue")) + scale_y_continuous(breaks = seq(0,365,25))+ coord_flip()+ theme_bw()+ theme(text = element_text(size = 14), legend.position = "bottom", axis.text.y = element_text(face="italic")) if(export.plot == TRUE & type.plot == "static" & export.format == "png") { if(method == "percentage") {png(paste0("plot_AeRobiology/pheno_plot_",method,perc,".png"), ...); plot(pheno.plot); dev.off(); png(paste0("plot_AeRobiology/credits.png")); dev.off()} if(method == "logistic") {png(paste0("plot_AeRobiology/pheno_plot_",method,derivative,".png"), ...); plot(pheno.plot); dev.off(); png(paste0("plot_AeRobiology/credits.png")); dev.off()} if(method == "moving") {png(paste0("plot_AeRobiology/pheno_plot_",method,man,"_",th.ma,".png"), ...); plot(pheno.plot); dev.off(); png(paste0("plot_AeRobiology/credits.png")); dev.off()} if(method == "clinical") {png(paste0("plot_AeRobiology/pheno_plot_",method,n.clinical,"_",window.clinical+1,"_",th.pollen,"_",th.sum,".png"), ...); plot(pheno.plot); dev.off(); png(paste0("plot_AeRobiology/credits.png")); dev.off()} if(method == 
"grains") {png(paste0("plot_AeRobiology/pheno_plot_",method,window.grains+1,"_",th.pollen,".png"), ...); plot(pheno.plot); dev.off(); png(paste0("plot_AeRobiology/credits.png")); dev.off()} } if(export.plot == TRUE & type.plot == "static" & export.format == "pdf") { if(method == "percentage") {pdf(paste0("plot_AeRobiology/pheno_plot_",method,perc,".pdf"), ...); plot(pheno.plot); dev.off()} if(method == "logistic") {pdf(paste0("plot_AeRobiology/pheno_plot_",method,derivative,".pdf"), ...); plot(pheno.plot); dev.off()} if(method == "moving") {pdf(paste0("plot_AeRobiology/pheno_plot_",method,man,"_",th.ma,".pdf"), ...); plot(pheno.plot); dev.off()} if(method == "clinical") {pdf(paste0("plot_AeRobiology/pheno_plot_",method,n.clinical,"_",window.clinical+1,"_",th.pollen,"_",th.sum,".pdf"), ...); plot(pheno.plot); dev.off()} if(method == "grains") {pdf(paste0("plot_AeRobiology/pheno_plot_",method,window.grains+1,"_",th.pollen,".pdf"), ...); plot(pheno.plot); dev.off()} } if (type.plot == "static") {return(pheno.plot)} options(warn = 0) if (type.plot == "dynamic") { ggplotly(pheno.plot) %>% layout(legend = list(orientation = 'v', x = -0.2, y = -0.3)) } }
# File: AeRobiology/R/iplot_pheno.R
#' Interactive Plotting Pollen Data (one season).
#'
#' Function to plot the pollen data during one season. The plots are fully interactive.
#'
#' @param data A \code{data.frame} object. This \code{data.frame} should include a first column in format \code{Date} and the rest of columns in format \code{numeric} belonging to each pollen type by column.
#' @param year An \code{integer} value specifying the year to display. This is a mandatory argument.
#' @return An interactive plot of the class \pkg{ggvis}.
#' @seealso \code{\link{iplot_years}}
#' @examples data("munich_pollen")
#' @examples iplot_pollen(data = munich_pollen, year = 2012)
#' @importFrom dplyr filter
#' @importFrom ggvis add_axis axis_props ggvis input_checkboxgroup input_slider layer_lines layer_points scale_numeric %>%
#' @importFrom lubridate is.POSIXt yday year
#' @importFrom tidyr gather
#' @importFrom stats complete.cases
#' @export
iplot_pollen <- function(data, year){
  data <- data.frame(data)
  if(class(data) != "data.frame") stop ("Please include a data.frame: first column with date, and the rest with pollen types")
  if(class(year) != "numeric") stop ("Please include only numeric values for 'year' (including only years in your database)")
  if(class(data[,1])[1] != "Date" & !is.POSIXt(data[,1])) {stop("Please the first column of your data must be the date in 'Date' format")}
  colnames(data)[1] <- "date"
  data[,1] <- as.Date(data[,1])
  datalong <- gather(data, colnames(data[2:length(data)]), key = "variable", value = "value")
  ## Compare_pollen
  datalong_pollen <- datalong[which(year(datalong[, 1]) == year), ]
  datalong_pollen$date <- yday(datalong_pollen$date)
  datalong_pollen <- datalong_pollen[which(complete.cases(datalong_pollen)), ]
  iplot_pollen <- datalong_pollen %>%
    ggvis(x = ~ date, y = ~ value, stroke = ~ variable) %>%
    filter(variable %in% eval(input_checkboxgroup(unique(datalong_pollen$variable)))) %>%
    layer_lines(opacity := 0.6, strokeWidth := 1) %>%
    layer_points(size := 20, opacity := 0.8, strokeWidth := 5) %>%
    add_axis("y", title = "Pollen concentration", title_offset = 50) %>%
    scale_numeric("x", domain = input_slider(1, 365, c(1, 365)), clamp = T) %>%
    add_axis("x", title = "Day of the year") %>%
    add_axis("x", orient = "top", ticks = 0, title = paste("Year", year),
             properties = axis_props(axis = list(stroke = "white"),
                                     labels = list(fontSize = 0)))
  return(iplot_pollen)
}
# File: AeRobiology/R/iplot_pollen.R
#' Interactive Plotting Pollen Data (one pollen type). #' #' Function to plot the pollen data of a pollen type during several seasons. The plots are fully interactive. #' #' @param data A \code{data.frame} object. This \code{data.frame} should include a first column in format \code{Date} and the rest of columns in format \code{numeric} belonging to each pollen type by column. #' @param pollen A \code{character} string with the name of the particle to show. This \code{character} must match with the name of a column in the input database. This is a mandatory argument. #' @return An interactive plot of the class \pkg{ggvis}. #' @seealso \code{\link{iplot_pollen}} #' @examples data("munich_pollen") #' @examples iplot_years(data = munich_pollen, pollen = "Betula") #' @importFrom dplyr filter #' @importFrom ggvis add_axis axis_props ggvis input_checkboxgroup input_slider layer_lines layer_points scale_numeric %>% #' @importFrom lubridate is.POSIXt yday year #' @importFrom tidyr gather #' @importFrom stats complete.cases #' @export iplot_years<-function(data, pollen){ data<-data.frame(data) if(class(data) != "data.frame") stop ("Please include a data.frame: first column with date, and the rest with pollen types") if(class(pollen) != "character") stop ("Please include only character values for 'pollen' (including only pollen types in your database)") if(class(data[,1])[1]!="Date" & !is.POSIXt(data[,1])) {stop("Please the first column of your data must be the date in 'Date' format")} data[,1]<-as.Date(data[,1]) colnames(data)[1]<-"date" datalong <- gather( data, colnames(data[2:length(data)]), key = "variable", value = "value" ) ## Compare_year datalong_year <- datalong[which(datalong[, 2] == pollen), ] datalong_year$year <- as.character(year(datalong_year$date)) datalong_year$date <- yday(datalong_year$date) datalong_year <- datalong_year[which(complete.cases(datalong_year)), ] iplot_year <- datalong_year %>% ggvis(x = ~ date, y = ~ value, stroke = ~ year) %>% filter(year 
%in% eval(input_checkboxgroup(unique(datalong_year$year)))) %>% layer_lines(opacity := 0.6, strokeWidth := 1) %>% layer_points(size := 20, opacity := 0.8, strokeWidth := 5) %>% add_axis("y", title = "Pollen concentration", title_offset = 50) %>% scale_numeric("x", domain = input_slider(1, 365, c(1, 365)), clamp = T) %>% add_axis("x", title = "Day of the year") %>% add_axis("x", orient = "top", ticks = 0, title = paste("Pollen",pollen), properties = axis_props( axis = list(stroke = "white"), labels = list(fontSize = 0))) return(iplot_year) }
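# Usage sketch (assumes munich_pollen; "Betula" is one of its columns):
#   data("munich_pollen")
#   iplot_years(data = munich_pollen, pollen = "Betula")
# Each season is drawn as a separate line and can be toggled by year
# through the checkbox group.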
# File: AeRobiology/R/iplot_years.R
#' Moving Average Calculator
#'
#' Function to calculate the moving average of a given numeric vector. The order of the moving average may be customized by the user (\code{man} argument).
#'
#' @param data A \code{numeric} vector (e.g. a \code{numeric} column of a \code{data.frame}).
#' @param man An \code{integer} value specifying the order of the moving average applied to the data. By default, \code{man = 10}.
#' @param warnings A \code{logical} value specifying whether warning messages are displayed. By default, \code{warnings = FALSE}.
#' @return This function returns a vector with the moving average of the input data.
#' @examples data("munich_pollen")
#' @examples ma(data = munich_pollen$Betula, man = 10, warnings = FALSE)
#' @export
ma <- function(data, man = 10, warnings = FALSE) {
  # An even order has no centre; bump it to the next odd value.
  if (man %% 2 == 0) {
    man <- man + 1
    if (warnings == TRUE) warning(paste("WARNING! moving average is calculated for man:", man))
  }
  temp <- data
  for (i in 1:length(data)) {
    # Clamp the averaging window at both ends of the series.
    if (i <= (man - 1) / 2) {
      init <- 1
    } else {
      init <- i - (man - 1) / 2
    }
    if (i > length(data) - (man - 1) / 2) {
      end <- length(data)
    } else {
      end <- i + (man - 1) / 2
    }
    temp[i] <- mean(data[init:end], na.rm = TRUE)
  }
  return(temp)
}
# File: AeRobiology/R/ma.R
#' Pollen data of Munich (2010_2015) #' #' A dataset containing information of daily concentrations of pollen in the atmosphere of Munich during the years 2010_2015. Pollen types included: \emph{"Alnus"}, \emph{"Betula"}, \emph{"Taxus"}, \emph{"Fraxinus"}, \emph{"Poaceae"}, \emph{"Quercus"}, \emph{"Ulmus"} and \emph{"Urtica"}. #' @format Time series of daily pollen concentrations expressed as pollen grains / m3 of air. #' @details Data were obtained at Munich (Zentrum Allergie und Umwelt, ZAUM) using a Hirst_type volumetric pollen trap (Hirst, 1952) following the standard methodology (VDI4252_4_2016). Some gaps have been added to test some functions of the package (e.g. \code{\link{quality_control}}, \code{\link{interpollen}}). #' @details The data were obtained by the research team of Prof. Jeroen Buters (Christine Weil & Ingrid Weichenmeier). We specially acknowledge this team and the Zentrum Allergie und Umwelt (ZAUM, directed by Prof. Carsten B. Schmidt_Weber) for their support. #' @references Hirst, J.M., 1952. AN AUTOMATIC VOLUMETRIC SPORE TRAP. Ann. Appl. Biol. 39, 257_265. #' @references VDI 4252_4. (2016). Bioaerosole und biologische Agenzien_Ermittlung von Pollen und Sporen in der Aussenluft unterVerwendung einer volumetrischen Methode fuer einMessnetz zu allergologischen Zwecken. VDI_Richtlinie4252 Blatt 4, Entwurf. VDI/DIN_Handbuch Reinhaltungder Luft, Band 1a: Beuth, Berlin #' @source \url{https://www.zaum-online.de/} "munich_pollen"
# File: AeRobiology/R/munich_pollen.R
#'Plotting hourly patterns with heatplot
#'
#'Function to plot pollen data expressed in concentrations with time resolution higher than 1 day (e.g. hourly or bi-hourly concentrations), displayed as a heatplot.
#'
#'@param data A \code{data.frame} object in \code{long} format, where the first two columns are \code{factors} indicating the \code{pollen} type and the \code{location}. The third and fourth columns are \code{POSIXct} and show the time: the third column is the beginning of the concentration interval (\code{from}) and the fourth column is the end of the interval (\code{to}). The fifth column shows the concentrations of the different pollen types as \code{numeric}. Please see the example 3-hourly data from the automatic pollen monitor BAA500 from Munich and Viechtach in Bavaria (Germany), \code{data("POMO_pollen")}, supplied by the ePIN Network supported by the Bavarian Government.
#'@param locations A \code{logical} value specifying whether the different locations will be displayed in separate plots. By default, \code{locations = FALSE}.
#'@param low.col A \code{character} string specifying the color of the lowest value of the scale. By default, \code{low.col = "blue"}.
#'@param mid.col A \code{character} string specifying the color of the medium value of the scale. By default, \code{mid.col = "white"}.
#'@param high.col A \code{character} string specifying the color of the highest value of the scale. By default, \code{high.col = "red"}.
#'@return The function returns an object or a list of objects of class \pkg{ggplot2}.
#'@references Oteros, J., Pusch, G., Weichenmeier, I., Heimann, U., Mueller, R., Roeseler, S., ... & Buters, J. T. (2015). Automatic and online pollen monitoring. \emph{International archives of allergy and immunology}, 167(3), 158-166.
#'@examples data("POMO_pollen") #'@examples plot_heathour(POMO_pollen) #'@importFrom ggplot2 ggplot scale_fill_brewer geom_tile scale_fill_gradientn #'@importFrom dplyr %>% #'@export plot_heathour <- function (data, locations = FALSE, low.col = "blue", mid.col = "white", high.col = "red") { data <- data.frame(data) if (class(data) != "data.frame"){ stop ( "Please include a data.frame: first column with factor indicating the pollen, second column with factor indicating the locaiton, third column with POSIXct indicating the (from), fourth column with POSIXct indicating the (to) and fifth column with numbers indicating the concentration" )} if (class(locations) != "logical"){ stop ("Please include only logical values for locations argument")} if (class(low.col) != "character"){ stop ("Please include only character values for low.col argument, introduce valid color names.")} if (class(mid.col) != "character"){ stop ("Please include only character values for mid.col argument, introduce valid color names.")} if (class(high.col) != "character"){ stop ("Please include only character values for high.col argument, introduce valid color names.")} frame3 <- plot_hour(data, result = "table", locations = locations) frame3$location <- as.character(frame3$location) summaryhour <- frame3 %>% group_by(pollen, location, Hour) %>% summarise(percent = mean(percent, na.rm = TRUE)) if (locations == TRUE) { pollenlist2 <- list() for (loca in 1:length(unique(summaryhour$location))) { locat <- unique(summaryhour$location)[loca] temp <- summaryhour[which(summaryhour$location == locat), ] plo <- ggplot(temp, aes(x = Hour, y = pollen, fill = percent)) + geom_tile() + scale_fill_gradientn(colours = c(low.col, mid.col, high.col), limits = c(0, NA)) + labs(x = "", y = "", title = locat) + theme(axis.text.y = element_text(face = "italic")) + theme(axis.text.x = element_text(angle = 45, hjust = 1)) pollenlist2[[loca]] <- plo } return(pollenlist2) } else{ temp<-summaryhour plott <- ggplot(temp, 
aes(x = Hour, y = pollen, fill = percent)) + geom_tile() + scale_fill_gradientn(colours = c(low.col, mid.col, high.col), limits = c(0, NA)) + labs(x = "", y = "", title = "") + theme(axis.text.y = element_text(face = "italic")) + theme(axis.text.x = element_text(angle = 45, hjust = 1)) return(plott) } }
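# Usage sketch (assumes the bundled POMO_pollen 3-hourly dataset):
#   data("POMO_pollen")
#   plot_heathour(POMO_pollen)                     # single heatplot, all locations pooled
#   plot_heathour(POMO_pollen, locations = TRUE)   # list with one heatplot per location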
# File: AeRobiology/R/plot_heathour.R
#'Plotting hourly patterns
#'
#'Function to plot pollen data expressed in concentrations with time resolution higher than 1 day (e.g. hourly or bi-hourly concentrations).
#'
#'@param data A \code{data.frame} object in \code{long} format, where the first two columns are \code{factors} indicating the \code{pollen} type and the \code{location}. The third and fourth columns are \code{POSIXct} and show the time: the third column is the beginning of the concentration interval (\code{from}) and the fourth column is the end of the interval (\code{to}). The fifth column shows the concentrations of the different pollen types as \code{numeric}. Please see the example 3-hourly data from the automatic pollen monitor BAA500 from Munich and Viechtach in Bavaria (Germany), \code{data("POMO_pollen")}, supplied by the ePIN Network supported by the Bavarian Government.
#'@param result A \code{character} string defining the object to be produced by the function. If \code{result == "plot"}, the function returns a list of objects of class \pkg{ggplot2}; if \code{result == "table"}, the function returns a \code{data.frame} with the hourly patterns. By default, \code{result = "plot"}.
#'@param locations A \code{logical} value specifying whether the different locations will be displayed in the plot. Argument only used when \code{result == "plot"}. By default, \code{locations = FALSE}.
#'@return If \code{result == "plot"}, the function returns a list of objects of class \pkg{ggplot2}; if \code{result == "table"}, the function returns a \code{data.frame} with the hourly patterns.
#'@references Oteros, J., Pusch, G., Weichenmeier, I., Heimann, U., Mueller, R., Roeseler, S., ... & Buters, J. T. (2015). Automatic and online pollen monitoring. \emph{International archives of allergy and immunology}, 167(3), 158-166.
#'@examples data("POMO_pollen") #'@examples plot_hour(POMO_pollen, result="plot", locations = FALSE) #'@examples plot_hour(POMO_pollen, result="plot", locations = TRUE) #'@importFrom stats aggregate #'@importFrom data.table data.table #'@importFrom lubridate hour yday #'@importFrom dplyr group_by summarise #'@importFrom tidyr spread #'@importFrom ggplot2 ggplot scale_fill_brewer #'@export plot_hour <- function (data, result="plot", locations = FALSE) { data<-data.frame(data) if(class(data) != "data.frame") stop ("Please include a data.frame: first column with factor indicating the pollen, second column with factor indicating the locaiton, third column with POSIXct indicating the (from), fourth column with POSIXct indicating the (to) and fifth column with numbers indicating the concentration") if(result != "plot" & result != "table") stop ("Please result only accept values: 'table' or 'plot'") if(class(locations) != "logical") stop ("Please include only logical values for locations argument") colnames(data)<-c("pollen","location","from","to","value") # HOUR = percent = location = NULL data$date<-as.Date(as.character(data$from),"%Y-%m-%d") data$hour<-hour(strptime(as.character(data$from), format = '%Y-%m-%d %H:%M', 'GMT')) data$hour2<-hour(strptime(as.character(data$to), format = '%Y-%m-%d %H:%M', 'GMT')) data$Hour<-paste0(data$hour,"-",data$hour2) # data$Hour<-factor(data$Hour,levels = c("0-3","3-6","6-9","9-12","12-15","15-18","18-21","21-0")) ### Data to 24h concentrations AND POLLENS selection ### dayconc<-data.table(data) colnames(dayconc)[1]<-"pollen" table<-aggregate( value ~ date+pollen+location, data=dayconc, FUN=mean) table2<-aggregate( value ~ date+pollen+location, data=dayconc, FUN=max) table$max<-table2$value mainpollen<-data%>%group_by(pollen)%>%summarise(total=sum(value, na.rm=TRUE)) mainpollen<-as.data.frame(mainpollen) pollens <-mainpollen[,"pollen"] pollens<-as.character(pollens) table<-table[which(as.character(table$pollen) %in% pollens),] 
table$location<-as.character(table$location) table$location<-as.factor(table$location) ##### Calculate hourly frames frame3<-data.frame() data_end<-data.frame() if (locations == TRUE){ for (loc in levels(table$location)){ tabletemp<-dayconc[which(dayconc$location==loc),] tabletemp$location<-as.character(tabletemp$location) data_wide <- spread(tabletemp, pollen, value) data_wide$location<-loc data_end<-rbind(data_end,data_wide) tabletemp<-as.data.frame(tabletemp) for ( a in 1:length(pollens)){ tempo<-tabletemp[which(tabletemp$pollen==pollens[a]),] if(nrow(tempo)==0){next} for ( b in 1:nrow(tempo)){ tempo[b,"percent"]<-tempo[b,"value"]*100/table[which(table$pollen==pollens[a] & table$date==tempo[b,"date"] & table$location==tempo[b,"location"]),"value"] } tempo$percent<-tempo$percent/length(unique(data$Hour)) frame3<-rbind(frame3,tempo) } frame3<-frame3[complete.cases(frame3),] } }else { tabletemp<-dayconc data_wide <- spread(tabletemp, pollen, value) data_wide$location<-"All" data_end<-data_wide tabletemp<-as.data.frame(tabletemp) for ( a in 1:length(pollens)){ tempo<-tabletemp[which(tabletemp$pollen==pollens[a]),] if(nrow(tempo)==0){next} for ( b in 1:nrow(tempo)){ tempo[b,"percent"]<-tempo[b,"value"]*100/table[which(table$pollen==pollens[a] & table$date==tempo[b,"date"] & table$location==tempo[b,"location"]),"value"] } tempo$percent<-tempo$percent/length(unique(data$Hour)) frame3<-rbind(frame3,tempo) } frame3<-frame3[complete.cases(frame3),] } # frame3<-frame3[which(yday(frame3$date)%in%summerseries),] frame3 = frame3[order(frame3$hour), ] frame3$Hour = factor(frame3$Hour, levels = unique(frame3$Hour)) if (result == "plot" & locations==FALSE) { plottotal<-ggplot(frame3,aes(x=as.factor(Hour),y=percent))+ geom_bar(stat = "summary", fun.y = "mean", color="black", fill="gray")+ theme_bw()+ labs(x="hour", y="daily pollen (%)", title="Total pollen")+ theme(axis.text.x = element_text(angle = 45, hjust = 1)) pollenlist<-list() pollenlist[[1]]<-plottotal for ( a in 
1:length(pollens)){ tempo<-frame3[which(frame3$pollen==pollens[a]),] pollenlist[[a+1]]<- ggplot(tempo,aes(x=as.factor(Hour),y=percent))+ geom_bar(stat = "summary", fun.y = "mean", color="black", fill="gray")+ theme_bw()+ labs(x="hour", y="daily pollen (%)", title=pollens[a])+ theme(axis.text.x = element_text(angle = 45, hjust = 1)) } pollenlist }else if (result == "plot" & locations==TRUE){ plottotal<-ggplot(frame3,aes(x=as.factor(Hour),y=percent,fill=location))+ geom_bar(stat = "summary", fun.y = "mean", color="black",position='dodge')+ theme_bw()+ labs(x="hour", y="daily pollen (%)", title="Total pollen")+ theme(axis.text.x = element_text(angle = 45, hjust = 1))+ scale_fill_brewer(palette="Set1") pollenlist<-list() pollenlist[[1]]<-plottotal for ( a in 1:length(pollens)){ tempo<-frame3[which(frame3$pollen==pollens[a]),] pollenlist[[a+1]]<- ggplot(tempo,aes(x=as.factor(Hour),y=percent, fill=location))+ geom_bar(stat = "summary", fun.y = "mean", color="black",position='dodge')+ theme_bw()+ labs(x="hour", y="daily pollen (%)", title=pollens[a])+ theme(axis.text.x = element_text(angle = 45, hjust = 1))+ scale_fill_brewer(palette="Set1") } pollenlist } if (result=="plot"){ return(pollenlist) } else if (result=="table"){ return(frame3) } }
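# Usage sketch (assumes POMO_pollen):
#   data("POMO_pollen")
#   tab   <- plot_hour(POMO_pollen, result = "table")  # data.frame of daily percentages per interval
#   plots <- plot_hour(POMO_pollen, result = "plot")   # list: total pollen first, then one plot per type
#   plots[[1]]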
# File: AeRobiology/R/plot_hour.R
#' Plotting the Amplitude of Several Pollen Seasons.
#'
#' Function to plot the pollen data amplitude during several seasons: daily average pollen concentration over the study period, maximum pollen concentration of each day over the study period and minimum pollen concentration of each day over the study period. It is possible to plot the relative abundance per day and to smooth the pollen season by calculating a moving average.
#'
#' @param data A \code{data.frame} object. This \code{data.frame} should include a first column in format \code{Date} and the rest of columns in format \code{numeric} belonging to each pollen type by column.
#' @param pollen A \code{character} string with the name of the particle to show. This \code{character} must match with the name of a column in the input database. This is a mandatory argument.
#' @param mave An \code{integer} value specifying the order of the moving average applied to the data. By default, \code{mave = 1}.
#' @param normalized A \code{logical} value specifying if the visualization shows real pollen data (\code{normalized = FALSE}) or the percentage of every day over the whole pollen season (\code{normalized = TRUE}). By default, \code{normalized = FALSE}.
#' @param interpolation A \code{logical} value specifying if the visualization shows the gaps in the input data (\code{interpolation = FALSE}) or if an interpolation method is used for filling the gaps (\code{interpolation = TRUE}). By default, \code{interpolation = TRUE}.
#' @param int.method A \code{character} string with the name of the interpolation method to be used. The implemented methods that may be used are: \code{"lineal"}, \code{"movingmean"}, \code{"tseries"} or \code{"spline"}. By default, \code{int.method = "lineal"}.
#' @param export.plot A \code{logical} value specifying if a plot will be exported or not. If \code{FALSE} graphical results will only be displayed in the active graphics window. If \code{TRUE} graphical results will be displayed in the active graphics window and also one pdf/png file will be saved within the \emph{plot_AeRobiology} directory automatically created in the working directory. By default, \code{export.plot = FALSE}.
#' @param export.format A \code{character} string specifying the format selected to save the plot. The implemented formats that may be used are: \code{"pdf"} or \code{"png"}. By default, \code{export.format = "pdf"}.
#' @param axisname A \code{character} string specifying the title of the y axis. By default, \code{axisname = "Pollen grains / m3"}.
#' @param color.plot A \code{character} string. The argument defines the color to fill the plot. Will be \code{"orange2"} by default.
#' @param ... Other additional arguments may be used to customize the exportation of the plots using \code{"pdf"} or \code{"png"} files and therefore arguments from functions \code{\link[grDevices]{pdf}} and \code{\link[grDevices]{png}} may be implemented. For example, for pdf files the user may custom the arguments: width, height, family, title, fonts, paper, bg, fg, pointsize...; and for png files the user may custom the arguments: width, height, units, pointsize, bg, res...
#' @return This function returns plot of class \pkg{ggplot2}. Users are able to customize the output as a \pkg{ggplot2} object.
#' @seealso \code{\link{calculate_ps}}; \code{\link{plot_summary}} #' @examples data("munich_pollen") #' @examples plot_normsummary(munich_pollen, pollen = "Betula", interpolation = FALSE, export.plot = FALSE) #' @importFrom graphics plot #' @importFrom utils data #' @importFrom ggplot2 aes element_text geom_line geom_ribbon ggplot labs scale_colour_manual scale_x_date theme theme_classic #' @importFrom grDevices dev.off pdf png #' @importFrom lubridate is.POSIXt yday year #' @importFrom scales date_format #' @importFrom stats aggregate #' @importFrom tidyr %>% #' @export plot_normsummary<-function (data, pollen, mave=1, normalized=FALSE, interpolation = TRUE, int.method = "lineal", export.plot = FALSE, export.format = "pdf", color.plot="orange2", axisname="Pollen grains / m3", ...){ # Sys.setlocale(category = "LC_ALL", locale="english") ############################################# CHECK THE ARGUMENTS ############################# if(export.plot == TRUE){ifelse(!dir.exists(file.path("plot_AeRobiology")), dir.create(file.path("plot_AeRobiology")), FALSE)} data<-data.frame(data) if(class(data) != "data.frame") stop ("Please include a data.frame: first column with date, and the rest with pollen types") if(class(pollen) != "character") stop ("Please include only character values for 'pollen'") if(class(export.plot) != "logical") stop ("Please include only logical values for export.plot argument") if(export.format != "pdf" & export.format != "png") stop ("Please export.format only accept values: 'pdf' or 'png'") if(class(data[,1])[1]!="Date" & !is.POSIXt(data[,1])) {stop("Please the first column of your data must be the date in 'Date' format")} data[,1]<-as.Date(data[,1]) if(class(interpolation) != "logical") stop ("Please include only logical values for interpolation argument") if(class(axisname) != "character") stop ("Please include only character values for 'axisname'") if(class(color.plot) != "character") stop ("Please include only character values indicating the 
name of a colour for 'color.plot'") # if(int.method != "lineal" & int.method != "movingmean" & int.method != "spline") stop ("Please int.method only accept values: 'lineal', 'movingmean' or 'spline'") ncolpollen<-which( colnames(data)==pollen) data<-data[,c(1,ncolpollen)] if(interpolation == TRUE){data <- interpollen(data, method = int.method)} data[, 1] <- as.Date(data[, 1]) colnames(data)[1]<-"Date" data$DOY<-yday(data[, "Date"]) data$Year<-year(data[, "Date"]) sum<-aggregate(data[,pollen], by=list(data[,"Year"]), FUN=sum, na.rm=T) colnames(sum)[1]<-"Year" data<-merge(data,sum,by="Year") colnames(data)[length(data)]<-"sum" max<-aggregate(data[,pollen], by=list(data[,"Year"]), FUN=max, na.rm=T) colnames(max)[1]<-"Year" data<-merge(data,max,by="Year") colnames(data)[length(data)]<-"max" min<-aggregate(data[,pollen], by=list(data[,"Year"]), FUN=min, na.rm=T) colnames(min)[1]<-"Year" data<-merge(data,min,by="Year") colnames(data)[length(data)]<-"min" data$Norm<-data[,pollen]*100/data$sum data$noNorm<-data[,pollen] if (normalized == TRUE){ meant<-aggregate(data[,"Norm"], by=list(data[,"DOY"]), FUN=mean, na.rm=T) mint<-aggregate(data[,"Norm"], by=list(data[,"DOY"]), FUN=min, na.rm=T) maxt<-aggregate(data[,"Norm"], by=list(data[,"DOY"]), FUN=max, na.rm=T) frame<-data.frame(DOY=meant$Group.1,Mean=meant$x,Min=mint$x,Max=maxt$x, date=as.Date(meant$Group.1, origin = "2000-01-01")) frame<-frame[!is.infinite(frame[,3]),] frame$Mean<-ma(frame$Mean,man=mave) frame$Min<-ma(frame$Min,man=mave) frame$Max<-ma(frame$Max,man=mave) plot.summary.norm<-ggplot(data = frame, aes(x = date, y = Mean)) + labs(x="", y="percentage (%)", title=pollen) + geom_ribbon(aes(ymin=Min,ymax=Max),alpha=0.3,fill=color.plot)+ geom_line(size = 0.8)+ scale_colour_manual("Parameters",values=c("black"))+ theme_classic()+ theme(text = element_text(size = 13), axis.text.x = element_text(size = 13), axis.text.y = element_text(size = 13))+ scale_x_date(labels=date_format("%b"), date_breaks = "1 month")+ 
theme(plot.title=element_text(face="bold.italic", size=13)) } else { meant<-aggregate(data[,"noNorm"], by=list(data[,"DOY"]), FUN=mean, na.rm=T) mint<-aggregate(data[,"noNorm"], by=list(data[,"DOY"]), FUN=min, na.rm=T) maxt<-aggregate(data[,"noNorm"], by=list(data[,"DOY"]), FUN=max, na.rm=T) frame<-data.frame(DOY=meant$Group.1,Mean=meant$x,Min=mint$x,Max=maxt$x, date=as.Date(meant$Group.1, origin = "2000-01-01")) frame<-frame[!is.infinite(frame[,3]),] frame$Mean<-ma(frame$Mean,man=mave) frame$Min<-ma(frame$Min,man=mave) frame$Max<-ma(frame$Max,man=mave) plot.summary.norm<-ggplot(data = frame, aes(x = date, y = Mean)) + labs(x="", y=axisname, title=pollen) + geom_ribbon(aes(ymin=Min,ymax=Max),alpha=0.3,fill=color.plot)+ geom_line(size = 0.8)+ scale_colour_manual("Parameters",values=c("black"))+ theme_classic()+ theme(text = element_text(size = 13), axis.text.x = element_text(size = 13), axis.text.y = element_text(size = 13))+ scale_x_date(labels=date_format("%b"), date_breaks = "1 month")+ theme(plot.title=element_text(face="bold.italic", size=13)) } if(export.plot == TRUE & export.format == "png") { png(paste0("plot_AeRobiology/plot_normsummary_", pollen,".png"), ...) plot(plot.summary.norm) dev.off() png(paste0("plot_AeRobiology/credits.png")) dev.off() } if(export.plot == TRUE & export.format == "pdf") { pdf(paste0("plot_AeRobiology/plot_normsummary_", pollen,".pdf"), ...) plot(plot.summary.norm) dev.off() } return(plot.summary.norm) }
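# Usage sketch (assumes munich_pollen; mave = 5 applies a 5-day moving average):
#   data("munich_pollen")
#   plot_normsummary(munich_pollen, pollen = "Betula", mave = 5,
#                    normalized = TRUE, interpolation = FALSE)
# With normalized = TRUE the ribbon shows the daily share (%) of the annual
# pollen sum instead of raw concentrations.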
# File: R/plot_normsummary.R

#'Pollen Season Plot #' #'Function to plot the main pollen season of a single pollen type. #' #'@param data A \code{data.frame} object including the general database where interpolation must be performed. This \code{data.frame} must include a first column in \code{Date} format and the rest of columns in \code{numeric} format. Each column must contain information of one pollen type. It is not necessary to insert missing gaps; the function will automatically detect them. #'@param pollen.type A \code{character} string specifying the name of the pollen type which will be plotted. The name must be exactly the same as it appears in the column name. Mandatory argument with no default. #'@param year A \code{numeric (integer)} value specifying the season to be plotted. The season does not necessarily fit a natural year. See \code{\link{calculate_ps}} for more details. Mandatory argument with no default. #'@param days A \code{numeric (integer)} specifying the number of days beyond each side of the main pollen season that will be represented. The \code{days} argument will be \code{30} by default. #'@param fill.col A \code{character} string specifying the name of the color to fill the main pollen season (Galan et al., 2017) in the plot. See \code{\link[ggplot2]{ggplot}} function for more details. The \code{fill.col} argument will be \code{"turquoise4"} by default. #'@param int.method A \code{character} string specifying the method selected to apply the interpolation method in order to complete the pollen series. The implemented methods that may be used are: \code{"lineal"}, \code{"movingmean"}, \code{"spline"} or \code{"tseries"}. See \code{\link{interpollen}} function for more details. The \code{int.method} argument will be \code{"lineal"} by default. #'@param axisname A \code{character} string or an expression specifying the y axis title of the plot. The \code{axisname} argument will be \code{expression(paste("Pollen grains / m" ^ "3"))} by default. #'@param ...
Other arguments passed on to the pollen season calculation as specified in \code{\link{calculate_ps}} function. #'@details \code{plot_ps} function is designed to easily plot the main pollen season (Galan et al., 2017). The pre_peak period and the post_peak period are marked with different color intensity in the graph. The user must choose a single pollen type and season to plot. #'@return The function returns an object of class \code{\link[ggplot2]{ggplot}} with a graphical representation of the main pollen season of the selected pollen type. The pre_peak and post_peak periods are marked with different color intensity. #'@references Galan, C., Ariatti, A., Bonini, M., Clot, B., Crouzy, B., Dahl, A., Fernandez_Gonzalez, D., Frenguelli, G., Gehrig, R., Isard, S., Levetin, E., Li, D.W., Mandrioli, P., Rogers, C.A., Thibaudon, M., Sauliene, I., Skjoth, C., Smith, M., Sofiev, M., 2017. Recommended terminology for aerobiological studies. Aerobiologia (Bologna). 293_295. #'@seealso \code{\link{calculate_ps}}, \code{\link{interpollen}}, \code{\link[ggplot2]{ggplot}}. #'@examples data("munich_pollen") #'@examples plot_ps(munich_pollen, year = 2013, pollen.type = "Betula") #'@importFrom utils data #'@importFrom scales date_format #'@importFrom lubridate is.POSIXt year #'@importFrom ggplot2 aes element_blank element_text geom_area geom_line ggplot labs scale_x_date theme theme_classic ylab #'@importFrom graphics plot #'@importFrom grDevices dev.off png #'@importFrom tidyr %>% #'@export plot_ps <- function(data, pollen.type = NULL , year = NULL , days = 30, fill.col = "turquoise4", int.method="lineal", axisname= expression(paste("Pollen grains / m" ^ "3")), ...) 
{ if(class(axisname)!="expression" & class(axisname)!="character"){stop("axisname: Please, insert only a character string or an expression")} if (class(fill.col) != "character") { stop("fill.col: Please, insert only a character string defining an existing color") } data<-data.frame(data) if (class(data) != "data.frame" & !is.null(data)) { stop ("Please include a data.frame: first column with date, and the rest with pollen types") } if (is.null(year)) { stop("Please, select a year to plot") } if (is.null(pollen.type)) { stop("Please, select a pollen.type to plot") } if (class(pollen.type) != "character") { stop("pollen.type: Please, insert only a character string") } if (class(year) != "numeric") { stop("year: Please, insert only a number") } if (class(days) != "numeric") { stop("days: Please, insert only a number bigger than 1") } if (days < 1 | days %% 1 != 0 ) { stop("days: Please, insert only an integer number of at least 1") } if(class(data[,1])[1]!="Date" & !is.POSIXt(data[,1])) {stop("Please the first column of your data must be the date in 'Date' format")} data[,1]<-as.Date(data[,1]) namecolumn <- colnames(data)[-1] years <- unique(year(data[, 1])) if (!(pollen.type %in% namecolumn)) { stop( "pollen.type: Please, insert only a pollen type which is in your database (type the exact name)" ) } if (!(year %in% years)) { stop("year: Please, insert only a year which is in your database") } datename <- colnames(data)[1] dataframe <- data[, c(datename, pollen.type)] referencetable <- calculate_ps(data = dataframe, plot = FALSE, export.result = FALSE, int.method = int.method, interpolation = TRUE, ...)
dataframe <- interpollen(data = dataframe, method=int.method, plot = FALSE) Start <- referencetable[which(referencetable$seasons == year), 3] - (days) End <- referencetable[which(referencetable$seasons == year), 5] + (days) StartMPS <- referencetable[which(referencetable$seasons == year), 3] EndMPS <- referencetable[which(referencetable$seasons == year), 5] Peak <- referencetable[which(referencetable$seasons == year), 11] dataplot <- dataframe[which(dataframe[, 1] >= Start & dataframe[, 1] <= End),] colnames(dataplot) <- c("date", "pollen") Preframe <- dataplot Postframe <- dataplot Preframe[which(Preframe$date < StartMPS | Preframe$date > Peak), 2] <- NA Postframe[which(Postframe$date < Peak | Postframe$date > EndMPS), 2] <- NA graph <- ggplot() + geom_area(data = dataplot, aes(date, pollen), color = "grey90", fill = "grey90") + geom_area(data = Preframe, aes(date, pollen), fill = fill.col) + geom_area(data = Postframe, aes(date, pollen), fill = fill.col, alpha = 0.5) + geom_line(data = dataplot, aes(date, pollen), color = "grey10", size = 0.3) + scale_x_date(labels = date_format("%d-%b"), breaks = '7 days') + theme_classic() + labs(title = paste(pollen.type, year)) + ylab(axisname) + theme( plot.title = element_text(face = "bold.italic", size = 16), axis.title.y = element_text(size = 12), axis.title.x = element_blank(), axis.text = element_text(size = 10, color = "black"), axis.text.x = element_text(angle = 45, hjust = 1) ) return(graph) }
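A usage sketch for plot_ps(), assuming the munich_pollen example dataset; the `method = "clinical"` argument is forwarded through `...` to calculate_ps() as documented above, and the specific values shown are illustrative:

```r
library(AeRobiology)
data("munich_pollen")

# Main pollen season of Betula in 2013, with 15 extra days represented on
# each side of the season:
g <- plot_ps(munich_pollen, pollen.type = "Betula", year = 2013,
             days = 15, fill.col = "darkgreen", method = "clinical")

# A ggplot object is returned, so e.g. the title can be overridden:
g + ggplot2::ggtitle("Betula, main pollen season 2013")
```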
# File: R/plot_ps.R
#' Plotting Several Pollen Seasons. #' #' Function to plot the pollen data during several seasons. Also plots the averaged pollen season over the study period. It is possible to plot the relative abundance per day and smoothing the pollen season by calculating a moving average. #' #' @param data A \code{data.frame} object. This \code{data.frame} should include a first column in format \code{Date} and the rest of columns in format \code{numeric} belonging to each pollen type by column. #' @param pollen A \code{character} string with the name of the particle to show. This \code{character} must match with the name of a column in the input database. This is a mandatory argument. #' @param mave An \code{integer} value specifying the order of the moving average applied to the data. By default, \code{mave = 1}. #' @param normalized A \code{logical} value specifying if the visualization shows real pollen data (\code{normalized = FALSE}) or the percentage of every day over the whole pollen season (\code{normalized = TRUE}). By default, \code{normalized = FALSE}. #' @param interpolation A \code{logical} value specifying if the visualization shows the gaps in the inputs data (\code{interpolation = FALSE}) or if an interpolation method is used for filling the gaps (\code{interpolation = TRUE}). By default, \code{interpolation = TRUE}. #' @param int.method A \code{character} string with the name of the interpolation method to be used. The implemented methods that may be used are: \code{"lineal"}, \code{"movingmean"}, \code{"tseries"} or \code{"spline"}. By default, \code{int.method = "lineal"}. #' @param export.plot A \code{logical} value specifying if a plot will be exported or not. If \code{FALSE} graphical results will only be displayed in the active graphics window. 
If \code{TRUE} graphical results will be displayed in the active graphics window and also one pdf/png file will be saved within the \emph{plot_AeRobiology} directory automatically created in the working directory. By default, \code{export.plot = FALSE}. #' @param export.format A \code{character} string specifying the format selected to save the plot. The implemented formats that may be used are: \code{"pdf"} or \code{"png"}. By default, \code{export.format = "pdf"}. #' @param axisname A \code{character} string specifying the title of the y axis. By default, \code{axisname = "Pollen grains / m3"}. #' @param ... Other additional arguments may be used to customize the exportation of the plots using \code{"pdf"} or \code{"png"} files and therefore arguments from functions \code{\link[grDevices]{pdf}} and \code{\link[grDevices]{png}} may be implemented. For example, for pdf files the user may custom the arguments: width, height, family, title, fonts, paper, bg, fg, pointsize...; and for png files the user may custom the arguments: width, height, units, pointsize, bg, res... #' #' @details This function allows to summarize the pollen season by a simple plot. Even though the package was originally designed to treat aeropalynological data, it can be used to study many other atmospheric components (e.g., bacteria in the air, fungi, insects ...) \emph{(Buters et al., 2018; Oteros et al., 2019)}. #' @return This function returns plot of class \pkg{ggplot2}. User are able to customize the output as a \pkg{ggplot2} object. #' @references Buters, J. T. M., Antunes, C., Galveias, A., Bergmann, K. C., Thibaudon, M., Galan, C. & Oteros, J. (2018). Pollen and spore monitoring in the world. \emph{Clinical and translational allergy}, 8(1), 9. #' @references Oteros, J., Bartusel, E., Alessandrini, F., Nunez, A., Moreno, D. A., Behrendt, H., ... & Buters, J. (2019). Artemisia pollen is the main vector for airborne endotoxin. \emph{Journal of Allergy and Clinical Immunology}. 
#' @seealso \code{\link{calculate_ps}}; \code{\link{plot_normsummary}} #' @examples data("munich_pollen") #' @examples plot_summary(munich_pollen, pollen = "Betula", export.plot = FALSE, interpolation = FALSE) #' @importFrom graphics plot #' @importFrom utils data #' @importFrom ggplot2 aes element_text geom_area geom_line ggplot ggtitle labs theme theme_bw theme_classic theme_set #' @importFrom grDevices dev.off pdf png #' @importFrom lubridate is.POSIXt yday year #' @importFrom stats aggregate #' @importFrom tidyr %>% #' @export plot_summary<-function (data, pollen, mave=1, normalized=FALSE, interpolation = TRUE, int.method = "lineal", export.plot = FALSE, export.format = "pdf", axisname="Pollen grains / m3", ...){ # Sys.setlocale(category = "LC_ALL", locale="english") ############################################# CHECK THE ARGUMENTS ############################# if(export.plot == TRUE){ifelse(!dir.exists(file.path("plot_AeRobiology")), dir.create(file.path("plot_AeRobiology")), FALSE)} data<-data.frame(data) if(class(data) != "data.frame") stop ("Please include a data.frame: first column with date, and the rest with pollen types") if(class(pollen) != "character") stop ("Please include only character values for 'pollen'") if(class(export.plot) != "logical") stop ("Please include only logical values for export.plot argument") if(export.format != "pdf" & export.format != "png") stop ("Please export.format only accept values: 'pdf' or 'png'") if(class(data[,1])[1]!="Date" & !is.POSIXt(data[,1])) {stop("Please the first column of your data must be the date in 'Date' format")} data[,1]<-as.Date(data[,1]) if(class(axisname) != "character") stop ("Please include only character values for 'axisname'") # if(class(interpolation) != "logical") stop ("Please include only logical values for interpolation argument") # # if(int.method != "lineal" & int.method != "movingmean" & int.method != "tseries" & int.method != "spline") stop ("Please int.method only accept values:
'lineal', 'movingmean', 'tseries' or 'spline'") # ncolpollen<-which( colnames(data)==pollen) data<-data[,c(1,ncolpollen)] if(interpolation == TRUE){data <- interpollen(data, method = int.method)} data[, 1] <- as.Date(data[, 1]) colnames(data)[1]<-"Date" data$DOY<-yday(data[, 1] ) data$Year<-year(data[, 1] ) data[,2]<-ma(data[,2],man=mave) if (normalized==T){ sum<-aggregate(data[,pollen], by=list(data[,"Year"]), FUN=sum, na.rm=T) colnames(sum)[1]<-"Year" data<-merge(data,sum,by="Year") colnames(data)[length(data)]<-"sum" data$percent<-data[,pollen]*100/data[,"sum"] data<-data[which(data$sum!=0),] meant<-aggregate(data[,"percent"], by=list(data[,"DOY"]), FUN=mean, na.rm=T) colnames(meant)[1]<-"DOY" data_pollen<-merge(data, meant, by ="DOY") data_pollen$Year<-as.character(year(data_pollen[,"Date"])) # data_pollen<-data_pollen[,c(colnames(data_pollen)[1:2],pollen,"Year","x")] plot.summary<-ggplot(data_pollen, aes(x=DOY, y=data_pollen[,"percent"],fill=Year)) + theme_set(theme_bw()) + geom_area(position = "identity", alpha=0.3)+ geom_line(data=data_pollen, aes(y=x), size=1)+ labs(x="Day of the year", y="percentage (%)") + theme(axis.text=element_text(size=10), axis.title=element_text(size=10)) + ggtitle(pollen) + theme_classic()+ theme(plot.title=element_text( face="bold.italic", size=13)) }else{ sum<-aggregate(data[,pollen], by=list(data[,"Year"]), FUN=sum, na.rm=T) colnames(sum)[1]<-"Year" data<-merge(data,sum,by="Year") colnames(data)[length(data)]<-"sum" data$percent<-data[,pollen]*100/data[,"sum"] data<-data[which(data$sum!=0),] meant<-aggregate(data[,pollen], by=list(data[,"DOY"]), FUN=mean, na.rm=T) colnames(meant)[1]<-"DOY" data_pollen<-merge(data, meant, by ="DOY") data_pollen$Year<-as.character(year(data_pollen[,"Date"])) data_pollen<-data_pollen[,c(colnames(data_pollen)[1:2],pollen,"Year","x")] plot.summary<-ggplot(data_pollen, aes(x=DOY, y=data_pollen[,pollen],fill=Year)) + theme_set(theme_bw()) + geom_area(position = "identity", alpha=0.3)+ 
geom_line(data=data_pollen, aes(y=x), size=1)+ labs(x="Day of the year", y=axisname) + theme(axis.text=element_text(size=10), axis.title=element_text(size=10)) + ggtitle(pollen) + theme_classic()+ theme(plot.title=element_text(face="bold.italic", size=13)) } if(export.plot == TRUE & export.format == "png") { png(paste0("plot_AeRobiology/plot_summary_", pollen,".png"), ...) plot(plot.summary) dev.off() png(paste0("plot_AeRobiology/credits.png")) dev.off() } if(export.plot == TRUE & export.format == "pdf") { pdf(paste0("plot_AeRobiology/plot_summary_", pollen, ".pdf"), ...) plot(plot.summary) dev.off() } return(plot.summary) }
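A usage sketch for plot_summary(), assuming the munich_pollen example dataset; the pollen type and smoothing window shown are illustrative:

```r
library(AeRobiology)
data("munich_pollen")

# Overlay all available Betula seasons together with the averaged season.
# mave = 7 applies a 7-day moving average before plotting, and
# normalized = TRUE rescales each season to percentages of its annual
# total so that years with different pollen loads are comparable:
plot_summary(munich_pollen, pollen = "Betula",
             mave = 7, normalized = TRUE)
```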
# File: R/plot_summary.R
#' Calculating and Plotting Trends of Pollen Data. #' #' Function to calculate the main seasonal indexes of the pollen season (\emph{Start Date}, \emph{Peak Date}, \emph{End Date} and \emph{Pollen Integral}). Trends analysis of the parameters over the seasons. Plots showing the distribution of the main seasonal indexes over the years. #' #' @param data A \code{data.frame} object. This \code{data.frame} should include a first column in format \code{Date} and the rest of columns in format \code{numeric} belonging to each pollen type by column. #' @param interpolation A \code{logical} value specifying if the visualization shows the gaps in the inputs data (\code{interpolation = FALSE}) or if an interpolation method is used for filling the gaps (\code{interpolation = TRUE}). By default, \code{interpolation = TRUE}. #' @param int.method A \code{character} string with the name of the interpolation method to be used. The implemented methods that may be used are: \code{"lineal"}, \code{"movingmean"}, \code{"tseries"} or \code{"spline"}. By default, \code{int.method = "lineal"}. #' @param export.plot A \code{logical} value specifying if a plot will be exported or not. If \code{FALSE} graphical results will only be displayed in the active graphics window. If \code{TRUE} graphical results will be displayed in the active graphics window and also one pdf/png file will be saved within the \emph{plot_AeRobiology} directory automatically created in the working directory. By default, \code{export.plot = TRUE}. #' @param export.format A \code{character} string specifying the format selected to save the plot. The implemented formats that may be used are: \code{"pdf"} or \code{"png"}. By default, \code{export.format = "pdf"}. #' @param export.result A \code{logical} value. If \code{export.result = TRUE}, a table is exported with the extension \emph{.xlsx}, in the directory \emph{table_AeRobiology}. 
This table has the information about the \code{slope} \emph{"beta coefficient of a linear model using as predictor the year and as dependent variable one of the main pollen season indexes"}. The information refers to the main pollen season indexes: \emph{Start Date}, \emph{Peak Date}, \emph{End Date} and \emph{Pollen Integral}. #' @param method A \code{character} string specifying the method applied to calculate the pollen season and the main seasonal parameters. The implemented methods that can be used are: \code{"percentage"}, \code{"logistic"}, \code{"moving"}, \code{"clinical"} or \code{"grains"}. By default, \code{method = "percentage"} (\code{perc = 95}\%). More detailed information about the different methods for defining the pollen season may be consulted in the function \code{\link{calculate_ps}}. #' @param ... Additional arguments for the function \code{\link{calculate_ps}} are also accepted. #' @return This function returns several plots in the directory \emph{plot_AeRobiology/trend_plots} with the extension \emph{.pdf} or \emph{.png}. It also produces an object of the class \code{data.frame} and exports a table with the extension \emph{.xlsx}, in the directory \emph{table_AeRobiology}.\cr #'These tables have the information about the \code{slope} \emph{(beta coefficient of a linear model using as predictor the year and as dependent variable one of the main pollen season indexes)}. The information refers to the main pollen season indexes: \emph{Start Date}, \emph{Peak Date}, \emph{End Date} and \emph{Pollen Integral}.
#' @seealso \code{\link{calculate_ps}}; \code{\link{analyse_trend}} #' @examples data("munich_pollen") #' @examples plot_trend(munich_pollen, interpolation = FALSE, export.plot = FALSE, export.result = TRUE) #' @importFrom graphics plot #' @importFrom utils data #' @importFrom ggplot2 aes element_text geom_point geom_smooth ggplot labs theme theme_classic theme_set scale_x_continuous #' @importFrom grDevices dev.off pdf png #' @importFrom grid grid.layout pushViewport viewport #' @importFrom lubridate is.POSIXt #' @importFrom stats as.formula complete.cases lm pf #' @importFrom scales pretty_breaks #' @importFrom writexl write_xlsx #' @importFrom tidyr %>% #' @export plot_trend <- function (data, interpolation = TRUE, int.method = "lineal", export.plot = TRUE, export.format = "pdf", export.result=TRUE, method="percentage", ...){ # Sys.setlocale(category = "LC_ALL", locale="english") ############################################# CHECK THE ARGUMENTS ############################# if(export.plot == TRUE){ifelse(!dir.exists(file.path("plot_AeRobiology")), dir.create(file.path("plot_AeRobiology")), FALSE)} if(export.plot == TRUE){ifelse(!dir.exists(file.path("plot_AeRobiology/trend_plots")), dir.create(file.path("plot_AeRobiology/trend_plots")), FALSE)} if(export.result == TRUE){ifelse(!dir.exists(file.path("table_AeRobiology")), dir.create(file.path("table_AeRobiology")), FALSE)} data<-data.frame(data) if(class(data) != "data.frame") stop ("Please include a data.frame: first column with date, and the rest with pollen types") if(class(export.plot) != "logical") stop ("Please include only logical values for export.plot argument") if(export.format != "pdf" & export.format != "png") stop ("Please export.format only accept values: 'pdf' or 'png'") if(class(data[,1])[1]!="Date" & !is.POSIXt(data[,1])) {stop("Please the first column of your data must be the date in 'Date' format")} data[,1]<-as.Date(data[,1]) if(class(interpolation) != "logical") stop ("Please include only 
logical values for interpolation argument") # if(int.method != "lineal" & int.method != "movingmean" & int.method != "spline" & int.method != "tseries") stop ("Please int.method only accept values: 'lineal', 'movingmean', 'tseries' or 'spline'") # if(interpolation == TRUE){data <- interpollen(data, method = int.method)} colnames(data)[1]<-"Date" ## Function for p value lmp <- function (modelobject) { if (class(modelobject) != "lm") stop("Not an object of class 'lm' ") f <- summary(modelobject)$fstatistic p <- pf(f[1],f[2],f[3],lower.tail=F) attributes(p) <- NULL return(p) } datafram<-calculate_ps(data, method=method, interpolation = interpolation, int.method=int.method,plot=FALSE,...) variables<-c("st.jd","pk.jd","en.jd","sm.ps") trendtime<-data.frame() data_summary<- for (t in 1:length(unique(datafram$type))){ type<-unique(as.character(datafram$type))[t] for (v in 1:length(variables)){ tryCatch({ variable<-variables[v] temp<-datafram[which(datafram$type==type),c(1:2,which( colnames(datafram)==variable))] lm <- lm (as.formula(paste(variable,"~ seasons")), data= temp, x = TRUE, y = TRUE) tempframe<-data.frame(type=type,variable=variable,coef=summary(lm)$coefficients[2,1],p=lmp(lm)) trendtime<-rbind(trendtime,tempframe) }, error=function(e){ print(paste(type, variable, ": Error, linear model not calculated. 
Probably due to insufficient amount of years")) }) } } datafram<-datafram[complete.cases(datafram),] ### Now start the rock and roll for (p in 1:length(unique(as.character(datafram$type)))){ pollen<-unique(as.character(datafram$type))[p] dataframtemp<-datafram[which(datafram$type==pollen),] dataframtemp$seasons<-as.integer(dataframtemp$seasons) slope<-trendtime[which(trendtime$type==pollen & trendtime$variable=="st.jd"),"coef"] slope<-round(slope, 1) slope<-paste0("slope: ",slope) p<-trendtime[which(trendtime$type==pollen & trendtime$variable=="st.jd"),"p"] p<-round(p, 3) if (p < 0.001) { pvalue <- "p<0.001" } else if (p < 0.01) { pvalue <- "p<0.01" } else if (p < 0.05) { pvalue <- "p<0.05" } else { pvalue <- "p>0.05" } p<-pvalue comb<-paste0(slope,", ",p) p1 <- ggplot(dataframtemp, aes(x=seasons, y=st.jd)) + theme_set(theme_classic()) + geom_smooth(method="loess", size=1, colour="blue", fill="light blue") + geom_smooth(method="lm", size=1, se=FALSE, colour="red", linetype="dashed") + geom_point(shape=21, stroke=1.5,size = 3.5, fill="darkgrey", colour="black") + labs(x="", y="Day of the Year (DOY)", title= paste("Start Date", pollen), subtitle=comb)+ theme(axis.text=element_text(size=10), axis.title=element_text(size=10)) + theme(plot.title=element_text(size=13, face = "bold.italic"))+ scale_x_continuous(breaks=pretty_breaks()) slope<-trendtime[which(trendtime$type==pollen &
trendtime$variable=="pk.jd"),"coef"] slope<-round(slope, 1) slope<-paste0("slope: ",slope) p<-trendtime[which(trendtime$type==pollen & trendtime$variable=="pk.jd"),"p"] p<-round(p, 3) if (p < 0.001) { pvalue <- "p<0.001" } else if (p < 0.01) { pvalue <- "p<0.01" } else if (p < 0.05) { pvalue <- "p<0.05" } else { pvalue <- "p>0.05" } p<-pvalue comb<-paste0(slope,", ",p) p2 <- ggplot(dataframtemp, aes(x=seasons, y=pk.jd)) + theme_set(theme_classic()) + geom_smooth(method="loess",size=1, colour="blue", fill="light blue") + geom_smooth(method="lm", size=1, se=FALSE, colour="red", linetype="dashed") + geom_point(shape=21, stroke=1.5,size = 3.5, fill="darkgrey", colour="black") + labs(x="", y="Day of the Year (DOY)", title=paste("Peak Date", pollen), subtitle=comb)+ theme(axis.text=element_text(size=10), axis.title=element_text(size=10)) + theme(plot.title=element_text(size=13, face = "bold.italic"))+ scale_x_continuous(breaks=pretty_breaks()) slope<-trendtime[which(trendtime$type==pollen & trendtime$variable=="en.jd"),"coef"] slope<-round(slope, 1) slope<-paste0("slope: ",slope) p<-trendtime[which(trendtime$type==pollen & trendtime$variable=="en.jd"),"p"] p<-round(p, 3) if (p < 0.001) { pvalue <- "p<0.001" } else if (p < 0.01) { pvalue <- "p<0.01" } else if (p < 0.05) { pvalue <- "p<0.05" } else { pvalue <- "p>0.05" } p<-pvalue comb<-paste0(slope,", ",p) p3 <- ggplot(dataframtemp, aes(x=seasons, y=en.jd)) + theme_set(theme_classic()) + geom_smooth(method="loess",size=1, colour="blue", fill="light blue") + geom_smooth(method="lm", size=1, se=FALSE, colour="red", linetype="dashed") + geom_point(shape=21, stroke=1.5,size = 3.5, fill="darkgrey", colour="black") + labs(x="", y="Day of the Year (DOY)", title= paste("End Date", pollen), subtitle=comb)+ theme(axis.text=element_text(size=10), axis.title=element_text(size=10)) + theme(plot.title=element_text(size=13, face = "bold.italic"))+ scale_x_continuous(breaks=pretty_breaks()) slope<-trendtime[which(trendtime$type==pollen & 
trendtime$variable=="sm.ps"),"coef"] slope<-round(slope, 1) slope<-paste0("slope: ",slope) p<-trendtime[which(trendtime$type==pollen & trendtime$variable=="sm.ps"),"p"] p<-round(p, 3) if (p < 0.001) { pvalue <- "p<0.001" } else if (p < 0.01) { pvalue <- "p<0.01" } else if (p < 0.05) { pvalue <- "p<0.05" } else { pvalue <- "p>0.05" } p<-pvalue comb<-paste0(slope,", ",p) p4 <- ggplot(dataframtemp, aes(x=seasons, y=sm.ps)) + theme_set(theme_classic()) + geom_smooth(method="loess", size=1, colour="blue", fill="light blue") + geom_smooth(method="lm", size=1, se=FALSE, colour="red", linetype="dashed") + geom_point(shape=21, stroke=1.5,size = 3.5, fill="darkgrey", colour="black") + labs(x="", y="Pollen grains", title=paste("Total Pollen", pollen),subtitle=comb)+ theme(axis.text=element_text(size=10), axis.title=element_text(size=10)) + theme(plot.title=element_text(size=13, face = "bold.italic"))+ scale_x_continuous(breaks=pretty_breaks()) if(export.plot == TRUE & export.format == "png") { png(paste0("plot_AeRobiology/trend_plots/plot_trend_",pollen,".png")) pushViewport(viewport(layout=grid.layout(2,2))) vplayout<-function(x,y) viewport(layout.pos.row = x, layout.pos.col=y) print(p1, vp = vplayout(1,1)) print(p2, vp = vplayout(1,2)) print(p3, vp = vplayout(2,1)) print(p4, vp = vplayout(2,2)) dev.off() } if(export.plot == TRUE & export.format == "pdf") { pdf(paste0("plot_AeRobiology/trend_plots/plot_trend_",pollen, ".pdf")) pushViewport(viewport(layout=grid.layout(2,2))) vplayout<-function(x,y) viewport(layout.pos.row = x, layout.pos.col=y) print(p1, vp = vplayout(1,1)) print(p2, vp = vplayout(1,2)) print(p3, vp = vplayout(2,1)) print(p4, vp = vplayout(2,2)) dev.off() } } lista<-list() lista[["plot_trend"]]<-trendtime lista [["Information"]] <- data.frame( Attributes = c("st.jd", "pk.jd", "en.jd", "sm.ps", "coef", "p", "", "", "Package", "Authors"), Description = c("Start-date (day of the year)","Peak-date (day of year)", "End-date (day of the year)", "Pollen integral", 
"Slope of the linear trend", "Significance level of the linear trend", "", "", "AeRobiology", "Jesus Rojo, Antonio Picornell & Jose Oteros")) if (export.result == TRUE) { write_xlsx(lista, "table_AeRobiology/summary_of_plot_trend.xlsx") } return(trendtime) }
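A usage sketch for plot_trend() showing how the returned slope table can be inspected, assuming the munich_pollen example dataset:

```r
library(AeRobiology)
data("munich_pollen")

trends <- plot_trend(munich_pollen, interpolation = FALSE,
                     export.plot = FALSE, export.result = FALSE)

# `trends` is a data.frame with one row per pollen type and seasonal index:
#   type     - pollen type
#   variable - "st.jd" (start), "pk.jd" (peak), "en.jd" (end),
#              "sm.ps" (pollen integral)
#   coef     - slope of the linear trend across seasons
#   p        - significance level of that slope
subset(trends, variable == "st.jd" & p < 0.05)  # significant start-date trends
```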
# File: R/plot_trend.R
#' Pollen Calendar by Different Methods from a Historical Pollen Database #' #' Function to calculate the pollen calendar from a historical database of several pollen types and using the most commonly used methods in the generation of the pollen calendars in the aerobiology field. #' #' @param data A \code{data.frame} object including the general database where calculation of the pollen calendar must be applied. This \code{data.frame} must include a first column in \code{Date} format and the rest of columns in \code{numeric} format belonging to each pollen type by column. #' @param method A \code{character} string specifying the method applied to calculate and generate the pollen calendar. The implemented methods that can be used are: \code{"heatplot"}, \code{"violinplot"} or \code{"phenological"}. A more detailed information about the different methods for defining the pollen season may be consulted in \strong{Details}. The \code{method} argument will be \code{heatplot} by default. #' @param n.types A \code{numeric} (\code{integer}) value specifying the number of the most abundant pollen types that must be represented in the pollen calendar. A more detailed information about the selection of the considered pollen types may be consulted in \strong{Details}. The \code{n.types} argument will be \code{15} types by default. #' @param start.month A \code{numeric} (\code{integer}) value ranging \code{1_12} specifying the number of the month (January_December) when the beginning of the pollen calendar must be considered. This argument is only applicable for the \code{"heatplot"} method with \code{"daily"} period, for the \code{"phenological"} method with \code{"avg_before"} \code{average.method}, and for the \code{"violinplot"} method, and the rest of methods only may be generated from the January (\code{start.month = 1}). The \code{start.month} argument will be \code{1} (month of January) by default. 
#' @param y.start,y.end A \code{numeric} (\code{integer}) value specifying the period selected to calculate the pollen calendar (start year _ end year). If \code{y.start} and \code{y.end} are not specified (\code{NULL}), the entire database will be used to generate the pollen calendar. The \code{y.start} and \code{y.end} arguments will be \code{NULL} by default. #' @param perc1,perc2 A \code{numeric} value ranging \code{0_100}. These arguments are valid only for the \code{"phenological"} method. These values represent the percentage of the total annual pollen included in the pollen season, removing \code{(100_percentage)/2\%} of the total pollen before and after of the pollen season. Two percentages must be specified because of the definition of the "main pollination period" (\code{perc1}) and "early/late pollination" (\code{perc2}) based on the \code{"phenological"} method proposed by \emph{Werchan et al. (2018)}. The \code{perc1} argument will be \code{80} and \code{perc2} argument will be \code{99} by default. A more detailed information about the \code{phenological} method to generate the pollen calendar may be consulted in \strong{Details}. #' @param th.pollen A \code{numeric} value specifying the minimum threshold of the average pollen concentration which will be used to generate the pollen calendar. Days below this threshold will not be considered. For the \code{"phenological"} method this value limits the "possible occurrence" period as proposed by \emph{Werchan et al. (2018)}. The \code{th.pollen} argument will be \code{1} by default. A more detailed information about the methods to generate the pollen calendar may be consulted in \emph{Details}. #' @param average.method A \code{character} string specifying the moment of the application of the average. This argument is valid only for the \code{"phenological"} method. The implemented methods that can be used are: \code{"avg_before"} or \code{"avg_after"}. 
\code{"avg_before"} first averages the daily concentrations and then calculates the pollen season for all pollen types; this method is recommended as it is more consistent with the rest of the implemented methods. Otherwise, \code{"avg_after"} determines the pollen season for all years and all pollen types, and then a circular average is calculated for the start-dates and end-dates. The \code{average.method} argument will be \code{"avg_before"} by default. #' @param period A \code{character} string specifying the time interval considered to generate the pollen calendar. This argument is valid only for the \code{"heatplot"} method. The implemented periods that can be used are: \code{"daily"} or \code{"weekly"}. The \code{"daily"} selection produces a pollen calendar using daily averages during the year and the \code{"weekly"} selection produces a pollen calendar using weekly averages during the year. The \code{period} argument will be \code{"daily"} by default. #' @param method.classes A \code{character} string specifying the method to define the classes used for classifying the average pollen concentrations to generate the pollen calendar. This argument is valid only for the \code{"heatplot"} method. The implemented methods for defining classes are: \code{"exponential"} and \code{"custom"}. The \code{method.classes} argument will be \code{"exponential"} by default. More detailed information about the methods to classify the average pollen concentrations to generate the pollen calendar may be consulted in \strong{Details}. #' @param n.classes A \code{numeric} (\code{integer}) value specifying the number of classes that will be used for classifying the average pollen concentrations to generate the pollen calendar. This argument is valid only for the \code{"heatplot"} method and the classification by \code{method.classes = "custom"}. The \code{n.classes} argument will be \code{5} by default. 
More detailed information about the methods to classify the average pollen concentrations to generate the pollen calendar may be consulted in \strong{Details}. #' @param classes A \code{numeric} vector specifying the thresholds established to define the different classes that will be used for classifying the average pollen concentrations to generate the pollen calendar. This argument is valid only for the \code{"heatplot"} method and the classification by \code{method.classes = "custom"}. The \code{classes} argument will be \code{c(25, 50, 100, 300)} by default. The number of specified classes must be equal to \code{n.classes - 1} because the maximum threshold will be automatically specified by the maximum value. More detailed information about the methods to classify the average pollen concentrations to generate the pollen calendar may be consulted in \strong{Details}. #' @param color A \code{character} string specifying the color used to generate the graph showing the pollen calendar. This argument is valid only for the \code{"heatplot"} method. The implemented color palettes to generate the pollen calendar are: \code{"green"}, \code{"red"}, \code{"blue"}, \code{"purple"} or \code{"black"}. The \code{color} argument will be \code{"green"} by default. #' @param interpolation A \code{logical} value. If \code{FALSE} the interpolation of the pollen data is not applied. If \code{TRUE} an interpolation of the pollen series will be applied to complete the gaps before the calculation of the pollen calendar. The \code{interpolation} argument will be \code{TRUE} by default. More detailed information about the interpolation method may be consulted in \strong{Details}. #' @param int.method A \code{character} string specifying the method selected to apply the interpolation in order to complete the pollen series. The implemented methods that may be used are: \code{"lineal"}, \code{"movingmean"}, \code{"spline"} or \code{"tseries"}. 
The \code{int.method} argument will be \code{"lineal"} by default. #' @param na.remove A \code{logical} value specifying whether \code{NA} values must be removed for the pollen calendar or not. \code{na.remove = TRUE} by default. #' @param legendname A \code{character} string specifying the title of the legend. By default it is \code{"Pollen grains / m3"}. #' @param result A \code{character} string specifying the output of the function. The implemented outputs that may be obtained are: \code{"plot"} and \code{"table"}. The \code{result} argument will be \code{"plot"} by default. #' @param export.plot A \code{logical} value specifying whether a plot with the pollen calendar saved in the working directory is required or not. If \code{FALSE} graphical results will only be displayed in the active graphics window. If \code{TRUE} graphical results will be displayed in the active graphics window and a \emph{pdf} file will also be saved within the \emph{plot_AeRobiology} directory automatically created in the working directory. The \code{export.plot} argument will be \code{FALSE} by default. #' @param export.format A \code{character} string specifying the format selected to save the pollen calendar plot. The implemented formats that may be used are: \code{"pdf"} and \code{"png"}. The \code{export.format} argument will be \code{"pdf"} by default. #' @param ... Other additional arguments may be used to customize the exportation of the plots using \code{"pdf"} or \code{"png"} files, and therefore arguments from the \emph{pdf} and \emph{png} functions (\pkg{grDevices} package) may be implemented. 
For example, for \emph{pdf} files the user may customize the arguments: \code{width}, \code{height}, \code{family}, \code{title}, \code{fonts}, \code{paper}, \code{bg}, \code{fg}, \code{pointsize...}; and for \emph{png} files the user may customize the arguments: \code{width}, \code{height}, \code{units}, \code{pointsize}, \code{bg}, \code{res...} #' @details This function allows the user to calculate and generate the pollen calendar using three different methods, which are described below. The pollen calendar will be calculated and generated only for the period specified by the user through the \code{y.start} and \code{y.end} arguments, and for the number of the most abundant pollen types specified by the user through the \code{n.types} argument. The most abundant pollen types will be selected according to the highest average annual amounts of pollen registered by the pollen types during the considered period. #' \itemize{ #' \item \code{"heatplot"} method. This pollen calendar is constructed based on the daily or weekly average of pollen concentrations, depending on the preference of the user, who may select \code{"daily"} or \code{"weekly"} as the \code{period} argument. Then, these averages may be classified in different categories following different methods selected by the user according to the \code{method.classes} argument. If \code{method.classes = "exponential"} the classification based on exponential classes proposed by \emph{Stix and Ferretti (1974)} will be applied, which has been commonly used in aerobiology for the generation of pollen calendars. The classification based on the exponential method considers 11 classes (\code{1-2, 3-5, 6-11, 12-24, 25-49, 50-99, 100-199, 200-399, 400-799, 800-1600, >1600}). An example of this pollen calendar may be consulted in \emph{Rojo et al. (2016)}. This method to design pollen calendars is an adaptation of the pollen calendar proposed by \emph{Spieksma (1991)}, who considered 10-day periods instead of daily or weekly periods. 
Otherwise, if \code{method.classes = "custom"} the user may customize the classification according to the number of classes selected (\code{n.classes} argument) and the thresholds of the pollen concentrations used to define the classes (\code{classes} argument). Average values below the level of the \code{th.pollen} argument will be removed from the pollen calendar. #' \item \code{"phenological"} method. This pollen calendar is based on the phenological definition of the pollen season and adapted from the methodology proposed by \emph{Werchan et al. (2018)}. After obtaining the daily average pollen concentrations for the most abundant pollen types, different pollination periods are calculated using the daily averages. The main pollination period is calculated based on the percentage defined by the \code{perc1} argument (selected by the user, 80\% by default) of the annual total pollen. For example, if \code{perc1 = 80} the beginning of the high season is marked when 10\% of the annual value is reached and the end is selected when 90\% is reached. In the case of the early/late pollination, a total of the percentage defined by the \code{perc2} argument (selected by the user, 99\% by default) of the annual total pollen will be registered during this period. For this kind of pollen calendar the \code{th.pollen} argument will define the "possible occurrence" period as adapted from \emph{Werchan et al. (2018)}, considering the entire period between the first and the last day on which this pollen level is reached. Alternatively, the average may be carried out after defining the pollen seasons using \code{average.method = "avg_after"} (instead of \code{"avg_before"}, the default). \code{"avg_after"} determines the pollen season for all years and all pollen types, and then an average for circular data is calculated from the start-dates and end-dates. #' \item \code{"violinplot"} method. 
This pollen calendar is based on the pollen intensity and adapted from the pollen calendar published by \emph{O'Rourke (1990)}. First, the daily averages of the pollen concentrations are calculated and then these averages are represented using \emph{violin plot} graphs. The shape of the \emph{violin plot} displays the pollen intensity of the pollen types in a relative way, i.e. the values are calculated as relative measurements with respect to the most abundant pollen type in annual amounts. Therefore, this pollen calendar shows a relative comparison between the pollen intensity of the pollen types, but without scales and units. Average values below the level of the \code{th.pollen} argument will be removed from the pollen calendar. #' } #' Pollen time series frequently have gaps with no data, which could be a problem for the calculation of specific methods for defining the pollen season, even producing incorrect results. For this reason, by default a linear interpolation will be carried out to complete these gaps before generating the pollen calendar. For more information see the \code{\link{interpollen}} function. #' @return This function returns different results:\cr #' \code{plot} in the active graphics window displaying the pollen calendar generated by the user when \code{result = "plot"}. This plot may be included in an object by assignment operators.\cr #' \code{data.frame} including the daily or weekly average pollen concentrations (according to the selection of the user) used to generate the pollen calendar. 
This \code{data.frame} will be returned when \code{result = "table"}.\cr #' If \code{export.plot = TRUE} this plot displaying the pollen calendar will also be exported as a file within the \emph{plot_AeRobiology} directory created in the working directory.\cr #' If \code{export.plot = TRUE} and \code{export.format = pdf} a \emph{pdf} file of the pollen calendar will be saved within the \emph{plot_AeRobiology} directory created in the working directory. Additional characteristics may be incorporated to the exportation as a \emph{pdf} file (see \pkg{grDevices} package).\cr #' If \code{export.plot = TRUE} and \code{export.format = png} a \emph{png} file of the pollen calendar will be saved within the \emph{plot_AeRobiology} directory created in the working directory. Additional characteristics may be incorporated to the exportation as a \emph{png} file (see \pkg{grDevices} package). #' @references O'Rourke, M.K., 1990. Comparative pollen calendars from Tucson, Arizona: Durham vs. Burkard samplers. \emph{Aerobiologia}, 6(2), pp. 136-140. #' @references Rojo, J., Rapp, A., Lara, B., Sabariego, S., Fernandez-Gonzalez, F. and Perez-Badia, R., 2016. Characterisation of the airborne pollen spectrum in Guadalajara (central Spain) and estimation of the potential allergy risk. \emph{Environmental Monitoring and Assessment}, 188(3), p. 130. #' @references Spieksma, F.T.M., 1991. \emph{Regional European pollen calendars. Allergenic pollen and pollinosis in Europe}, pp. 49-65. #' @references Stix, E. and Ferretti, M.L., 1974. \emph{Pollen calendars of three locations in Western Germany. Atlas European des Pollens Allergisants}, pp. 85-94. #' @references Werchan, M., Werchan, B. and Bergmann, K.C., 2018. German pollen calendar 4.0 - update based on 2011-2016 pollen data. \emph{Allergo Journal International}, 27, pp. 69-71. 
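#' @examples \dontrun{
#' # Further usage sketches for the methods described in Details. The argument
#' # values below are illustrative only (not recommendations) and assume the
#' # bundled munich_pollen dataset:
#' data("munich_pollen")
#' # phenological calendar, averaging after the pollen-season calculation
#' pollen_calendar(munich_pollen, method = "phenological",
#'                 average.method = "avg_after", interpolation = FALSE)
#' # weekly heatplot with a custom classification, returning the weekly
#' # averages as a data.frame instead of the plot
#' pollen_calendar(munich_pollen, method = "heatplot", period = "weekly",
#'                 method.classes = "custom", n.classes = 5,
#'                 classes = c(25, 50, 100, 300), result = "table",
#'                 interpolation = FALSE)
#' }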
#' @seealso \code{\link{interpollen}}, \code{\link{calculate_ps}} #' @examples data("munich_pollen") #' @examples pollen_calendar(munich_pollen, method = "heatplot", interpolation = FALSE) #' @importFrom dplyr ungroup group_by mutate summarise_all #' @importFrom circular circular mean.circular #' @importFrom ggplot2 aes coord_flip element_text geom_tile geom_violin ggplot labs scale_fill_manual scale_x_continuous scale_x_date scale_y_date theme theme_bw theme_dark #' @importFrom graphics abline barplot legend lines mtext par plot #' @importFrom grDevices colorRampPalette dev.off pdf png recordPlot #' @importFrom lubridate is.POSIXt #' @importFrom scales date_format #' @importFrom stats na.omit #' @importFrom tidyr gather %>% #' @export pollen_calendar <- function (data, method = "heatplot", # "phenological" "violinplot" "heatplot" n.types = 15, start.month = 1, y.start = NULL, y.end = NULL, perc1 = 80, perc2 = 99, th.pollen = 1, average.method = "avg_before", # "avg_before" "avg_after" period = "daily", # "daily" "weekly" method.classes = "exponential", # "custom" "exponential" n.classes = 5, classes = c(25,50,100,300), color = "green", # "red" "green" "blue" "purple" "black" interpolation = TRUE, int.method = "lineal", na.remove = TRUE, result = "plot", export.plot = FALSE, export.format = "pdf", legendname = "Pollen grains / m3",...){ ############################################# CHECK THE ARGUMENTS ############################# if(export.plot == TRUE){ifelse(!dir.exists(file.path("plot_AeRobiology")), dir.create(file.path("plot_AeRobiology")), FALSE)} data<-data.frame(data) if(class(data) != "data.frame") stop ("Please include a data.frame: first column with date, and the rest with pollen types") if(method != "phenological" & method != "violinplot" & method != "heatplot") stop ("Please 'method' argument only accept values: 'phenological', 'violinplot' or 'heatplot'") if(class(n.types) != "numeric") stop ("Please include only numeric values for 'n.types' argument 
indicating the number of pollen types which will be displayed") if(class(start.month) != "numeric" | ((start.month %in% c(1,2,3,4,5,6,7,8,9,10,11,12)) == FALSE)) stop ("Please include only numeric integer values between 1-12 for 'start.month' argument indicating the start month for the pollen calendar") if(average.method == "avg_after" & start.month != 1) stop ("'avg_after' can only be calculated when the 'start.month' argument is equal to 1") if(class(y.start) != "numeric" & !is.null(y.start)) stop ("Please include only numeric values for y.start argument indicating the start year considered") if(class(y.end) != "numeric" & !is.null(y.end)) stop ("Please include only numeric values for 'y.end' argument indicating the end year considered") if(class(perc1) != "numeric" | perc1 < 0 | perc1 > 100) stop ("Please include only numeric values between 0-100 for 'perc1' argument") if(class(perc2) != "numeric" | perc2 < 0 | perc2 > 100) stop ("Please include only numeric values between 0-100 for 'perc2' argument") if(perc1 > perc2) stop ("'perc1' must be lower than 'perc2'") if(class(th.pollen) != "numeric") stop ("Please include only numeric values for 'th.pollen' argument indicating the minimum averaged pollen concentration to be considered") if(average.method != "avg_before" & average.method != "avg_after") stop ("Please average.method only accept values: 'avg_before' or 'avg_after'") if(period != "daily" & period != "weekly") stop ("Please period only accept values: 'daily' or 'weekly'") if(method.classes != "custom" & method.classes != "exponential") stop ("Please method.classes only accept values: 'custom' or 'exponential'") if(class(n.classes) != "numeric") stop ("Please include only numeric values for n.classes argument indicating the number of classes which will be used for the plots") if(class(classes) != "numeric") stop ("Please include only numeric values for classes argument indicating the thresholds used for classifying the average pollen concentration for the plots") 
if ((length(classes) + 1) != n.classes) stop ("The number of specified classes must be equal to 'n.classes - 1' because the maximum threshold will be automatically specified by the maximum value") if(color != "red" & color != "blue" & color != "green" & color != "purple" & color != "black") stop ("Please 'color' argument only accept values: 'red', 'blue', 'green', 'purple' or 'black'") if(class(interpolation) != "logical") stop ("Please include only logical values for interpolation argument") if(int.method != "lineal" & int.method != "movingmean" & int.method != "spline" & int.method != "tseries") stop ("Please int.method only accept values: 'lineal', 'movingmean', 'spline' or 'tseries'") if(result != "plot" & result != "table") stop ("Please result only accept values: 'plot' or 'table'") if(class(export.plot) != "logical") stop ("Please include only logical values for export.plot argument") if(class(data[,1])[1]!="Date" & !is.POSIXt(data[,1])) {stop("Please the first column of your data must be the date in 'Date' format")} data[,1]<-as.Date(data[,1]) if(ncol(data) - 1 == 1) stop ("The number of pollen types must be at least 2") if(ncol(data)-1 < n.types) { n.types = ncol(data)-1 warning(paste("WARNING: the number of columns is smaller than 'n.types' argument. 
'n.types' adjusted to", n.types))} ############################################# MANAGEMENT OF THE DATABASE ############################# average_values<-data.frame() perc1 <- 100 - perc1; perc2 <- 100 - perc2 if(interpolation == TRUE){data <- interpollen(data, method = int.method, plot = F)} jd.month <- c(1, 32, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335) seasons <- unique(as.numeric(strftime(data[, 1], "%Y"))) if(is.null(y.start)){y.start <- min(seasons)}; if(is.null(y.end)){y.end <- max(seasons)} data <- data[which(as.numeric(strftime(data[ ,1], "%Y")) >= y.start & as.numeric(strftime(data[ ,1], "%Y")) <= y.end), ] data <- data.frame(date = data[ ,1], year = as.numeric(strftime(data[ ,1], "%Y")), jd = as.numeric(strftime(data[ ,1], "%j")), week = as.numeric(strftime(data[ ,1], "%W")), data[ ,-1]) #types <- ddply(data[ ,-c(1,3,4)], "year", function(x) colSums(x[-1], na.rm = T)) [-1] %>% # apply(., 2, function(x) mean(x, na.rm = T)) %>% .[order(., decreasing = TRUE)] %>% # names(.) %>% # .[1:n.types] types <- data.frame(data[ ,-c(1,3,4)] %>% group_by(year) %>% summarise_all(sum, na.rm = TRUE))[-1] %>% apply(., 2, function(x) mean(x, na.rm = T)) %>% .[order(., decreasing = TRUE)] %>% names(.) %>% .[1:n.types] ############################################# ORDER ###################################################### data.or <- data[ ,which(colnames(data) %in% c("jd", types))] #data.or <- ddply(data.or, "jd", function(x) colMeans(x[-1], na.rm = T)) %>% #.[ ,colnames(.) 
%in% c("jd", types)] data.or <- data.frame(data.or %>% group_by(jd) %>% summarise_all(mean, na.rm = T)) data.or <- data.or[which(data.or$jd != 366), ] data.or[is.na(data.or)] <- NA #data <- data[-nrow(data), ] pc.df1 <- data.frame(type = NA, start = NA, end = NA) pc.df2 <- data.frame(type = NA, start = NA, end = NA) pc.df3 <- data.frame(type = NA, start = NA, end = NA) for (t in 1:length(types)){ pollen.s <- na.omit(data.or[ ,which(colnames(data.or) %in% c("jd",types[t]))]) # Pollen data without NAs pollen.s$acum1 <- NA for (i in 1:nrow(pollen.s)) { if (i == 1) { pollen.s$acum1[i] = pollen.s[i, types[t]] } else { pollen.s$acum1[i] = pollen.s[i, types[t]] + pollen.s$acum1[i-1] } } pollen.s$acum2 <- NA for (i in nrow(pollen.s):1) { if (i == nrow(pollen.s)) { pollen.s$acum2[i] = pollen.s[i, types[t]] } else { pollen.s$acum2[i] = pollen.s[i, types[t]] + pollen.s$acum2[i+1] } } lim1 <- sum(pollen.s[types[t]])/100*(perc1/2) lim2 <- sum(pollen.s[types[t]])/100*(perc2/2) pc.df1[t,"type"] <- types[t] pc.df1[t,"start"] <- pollen.s$jd[which(pollen.s$acum1 >= lim1)][1] pc.df1[t,"end"] <- pollen.s$jd[which(pollen.s$acum2 < lim1)][1] - 1; pc.df1[t,"end"] <- pc.df1[t,"end"] - pc.df1[t,"start"] + 1 pc.df2[t,"type"] <- types[t] pc.df2[t,"start"] <- pollen.s$jd[which(pollen.s$acum1 >= lim2)][1] pc.df2[t,"end"] <- pollen.s$jd[which(pollen.s$acum2 < lim2)][1] - 1; pc.df2[t,"end"] <- pc.df2[t,"end"] - pc.df2[t,"start"] + 1 pc.df3[t,"type"] <- types[t] pc.df3[t,"start"] <- pollen.s$jd[which(pollen.s[ ,types[t]] >= th.pollen)][1] pc.df3[t,"end"] <- pollen.s$jd[which(pollen.s[ ,types[t]] >= th.pollen)][length(pollen.s$jd[which(pollen.s[ ,types[t]] >= th.pollen)])]; pc.df3[t,"end"] <- pc.df3[t,"end"] - pc.df3[t,"start"] + 1 } type.or <- pc.df2$type[order(pc.df2$start, decreasing = T)] ############################################# METHODS OF CALENDAR ########################### if (method == "phenological" & average.method == "avg_before"){ data <- data[ ,which(colnames(data) %in% c("jd", 
types))] #data <- ddply(data, "jd", function(x) colMeans(x[-1], na.rm = T)) %>% # .[ ,colnames(.) %in% c("jd", types)] data <- data.frame(data %>% group_by(jd) %>% summarise_all(mean, na.rm = T)) data <- data[which(data$jd != 366), ] if(start.month != 1){data <- rbind(data[which(data$jd == jd.month[start.month]):nrow(data), ], data[1:(which(data$jd == jd.month[start.month])-1), ])} data$jd1 <- 1:nrow(data) data[is.na(data)] <- NA #data <- data[-nrow(data), ] pc.df1 <- data.frame(type = NA, start = NA, end = NA) pc.df2 <- data.frame(type = NA, start = NA, end = NA) pc.df3 <- data.frame(type = NA, start = NA, end = NA) for (t in 1:length(types)){ pollen.s <- na.omit(data[ ,which(colnames(data) %in% c("jd1",types[t]))]) # Pollen data without NAs pollen.s$acum1 <- NA for (i in 1:nrow(pollen.s)) { if (i == 1) { pollen.s$acum1[i] = pollen.s[i, types[t]] } else { pollen.s$acum1[i] = pollen.s[i, types[t]] + pollen.s$acum1[i-1] } } pollen.s$acum2 <- NA for (i in nrow(pollen.s):1) { if (i == nrow(pollen.s)) { pollen.s$acum2[i] = pollen.s[i, types[t]] } else { pollen.s$acum2[i] = pollen.s[i, types[t]] + pollen.s$acum2[i+1] } } lim1 <- sum(pollen.s[types[t]])/100*(perc1/2) lim2 <- sum(pollen.s[types[t]])/100*(perc2/2) pc.df1[t,"type"] <- types[t] pc.df1[t,"start"] <- pollen.s$jd1[which(pollen.s$acum1 >= lim1)][1] pc.df1[t,"end"] <- pollen.s$jd1[which(pollen.s$acum2 < lim1)][1] - 1; pc.df1[t,"end"] <- pc.df1[t,"end"] - pc.df1[t,"start"] + 1 pc.df2[t,"type"] <- types[t] pc.df2[t,"start"] <- pollen.s$jd1[which(pollen.s$acum1 >= lim2)][1] pc.df2[t,"end"] <- pollen.s$jd1[which(pollen.s$acum2 < lim2)][1] - 1; pc.df2[t,"end"] <- pc.df2[t,"end"] - pc.df2[t,"start"] + 1 pc.df3[t,"type"] <- types[t] pc.df3[t,"start"] <- pollen.s$jd1[which(pollen.s[ ,types[t]] >= th.pollen)][1] pc.df3[t,"end"] <- pollen.s$jd1[which(pollen.s[ ,types[t]] >= th.pollen)][length(pollen.s$jd[which(pollen.s[ ,types[t]] >= th.pollen)])]; pc.df3[t,"end"] <- pc.df3[t,"end"] - pc.df3[t,"start"] + 1 } } 
###################################################################################################### if (method == "phenological" & average.method == "avg_after"){ data <- data[ ,which(colnames(data) %in% c("date", types))] pc.df1 <- data.frame(type = NA, start = NA, end = NA) pc.df2 <- data.frame(type = NA, start = NA, end = NA) pc.df3 <- data.frame(type = NA, start = NA, end = NA) pollen.ps1 <- calculate_ps(data = data, method = "percentage", perc = (100 - perc1), plot = F, export.result = F, interpolation = F) pollen.ps2 <- calculate_ps(data = data, method = "percentage", perc = (100 - perc2), plot = F, export.result = F, interpolation = F) list.results <- list() for (t in 1:length(types)){ seasons <- unique(as.numeric(strftime(data[, 1], "%Y"))) st.ps <-NA; mx.ps <- NA; en.ps <- NA result.ps <- data.frame(seasons, st.jd = NA, en.jd = NA) for (j in 1:length(seasons)) { tryCatch({ ye <- seasons[j] pollen.s <- na.omit(data[which(as.numeric(strftime(data[, 1], "%Y"))==ye), which(colnames(data) %in% c("date",types[t]))]) # Pollen data without NAs pollen.s$jdays <- as.numeric(strftime(pollen.s[, 1], "%j")) st.ps <- pollen.s$jdays[which(pollen.s[ ,types[t]] > 0)][1] en.ps <- pollen.s$jdays[which(pollen.s[ ,types[t]] > 0)][length(pollen.s$jdays[which(pollen.s[ ,types[t]] > 0)])] result.ps[j,"st.jd"] <- pollen.s$jdays[pollen.s$jdays == st.ps] # Start-date of PS (JD) result.ps[j,"en.jd"] <- pollen.s$jdays[pollen.s$jdays == en.ps] # End-date of PS (JD) print(paste(ye, types[t])) }, error = function(e){ print(paste("Year", ye, types[t], ". 
Try to check the aerobiological data for this year"))}) } list.results[[types[t]]] <- result.ps } nam.list <- names(list.results) for(l in 1:length(nam.list)){ if(l == 1) {df.results <- data.frame(type = nam.list[l], list.results[[l]]) } else { df.results <- rbind(df.results, data.frame(type = nam.list[l], list.results[[l]])) } } pollen.ps3 <- df.results for (t in 1:length(types)){ tryCatch({ pc.df1[t,"type"] <- types[t] st <- na.omit(pollen.ps1$st.jd[pollen.ps1$type == types[t]]) * 360 / 365 pc.df1[t,"start"] <- round((as.numeric(mean.circular(circular(st, units = "degrees")))) * 365 / 360); if(pc.df1[t,"start"] <= 0) {pc.df1[t,"start"] <- pc.df1[t,"start"] + 365} en <- na.omit(pollen.ps1$en.jd[pollen.ps1$type == types[t]]) * 360 / 365 pc.df1[t,"end"] <- round((as.numeric(mean.circular(circular(en, units = "degrees")))) * 365 / 360); if(pc.df1[t,"end"] <= 0) {pc.df1[t,"end"] <- pc.df1[t,"end"] + 365}; pc.df1[t,"end"] <- pc.df1[t,"end"] - pc.df1[t,"start"] + 1 pc.df2[t,"type"] <- types[t] st <- na.omit(pollen.ps2$st.jd[pollen.ps2$type == types[t]]) * 360 / 365 pc.df2[t,"start"] <- round((as.numeric(mean.circular(circular(st, units = "degrees")))) * 365 / 360); if(pc.df2[t,"start"] <= 0) {pc.df2[t,"start"] <- pc.df2[t,"start"] + 365} en <- na.omit(pollen.ps2$en.jd[pollen.ps2$type == types[t]]) * 360 / 365 pc.df2[t,"end"] <- round((as.numeric(mean.circular(circular(en, units = "degrees")))) * 365 / 360); if(pc.df2[t,"end"] <= 0) {pc.df2[t,"end"] <- pc.df2[t,"end"] + 365}; pc.df2[t,"end"] <- pc.df2[t,"end"] - pc.df2[t,"start"] + 1 pc.df3[t,"type"] <- types[t] st <- na.omit(pollen.ps3$st.jd[pollen.ps3$type == types[t]]) * 360 / 365 pc.df3[t,"start"] <- round((as.numeric(mean.circular(circular(st, units = "degrees")))) * 365 / 360); if(pc.df3[t,"start"] <= 0) {pc.df3[t,"start"] <- pc.df3[t,"start"] + 365} en <- na.omit(pollen.ps3$en.jd[pollen.ps3$type == types[t]]) * 360 / 365 pc.df3[t,"end"] <- round((as.numeric(mean.circular(circular(en, units = "degrees")))) * 365 / 
360); if(pc.df3[t,"end"] <= 0) {pc.df3[t,"end"] <- pc.df3[t,"end"] + 365}; pc.df3[t,"end"] <- pc.df3[t,"end"] - pc.df3[t,"start"] + 1 }, error = function(e){}) } } ###################################################################################################### if (method == "violinplot" | (method == "heatplot" & period == "daily")){ data <- data[ ,which(colnames(data) %in% c("jd", types))] #data <- ddply(data, "jd", function(x) colMeans(x[-1], na.rm = T)) %>% # .[ ,colnames(.) %in% c("jd", types)] data <- data.frame(data %>% group_by(jd) %>% summarise_all(mean, na.rm = T)) data <- data[which(data$jd != 366), ] if(start.month != 1){data <- rbind(data[which(data$jd == jd.month[start.month]):nrow(data), ], data[1:(which(data$jd == jd.month[start.month])-1), ])} if(start.month != 1){data$jd <- seq(as.Date(strptime(paste0(as.character(data$jd),"-2017"), format = "%j-%Y"))[1], as.Date(strptime(paste0(as.character(data$jd),"-2018"), format = "%j-%Y"))[nrow(data)], by = "days") } else { data$jd <- seq(as.Date(strptime(paste0(as.character(data$jd),"-2017"), format = "%j-%Y"))[1], as.Date(strptime(paste0(as.character(data$jd),"-2017"), format = "%j-%Y"))[nrow(data)], by = "days") } #violin_data <- gather(data, key = variable, value = value, -jd, na.rm = TRUE) violin_data <- gather(data, key = variable, value = value, -jd) violin_data$value[violin_data$value < th.pollen] <- NA #if (method == "heatplot" & period == "daily"){violin_data <- na.omit(violin_data)} # create a weight variable for each variable. dplyr will make this easy. violin_data <- violin_data %>% group_by(value) %>% mutate(wt = value / max(colSums(data[,-1], na.rm = TRUE))) %>% ungroup() data[ ,1] <- as.numeric(strftime(data[ ,1], "%j")) } ###################################################################################################### if (method == "heatplot" & period == "weekly"){ #data <- ddply(data[ ,-c(1:3)], "week", function(x) colMeans(x[-1], na.rm = TRUE)) %>% # .[ ,colnames(.) 
%in% c("week", types)] data <- data.frame(data[ ,-c(1:3)] %>% group_by(week) %>% summarise_all(mean, na.rm = T)) %>% .[ ,colnames(.) %in% c("week", types)] heat.data <- gather(data, key = variable, value = value, -week) heat.data$value[heat.data$value < th.pollen] <- NA heat.data <- na.omit(heat.data) heat.data$variable <- factor(heat.data$variable, levels = as.character(pc.df2$type), ordered = TRUE) } ################################################################# PLOTS ########################## if (method == "phenological"){ #ORDER pc.df3$type <- factor(pc.df3$type, levels = as.character(type.or), ordered = T); pc.df3 <- pc.df3[order(pc.df3$type), ] pc.df2$type <- factor(pc.df2$type, levels = as.character(type.or), ordered = T); pc.df2 <- pc.df2[order(pc.df2$type), ] pc.df1$type <- factor(pc.df1$type, levels = as.character(type.or), ordered = T); pc.df1 <- pc.df1[order(pc.df1$type), ] pc.df1 <- pc.df1[which(pc.df1$type %in% pc.df2$type & pc.df1$type %in% pc.df3$type), ] pc.df2 <- pc.df2[which(pc.df2$type %in% pc.df1$type & pc.df2$type %in% pc.df3$type), ] pc.df3 <- pc.df3[which(pc.df3$type %in% pc.df1$type & pc.df3$type %in% pc.df2$type), ] #PLOT par(mar = c(5,7,3,1), xpd = F) barplot(`colnames<-`(t(pc.df3[-1]), pc.df3[,1]), width = 1, space = 0, horiz = T, col=c("transparent","yellow"), border = NA, las = 2, xlim = c(1, 365), axes = F, main = paste0("Pollen calendar (Period ",y.start,"-",y.end, ")"), cex.main = 1.8, font.axis = 3) barplot(`colnames<-`(t(pc.df2[-1]), pc.df2[,1]), width = 1, space = 0, horiz = T, col=c("transparent","orange"), border = NA, add = T, las = 2, axes = F, font.axis = 3) barplot(`colnames<-`(t(pc.df1[-1]), pc.df1[,1]), width = 1, space = 0, horiz = T, col=c("transparent","red"), border = NA, add = T, las = 2, axes = F, font.axis = 3) abline (h=0:n.types, lwd = 1, col = "gray") lines(x = c(1,1) , y = c(0,nrow(pc.df1)), lwd = 3, col = 1); lines(x = c(365,365) , y = c(0,nrow(pc.df1)), lwd = 3, col = 1) lines(x = c(1,365) , y = c(0,0), 
lwd = 2, col = 1); lines(x = c(1,365) , y = c(nrow(pc.df1),nrow(pc.df1)), lwd = 2, col = 1) nam.mo <- c("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec") len.mo <- c(31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31) if(start.month == 1){ for(m in c(start.month:12,1:(start.month-1))){ if(m <= 11) {lines(x = c(sum(len.mo[1:m]),sum(len.mo[1:m])) , y = c(-1,n.types), lwd = 1, col = "gray", lty = 2)} if(m <= 11) {lines(x = c(sum(len.mo[1:m]),sum(len.mo[1:m])) , y = c(-1,0), lwd = 2, col = "black", lty = 1)} if(m == 1) {mtext (side = 1, adj = 0.028, text = nam.mo[m], cex = 1.2)} if(m > 1) {mtext (side = 1, adj = 0.028 + (0.086*(m-1)), text = nam.mo[m], cex = 1.2)} } } else { cont <- as.numeric() for(m in c(start.month:12,1:(start.month-1))){ cont <-c(cont, m) if(length(cont) <= 11) {lines(x = c(sum(len.mo[cont]),sum(len.mo[cont])) , y = c(-1,n.types), lwd = 1, col = "gray", lty = 2)} if(length(cont) <= 11) {lines(x = c(sum(len.mo[cont]),sum(len.mo[cont])) , y = c(-1,0), lwd = 2, col = "black", lty = 1)} if(length(cont) == 1) {mtext (side = 1, adj = 0.028, text = nam.mo[m], cex = 1.2)} if(length(cont) > 1) {mtext (side = 1, adj = 0.028 + (0.086*(length(cont)-1)), text = nam.mo[m], cex = 1.2)} } } # legend("bottomleft", legend = c("Main pollination period", "Early/late pollination", "Possible occurrence"), col = c("red", "orange", "yellow"), pch = 15, bty = "n", cex = 1, x.intersp = 0.6, y.intersp = 0.5, inset = c(0.55,-0.2), xpd = TRUE) plot.calendar <- recordPlot() } ###################################################################################################### if (method == "violinplot"){ violin_data$variable <- factor(violin_data$variable, levels = as.character(type.or), ordered = T) violin_data <- data.frame(violin_data) if (na.remove == TRUE) {violin_data <- na.omit(violin_data)} plot.calendar <- ggplot(violin_data) + aes(x = variable, y = jd, weight = wt) + geom_violin(fill = "yellow", colour = "yellow", size = 1)+ 
coord_flip()+ theme_dark()+ labs(y = "", x = "", title = paste0("Pollen calendar (Period ",y.start,"-",y.end,")"))+ scale_y_date(limits = c(min(violin_data[ ,1])[1], max(violin_data[ ,1])[1]), breaks = "1 month", labels = date_format("%b"))+ theme(axis.text.y = element_text(size = 12, face = "bold.italic"), axis.text.x = element_text(size = 14, face = "bold"), title = element_text(size = 14, face = "bold")) } ###################################################################################################### if (method == "heatplot" & period == "daily"){ violin_data$variable <- factor(violin_data$variable, levels = as.character(type.or), ordered = T) if (color == "red") {colmin = "#fc9272"; colmax = "#67000d"} if (color == "green") {colmin = "#c7e9c0"; colmax = "#00441b"} if (color == "black") {colmin = "#d9d9d9"; colmax = "#000000"} if (color == "blue") {colmin = "#d0d1e6"; colmax = "#023858"} if (color == "purple") {colmin = "#dadaeb"; colmax = "#3f007d"} if(method.classes == "custom"){ classes = c(0, classes, max(violin_data$value, na.rm = TRUE)+100) lab.classes <- as.character() for (l in 2:length(classes)){ if(l == 2) {lab.classes <- c(paste0("<",classes[l]))} if(l != 2 & l != length(classes)) {lab.classes <- c(lab.classes, paste0(classes[l-1],"-",classes[l]))} if(l == length(classes)) {lab.classes <- c(lab.classes, paste0(">",classes[l-1]))} } } if(method.classes == "exponential"){ if (max(violin_data$value, na.rm = TRUE) > 1600 ) {n.classes = 11 classes = c(0,2,5,11,24,49,99,199,399,799,1600,(max(violin_data$value, na.rm = TRUE)+100)) lab.classes = c("1-2","3-5","6-11","12-24","25-49", "50-99", "100-199", "200-399", "400-799", "800-1600", ">1600")} if (max(violin_data$value, na.rm = TRUE) <= 1600 ) {n.classes = 10 classes = c(0,2,5,11,24,49,99,199,399,799,1600) lab.classes = c("1-2","3-5","6-11","12-24","25-49", "50-99", "100-199", "200-399", "400-799", "800-1600")} } #violin_data <- data.frame(violin_data) if (na.remove == TRUE) {violin_data <- 
na.omit(violin_data)} plot.calendar <- ggplot(violin_data, aes(jd, variable)) + geom_tile(aes(fill = cut(value, breaks = classes, labels = lab.classes)))+ scale_fill_manual(drop=FALSE, values=colorRampPalette(c(colmin, colmax))(n.classes), name = legendname, na.translate = F)+ theme_bw()+ labs(y = "", x = "", title = paste0("Pollen calendar (Period ",y.start,"-",y.end,")"))+ #scale_x_date(limits = c(min(violin_data[ ,1], na.rm = T)[1], max(violin_data[ ,1], na.rm = T)[1]), breaks = "1 month", labels = date_format("%b"))+ scale_x_date(breaks = "1 month", labels = date_format("%b"))+ theme(axis.text.y = element_text(size = 12, face = "bold.italic"), axis.text.x = element_text(size = 14, face = "bold"), title = element_text(size = 14, face = "bold")) } ###################################################################################################### if (method == "heatplot" & period == "weekly"){ heat.data$variable <- factor(heat.data$variable, levels = as.character(type.or), ordered = T) if (color == "red") {colmin = "#fc9272"; colmax = "#67000d"} if (color == "green") {colmin = "#c7e9c0"; colmax = "#00441b"} if (color == "black") {colmin = "#d9d9d9"; colmax = "#000000"} if (color == "blue") {colmin = "#d0d1e6"; colmax = "#023858"} if (color == "purple") {colmin = "#dadaeb"; colmax = "#3f007d"} if(method.classes == "custom"){ classes = c(0, classes, max(heat.data$value, na.rm = TRUE)+100) lab.classes <- as.character() for (l in 2:length(classes)){ if(l == 2) {lab.classes <- c(paste0("<",classes[l]))} if(l != 2 & l != length(classes)) {lab.classes <- c(lab.classes, paste0(classes[l-1],"-",classes[l]))} if(l == length(classes)) {lab.classes <- c(lab.classes, paste0(">",classes[l-1]))} } } if(method.classes == "exponential"){ if (max(heat.data$value, na.rm = TRUE) > 1600 ) {n.classes = 11 classes = c(0,2,5,11,24,49,99,199,399,799,1600,(max(heat.data$value, na.rm = TRUE)+100)) lab.classes = c("1-2","3-5","6-11","12-24","25-49", "50-99", "100-199", "200-399", 
"400-799", "800-1600", ">1600")} if (max(heat.data$value, na.rm = TRUE) <= 1600 ) {n.classes = 10 classes = c(0,2,5,11,24,49,99,199,399,799,1600) lab.classes = c("1-2","3-5","6-11","12-24","25-49", "50-99", "100-199", "200-399", "400-799", "800-1600")} } if (na.remove == TRUE) {heat.data <- na.omit(heat.data)} plot.calendar <- ggplot(heat.data, aes(week, variable)) + geom_tile(aes(fill = cut(value, breaks = classes, labels = lab.classes)), colour = "white")+ scale_fill_manual(drop=FALSE, values=colorRampPalette(c(colmin, colmax))(n.classes), name = legendname)+ theme_bw()+ labs(y = "", x = "Week of the year", title = paste0("Pollen calendar (Period ",y.start,"-",y.end,")"))+ scale_x_continuous(breaks = seq(0,50,5), limits = c(0,53))+ theme(axis.text.y = element_text(size = 12, face = "bold.italic"), axis.text.x = element_text(size = 14), axis.title.x = element_text(size = 16, face = "bold"), title = element_text(size = 14, face = "bold")) } ############################################# EXPORT RESULTS ############################# if(export.plot == TRUE & export.format == "pdf") { pdf(paste0("plot_AeRobiology/pollen_calendar_",method,".pdf"), ...) if(method == "phenological"){print(plot.calendar) } else { plot(plot.calendar) } dev.off()} if(export.plot == TRUE & export.format == "png") { png(paste0("plot_AeRobiology/pollen_calendar_",method,".png"), ...) if(method == "phenological"){print(plot.calendar) } else { plot(plot.calendar) } dev.off() png(paste0("plot_AeRobiology/credits.png")) dev.off() } if (result == "plot") return(plot.calendar) if (result == "table") {return(data)} }
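The exponential class scale used in the heatplot branches above bins abundance values with `cut()`. A minimal base-R sketch of that binning, using the same break points and labels as the function (the variable names mirror the internals; this is an illustration, not part of the package source):

```r
# Hirst-style exponential breaks, as defined in the heatplot branches above
classes <- c(0, 2, 5, 11, 24, 49, 99, 199, 399, 799, 1600)
lab.classes <- c("1-2", "3-5", "6-11", "12-24", "25-49",
                 "50-99", "100-199", "200-399", "400-799", "800-1600")

# cut() assigns each value to a right-closed interval (break_i, break_i+1]
binned <- cut(c(1, 7, 120, 900), breaks = classes, labels = lab.classes)
as.character(binned)  # "1-2" "6-11" "100-199" "800-1600"
```

Values above 1600 would fall outside these breaks and yield `NA`, which is why the function appends `max(value, na.rm = TRUE) + 100` as a final break (with an `">1600"` label) whenever the maximum exceeds 1600.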
################################ AeRobiology/R/pollen_calendar.R #################################
#' Quality Control of a Pollen Database #' #' Function to check the quality of a historical database of several pollen types. #'@param data A \code{data.frame} object including the general database where quality must be checked. This \code{data.frame} must include a first column in \code{Date} format and the rest of columns in \code{numeric} format belonging to each pollen type by column. It is not necessary to insert the missing gaps; the function will automatically detect them. #'@param int.window A \code{numeric} (\code{integer}) value greater than or equal to \code{1}. The argument specifies the number of days on each side of the start, peak or end date of the main pollen season which will be checked during the quality control. If any of these days has been interpolated, the current season will not pass the quality control. The \code{int.window} argument will be \code{2} by default. #'@param perc.miss A \code{numeric} (\code{integer}) value between \code{0} and \code{100}. The argument specifies the maximum percentage of interpolated days allowed inside the main pollen season to pass the quality control. The \code{perc.miss} argument will be \code{20} by default. #'@param ps.method A \code{character} string specifying the method applied to calculate the pollen season and the main parameters. The implemented methods that can be used are: \code{"percentage"}, \code{"logistic"}, \code{"moving"}, \code{"clinical"} or \code{"grains"}. More detailed information about the different methods for defining the pollen season may be consulted in the \code{\link{calculate_ps}} function. The \code{ps.method} argument will be \code{"percentage"} by default. #'@param result A \code{character} string specifying the format of the results. Only \code{"plot"} or \code{"table"} are available. If \code{"plot"}, a graphical summary of the quality control will be plotted. If \code{"table"}, a \code{data.frame} will be created indicating the filters passed by each pollen type and season. 
Consult 'Return' for more information. The \code{result} argument will be \code{"plot"} by default. #'@param th.day See \code{\link{calculate_ps}} for more details. #'@param perc See \code{\link{calculate_ps}} for more details. #'@param def.season See \code{\link{calculate_ps}} for more details. #'@param reduction See \code{\link{calculate_ps}} for more details. #'@param red.level See \code{\link{calculate_ps}} for more details. #'@param derivative See \code{\link{calculate_ps}} for more details. #'@param man See \code{\link{calculate_ps}} for more details. #'@param th.ma See \code{\link{calculate_ps}} for more details. #'@param n.clinical See \code{\link{calculate_ps}} for more details. #'@param window.clinical See \code{\link{calculate_ps}} for more details. #'@param window.grains See \code{\link{calculate_ps}} for more details. #'@param th.pollen See \code{\link{calculate_ps}} for more details. #'@param th.sum See \code{\link{calculate_ps}} for more details. #'@param type See \code{\link{calculate_ps}} for more details. #'@param int.method See \code{\link{calculate_ps}} for more details. #'@param ... Other arguments passed on to the pollen season calculation as specified in the \code{\link{calculate_ps}} function. #'@details Quality control is a relevant topic for aerobiology (Oteros et al., 2013). This function is another approach to improving quality control management in the field. \cr The \code{quality_control} function checks the quality of the pollen data of each pollen type and season. The filters applied by the function are: \cr #'\itemize{ #'\item If the main pollen season (Galan et al., 2017) cannot be calculated according to the minimal requirements of the \code{\link{calculate_ps}} function (lack of data for this pollen type and year). Filter named \code{"Complete"} in the \code{"quality_control"} \code{data.frame}. 
#'\item If the start, end or peak date of the main pollen season has been interpolated, or a day near to it (number of days specified by the \code{int.window} argument). If a day near to these dates is missing, the selected date may not be the right one. Filters named \code{"Start"}, \code{"Peak"} and \code{"End"} in the \code{"quality_control"} \code{data.frame}. #'\item The percentage of missing data inside the main pollen season. It calculates the number of days which have been interpolated by the algorithm and their percentage inside the main pollen season. If a high percentage of the main pollen season has been interpolated, the information for this season may not be reliable. Filter named \code{"Comp.MPS"} in the \code{"quality_control"} \code{data.frame}. #'} #'@return This function can return different results: \cr #'\itemize{ #'\item If \code{result = "plot"}: A graphical summary of the quality control results, showing the seasons of each pollen type and their quality (the risk assumed if they are included in further studies). The legend indicates the number of filters that have not been passed for each case. Object of class \code{\link[ggplot2]{ggplot}}. For graphical customization, see the \code{\link[ggplot2]{ggplot}} function. #'\item If \code{result = "table"}: \code{data.frame} with \code{logical} values for each pollen type and season. If \code{TRUE}, the filter has been successfully passed for this case. If \code{FALSE}, this case does not meet the minimal requirements of this filter. #'} #'@references Galan, C., Ariatti, A., Bonini, M., Clot, B., Crouzy, B., Dahl, A., Fernandez-Gonzalez, D., Frenguelli, G., Gehrig, R., Isard, S., Levetin, E., Li, D.W., Mandrioli, P., Rogers, C.A., Thibaudon, M., Sauliene, I., Skjoth, C., Smith, M., Sofiev, M., 2017. Recommended terminology for aerobiological studies. Aerobiologia (Bologna). 293-295. #'@references Oteros, J., Galan, C., Alcazar, P., & Dominguez-Vilches, E. (2013). 
Quality control in bio-monitoring networks, Spanish Aerobiology Network. Science of the Total Environment, 443, 559-565. #'@seealso \code{\link{calculate_ps}}, \code{\link{interpollen}}, \code{\link[ggplot2]{ggplot}}, \code{\link[ggplot2]{ggsave}} #' @examples data("munich_pollen") #' @examples quality_control(munich_pollen[,c(1:4)]) #' @importFrom utils data #' @importFrom lubridate is.POSIXt #' @importFrom ggplot2 aes element_blank element_text geom_tile ggplot ggsave labs scale_fill_gradient scale_x_discrete theme theme_classic #' @importFrom graphics plot #' @importFrom grDevices dev.off png #' @importFrom tidyr %>% #' @export quality_control<-function(data, int.window=2, perc.miss=20, ps.method="percentage", result = "plot", th.day=100, perc = 95, def.season = "natural", reduction = FALSE, red.level = 0.90, derivative = 5, man = 11, th.ma = 5, n.clinical = 5, window.clinical = 7, window.grains = 5, th.pollen = 10, th.sum = 100, type = "none", int.method = "lineal", ...){ data<-data.frame(data) if (class(data) != "data.frame" & !is.null(data)){ stop ("Please include a data.frame: first column with date, and the rest with pollen types")} if(class(data[,1])[1]!="Date" & !is.POSIXt(data[,1])) {stop("Please, the first column of your data must be the date in 'Date' format")} data[,1]<-as.Date(data[,1]) if(class(int.window)!="numeric" | int.window %% 1 != 0){stop("int.window: Please, insert only an integer greater than or equal to 1")} if(int.window<1){stop("int.window: Please, insert only an integer greater than or equal to 1")} if(class(perc.miss)!="numeric"){stop("perc.miss: Please, insert only a number between 0 and 100")} if(perc.miss<0 | perc.miss > 100){stop("perc.miss: Please, insert only a number between 0 and 100")} if(result!="plot" & result!="table"){stop("result: Please, insert only 'plot' or 'table'.")} Pollen<-calculate_ps(data=data, method=ps.method, th.day=th.day, perc=perc, def.season=def.season, reduction=reduction, red.level=red.level, 
derivative=derivative, man=man, th.ma=th.ma, n.clinical=n.clinical, window.clinical=window.clinical, window.grains=window.grains, th.pollen=th.pollen, th.sum=th.sum, type=type, interpolation=TRUE, int.method=int.method, plot=FALSE, maxdays = 300 ) Interpolated<-interpollen(data=data, method=int.method, maxdays = 300, plot=FALSE, result="long") Qualitycontrol<-data.frame() Dataframe<-Pollen[,c(1,2)] Dataframe$Complete<-TRUE Dataframe$Start<-TRUE Dataframe$Peak<-TRUE Dataframe$End<-TRUE Dataframe$Comp.MPS<-TRUE nrow<-nrow(Dataframe) for(a in 1:nrow){ if(any(is.na(Pollen[a,]))){ Dataframe[a,-c(1,2)]<-FALSE }else{ #Start Interpolated$Type<-as.character(Interpolated$Type) Interwindow<-Interpolated[which(Interpolated$Type==as.character(Pollen[a,1]) & Interpolated$Date>=(Pollen[a,3]-int.window) & Interpolated$Date<=(Pollen[a,3]+int.window) ),] if(sum(Interwindow[,4],na.rm = T)!=0){ Dataframe[a,4]<-FALSE } #Peak Interwindow2<-Interpolated[which(Interpolated$Type==as.character(Pollen[a,1]) & Interpolated$Date>=(Pollen[a,11]-int.window) & Interpolated$Date<=(Pollen[a,11]+int.window) ),] if(sum(Interwindow2[,4],na.rm = T)!=0){ Dataframe[a,5]<-FALSE } #End Interwindow3<-Interpolated[which(Interpolated$Type==as.character(Pollen[a,1]) & Interpolated$Date>=(Pollen[a,5]-int.window) & Interpolated$Date<=(Pollen[a,5]+int.window) ),] if(sum(Interwindow3[,4],na.rm = T)!=0){ Dataframe[a,6]<-FALSE } MPS<-Interpolated[which(Interpolated$Type==as.character(Pollen[a,1]) & Interpolated$Date>=Pollen[a,3] & Interpolated$Date<=Pollen[a,5]),] Day.dif<-as.numeric(Pollen[a,5]-Pollen[a,3]) if(sum(MPS[,4],na.rm=T)>round(Day.dif*(perc.miss/100))){ Dataframe[a,7]<-FALSE } } } Dataframe$Risk<-apply(Dataframe[,-c(1,2)], 1, function(x) length(x[x==FALSE]) ) graph<-ggplot(Dataframe, aes(seasons, type))+ geom_tile(aes(fill=Risk), colour="grey")+ scale_fill_gradient(low="white", high="#cb181d", limits=c(0,5))+ scale_x_discrete(limits = unique(Dataframe$seasons))+ theme_classic()+ labs(title="Quality 
Control")+ theme(plot.title = element_text(hjust=0.5), title = element_text(size = 16, face="bold"),axis.title=element_blank(), axis.text = element_text(size = 12, face="bold", colour= "black"), axis.text.y = element_text(face="bold.italic"), legend.title = element_text(size = 14), legend.text = element_text(size = 12)) if(result=="plot"){ return(graph) } if(result=="table"){ return(Dataframe) } }
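A minimal usage sketch of the function defined above (it assumes the package is installed and uses the bundled `munich_pollen` dataset, as in the roxygen example):

```r
library(AeRobiology)
data("munich_pollen")

# Logical filter table: one row per pollen type and season
qc <- quality_control(munich_pollen[, c(1:4)], result = "table")
head(qc)

# Graphical summary (a ggplot object), with a wider interpolation window
quality_control(munich_pollen[, c(1:4)], int.window = 4, result = "plot")
```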
################################ AeRobiology/R/quality_control.R #################################
## ----include = FALSE----------------------------------------------------- knitr::opts_chunk$set(echo = FALSE, warning = FALSE, fig.width = 7) Sys.setlocale(locale="english") ## ----echo=FALSE---------------------------------------------------------- library ("knitr") ## ------------------------------------------------------------------------ library ("AeRobiology") ## ----eval=FALSE, echo = TRUE--------------------------------------------- # install.packages("AeRobiology") # library (AeRobiology) ## ----echo = TRUE--------------------------------------------------------- data("munich_pollen") ## ----eval=FALSE, echo = TRUE--------------------------------------------- # install.packages("readxl") # library (readxl) ## ----eval=FALSE, echo = TRUE--------------------------------------------- # Mydata<-read_xlsx("C:/Users/Antonio/Desktop/Prueba Markdown/mydata.xlsx") # ## ----eval=FALSE, echo = TRUE--------------------------------------------- # Mydata<-read_xlsx("C:/Users/Antonio/Desktop/Prueba Markdown/mydata.xlsx", sheet=2) # ## ----echo=TRUE, results='hold'------------------------------------------- str(munich_pollen) ## ----echo=TRUE, results='hide', fig.keep='first'------------------------- QualityControl<-quality_control(munich_pollen, result = "table") ## ----echo=TRUE, results='hold'------------------------------------------- head(QualityControl) ## ----echo=TRUE, fig.keep='first', results='hide'------------------------ quality_control(munich_pollen, result = "plot") ## ----echo=TRUE, results='hide', fig.keep='first'------------------------- quality_control(munich_pollen, int.window = 4, perc.miss = 50, ps.method = "percentage", perc = 80) ## ----echo=TRUE, results='hide', fig.keep='first'------------------------- Interpolated<-interpollen(munich_pollen[,c(1,6)], method="lineal", plot = TRUE, result = "long") ## ----echo=TRUE, results='hold'------------------------------------------- head(Interpolated) ## ----echo=TRUE, 
results='hide'------------------------------------------- CompleteData<-interpollen(munich_pollen[,c(1,6)], method="lineal", plot = FALSE, result = "wide") ## ----echo=TRUE, eval=FALSE----------------------------------------------- # calculate_ps(munich_pollen, method="percentage", interpolation=TRUE, int.method = "lineal", plot = F) ## ----echo=TRUE, results='hide', fig.keep='first'------------------------- i<-interpollen(munich_pollen[,c(1,6)], method="movingmean", factor = 2, plot = TRUE) ## ----echo=TRUE, results='hide', fig.keep='first'------------------------- i2<-interpollen(munich_pollen[,c(1,6)], method="spline", ndays=3, spar=0.7, plot = FALSE) ## ----echo=TRUE, results='hide', fig.keep='first'------------------------- i3<-interpollen(munich_pollen[,c(1,6)], method="spline", ndays=5, spar=0.2, plot = TRUE) ## ----echo=TRUE, results='hide', fig.keep='last'-------------------------- i4<-interpollen(munich_pollen, method="tseries", plot = TRUE) ## ----echo = TRUE, results='hide', fig.keep='last', warning=FALSE--------- pollen_season <- calculate_ps(munich_pollen) ## ----echo = FALSE, fig.keep='all', warning=FALSE------------------------- knitr::kable(pollen_season[24:31, ] , format = "html", booktabs = TRUE) ## ----echo = TRUE, fig.keep='last', warning=FALSE------------------------- calculate_ps(munich_pollen[,c(1,6)], plot = TRUE) ## ----echo=TRUE, results='hide', fig.keep='first', eval=FALSE------------- # calculate_ps(munich_pollen[,c(1,6)], method = "percentage", perc = 90, export.result = TRUE) ## ----echo=TRUE, results='hide', fig.keep='first'------------------------- calculate_ps(munich_pollen[,c(1,6)], method = "percentage", perc = 90, export.result = FALSE, int.method = "spline") ## ----echo=TRUE, results='hide', fig.keep='first'------------------------- calculate_ps(munich_pollen[,c(1,6)], method = "percentage", perc = 75, export.result = FALSE, interpolation = FALSE) ## ----echo=TRUE, results='hide', fig.keep='first', warning=FALSE---------- 
pollen_season<-calculate_ps(munich_pollen[,c(1,6)], method = "logistic", derivative = 5, reduction=FALSE) ## ----echo = FALSE, fig.keep='all', warning=FALSE------------------------- knitr::kable(pollen_season, format = "html", booktabs = TRUE ) ## ----echo=TRUE, results='hide', fig.keep='first', warning=FALSE---------- pollen_season<-calculate_ps(munich_pollen[,c(1,6)], method = "logistic", derivative = 6, reduction=FALSE, red.level = 0.8) ## ----echo = FALSE, fig.keep='all', warning=FALSE------------------------- knitr::kable(pollen_season, format = "html", booktabs = TRUE) ## ----echo=TRUE, results='hide', fig.keep='first', warning=FALSE---------- pollen_season<-calculate_ps(munich_pollen[,c(1,6)], method = "clinical", type = "birch") ## ----echo = FALSE, fig.keep='all', warning=FALSE------------------------- knitr::kable(pollen_season, format = "html", booktabs = TRUE) ## ----echo=TRUE, results='hide', fig.keep='first', warning=FALSE---------- pollen_season<-calculate_ps(munich_pollen[,c(1,6)], method = "clinical", n.clinical = 5, window.clinical = 7, th.pollen = 10, th.sum = 100, th.day = 100) ## ----echo = FALSE, fig.keep='all', warning=FALSE------------------------- knitr::kable(pollen_season, format = "html", booktabs = TRUE) ## ----echo=TRUE, results='hide', fig.keep='first'------------------------- calculate_ps(munich_pollen[,c(1,6)], method = "grains", window.grains = 3, th.pollen = 2 ) ## ----echo=TRUE, results='hide', fig.keep='first'------------------------- calculate_ps(munich_pollen[,c(1,6)], method = "moving", man = 7, th.ma = 4) ## ----echo=TRUE, results='hide', fig.keep='first'------------------------- calculate_ps(munich_pollen[,c(1,3)], method = "moving", man = 7, th.ma = 4, def.season = "interannual") ## ----echo=TRUE, results='hide', fig.keep='first'------------------------- calculate_ps(munich_pollen[,c(1,6)], method = "percentage", perc=95, def.season = "peak") ## ----echo=TRUE, results='hide', fig.keep='first'------------------------- 
CompleteData<-interpollen(munich_pollen, method="spline", ndays=3, spar=0.7, plot = TRUE, maxdays = 3, result = "wide") ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- plot_ps(CompleteData, pollen.type="Alnus", year=2013, fill.col = "orange", axisname = "AeRobiology custom units") ## ----echo = TRUE, results='hide',fig.keep='all', warning=FALSE----------- pollen_calendar(munich_pollen, method = "heatplot", period = "daily", color = "green", interpolation = FALSE) ## ----echo = TRUE, fig.keep='all', warning=FALSE-------------------------- average_values<-pollen_calendar(munich_pollen, method = "heatplot", period = "daily", color = "green", interpolation = FALSE, result = "table") knitr::kable(average_values[82:90, ], format = "html", booktabs = TRUE) ## ----echo = TRUE, results='hide',fig.keep='all', warning=FALSE----------- pollen_calendar(data = munich_pollen, method = "heatplot", period = "daily", color = "red", method.classes = "custom", n.classes = 5, classes = c(5, 25, 50, 200), interpolation = FALSE) ## ----echo = TRUE, results='hide',fig.keep='all', warning=FALSE----------- pollen_calendar(data = munich_pollen, method = "heatplot", period = "daily", color = "purple", method.classes = "custom", n.classes = 5, classes = c(5, 25, 50, 200), start.month = 11, na.remove = FALSE, interpolation = FALSE) ## ----echo = TRUE, results='hide',fig.keep='all', warning=FALSE----------- pollen_calendar(data = munich_pollen, method = "heatplot", period = "weekly", color = "blue", method.classes = "exponential", n.types = 4, y.start = 2011, y.end = 2014, interpolation = FALSE) ## ----echo = TRUE, results='hide',fig.keep='all', warning=FALSE----------- pollen_calendar(data = munich_pollen, method = "phenological", n.types = 5, y.start = 2011, y.end = 2014, interpolation = FALSE) ## ----echo = TRUE, results='hide',fig.keep='all', warning=FALSE----------- pollen_calendar(data = munich_pollen, method = "phenological", perc1 = 90, perc2 = 95, th.pollen = 
5, interpolation = FALSE) ## ----echo = TRUE, results='hide', fig.keep='all', warning=FALSE---------- pollen_calendar(data = munich_pollen, method = "violinplot", y.start = 2012, y.end = 2015, interpolation = FALSE) ## ----echo = TRUE, results='hide', fig.keep='all', warning=FALSE---------- pollen_calendar(data = munich_pollen, method = "violinplot", th.pollen = 10, na.rm = FALSE, interpolation = FALSE) ## ----echo = TRUE, results='hide',fig.keep='all', eval=FALSE-------------- # iplot_pollen(munich_pollen, year = 2012) # iplot_years(munich_pollen, pollen = "Betula") ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- plot_summary(munich_pollen, pollen = "Betula") ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- plot_summary(munich_pollen, pollen = "Betula", mave = 5) ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- plot_summary(munich_pollen, pollen = "Betula", mave = 5, normalized = TRUE) ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- plot_normsummary(munich_pollen, pollen = "Betula", color.plot = "red") ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- plot_normsummary(munich_pollen, pollen = "Betula", color.plot = "green", mave = 5, normalized = TRUE) ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, result="plot") ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, split = FALSE, quantil = 1, result="plot") ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, split=FALSE, quantil = 0.5, result="plot") ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- 
analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, significant = 1, result = "plot") ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, significant = 1, result="table") ## ----echo = TRUE, results='hide',fig.keep='all', eval=FALSE-------------- # plot_trend(munich_pollen, interpolation = FALSE, export.plot = TRUE, export.result = TRUE) ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- iplot_abundance(munich_pollen, interpolation = FALSE, export.plot = FALSE, export.result = FALSE) ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- iplot_abundance(munich_pollen, interpolation = FALSE, export.plot = FALSE, export.result = FALSE, n.types = 3) ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- iplot_abundance(munich_pollen, interpolation = FALSE, export.plot = FALSE, export.result = FALSE, n.types = 3, y.start = 2011, y.end = 2011, col.bar = "#d63131") ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- iplot_abundance(munich_pollen, interpolation = FALSE, export.plot = FALSE, export.result = FALSE, n.types = 3, y.start = 2011, y.end = 2011, col.bar = "#d63131", type.plot = "dynamic") ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- iplot_pheno(munich_pollen, method= "percentage", perc=80, int.method="spline", n.types = 8) ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- iplot_pheno(munich_pollen, method= "clinical", n.clinical = 3, int.method="spline", n.types = 4) ## ----echo = TRUE, results='hide',fig.keep='all', eval=FALSE-------------- # iplot_pheno(munich_pollen, method= "clinical", n.clinical = 3, int.method="spline", n.types = 4, type.plot = "dynamic") ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- 
plot_ps(munich_pollen, pollen.type="Alnus", year=2011) ## ----echo = TRUE, results='hold', error=TRUE----------------------------- plot_ps(munich_pollen, pollen.type="Alnuscdscscr", year=2011) ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- plot_ps(munich_pollen, pollen.type="Alnus", year=2013, method= "percentage", perc=95 ,int.method = "lineal") ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- plot_ps(munich_pollen, pollen.type="Alnus", year=2013, method= "percentage", perc=95 ,int.method = "movingmean") ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- plot_ps(munich_pollen, pollen.type="Alnus", year=2013, days = 90) ## ----echo = TRUE, results='hide',fig.keep='all'-------------------------- plot_ps(munich_pollen, pollen.type="Alnus", year=2013, fill.col = "orange", axisname = "AeRobiology custom units") ## ----echo = FALSE, warning=FALSE, message=FALSE-------------------------- library(ggplot2) library(dplyr) ## ----echo = TRUE--------------------------------------------------------- data("POMO_pollen") ## ----echo = TRUE, message=FALSE------------------------------------------ plot_hour(POMO_pollen) ## ----message=FALSE------------------------------------------------------- TO<-plot_hour(POMO_pollen, result ="table") knitr::kable(TO[1:10,], caption = "3-Hourly patterns", row.names = FALSE, digits = 1, format = "html", booktabs = TRUE) ## ---- message=FALSE, echo=TRUE------------------------------------------- plot_hour(POMO_pollen, locations = TRUE) ## ---- message=FALSE, echo=TRUE------------------------------------------- plot_heathour(POMO_pollen) ## ---- message=FALSE, echo=TRUE------------------------------------------- plot_heathour(POMO_pollen, low.col = "darkgreen", mid.col = "moccasin", high.col = "brown") ## ---- message=FALSE, echo=TRUE------------------------------------------- plot_heathour(POMO_pollen, locations = TRUE)
################################ AeRobiology/inst/doc/my-vignette.R ##############################
--- title: "Vignette of AeRobiology" subtitle: "A Computational Tool for Aerobiological Data" author: Jesus Rojo^[University of Castilla-La Mancha & ZAUM (TUM/Helmholtz Zentrum)] Antonio Picornell^[University of Malaga] Jose Oteros^[Zentrum Allergie und Umwelt - ZAUM (TUM/Helmholtz Zentrum)] date: "`r Sys.Date()`" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Vignette of AeRobiology} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r include = FALSE} knitr::opts_chunk$set(echo = FALSE, warning = FALSE, fig.width = 7) Sys.setlocale(locale="english") ``` <style> p.comment { background-color: #DBDBDB; padding: 5px; margin-left: 3px; border-radius: 5px; font-style: italic; } </style> ![AeRobiology R package](logo_aerobiology.jpg) This package gathers different tools for managing aerobiological databases, performing the main calculations and visualizing the results. As a first step, data can be checked using quality control tools and missing gaps can be completed. Then, the main parameters of the pollen season can be calculated and represented graphically. Multiple graphical tools are available: pollen calendars, phenological plots, time series, trends, interactive plots, abundance plots... <div class="alert alert-info">**Please, take into account that not all the arguments of each function are explained in this document. 
This is a quick guide and further details can be consulted in the [official document](https://CRAN.R-project.org/package=AeRobiology) of the package.**</div> The first thing you have to do is to install the package <font size="2" face="verdana">AeRobiology</font> from the CRAN repository and load it.<div/> ```{r echo=FALSE} library ("knitr") ``` ```{r} library ("AeRobiology") ``` ```{r eval=FALSE, echo = TRUE} install.packages("AeRobiology") library (AeRobiology) ``` # Using the attached data During this tutorial we are going to use the **pollen data from Munich** which are **integrated in the package**. This will allow you to follow the tutorial and obtain the same results. If you want to follow the tutorial using your own data, check the next section. <p class="comment">**`munich_pollen`** is a data set containing daily concentrations of pollen in the atmosphere of Munich during the years 2010-2015. Pollen types included: "Alnus", "Betula", "Taxus", "Fraxinus", "Poaceae", "Quercus", "Ulmus" and "Urtica". Data were obtained at Munich (Zentrum Allergie und Umwelt, ZAUM) using a Hirst-type volumetric pollen trap (Hirst, 1952) following the standard methodology (VDI 4252-4, 2016). Some gaps have been added to test some functions of the package (e.g. <font size="2" face="verdana">quality_control()</font>, <font size="2" face="verdana">interpollen()</font>). <br> The data were obtained by the research team of Prof. Jeroen Buters (Christine Weil & Ingrid Weichenmeier) at the Zentrum Allergie und Umwelt ([ZAUM](https://www.zaum-online.de/)), directed by Prof. Carsten B. Schmidt-Weber. </p> You can **load the data** in your working environment by typing: ```{r echo = TRUE} data("munich_pollen") ``` # Loading your database from Excel This section has been developed for people who are not familiar with R. 
**If you have experience in loading databases in R, you might not be interested in this section and you can skip it.**

There are several functions to import data from Excel. Arbitrarily, we are going to use the package **<font size="3" face="verdana">readxl</font>**.

```{r eval=FALSE, echo = TRUE}
install.packages("readxl")
library (readxl)
```

Now we can import the data with the function **<font size="3" face="verdana">read_xlsx()</font>**. I strongly recommend you to **keep your Excel file in the same folder in which you are working with R**. This will avoid long paths to your files and will also prevent broken paths when files are moved. If you are working with an old database, you might be interested in the **<font size="3" face="verdana">read_xls()</font>** function.

```{r eval=FALSE, echo = TRUE}
Mydata<-read_xlsx("C:/Users/Antonio/Desktop/Prueba Markdown/mydata.xlsx")
```

You can also **select the sheet** you want to import:

```{r eval=FALSE, echo = TRUE}
Mydata<-read_xlsx("C:/Users/Antonio/Desktop/Prueba Markdown/mydata.xlsx", sheet=2)
```

<div class="alert alert-info">**IMPORTANT**: You must remove all symbols or letters from your database. **If you have missing gaps within your database, don't put "-" or "No Data" or any letter in the gaps**: keep the cell empty or remove the cell. The dates should be in a complete format (e.g. "14/02/2019"). There must be a first column with the date, followed by as many columns as pollen types, with only numeric values (**check the decimal separator: commas must be replaced by dots**).</div>

Now you should have your data loaded in the object `Mydata`. Let's check if there is any mistake:

```{r echo=TRUE, results='hold'}
str(munich_pollen)
```

**Note**: I'm using the `munich_pollen` database as an example, but you should use `Mydata`.

As you can see, the object is a `data.frame`, the first column is of "Date" type and the rest are "num", which means numeric. This is the format your database **must** have.
Normally it is automatically recognized by the import function but, if not, check the functions <font size="3" face="verdana">**as.Date()**</font>, <font size="3" face="verdana">**as.numeric()**</font> and <font size="3" face="verdana">**as.data.frame()**</font>.

<div class="alert alert-info"><p>**Your database must have this format to start working with the package**. The functions are designed to warn you in case the format is not correct, but there may be some exceptions. **This is the most important step when using the package, and some strange errors reported by the functions can be solved by doing this**. Sometimes the date column has 2 different types simultaneously ("POSIXlt" and "POSIXct") and it might cause mistakes. You can use one or the other, but not both. Furthermore, it is strongly recommended to use the "Date" format instead of "POSIXlt" or "POSIXct", although in theory they should not cause problems.</p><br>
<p>**Please, don't despair**. This is the **slowest step of using the package and the most important one**. Once you have your data loaded, you only have to spend a few minutes on each function to get your results. If you run into a problem you cannot solve, search on the internet: other people must have had the same problem and the solution may already be posted in some forum. If not, write to us and we will help you as soon as possible: [email protected]</p></div>

<p class="comment">If you want to import your data from **csv files**, you might be interested in the **<font size="3" face="verdana">read.csv()</font>** function.</p>

# Function <font face="verdana">quality_control()</font>

**This function was designed to check the quality of a historical database of several pollen types**. Since many of the quality requirements depend on the criteria selected to establish the main pollen season, this function has the **<font size="3" face="verdana">calculate_ps()</font> function integrated**.
You can insert arguments of calculate_ps() in this function to select the method you want for calculating the main pollen season. The details of calculate_ps() can be consulted later in this same document.

<p class="comment">If **result = "plot"**, this function returns a **graphical summary** of the quality of the database, marking the "weak points" with different red intensities; if **result = "table"**, it returns a `data.frame` with the detailed reasons of their "weakness".</p>

**It establishes the quality according to 5 different criteria**:

* **It checks if the main pollen season cannot be calculated according to the minimal requirements:** lack of data for this pollen type and year. This criterion appears as **"Complete"** in the generated data.frame.

* **If the start, end or peak dates of the main pollen season have been interpolated** (or a date near them, specified by the **`int.window`** argument). It is based on the following premise: if this day or a nearby date is missing/interpolated, the date selected as start/peak/end might not be the real one. These criteria appear as **"Start"**, **"Peak"** and **"End"** in the generated data.frame.

* **The percentage of missing data within the main pollen season.** It calculates the number of days which have been interpolated by the interpollen() function and their percentage of the main pollen season. If a high percentage of the main pollen season has been interpolated, the information of this season might not be reliable. The maximal percentage allowed can be specified by the **`perc.miss`** argument. This filter appears as **"Comp.MPS"** in the generated data.frame.

When running the function, it gives you a **graphical summary** with different colour intensities depending on the risk of including each pollen/spore type for a concrete year, and a **data.frame** with detailed information about the reasons for this evaluation.
<div class="alert alert-info">If **result = "plot"**, the function returns a list of objects of class *ggplot2*; if **result = "table"**, the function returns a *data.frame*. By default, **result = "table"**.</div>

```{r echo=TRUE, results='hide', fig.keep='first'}
QualityControl<-quality_control(munich_pollen, result = "table")
```

If the filter has been successfully passed, it appears as `TRUE` in the generated `data.frame`. If not, it appears as `FALSE`.

```{r echo=TRUE, results='hold'}
head(QualityControl)
```

```{r echo=TRUE, fig.keep='first', results='hide'}
quality_control(munich_pollen, result = "plot")
```

There are lots of arguments for this function. You can consult them in the "help" section of the function in R or in the [official document](https://CRAN.R-project.org/package=AeRobiology) of the package.

Let's suppose we want to check the quality of our database, but we want to calculate it for an **80% defined main pollen season** (arguments **`ps.method`** and **`perc`**). We don't want to take into account years with missing data less than **4 days** away from the start, peak or end dates (argument **`int.window`**). Furthermore, we only want to exclude years with more than **50% of interpolated data** within the main pollen season (argument **`perc.miss`**):

```{r echo=TRUE, results='hide', fig.keep='first'}
quality_control(munich_pollen, int.window = 4, perc.miss = 50, ps.method = "percentage", perc = 80)
```

**As you may have noticed, there are several differences between this result and the previous one.**

<div class="alert alert-info">The **<font size="3" face="verdana">interpollen()</font> function** is integrated in this function. Consult the "help" section of that function to use it.</div>

# Function <font face="verdana">interpollen()</font>

The function **<font size="3" face="verdana">interpollen()</font>** was designed to complete all the missing gaps within a database in just one step.
All the gaps of each pollen/spore type are completed simultaneously. **There are different methods to do so, which may be more or less appropriate according to the particularities of your database.** The arguments are:

* **data**: A `data.frame` object. This data.frame should include a first column in "Date" format and the rest of the columns in "numeric" format, belonging to each pollen/spore type by column.

* **method**: A `character` string specifying the method applied to calculate and generate the missing pollen data. The implemented methods that can be used are: **"lineal"**, **"movingmean"**, **"spline"**, **"tseries"** or **"neighbour"**. The method argument will be "lineal" by default.

* **maxdays**: A `numeric (integer)` value specifying the maximum number of consecutive days with missing data that the algorithm is going to interpolate. If the gap is bigger than the argument value, the gap will not be interpolated. Not valid with the <font size="2" face="verdana">"tseries"</font> method. The <font size="2" face="verdana">maxdays</font> argument will be 30 by default. This argument may be very useful to avoid long interpolated gaps.

* **plot**: A `logical` argument. If `TRUE`, graphical previews of the input database will be plotted at the end of the interpolation process. All the interpolated gaps will be marked in red. The `plot` argument will be `TRUE` by default.

* Others...

```{r echo=TRUE, results='hide', fig.keep='first'}
Interpolated<-interpollen(munich_pollen[,c(1,6)], method="lineal", plot = TRUE, result = "long")
```

As you can see above, **the missing gaps which have been completed by the algorithm are marked in red.** Similar graphs are displayed for each pollen/spore type within your database. Furthermore, since **result = "long"**, the **"Interpolated"** data.frame is produced in "long" format with a new column indicating whether each value has been interpolated or not (1 means interpolated and 0 means original data).
```{r echo=TRUE, results='hold'}
head(Interpolated)
```

If you want to integrate all the new data in your database, **you can assign the function to a new object** and specify the argument **result = "wide"**:

```{r echo=TRUE, results='hide'}
CompleteData<-interpollen(munich_pollen[,c(1,6)], method="lineal", plot = FALSE, result = "wide")
```

Now your database without gaps appears as "CompleteData". **As you might have noticed, if you add `plot=FALSE`, the graphs are not shown.**

You don't have to interpolate your data before applying other functions of this package: the **<font size="3" face="verdana">interpollen()</font> function has been integrated in the main functions of the package**, so you can choose whether you want your data to be completed or not by a specific argument in each function. E.g.:

```{r echo=TRUE, eval=FALSE}
calculate_ps(munich_pollen, method="percentage", interpolation=TRUE, int.method = "lineal", plot = F)
```

<div class="alert alert-info">**FREQUENT ERROR:** Sometimes when applying interpollen you receive the following error: **"Error in rep(NA, ncasos): invalid 'times' argument"**. This is due to a mistake in your database: **you have some dates repeated or disorganized**. The algorithm of `interpollen()` searches for consecutive dates and, when they are not consecutive natural days, it carries out the interpolation. When the second date is earlier than the previous one, it reports an error. You have to correct these issues in your database before applying the interpollen function.</div>

## Methods of interpolation

In this function there are **5 different methods to interpolate missing gaps**. Each method can be more appropriate than the others under some specific circumstances.
Interpolated data are not real data, but including such estimations can reduce the error of some calculations, such as the main pollen season by percentage or a pollen calendar. I.e.: if you keep the gap, these days count as 0 in the calculations, which introduces a bigger error than estimating the pollen concentrations for these days. Obviously, it still introduces a bigger error than having the real data.

### <font face="verdana">"lineal"</font> method

As mentioned above, there are many methods to interpolate your missing data. The simplest one is the one shown in the previous example: "lineal". **It traces a straight line between the two extremes of the gap.** The pollen/spore concentration of the missing days is calculated according to this line. This method may be appropriate for dates without pollen/spores or for small gaps. **Nevertheless, there are other methods which can be more effective during the pollen season.**

### <font face="verdana">"movingmean"</font> method

This method calculates the moving mean of the daily pollen concentrations. The gaps are filled by **calculating the moving mean of the concentrations for each particular day.** The window of the moving mean is centered on this day and its size is the result of multiplying the gap size by the **`factor` argument**. E.g.: if you have a gap of 3 days and your factor is 2, the gaps are replaced by the value of a moving mean of 6 days (3 x 2) centered on this particular day, not taking into account the missing days for the mean value.
Furthermore, it is a dynamic function: **for each gap of the database, the window size of the moving mean changes depending on the gap size.**

```{r echo=TRUE, results='hide', fig.keep='first'}
i<-interpollen(munich_pollen[,c(1,6)], method="movingmean", factor = 2, plot = TRUE)
```

### <font face="verdana">"spline"</font> method

This method carries out the interpolation by tracing a **polynomial function to link both extremes of the gap.** The polynomial function is chosen according to the best-fitting equation to the days before and after the gap (second, third, fourth degree...). The number of days on each side of the gap which will be taken into account for calculating the spline regression is specified by the **`ndays` argument**. The smoothness of the adjustment can be specified by the **`spar` argument**.

```{r echo=TRUE, results='hide', fig.keep='first'}
i2<-interpollen(munich_pollen[,c(1,6)], method="spline", ndays=3, spar=0.7, plot = FALSE)
```

**Note**: By changing the `ndays` and `spar` arguments, very different results can be obtained. E.g.:

```{r echo=TRUE, results='hide', fig.keep='first'}
i3<-interpollen(munich_pollen[,c(1,6)], method="spline", ndays=5, spar=0.2, plot = TRUE)
```

### <font face="verdana">"tseries"</font> method

If you have a long time series of data, this might be the most suitable method for you. This method **analyses the time series of the pollen/spore database and performs a seasonal-trend decomposition based on LOESS** ([Cleveland et al., 1990](http://www.nniiem.ru/file/news/2016/stl-statistical-model.pdf)). It extracts the **seasonality of the historical database** and uses it to predict the missing data by performing a linear regression with the target year.

```{r echo=TRUE, results='hide', fig.keep='last'}
i4<-interpollen(munich_pollen, method="tseries", plot = TRUE)
```

**Seasonality is represented in grey, real data in black and interpolated data in red.** This method improves with long time series of years.
Strange results may be caused by a lack of years or by short sampling periods.

### <font face="verdana">"neighbour"</font> method

**Other nearby stations provided by the user are used to interpolate the missing data of the target station.** First of all, a Spearman correlation is performed between the target station and the neighbour stations to discard the neighbour stations with a correlation coefficient smaller than the **`mincorr` value**. For each gap, a linear regression is performed between the neighbour stations and the target station to determine the equation which converts the pollen concentrations of the neighbour stations into the pollen concentrations of the target station. Only neighbour stations without any missing data during the gap period are taken into account for each gap.

<p class="comment">You can include 4 different databases of neighbour stations with the arguments **`data2`, `data3`, `data4` and `data5`.** **The format of these databases must be the same as the original one, and the names of the pollen/spore types must be exactly the same.**</p>

With the **`mincorr` argument** you can specify the minimal correlation coefficient (Spearman correlation) that the neighbour stations must have with the target station to be taken into account for a concrete gap. This process is completely independent for each gap of each pollen type.

# Function <font face="verdana">calculate_ps()</font>

The function **<font size="3" face="verdana">calculate_ps()</font>** is one of the core functions of the AeRobiology package. It was designed to **calculate the main parameters of the pollen season**, with regard to phenology and pollen intensity, from a historical database of several pollen types. The function can use the most common methods to define the main pollen season.

<div class="alert alert-info">**Please, be patient:** This function has tons of arguments. Some of them are only useful for specific situations. You can use the function without knowing all of them.
Nevertheless, **we decided to include all the possible arguments in order to provide specific tools for the advanced users**.</div>

**This function has several arguments, but don't worry, we are going to go through examples later**. The main arguments are:

* **data**: A `data.frame` object including the general database where the calculation of the pollen season must be applied. This data.frame must include a first column in "Date" format and the rest of the columns in "numeric" format, belonging to each pollen type by column.

* **method**: A `character` string specifying the method applied to calculate the pollen season and the main parameters. The implemented methods that can be used are: "percentage", "logistic", "moving", "clinical" or "grains". More detailed information about the different methods for defining the pollen season may be consulted [here](https://CRAN.R-project.org/package=AeRobiology) or later in this same document.

* **th.day**: A `numeric` value. The number of days whose pollen concentration is bigger than this threshold is calculated for each year and pollen type. This value will be obtained in the results of the function. The `th.day` argument will be 100 by default.

* **perc**: A `numeric` value ranging 0-100. This argument is valid only for `method = "percentage"`. This value represents the percentage of the total annual pollen included in the pollen season, removing (100-perc)/2% of the total pollen before and after the pollen season. The `perc` argument will be 95 by default.

* **def.season**: A `character` string specifying the method for selecting the best annual period to calculate the pollen season. The pollen season may occur within the natural year or between two years. The implemented options that can be used are: "natural", "interannual" or "peak". The `def.season` argument will be "natural" by default. More detailed information about the different methods for selecting the best annual period to calculate the pollen season may be consulted [here](https://CRAN.R-project.org/package=AeRobiology).

* **reduction**: A `logical` value. This argument is valid only for the "logistic" method. If `FALSE`, the reduction of the pollen data is not applicable. If `TRUE`, a reduction of the peaks above a certain level (`red.level` argument) will be carried out before the definition of the pollen season. The reduction argument will be `FALSE` by default. More detailed information about the reduction process may be consulted [here](https://CRAN.R-project.org/package=AeRobiology).

* **red.level**: A `numeric` value ranging 0-1 specifying the percentile used as the level to reduce the peaks of the pollen series before the definition of the pollen season. This argument is valid only for the "logistic" method. The `red.level` argument will be 0.90 by default, specifying the percentile 90.

* **derivative**: A `numeric (integer)` value among the options 4, 5 or 6, specifying the derivative that will be applied to calculate the asymptotes which determine the pollen season using the "logistic" method. This argument is valid only for the "logistic" method. The `derivative` argument will be 5 by default. More information may be consulted [here](https://CRAN.R-project.org/package=AeRobiology).

* **man**: A `numeric (integer)` value specifying the order of the moving average applied to calculate the pollen season using the "moving" method. This argument is valid only for the "moving" method. The `man` argument will be 11 by default.

* **th.ma**: A `numeric` value specifying the threshold used by the "moving" method for defining the beginning and the end of the pollen season. This argument is valid only for the "moving" method. The `th.ma` argument will be 5 by default.

* **n.clinical**: A `numeric (integer)` value specifying the number of days which must exceed a given threshold (`th.pollen` argument) for defining the beginning and the end of the pollen season. This argument is valid only for the "clinical" method. The `n.clinical` argument will be 5 by default.

* **window.clinical**: A `numeric (integer)` value specifying the time window during which the conditions must be evaluated for defining the beginning and the end of the pollen season using the "clinical" method. This argument is valid only for the "clinical" method. The `window.clinical` argument will be 7 by default.

* **window.grains**: A `numeric (integer)` value specifying the time window during which the conditions must be evaluated for defining the beginning and the end of the pollen season using the "grains" method. This argument is valid only for the "grains" method. The `window.grains` argument will be 5 by default.

* **th.pollen**: A `numeric` value specifying the threshold that must be exceeded during a given number of days (`n.clinical` or `window.grains` arguments) for defining the beginning and the end of the pollen season using the "clinical" or "grains" methods. This argument is valid only for the "clinical" or "grains" methods. The `th.pollen` argument will be 10 by default.

* **th.sum**: A `numeric` value specifying the pollen threshold that must be exceeded by the sum of the daily pollen during a given number of days (`n.clinical` argument) exceeding a given daily threshold (`th.pollen` argument) for defining the beginning and the end of the pollen season using the "clinical" method. This argument is valid only for the "clinical" method. The `th.sum` argument will be 100 by default.

* **type**: A `character` string specifying the parameters considered according to a specific pollen type for calculating the pollen season using the "clinical" method. The implemented pollen types that may be used are: "birch", "grasses", "cypress", "olive" or "ragweed". As a result of selecting any of these pollen types, the parameters `n.clinical`, `window.clinical`, `th.pollen` and `th.sum` will be automatically adjusted for the "clinical" method. If no pollen type is specified (`type = "none"`), these parameters must be provided by the user. This argument is valid only for the "clinical" method. The `type` argument will be "none" by default.

* **interpolation**: A `logical` value. If `FALSE`, the interpolation of the pollen data is not applicable. If `TRUE`, an interpolation of the pollen series will be applied to complete the gaps with no data before the calculation of the pollen season. The `interpolation` argument will be `TRUE` by default. More detailed information about the interpolation methods may be consulted [here](https://CRAN.R-project.org/package=AeRobiology).

* **int.method**: A `character` string specifying the method selected to apply the interpolation in order to complete the pollen series. The implemented methods that may be used are: "lineal", "movingmean", "spline" or "tseries". The `int.method` argument will be "lineal" by default.

* **maxdays**: A `numeric (integer)` value specifying the maximum number of consecutive days with missing data that the algorithm is going to interpolate. If the gap is bigger than the argument value, the gap will not be interpolated. Not valid with `int.method = "tseries"`. The `maxdays` argument will be 30 by default.

* **export.plot**: A `logical` value specifying if a set of plots based on the definition of the pollen season will be saved in the working directory. If `FALSE`, graphical results will not be saved. If `TRUE`, a pdf file for each pollen type, graphically showing the definition of the pollen season for each studied year, will be saved within the plot_AeRobiology directory created in the working directory. The `export.plot` argument will be `FALSE` by default.

* **export.result**: A `logical` value specifying if an Excel file including all the parameters for the definition of the pollen season will be saved in the working directory. If `FALSE`, the results will not be exported. If `TRUE`, the results will be exported as an xlsx file, including all the parameters calculated from the definition of the pollen season, within the table_AeRobiology directory created in the working directory. The `export.result` argument will be `FALSE` by default.

* **plot**: A `logical` value specifying if the plots are generated in the plot history. The `plot` argument will be `TRUE` by default.

* **result**: A `character` string specifying the output of the function. The implemented outputs that may be obtained are: `"table"` and `"list"`. The `result` argument will be `"table"` by default.

<div class="alert alert-info">**IMPORTANT:** if `export.plot=TRUE`, the plots are **EXPORTED** in the "plot_AeRobiology" folder.</div>

## Methods of calculating the Main Pollen Season

This function allows calculating the pollen season using **five different methods**, which are described below.
After calculating the **start_date**, **end_date** and **peak_date** of the pollen season, all the remaining **parameters** are calculated:

* **type**: pollen type
* **seasons**: year of the beginning of the season
* **st.dt**: start_date (date)
* **st.jd**: start_date (day of the year)
* **en.dt**: end_date (date)
* **en.jd**: end_date (day of the year)
* **ln.ps**: length of the season
* **sm.tt**: total sum
* **sm.ps**: pollen integral
* **pk.val**: peak value
* **pk.dt**: peak_date (date)
* **pk.jd**: peak_date (day of the year)
* **ln.prpk**: length of the pre-peak period
* **sm.prpk**: pollen integral of the pre-peak period
* **ln.pspk**: length of the post-peak period
* **sm.pspk**: pollen integral of the post-peak period
* **daysth**: number of days with more than 100 pollen grains
* **st.dt.hs**: start_date of the high pollen season (date, only for the clinical method)
* **st.jd.hs**: start_date of the high pollen season (day of the year, only for the clinical method)
* **en.dt.hs**: end_date of the high pollen season (date, only for the clinical method)
* **en.jd.hs**: end_date of the high pollen season (day of the year, only for the clinical method)

<div class="alert alert-info">**IMPORTANT:** If `export.result=TRUE`, these objects will be exported as an xlsx file within the "table_AeRobiology" directory created in your working directory. This Excel file will have a sheet for each pollen type and a last sheet with the legend of all the abbreviations and the method selected to calculate them.
**We strongly recommend using `export.result=TRUE`**</div>

Although the results can be exported to the working directory as an xlsx file, **they can also be assigned to an object and visualized**, for example (an extract):

```{r echo = TRUE, results='hide', fig.keep='last', warning=FALSE}
pollen_season <- calculate_ps(munich_pollen)
```

```{r echo = FALSE, fig.keep='all', warning=FALSE}
knitr::kable(pollen_season[24:31, ] , format = "html", booktabs = TRUE)
```

The result is also plotted by default (*plot = TRUE*).

```{r echo = TRUE, fig.keep='last', warning=FALSE}
calculate_ps(munich_pollen[,c(1,6)], plot = TRUE)
```

### "percentage" method

This is a commonly used method for defining the pollen season, based on the **elimination of a certain percentage at the beginning and the end of the pollen season** ([Nilsson and Persson, 1981](https://www.tandfonline.com/doi/abs/10.1080/00173138109427661); [Andersen, 1991](https://www.tandfonline.com/doi/pdf/10.1080/00173139109427810)). For example, if the pollen season is based on 95% of the total annual pollen (`perc = 95`), the start_date of the pollen season is marked as the day on which 2.5% of the total pollen has been registered and the end_date as the day on which 97.5% of the total pollen has been registered.

```{r echo=TRUE, results='hide', fig.keep='first', eval=FALSE}
calculate_ps(munich_pollen[,c(1,6)], method = "percentage", perc = 90, export.result = TRUE)
```

In this case we have calculated the main pollen season based on 90% of the total annual pollen. **The results are stored in the "table_AeRobiology" folder since `export.result=TRUE`**.
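The idea behind the "percentage" criterion can be sketched in a few lines of base R. This is a simplified illustration under the assumption of one year of daily concentrations with no missing data; it is not the package's actual implementation, which also handles interpolation and the selection of the annual period:

```r
# Simplified sketch of the "percentage" criterion (assumption: a single
# complete year of daily concentrations; not AeRobiology's actual code).
ps_percentage <- function(pollen, perc = 95) {
  cum <- cumsum(pollen) / sum(pollen) * 100    # cumulative percentage of annual pollen
  tail.perc <- (100 - perc) / 2                # percentage removed at each end
  start <- min(which(cum >= tail.perc))        # first day reaching e.g. 2.5%
  end   <- min(which(cum >= 100 - tail.perc))  # first day reaching e.g. 97.5%
  c(start = start, end = end)
}

# Toy season: concentrations rising and falling over 11 days
x <- c(0, 1, 3, 10, 40, 80, 40, 10, 3, 1, 0)
ps_percentage(x, perc = 90)  # start on day 4, end on day 8
```

Here `ps_percentage()` and the toy vector `x` are hypothetical names used only for this illustration; with your real data you would simply call `calculate_ps(..., method = "percentage")` as shown above.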
You can select different interpolation methods, or even skip the interpolation of the gaps:

```{r echo=TRUE, results='hide', fig.keep='first'}
calculate_ps(munich_pollen[,c(1,6)], method = "percentage", perc = 90, export.result = FALSE, int.method = "spline")
```

```{r echo=TRUE, results='hide', fig.keep='first'}
calculate_ps(munich_pollen[,c(1,6)], method = "percentage", perc = 75, export.result = FALSE, interpolation = FALSE)
```

<div class="alert alert-info">**IMPORTANT:** if `export.plot=TRUE`, the plots are **EXPORTED** in the "plot_AeRobiology" folder.</div>

### "logistic" method

This method was developed by Ribeiro et al. (2007) and modified by Cunha et al. (2015). It is based on fitting, annually, a **non-linear logistic regression model** to the daily accumulated curve for each pollen type. This logistic function and its **different derivatives** are considered to calculate the **start_date** and **end_date** of the pollen season, based on the **asymptotes where the pollen amounts stabilize at the beginning and the end of the accumulated curve**. For more information about the method, see [Ribeiro et al. (2007)](https://www.ncbi.nlm.nih.gov/pubmed/18247462) and [Cunha et al. (2015)](https://link.springer.com/article/10.1007%2Fs10453-014-9345-3).

**Three different derivatives** may be used (`derivative` argument): 4, 5 or 6, representing **from more to less restrictive criteria** for defining the pollen season. This method may be **complemented with an optional reduction of the peak values** (`reduction = TRUE`), thus avoiding the great influence of extreme peaks. In this sense, **peak values will be cut at a certain level** that the user may select based on a percentile analysis of the peaks. For example, `red.level = 0.90` will cut all the peaks above the percentile 90.
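The peak-reduction step described above can be sketched in base R. This is a simplified illustration under the assumption that peaks are simply capped at the chosen percentile; the function name `reduce_peaks` is hypothetical and this is not the package's actual code:

```r
# Simplified sketch of the peak reduction (assumption: values above the
# red.level percentile are capped at it; not AeRobiology's actual code).
reduce_peaks <- function(pollen, red.level = 0.90) {
  cap <- quantile(pollen, red.level, na.rm = TRUE)  # e.g. percentile 90
  pmin(pollen, cap)                                 # cut all values above the cap
}

x <- c(2, 5, 100, 7, 3, 250, 4)
reduce_peaks(x, red.level = 0.90)  # the extreme peak of 250 is cut down
```

In the package itself you only need to pass `reduction = TRUE` and a `red.level` to `calculate_ps()`; the sketch is just to make the percentile-capping idea explicit.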
```{r echo=TRUE, results='hide', fig.keep='first', warning=FALSE}
pollen_season<-calculate_ps(munich_pollen[,c(1,6)], method = "logistic", derivative = 5, reduction=FALSE)
```

```{r echo = FALSE, fig.keep='all', warning=FALSE}
knitr::kable(pollen_season, format = "html", booktabs = TRUE )
```

In the previous case, the reduction wasn't carried out (`reduction=FALSE`) and all the peaks were conserved. We can cut some peaks and change the `derivative`:

```{r echo=TRUE, results='hide', fig.keep='first', warning=FALSE}
pollen_season<-calculate_ps(munich_pollen[,c(1,6)], method = "logistic", derivative = 6, reduction=TRUE, red.level = 0.8)
```

```{r echo = FALSE, fig.keep='all', warning=FALSE}
knitr::kable(pollen_season, format = "html", booktabs = TRUE)
```

As you can observe, the results are slightly different.

<div class="alert alert-info">**IMPORTANT:** if `export.plot=TRUE`, the plots are **EXPORTED** in the "plot_AeRobiology" folder.</div>

### "clinical" method

This method was proposed by [Pfaar et al. (2017)](https://www.ncbi.nlm.nih.gov/pubmed/27874202). It is based on expert consensus in relation to pollen exposure and its **relationship with allergic symptoms**, derived from the literature. Different periods may be defined by this method: the **pollen season**, the **high pollen season** and the **high pollen days**:

1) The **start_date** and **end_date** of the **pollen season** are defined as a **certain number of days (`n.clinical` argument) within a time window (`window.clinical` argument) exceeding a certain pollen threshold (`th.pollen` argument), whose summation is above a certain pollen sum (`th.sum` argument)**. <br> All these parameters are established for each pollen type according to Pfaar et al. (2017), and using the **`type`** argument these parameters may be automatically adjusted for the specific pollen types ("birch", "grasses", "cypress", "olive" or "ragweed").
Furthermore, **the user may change all parameters to make a customized definition of the pollen season**.

2) The **start_date** and **end_date** of the **high pollen season** are defined as **three consecutive days exceeding a certain pollen threshold (`th.day` argument)**.

3) The number of **high pollen days** is also calculated as the days exceeding this pollen threshold (`th.day`).

For more information about the method, see [Pfaar et al. (2017)](https://www.ncbi.nlm.nih.gov/pubmed/27874202).

Running the following example, the main pollen season will be established according to the birch requirements: more than 5 days within a week registering more than 10 pollen grains/m3 and whose sum exceeds 100 pollen grains/m3. The high pollen days are those exceeding 100 pollen grains/m3 (`n.clinical=5, window.clinical=7, th.pollen=10, th.sum=100, th.day=100`).

```{r echo=TRUE, results='hide', fig.keep='first', warning=FALSE}
pollen_season<-calculate_ps(munich_pollen[,c(1,6)], method = "clinical", type = "birch")
```

```{r echo = FALSE, fig.keep='all', warning=FALSE}
knitr::kable(pollen_season, format = "html", booktabs = TRUE)
```

The code above returns the same result as:

```{r echo=TRUE, results='hide', fig.keep='first', warning=FALSE}
pollen_season<-calculate_ps(munich_pollen[,c(1,6)], method = "clinical", n.clinical = 5, window.clinical = 7, th.pollen = 10, th.sum = 100, th.day = 100)
```

```{r echo = FALSE, fig.keep='all', warning=FALSE}
knitr::kable(pollen_season, format = "html", booktabs = TRUE)
```

We have summarized all these parameters under the argument `type = "birch"` to facilitate its application.

<div class="alert alert-info">**IMPORTANT :** if **`export.plot=TRUE`**, the plots are **EXPORTED** in the "plot_AeRobiology" folder.</div>

### "grains" method

This method was proposed by Galan et al. (2001), originally for olive pollen, though it has since been applied to other pollen types.
The **start_date** and **end_date** of the pollen season are defined as a **certain number of days (`window.grains` argument) exceeding a certain pollen threshold (`th.pollen` argument)**. For more information about the method, see [Galan et al. (2001)](https://link.springer.com/article/10.1007/s004840000081).

We want to establish the start of the main pollen season on the first day of 3 consecutive days with more than 2 pollen grains/m3, and the end on the last day on which these conditions are fulfilled:

```{r echo=TRUE, results='hide', fig.keep='first'}
calculate_ps(munich_pollen[,c(1,6)], method = "grains", window.grains = 3, th.pollen = 2 )
```

### "moving" method

<div class="alert alert-info">**This method is proposed for the first time by the authors of this package. We are developing a research paper explaining the method in detail.**</div>

The definition of the pollen season is based on the application of a **moving average** to the pollen series in order to obtain the general seasonality of the pollen curve, avoiding the great variability of the daily fluctuations. Thus, **the start_date and the end_date are established when the curve of the moving average reaches a given pollen threshold** (**`th.ma`** argument). The order of the moving average may also be customized by the user (**`man`** argument). By default, `man = 11` and `th.ma = 5`.

<p class="comment"> The idea of this method is to be able to calculate the start of the main pollen season when the season has not finished yet. Moreover, it allows establishing the start and end even if there is a large amount of missing data within the main pollen season. It is similar to the "grains" method, but uses the moving mean to avoid daily variability.</p>

You might understand it better by consulting the plots obtained.
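The idea can be sketched in a few lines of base R (a simplified illustration under our own assumptions, not the package's implementation; `moving_season` is a hypothetical helper):

```r
# Simplified sketch of the "moving" idea: smooth the daily series with a
# centered moving mean, then take the first and last day on which the
# smoothed curve reaches the threshold.
moving_season <- function(pollen, man = 11, th.ma = 5) {
  ma <- as.numeric(stats::filter(pollen, rep(1 / man, man), sides = 2))
  above <- which(!is.na(ma) & ma >= th.ma)
  if (length(above) == 0) return(c(start = NA, end = NA))
  c(start = min(above), end = max(above))
}

# Toy series: a bell-shaped "season" surrounded by pollen-free days
x <- c(rep(0, 20), dnorm(1:60, mean = 30, sd = 8) * 500, rep(0, 20))
moving_season(x, man = 7, th.ma = 4)
```

Because the moving mean is insensitive to single-day spikes or dips, the resulting start and end are much more stable than thresholding the raw series directly.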
<div class="alert alert-info">**IMPORTANT :** if **`export.plot=TRUE`**, the plots are **EXPORTED** in the "plot_AeRobiology" folder.</div>

Let's calculate it with a moving average of 7 days. We are going to establish the start and end of the main pollen season when the moving average reaches 4 pollen grains/m3:

```{r echo=TRUE, results='hide', fig.keep='first'}
calculate_ps(munich_pollen[,c(1,6)], method = "moving", man = 7, th.ma = 4)
```

## Advanced examples

### Southern Hemisphere & interannual types

As you may have noticed, we have been using the function from a "European" point of view: the calculations run from 1st January to 31st December. Researchers of the **Southern Hemisphere** are used to working with **interannual pollen seasons**. Don't worry, we haven't forgotten you!

1) You can work **from 1st June to 31st May** by means of the argument **`def.season="interannual"`**:

```{r echo=TRUE, results='hide', fig.keep='first'}
calculate_ps(munich_pollen[,c(1,3)], method = "moving", man = 7, th.ma = 4, def.season = "interannual")
```

<p class="comment"> In this method, the season belongs to the first year of the pair of years, i.e.: from June 2017 to May 2018 -> season "2017".</p>

2) You can **center the main pollen season on the average peak day** (182 days before and after the average date of the peak):

```{r echo=TRUE, results='hide', fig.keep='first'}
calculate_ps(munich_pollen[,c(1,6)], method = "percentage", perc=95, def.season = "peak")
```

<p class="comment"> In this last method, the season belongs to the year in which the average peak date - 182 days is located, i.e.: if the average peak date is in January 2013, the season is called "2012" in the data.frames.</p>

### Interpolation in detail

Pollen time series frequently have gaps with no data, which can be a problem for some methods of defining the pollen season and may even produce incorrect results.
For this reason, **by default a linear interpolation is carried out to complete these gaps before defining the pollen season (`interpolation = TRUE`)**. Additionally, the user may select other interpolation methods using the **`int.method` argument**: "lineal", "movingmean", "spline" or "tseries".

Some advanced users may have noticed that you can't directly use all the arguments of **<font size="3" face="verdana">interpollen()</font>** through **<font size="3" face="verdana">calculate_ps()</font>**. You are only able to select the interpolation method. Nevertheless, it is not impossible to use them:

1) **Use <font size="3" face="verdana">interpollen()</font>** with all the arguments you want and **store your interpolated database in an object**:

```{r echo=TRUE, results='hide', fig.keep='first'}
CompleteData<-interpollen(munich_pollen, method="spline", ndays=3, spar=0.7, plot = TRUE, maxdays = 3, result = "wide")
```

2) Then, **use "CompleteData" instead of "munich_pollen" (or your database) in the following steps**:

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_ps(CompleteData, pollen.type="Alnus", year=2013, fill.col = "orange", axisname = "AeRobiology custom units")
```

<p class="comment"> **Note**: You should set **`interpolation=FALSE`** in the functions you use after interpolating manually. In some cases it doesn't matter, but suppose you only want to interpolate gaps shorter than 3 days, as in the first step above: if you don't set `interpolation=FALSE` in the calculate_ps function, a second interpolation will be carried out using the default settings (gaps shorter than 30 days, "lineal" method). You would obtain a database with the gaps shorter than 3 days interpolated by the "spline" method and the rest by the "lineal" method. </p>

# Function <font face="verdana">pollen_calendar</font>

This function **calculates the pollen calendar from a historical database** of several pollen types using different designs.
The main arguments are:

* **data**: A `data.frame` with the first column in "Date" format and the rest of the columns in "numeric" format (pollen types).
* **method**: for choosing the method to generate the pollen calendar. The options are "heatplot", "violinplot" and "phenological".
* **n.types**: indicating the number of the most abundant pollen types shown in the pollen calendar.
* **start.month**: ranging 1-12, indicating the month (January-December) in which the pollen calendar must begin, see more details [here](https://CRAN.R-project.org/package=AeRobiology).
* **export.plot**: specifying whether a plot with the pollen calendar will be saved in the working directory. Other arguments have been incorporated in relation to the format used to export the plot (`export.format`).
* **result**: **"plot"** or **"table"**.

**Data exportation:**

- If `export.plot = TRUE`, the plot displaying the pollen calendar will be exported within the **plot_AeRobiology** directory created in the working directory.
- If `export.plot = TRUE` and `export.format = pdf`, a pdf file with the pollen calendar will be saved within the **plot_AeRobiology** directory, created in the working directory. Additional characteristics may be incorporated to the exportation as pdf file.
- If `export.plot = TRUE` and `export.format = png`, a png file with the pollen calendar will be saved within the **plot_AeRobiology** directory, created in the working directory. Additional characteristics may be incorporated to the exportation as png file.

**Exclusive arguments** for pollen calendars generated as **"heatplot"**:

* **period**: specifying the interval of time considered to generate the pollen calendar. The options are "weekly" and "daily".
* **method.classes**: indicating the method used to define the classes for classifying the average pollen concentrations. The options are "exponential" and "custom".
* **n.classes**: specifying the number of classes that will be used when `method.classes = custom`.
* **classes**: specifying a numeric vector with the desired thresholds to define the different classes.
* **color**: choosing different options such as "green", "red", "blue", "purple" or "black".

**Exclusive arguments** for pollen calendars generated as **"phenological"**:

* **perc1, perc2**: both ranging 0-100. These values represent the percentage of the total annual pollen included in the pollen season, removing (100-percentage)/2% of the total pollen before and after the pollen season. Two percentages must be specified to define the "main pollination period" (`perc1`) and the "early/late pollination" (`perc2`), based on the "phenological" method proposed by [Werchan et al. (2018)](https://link.springer.com/article/10.1007/s40629-018-0055-1).
* **th.pollen**: specifying the minimum threshold of the average pollen concentration that will be used to generate the pollen calendar. Days below this threshold will not be considered, and days above this threshold will be considered as "possible occurrence", as proposed by [Werchan et al. (2018)](https://link.springer.com/article/10.1007/s40629-018-0055-1). This argument also works for the "violinplot" pollen calendar.

<div class="alert alert-info">By default the database will be interpolated according to the **<font size="3" face="verdana">interpollen</font>** function, and other arguments of that function are also incorporated in the **<font size="3" face="verdana">pollen_calendar</font>** function.</div>

## "heatplot" method

This pollen calendar is constructed from the **daily or weekly averages of pollen concentrations** (depending on the preferences of the user, who may select "daily" or "weekly" as the **`period` argument**). These averages may then be classified into different categories following different methods selected by the user. An example of this pollen calendar may be consulted in [Rojo et al.
(2016)](https://link.springer.com/article/10.1007/s10661-016-5129-2). This method of designing pollen calendars is an **adaptation of the pollen calendar proposed by Spieksma (1991)**, who considered 10-day periods instead of daily or weekly periods.

First, we are going to generate a pollen calendar based on the **heatplot**, designed with the **green** color scale and constructed with **daily** averages:

```{r echo = TRUE, results='hide',fig.keep='all', warning=FALSE}
pollen_calendar(munich_pollen, method = "heatplot", period = "daily", color = "green", interpolation = FALSE)
```

In all cases, the table of averaged values used by the pollen calendar is created. This table can be visualized by setting the argument **`result = "table"`**. For example (an extract):

```{r echo = TRUE, fig.keep='all', warning=FALSE}
average_values<-pollen_calendar(munich_pollen, method = "heatplot", period = "daily", color = "green", interpolation = FALSE, result = "table")
knitr::kable(average_values[82:90, ], format = "html", booktabs = TRUE)
```

By default, the classes for the pollen calendar are defined according to the **exponential** method.
Nevertheless, the classes can be customized with the **`classes` argument** (together with `method.classes = "custom"`):

```{r echo = TRUE, results='hide',fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "heatplot", period = "daily", color = "red", method.classes = "custom", n.classes = 5, classes = c(5, 25, 50, 200), interpolation = FALSE)
```

In addition, for species whose pollen season occurs between two natural years, the start of the pollen calendar can be selected with the **`start.month` argument**:

```{r echo = TRUE, results='hide',fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "heatplot", period = "daily", color = "purple", method.classes = "custom", n.classes = 5, classes = c(5, 25, 50, 200), start.month = 11, na.remove = FALSE, interpolation = FALSE)
```

<p class="comment">**`NA` (no data) can be removed by using the `na.remove` argument.**</p>

We can also generate a pollen calendar based on the **heatplot** design with **weekly** averages:

```{r echo = TRUE, results='hide',fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "heatplot", period = "weekly", color = "blue", method.classes = "exponential", n.types = 4, y.start = 2011, y.end = 2014, interpolation = FALSE)
```

In this case, we have included other restrictive arguments such as **`n.types`**, limiting the number of pollen types, and **`y.start`** and **`y.end`**, limiting the period considered for the pollen calendar.

## "phenological" method

This pollen calendar is based on the phenological definition of the pollen season and adapted from the methodology proposed by [Werchan et al. (2018)](https://link.springer.com/article/10.1007/s40629-018-0055-1). After obtaining the **daily average pollen concentrations** for the most abundant pollen types, different pollination periods are calculated.
The **main pollination period** is calculated according to the percentage of the annual total pollen defined by the `perc1` argument (selected by the user, 80% by default; red). For example, if `perc1 = 80`, the beginning of the high season is marked when 10% of the annual value is reached; the end is selected when 90% is reached. The **early/late pollination period** is defined by the `perc2` argument (selected by the user, 99% of the total annual pollen by default; orange), i.e.: the start of this period will be when 0.5% is reached and the end when 99.5% is reached. For this kind of pollen calendar, the `th.pollen` argument defines the **possible occurrence** period, as adapted by [Werchan et al. (2018)](https://link.springer.com/article/10.1007/s40629-018-0055-1), considering the entire period between the first and the last day on which this pollen level is reached (yellow).

```{r echo = TRUE, results='hide',fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "phenological", n.types = 5, y.start = 2011, y.end = 2014, interpolation = FALSE)
```

Furthermore, different criteria can be customized:

```{r echo = TRUE, results='hide',fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "phenological", perc1 = 90, perc2 = 95, th.pollen = 5, interpolation = FALSE)
```

In this last case, the pollen calendar has been generated with more restrictive criteria for `perc1`, `perc2` and `th.pollen`.

## "violinplot" method

This pollen calendar is based on the **pollen intensity** and adapted from the pollen calendar published by [O'Rourke (1990)](https://link.springer.com/article/10.1007/BF02539105). First, the daily averages of the pollen concentrations are calculated, and then these averages are represented using a violin plot.
<div class="alert alert-info">The shape of the violin plot represents the pollen intensity of the pollen types in a relative way, i.e.: the values are calculated as **relative measurements** with respect to the most abundant pollen type in annual amounts. Therefore, this pollen calendar shows a relative comparison of the pollen intensity between pollen types, but **without scales and units**.</div>

```{r echo = TRUE, results='hide', fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "violinplot", y.start = 2012, y.end = 2015, interpolation = FALSE)
```

In addition, **`th.pollen`** can be established, specifying the minimum pollen concentration considered (E.g.: 10 pollen grains/m3):

```{r echo = TRUE, results='hide', fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "violinplot", th.pollen = 10, na.rm = FALSE, interpolation = FALSE)
```

# Functions <font face="verdana">iplot_pollen()</font> and <font face="verdana">iplot_years()</font>

These functions have been designed for a **quick view** of your data for discussion or interpretation. **Interactive plots are displayed**. This may be useful for group meetings or real-time presentation of results. The functions create a pop-up window in which you can select the pollen/spore type or the years you want to plot.

<div class="alert alert-info">**IMPORTANT :** To stop the real-time visualization and continue using the package you must click on the **"Stop" signal.**</div>

The function **<font size="3" face="verdana">iplot_pollen()</font>** plots the pollen data of one season. The arguments are:

* **data**: A `data.frame` object. This data.frame should include a first column in "Date" format and the rest of the columns in "numeric" format, each belonging to one pollen type.
* **year:** An `integer` value specifying the year to display. This is a mandatory argument.
The function **<font size="3" face="verdana">iplot_years()</font>** plots the data of one pollen type across several seasons. The arguments are:

* **data**: A `data.frame` object. This data.frame should include a first column in "Date" format and the rest of the columns in "numeric" format, each belonging to one pollen type.
* **pollen:** A `character` string with the name of the particle to show. This character must match the name of a column in the input database. This is a mandatory argument.

```{r echo = TRUE, results='hide',fig.keep='all', eval=FALSE}
iplot_pollen(munich_pollen, year = 2012)
iplot_years(munich_pollen, pollen = "Betula")
```

<p class="comment">**Note:** We are not able to plot interactive figures in this document. Please run the code above in your R session.</p>

# Function <font face="verdana">plot_summary()</font>

The function **<font size="3" face="verdana">plot_summary()</font>** plots the pollen data of several seasons, as well as the averaged pollen season over the study period. It is possible to plot the relative abundance per day and to smooth the pollen season by calculating a moving average. The main arguments are:

* **data**: A `data.frame` object. This data.frame should include a first column in "Date" format and the rest of the columns in "numeric" format, each belonging to one pollen type.
* **pollen:** A `character` string with the name of the particle to show. This character must match the name of a column in the input database. This is a mandatory argument.
* **mave:** An `integer` value specifying the order of the moving average applied to the data. By default, `mave = 1`.
* **normalized:** A `logical` value specifying whether the visualization shows real pollen data (`normalized = FALSE`) or the percentage that every day represents over the whole pollen season (`normalized = TRUE`). By default, `normalized = FALSE`.
* **axisname:** A `character` string specifying the title of the y axis.
By default, `axisname = "Pollen grains / m3"`
* Others...

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_summary(munich_pollen, pollen = "Betula")
```

In some cases the user may want to reduce the "noise" of the daily values by **calculating moving means** (E.g.: 5-day moving mean; `mave = 5`):

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_summary(munich_pollen, pollen = "Betula", mave = 5)
```

You might also be interested in representing as background the **percentage** that each day represents within the main pollen season (`normalized = TRUE`):

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_summary(munich_pollen, pollen = "Betula", mave = 5, normalized = TRUE)
```

<div class="alert alert-info">The **<font size="3" face="verdana">interpollen()</font> function** is integrated in this function. Consult the "help" section of this function to use it.</div>

# Function <font face="verdana">plot_normsummary()</font>

The function **<font face="verdana">plot_normsummary()</font>** has been designed to plot the amplitude of the pollen data over several seasons: the daily average pollen concentration over the study period, and the maximum and minimum pollen concentrations of each day over the study period. It is possible to plot the relative abundance per day and to smooth the pollen season by calculating a moving average. The main arguments are similar to those of **<font size="3" face="verdana">plot_summary()</font>**, but as a result **you will obtain the max-min range over the study period** instead of the values for every year.

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_normsummary(munich_pollen, pollen = "Betula", color.plot = "red")
```

The **maximum values** are marked in **red** and the **minimum values** in **white**. The **average** is the **black line**.
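The per-day summary behind this kind of plot can be sketched as follows (an illustration on simulated data; `daily_range` is a hypothetical helper, not a package function):

```r
# For each day of the year, compute the minimum, mean and maximum
# concentration across the study years.
daily_range <- function(dates, pollen) {
  doy <- as.integer(format(dates, "%j"))  # day of the year (1-366)
  data.frame(
    doy  = sort(unique(doy)),
    min  = as.numeric(tapply(pollen, doy, min,  na.rm = TRUE)),
    mean = as.numeric(tapply(pollen, doy, mean, na.rm = TRUE)),
    max  = as.numeric(tapply(pollen, doy, max,  na.rm = TRUE))
  )
}

set.seed(1)
dates  <- seq(as.Date("2011-01-01"), as.Date("2012-12-31"), by = "day")
pollen <- runif(length(dates), 0, 50)  # simulated daily concentrations
head(daily_range(dates, pollen))
```

Plotting `min` and `max` as a ribbon with `mean` as a line gives the same shape as the figure above.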
Of course, you can change the color (`color.plot` argument):

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_normsummary(munich_pollen, pollen = "Betula", color.plot = "green", mave = 5, normalized = TRUE)
```

<div class="alert alert-info">The **<font size="3" face="verdana">interpollen()</font> function** is integrated in this function. Consult the "help" section of this function to use it.</div>

# Function <font face="verdana">analyse_trend()</font>

The function **<font size="3" face="verdana">analyse_trend()</font>** has been created to calculate the main seasonal indexes of the pollen season ("Start Date", "Peak Date", "End Date" and "Pollen Integral"), as well as a **trend analysis** of these parameters over the seasons. It produces a summary dot plot showing the distribution of the main seasonal indexes over the years. The results can also be stored in two folders termed **"plot_AeRobiology"** and **"table_AeRobiology"**, which will be created in your working directory (only if `export.result=TRUE` & `export.plot=TRUE`).

You can decide which result is returned from the function by setting the argument **result**: *result = "table"* or *result = "plot"*.

The function allows you to decide whether to interpolate the data or not, by means of the argument **`interpolation`**; it also allows you to select the interpolation method with the argument **`int.method`**. Furthermore, it allows you to select the pollen season definition method with the argument **`method`**, and additional arguments of the function **<font size="3" face="verdana">calculate_ps()</font>**.

Some arguments about the visualization are:

* **split**: A `logical` argument. If `split = TRUE`, the plot is separated in two according to the nature of the variables (i.e. dates or pollen concentrations). **This argument was a solution to reduce the scale of the x-axis when the "total pollen" variable has a very high/low slope**. By default, `split = TRUE`.
* **quantil**: A `numeric` value (between 0 and 1) indicating the quantile of data to be displayed in the graphical output of the function. `quantil = 1` would show all the values; a lower quantile will exclude the most extreme values of the sample. **This argument was designed after noticing a common problem: when plotting the results with `split=FALSE`, some "outlier" results greatly increased the scale of the x-axis, making the rest of the results unreadable**. Our solution was to create an argument to exclude these outlier results so the main results can be observed at an appropriate scale. Furthermore, low vs. high values of the `quantil` argument (e.g. 0.5 vs. 1) may be used to separate parameters measured in different sampling units (e.g. dates and pollen concentrations); the `split` argument can also be used for this purpose. By default, `quantil = 0.75`. **This argument only works when `split=FALSE`.**
* **significant**: A `numeric` value indicating the significance level to be considered in the linear trend analyses. This p level is displayed in the graphical output of the function (as a number in the legend and as a black ring in the graphical representation). By default, `significant = 0.05`.

```{r echo = TRUE, results='hide',fig.keep='all'}
analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, result="plot")
```

Let's see what happens if we don't split the graphics:

```{r echo = TRUE, results='hide',fig.keep='all'}
analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, split = FALSE, quantil = 1, result="plot")
```

Now the results are less readable, but this kind of graphical representation may be interesting for some people.
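Conceptually, the `quantil` filtering works like the following base-R sketch (our own simplified reading of the argument's description, not the package's actual code; `trim_by_quantile` is a hypothetical helper):

```r
# Keep only the values whose absolute magnitude falls within the chosen
# quantile, excluding the most extreme ones from the display.
trim_by_quantile <- function(values, quantil = 0.75) {
  cutoff <- quantile(abs(values), probs = quantil, na.rm = TRUE)
  values[abs(values) <= cutoff]
}

slopes <- c(0.1, -0.2, 0.15, 5.3, -0.05)  # one extreme trend slope
trim_by_quantile(slopes, quantil = 0.75)  # the extreme 5.3 is excluded
```

With the outlier removed, the remaining slopes can share a readable x-axis scale.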
When plotting all the results together, it might be useful to exclude some outliers:

```{r echo = TRUE, results='hide',fig.keep='all'}
analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, split=FALSE, quantil = 0.5, result="plot")
```

As you can appreciate, a lot of points have been omitted.

You can also change the significance level as mentioned above. Why don't we try?:

```{r echo = TRUE, results='hide',fig.keep='all'}
analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, significant = 1, result = "plot")
```

Now everything is significant! Have a look at the numbers by setting *result = "table"* (the default output):

```{r echo = TRUE, results='hide',fig.keep='all'}
analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, significant = 1, result="table")
```

<div class="alert alert-info">The **<font size="3" face="verdana">interpollen()</font> function** is integrated in this function. Consult the "help" section of this function to use it.</div>

<div class="alert alert-info"> If **result = "plot"**, the function returns a list of objects of class *ggplot2*; if **result = "table"**, the function returns a *data.frame*. By default, **result = "table"**.</div>

# Function <font face="verdana">plot_trend()</font>

The function **<font size="3" face="verdana">plot_trend()</font>** has been created to calculate the main seasonal indexes of the pollen season ("Start Date", "Peak Date", "End Date" and "Pollen Integral") and to analyse their trends over the seasons. It produces plots showing the distribution of the main seasonal indexes over the years. The **results are stored in two folders termed "plot_AeRobiology" and "table_AeRobiology"**, which will be located in your working directory (only if `export.result=TRUE` & `export.plot=TRUE`).
```{r echo = TRUE, results='hide',fig.keep='all', eval=FALSE}
plot_trend(munich_pollen, interpolation = FALSE, export.plot = TRUE, export.result = TRUE)
```

<div class="alert alert-info">**NOTE:** The plots are not shown in your R environment. Because of the high number of graphs, they are stored in new folders created in your working directory (look where you have saved the R project and you will find the folders).</div>

<p class="comment">The **confidence interval** only appears if more than 6 dots are plotted. If not, a line crossing all the dots is plotted.</p>

<div class="alert alert-info">The **<font size="3" face="verdana">interpollen()</font> function** is integrated in this function. Consult the "help" section of this function to use it.</div>

# Function <font face="verdana">iplot_abundance()</font>

The function **<font size="3" face="verdana">iplot_abundance()</font>** generates a barplot based on the relative abundance in the air (as a percentage) of each pollen/spore type with respect to the total amounts. The main arguments are:

* **n.types:** A `numeric` (integer) value specifying the number of the most abundant pollen types that must be represented in the plot of the relative abundance. More detailed information about the selection of the considered pollen types may be consulted [here](https://CRAN.R-project.org/package=AeRobiology). The `n.types` argument will be 15 types by default.
* **y.start/y.end:** A `numeric` (integer) value specifying the period selected to calculate relative abundances of the pollen types (start year - end year). If `y.start` and `y.end` are not specified (`NULL`), the entire database will be used to generate the pollen calendar. The `y.start` and `y.end` arguments will be `NULL` by default.
* **col.bar:** A `character` string specifying the color of the bars to generate the graph showing the relative abundances of the pollen types. The `col.bar` argument will be "#E69F00" by default, but any color may be selected.
* **type.plot:** A `character` string specifying the type of plot used to show the relative abundance of the pollen types. The implemented types that may be used are: "static" (generates a static ggplot object) and "dynamic" (generates a dynamic plotly object).
* **exclude:** A `character` string vector with the names of the pollen types to be excluded from the plot.
* Others...

```{r echo = TRUE, results='hide',fig.keep='all'}
iplot_abundance(munich_pollen, interpolation = FALSE, export.plot = FALSE, export.result = FALSE)
```

Now we are going to reduce the number of types:

```{r echo = TRUE, results='hide',fig.keep='all'}
iplot_abundance(munich_pollen, interpolation = FALSE, export.plot = FALSE, export.result = FALSE, n.types = 3)
```

We can also select the abundance for only one year and change the color:

```{r echo = TRUE, results='hide',fig.keep='all'}
iplot_abundance(munich_pollen, interpolation = FALSE, export.plot = FALSE, export.result = FALSE, n.types = 3, y.start = 2011, y.end = 2011, col.bar = "#d63131")
```

Furthermore, we can make it interactive:

```{r echo = TRUE, results='hide',fig.keep='all'}
iplot_abundance(munich_pollen, interpolation = FALSE, export.plot = FALSE, export.result = FALSE, n.types = 3, y.start = 2011, y.end = 2011, col.bar = "#d63131", type.plot = "dynamic")
```

<div class="alert alert-info">The **<font size="3" face="verdana">interpollen()</font> function** is integrated in this function. Consult the "help" section of this function to use it.</div>

# Function <font face="verdana">iplot_pheno()</font>

The function **<font size="3" face="verdana">iplot_pheno()</font>** generates a boxplot based on phenological parameters (start_dates and end_dates), which are calculated from the estimation of the main parameters of the pollen season.
The main arguments are:

* **data**: A `data.frame` object including the general database where calculation of the pollen season must be applied in order to generate the phenological plot based on the start_dates and end_dates. This data.frame must include a first column in "Date" format and the rest of the columns in "numeric" format, each belonging to one pollen type.
* **method**: A `character` string specifying the method applied to calculate the pollen season and the main parameters. The implemented methods that can be used are: "percentage", "logistic", "moving", "clinical" or "grains". More detailed information about the different methods for defining the pollen season may be consulted in the calculate_ps function.
* **n.types**: A `numeric` (integer) value specifying the number of the most abundant pollen types that must be represented. More detailed information about the selection of the considered pollen types may be consulted [here](https://CRAN.R-project.org/package=AeRobiology). The `n.types` argument will be 15 by default.
* **type.plot**: A `character` string specifying the type of plot used to show the phenological plot. The implemented types that may be used are: "static" (generates a static ggplot object) and "dynamic" (generates a dynamic plotly object).
* Other arguments related to the **<font size="3" face="verdana">calculate_ps()</font> function**

```{r echo = TRUE, results='hide',fig.keep='all'}
iplot_pheno(munich_pollen, method= "percentage", perc=80, int.method="spline", n.types = 8)
```

We can change the method used to establish the main pollen season and the number of pollen types to show:

```{r echo = TRUE, results='hide',fig.keep='all'}
iplot_pheno(munich_pollen, method= "clinical", n.clinical = 3, int.method="spline", n.types = 4)
```

Furthermore, we can make the plot interactive to obtain more information by clicking on each object:

```{r echo = TRUE, results='hide',fig.keep='all', eval=FALSE}
iplot_pheno(munich_pollen, method= "clinical", n.clinical = 3, int.method="spline", n.types = 4, type.plot = "dynamic")
```

# Function <font face="verdana">plot_ps()</font>

The function **<font size="3" face="verdana">plot_ps()</font>** was designed to plot the main pollen season of a single pollen type and year. Some of the arguments are:

* **data**: A `data.frame` object including the general database where the interpolation must be performed. This data.frame must include a first column in "Date" format and the rest of the columns in "numeric" format. Each column must contain the information of one pollen type. It is not necessary to insert the missing gaps; the function will automatically detect them.
* **pollen.type**: A `character` string specifying the name of the pollen type which will be plotted. The name must be exactly the same as the one appearing in the column name. Mandatory argument with no default.
* **year**: A `numeric` (integer) value specifying the season to be plotted. The season does not necessarily fit a natural year. See calculate_ps for more details. Mandatory argument with no default.
* **days**: A `numeric` (integer) value specifying the number of days beyond each side of the main pollen season that will be represented. The `days` argument will be 30 by default.
* **fill.col**: A `character` string specifying the name of the color used to fill the main pollen season in the plot. See the ggplot function for more details. The `fill.col` argument will be "turquoise4" by default. It uses the ggplot color codes.
* **axisname**: A `character` string specifying the title of the y axis. By default, `axisname = expression(paste("Pollen grains / m"^"3"))`
* Others...

<div class="alert alert-info">The **<font size="3" face="verdana">calculate_ps</font> function** is integrated in this function. Consult [here](https://CRAN.R-project.org/package=AeRobiology) for more information. **Interpolation is mandatory** for this function. Due to technical encoding, you must use the interpollen function through the calculate_ps function. Consult calculate_ps for more information.</div>

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_ps(munich_pollen, pollen.type="Alnus", year=2011)
```

If you misspell some pollen type name, the function will tell you:

```{r echo = TRUE, results='hold', error=TRUE}
plot_ps(munich_pollen, pollen.type="Alnuscdscscr", year=2011)
```

**Let's test more arguments:**

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_ps(munich_pollen, pollen.type="Alnus", year=2013, method= "percentage", perc=95 ,int.method = "lineal")
```

As you may have noticed, the arguments `method`, `perc` and `int.method` come from the **<font size="3" face="verdana">calculate_ps</font> function**. What about changing the interpolation method?

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_ps(munich_pollen, pollen.type="Alnus", year=2013, method= "percentage", perc=95 ,int.method = "movingmean")
```

Do you want a larger scale?

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_ps(munich_pollen, pollen.type="Alnus", year=2013, days = 90)
```

Maybe a different color and y-axis name?
```{r echo = TRUE, results='hide',fig.keep='all'}
plot_ps(munich_pollen, pollen.type="Alnus", year=2013, fill.col = "orange", axisname = "AeRobiology custom units")
```

# Function <font face="verdana">plot_hour()</font>

Please keep in mind that the function *plot_hour()* is only available after package **version 2.0**. The input data must be a `data.frame` object in long format, where the first two columns are factors indicating the pollen type and the location. The third and fourth columns are POSIXct, showing the date with the hour: the third column is the beginning of the concentration period (*from*) and the fourth column is the end of the concentration period (*to*). The fifth column shows the concentrations of the different pollen types as numeric.

As an example, we will load a dataset of 3-hourly data from the ePIN network in Bavaria (Germany): data("POMO_pollen"). The dataset contains information of 3-hourly concentrations of pollen in the atmosphere of Munich (DEBIED) and Viechtach (DEVIEC) during the year 2018. Pollen types included: "Poaceae" and "Pinus". The data were obtained by the automatic pollen monitor BAA500 in Munich and Viechtach, supplied by the public ePIN Network supported by the Bavarian Government. The ePIN Network was built by Das Bayerische Landesamt für Gesundheit und Lebensmittelsicherheit (LGL) in collaboration with Zentrum Allergie und Umwelt (ZAUM).

```{r echo = FALSE, warning=FALSE, message=FALSE}
library(ggplot2)
library(dplyr)
```

```{r echo = TRUE}
data("POMO_pollen")
```

The function plots pollen data expressed in concentrations with a time resolution higher than 1 day (e.g. hourly, bi-hourly concentrations).
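The long-format structure described above can be illustrated by building a minimal `data.frame` with the same shape. This is only a sketch: the column names and values are invented for the example, so check `str(POMO_pollen)` for the real ones.

```r
# Minimal sketch of the long format expected by plot_hour()
# (column names and values are invented for illustration):
example_long <- data.frame(
  pollen   = factor(c("Poaceae", "Poaceae")),   # pollen type (factor)
  location = factor(c("DEBIED", "DEBIED")),     # station code (factor)
  from     = as.POSIXct(c("2018-06-01 00:00", "2018-06-01 03:00")),  # start of period
  to       = as.POSIXct(c("2018-06-01 03:00", "2018-06-01 06:00")),  # end of period
  value    = c(12.4, 30.1)                      # concentration (numeric)
)
str(example_long)
```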
If the argument *result = "plot"*, the function returns a list of objects of class **ggplot2**; if *result = "table"*, the function returns a **data.frame** with the hourly patterns.

```{r echo = TRUE, message=FALSE}
plot_hour(POMO_pollen)
```

To display a table we have to set the argument *result = "table"*:

```{r message=FALSE}
TO<-plot_hour(POMO_pollen, result ="table")
knitr::kable(TO[1:10,], caption = "3-Hourly patterns", row.names = FALSE, digits = 1, format = "html", booktabs = TRUE)
```

We can also split the different stations by setting the argument *locations = TRUE*:

```{r, message=FALSE, echo=TRUE}
plot_hour(POMO_pollen, locations = TRUE)
```

# Function <font face="verdana">plot_heathour()</font>

An alternative to *plot_hour()* is *plot_heathour()*, which shows a summary of all particles with a heatplot. The input data should have the same format as for *plot_hour()*.

```{r, message=FALSE, echo=TRUE}
plot_heathour(POMO_pollen)
```

You can also set the colors with the arguments *low.col*, *mid.col* and *high.col*. E.g.:

```{r, message=FALSE, echo=TRUE}
plot_heathour(POMO_pollen, low.col = "darkgreen", mid.col = "moccasin", high.col = "brown")
```

By setting *locations = TRUE* you can split the result by locations:

```{r, message=FALSE, echo=TRUE}
plot_heathour(POMO_pollen, locations = TRUE)
```
---
title: "Vignette of AeRobiology"
subtitle: "A Computational Tool for Aerobiological Data"
author: Jesus Rojo^[University of Castilla-La Mancha & ZAUM (TUM/Helmholtz Zentrum)] Antonio Picornell^[University of Malaga] Jose Oteros^[Zentrum Allergie und Umwelt - ZAUM (TUM/Helmholtz Zentrum)]
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Vignette of AeRobiology}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r include = FALSE}
knitr::opts_chunk$set(echo = FALSE, warning = FALSE, fig.width = 7)
Sys.setlocale(locale="english")
```

<style>
p.comment {
  background-color: #DBDBDB;
  padding: 5px;
  margin-left: 3px;
  border-radius: 5px;
  font-style: italic;
}
</style>

![AeRobiology R package](logo_aerobiology.jpg)

This package gathers different tools for managing aerobiological databases, elaborating the main calculations and visualizing the results. In a first step, data may be checked using tools for quality control and all missing gaps can be completed. Then, the main parameters of the pollen season can be calculated and represented graphically. Multiple graphical tools are available: pollen calendars, phenological plots, time series, tendencies, interactive plots, abundance plots...

<div class="alert alert-info">**Please, take into account that not all the arguments of each function are explained in this document.
This is a quick guide; further details can be consulted in the [official document](https://CRAN.R-project.org/package=AeRobiology) of the package.**</div>

The first thing you have to do is to install the package <font size="2" face="verdana">AeRobiology</font> from the CRAN repository and to load it.

```{r echo=FALSE}
library ("knitr")
```

```{r}
library ("AeRobiology")
```

```{r eval=FALSE, echo = TRUE}
install.packages("AeRobiology")
library (AeRobiology)
```

# Using the attached data

During this tutorial we are going to use the **pollen data from Munich** which are **integrated in the package**. This will allow you to follow the tutorial obtaining the same results. If you want to follow the tutorial using your own data, check the next section.

<p class="comment">**`munich_pollen`** is a data set containing information of daily concentrations of pollen in the atmosphere of Munich during the years 2010-2015. Pollen types included: "Alnus", "Betula", "Taxus", "Fraxinus", "Poaceae", "Quercus", "Ulmus" and "Urtica". Data were obtained at Munich (Zentrum Allergie und Umwelt, ZAUM) using a Hirst-type volumetric pollen trap (Hirst, 1952) following the standard methodology (VDI 4252-4, 2016). Some gaps have been added to test some functions of the package (e.g. <font size="2" face="verdana">quality_control()</font>, <font size="2" face="verdana">interpollen()</font>). <br> The data were obtained by the research team of Prof. Jeroen Buters (Christine Weil & Ingrid Weichenmeier) at the Zentrum Allergie und Umwelt ([ZAUM](https://www.zaum-online.de/)), directed by Prof. Carsten B. Schmidt-Weber.</p>

You can **load the data** in your working environment by typing:

```{r echo = TRUE}
data("munich_pollen")
```

# Loading your database from Excel

This section has been developed for people who are not familiar with R.
**If you have experience in loading databases in R, you might not be interested in this section and you can skip it.**

There are several functions to import data from Excel. Arbitrarily, we are going to use the package **<font size="3" face="verdana">readxl</font>**.

```{r eval=FALSE, echo = TRUE}
install.packages("readxl")
library (readxl)
```

Now we can import the data with the function **<font size="3" face="verdana">read_xlsx()</font>**. I strongly recommend you to **have your Excel file in the same folder in which you are working with R**. This will avoid long paths to your files and also will prevent broken paths when some files have been moved. If you are working with an old database, you might be interested in the **<font size="3" face="verdana">read_xls()</font>** function.

```{r eval=FALSE, echo = TRUE}
Mydata<-read_xlsx("C:/Users/Antonio/Desktop/Prueba Markdown/mydata.xlsx")
```

You can also **select the sheet** you want to import:

```{r eval=FALSE, echo = TRUE}
Mydata<-read_xlsx("C:/Users/Antonio/Desktop/Prueba Markdown/mydata.xlsx", sheet=2)
```

<div class="alert alert-info">**IMPORTANT**: You must remove all the symbols or letters from your database. **If you have missing gaps within your database, don't put "-" or "No Data" or any letter in the gaps**; keep the cell empty or remove the cell. The dates should be in a complete format (e.g. "14/02/2019"). There must be a first column with the date, followed by as many columns as pollen types, with only numeric values (**check the decimal separator: commas must be replaced by dots**).</div>

Now you should have your data loaded in the object `Mydata`. Let's check if there is any mistake:

```{r echo=TRUE, results='hold'}
str(munich_pollen)
```

**Note**: I'm using the `munich_pollen` database as example, but you should use `Mydata`. As you can see, the object is a `data.frame`, the first column is "Date" type and the rest are "num", which means numeric. This is the format your database **must** have.
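If the import did not leave your columns in those types, a conversion can be sketched as below. This is only an illustration under assumptions: it supposes your date column is called `Date` and was read as text in day/month/year order, so adapt the names and the format string to your own file.

```r
# Sketch: coerce an imported object to the required format
# (assumes the first column is called "Date" and was read as text
# in day/month/year order; adapt to your file).
Mydata <- as.data.frame(Mydata)
Mydata$Date <- as.Date(Mydata$Date, format = "%d/%m/%Y")
Mydata[, -1] <- lapply(Mydata[, -1], as.numeric)   # pollen counts as numeric
str(Mydata)                                        # check the resulting types
```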
Normally this is automatically recognized by the import function but, if not, check the functions <font size="3" face="verdana">**as.Date()**</font>, <font size="3" face="verdana">**as.numeric()**</font> and <font size="3" face="verdana">**as.data.frame()**</font>.

<div class="alert alert-info"><p>**Your database must have this format to start working with the package**. The functions are designed to warn you in case the format is not correct, but there may be some exceptions. **This is the most important step when using the package, and some strange errors reported by the functions can be solved by doing this**. Sometimes the date column has 2 different types simultaneously ("POSIXlt" and "POSIXct") and it might cause mistakes. You can use one or the other, but not both. Furthermore, it is strongly recommended to use "Date" format instead of "POSIXlt" or "POSIXct", although in theory there should not be problems when using them.</p><br>
<p>**Please, don't exasperate**. This is the **slowest step of using the package and the most important**. Once you have your data loaded, you only have to spend a few minutes on each function to get your results. If you have some unsolved problems, search on the internet: some other people must have had your same problem and it should be solved in some forum. If not, write us and we will help you as soon as possible: [email protected]</p></div>

<p class="comment">If you want to import your data from **csv files**, you might be interested in the **<font size="3" face="verdana">read.csv()</font>** function.</p>

# Function <font face="verdana">quality_control()</font>

**This function was designed to check the quality of an historical database of several pollen types**. Since many of the quality requirements depend on the criteria selected to establish the main pollen season, this function has the **<font size="3" face="verdana">calculate_ps()</font> function integrated**.
You can insert arguments of calculate_ps() in this function to select the method you want for calculating the main pollen season. The details of calculate_ps() can be consulted later in this same document.

<p class="comment">If **result = "plot"**, this function returns a **graphical resume** of the quality of the database, marking with different red intensities the "weak points" of it, and if **result = "table"**, a `data.frame` with the detailed reasons of their "weakness".</p>

**It establishes the quality according to 5 different criteria**:

* **It checks if the main pollen season cannot be calculated according to the minimal requirements:** lack of data for this pollen type and year. This criterion appears as **"Complete"** in the generated data.frame.
* **If the start, end or peak dates of the main pollen season have been interpolated** (or a date near them, specified by the **`int.window`** argument). It is based on the following premise: if this day or a nearby date is missing/interpolated, the date selected as start/peak/end might not be the real one. These criteria appear as **"Start"**, **"Peak"** and **"End"** in the generated data.frame.
* **The percentage of missing data within the main pollen season.** It calculates the number of days which have been interpolated by the interpollen() function and their percentage of the main pollen season. If a high percentage of the main pollen season has been interpolated, the information of this season might not be reliable. The maximal percentage allowed can be specified by the **`perc.miss`** argument. This filter appears as **"Comp.MPS"** in the generated data.frame.

When running the function, it gives you a **graphical resume** with different color intensities depending on the risk of including each pollen/spore type for a concrete year, and a **data.frame** with detailed information about the reasons for this evaluation.
<div class="alert alert-info">If **result = "plot"**, the function returns a list of objects of class *ggplot2*; if **result = "table"**, the function returns a *data.frame*. By default, **result = "table"**.</div>

```{r echo=TRUE, results='hide', fig.keep='first'}
QualityControl<-quality_control(munich_pollen, result = "table")
```

If the filter has been successfully passed, it appears as `TRUE` in the generated `data.frame`. If not, it appears as `FALSE`.

```{r echo=TRUE, results='hold'}
head(QualityControl)
```

```{r echo=TRUE, fig.keep='first', results='hide'}
quality_control(munich_pollen, result = "plot")
```

There are lots of arguments for this function. You can consult them in the "help" section of the function in R or in the [official document](https://CRAN.R-project.org/package=AeRobiology) of the package. Let's suppose we want to check the quality of our database, but we want to calculate it for an **80% defined main pollen season** (arguments **`ps.method`** and **`perc`**). We don't want to take into account years with missing data less than **4 days** away from the start, peak or end dates (argument **`int.window`**). Furthermore, we only want to exclude years with more than **50% of interpolated data** within the main pollen season (argument **`perc.miss`**):

```{r echo=TRUE, results='hide', fig.keep='first'}
quality_control(munich_pollen, int.window = 4, perc.miss = 50, ps.method = "percentage", perc = 80)
```

**As you may have noticed, there are several differences between this result and the previous one.**

<div class="alert alert-info">The **<font size="3" face="verdana">interpollen()</font> function** is integrated in this function. Consult the "help" section of that function to use it.</div>

# Function <font face="verdana">interpollen()</font>

The function **<font size="3" face="verdana">interpollen()</font>** was designed to complete all the missing gaps within a database in just one step.
All the gaps of each pollen/spore type are completed simultaneously. **There are different methods to do so, which may be more or less appropriate according to the particularities of your database.** The arguments are:

* **data**: A `data.frame` object. This data.frame should include a first column in "Date" format and the rest of the columns in "numeric" format, belonging to each pollen/spore type by column.
* **method**: A `character` string specifying the method applied to calculate and generate the missing pollen data. The implemented methods that can be used are: **"lineal"**, **"movingmean"**, **"spline"**, **"tseries"** or **"neighbour"**. The method argument will be "lineal" by default.
* **maxdays**: A `numeric (integer)` value specifying the maximum number of consecutive days with missing data that the algorithm is going to interpolate. If the gap is bigger than the argument value, the gap will not be interpolated. Not valid with the <font size="2" face="verdana">"tseries"</font> method. The <font size="2" face="verdana">maxdays</font> argument will be 30 by default. This argument might be very interesting to avoid long interpolated gaps.
* **plot**: A `logical` argument. If `TRUE`, graphical previews of the input database will be plotted at the end of the interpolation process. All the interpolated gaps will be marked in red. The `plot` argument will be `TRUE` by default.
* Others...

```{r echo=TRUE, results='hide', fig.keep='first'}
Interpolated<-interpollen(munich_pollen[,c(1,6)], method="lineal", plot = TRUE, result = "long")
```

As you can see above, when **result = "long"**, **the missing gaps which have been completed by the algorithm are marked in red.** It displays similar graphs for each pollen/spore type within your database. Furthermore, the **"Interpolated"** data.frame is produced in "long" format with a new column indicating if each datum has been interpolated or not (1 means interpolated and 0 means original data).
```{r echo=TRUE, results='hold'}
head(Interpolated)
```

If you want to integrate all the new data in your database, **you can assign the function to a new object** and specify the argument **result = "wide"**:

```{r echo=TRUE, results='hide'}
CompleteData<-interpollen(munich_pollen[,c(1,6)], method="lineal", plot = FALSE, result = "wide")
```

Now your database without gaps appears as "CompleteData". **As you might have noticed, if you add `plot=FALSE`, the graphs are not shown.** You don't have to do this before applying another function of this package: the **<font size="3" face="verdana">interpollen()</font> function has been integrated in the main functions of the package**, so you can choose whether you want your data to be completed or not by a specific argument in each function. E.g.:

```{r echo=TRUE, eval=FALSE}
calculate_ps(munich_pollen, method="percentage", interpolation=TRUE, int.method = "lineal", plot = F)
```

<div class="alert alert-info">**FREQUENT ERROR:** Sometimes when applying interpollen you receive the following error: **"Error in rep(NA, ncasos): invalid 'times' argument"**. This is due to a mistake in your database: **you have some dates repeated or disorganized**. The algorithm of `interpollen()` searches for correlative dates and, when they are not consecutive natural days, it carries out the interpolation. When the second date is earlier than the previous one, it reports an error. You have to correct these issues in your database before applying the interpollen function.</div>

## Methods of interpolation

In this function there are **5 different methods to interpolate missing gaps**. Each method can be more appropriate than the others under some specific circumstances.
Interpolated data are not real data, but including such estimations can reduce the error of some calculations such as the main pollen season by percentage or a pollen calendar. I.e.: if you keep the gap, these days would count as 0 for the calculations and this would entail a bigger error than estimating the pollen concentrations for these days. Obviously, it will also entail a bigger error than having the real data.

### <font face="verdana">"lineal"</font> method

As mentioned above, there are many methods to interpolate your missing data. The simplest one was shown in the previous example: "lineal". **It traces a straight line between the extremes of each gap.** The pollen/spore concentration of the missing days is calculated according to this line. This method may be appropriate for dates without pollen/spores or for small gaps. **Nevertheless, there are other methods which can be more effective during the pollen season.**

### <font face="verdana">"movingmean"</font> method

This method calculates the moving mean of the daily pollen concentrations. The gaps are filled by **calculating the moving mean of the concentrations for each particular day.** The window of the moving mean is centered on this day and its size is the result of multiplying the gap size by the **`factor` argument**. E.g.: if you have a gap of 3 days and your factor is 2, the gaps are replaced by the value of a moving mean of 6 days (3 x 2) centered on this particular day, not taking into account the missing days for the mean value.
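The window arithmetic just described can be sketched in plain R. This is a simplified illustration of the idea, not the package's internal code; the function and variable names are invented for the example.

```r
# Simplified sketch of the "movingmean" idea (not AeRobiology's internal code):
# fill each NA with the mean of a window of size gap_size * factor
# centered on the missing day, ignoring the missing days themselves.
fill_movingmean <- function(x, factor = 2) {
  gaps <- rle(is.na(x))                      # runs of consecutive NAs
  ends <- cumsum(gaps$lengths)
  starts <- ends - gaps$lengths + 1
  for (i in which(gaps$values)) {
    half <- ceiling(gaps$lengths[i] * factor / 2)  # 3-day gap, factor 2 -> 6-day window
    for (day in starts[i]:ends[i]) {
      win <- x[max(1, day - half):min(length(x), day + half)]
      x[day] <- mean(win, na.rm = TRUE)      # moving mean ignoring missing days
    }
  }
  x
}

fill_movingmean(c(10, 12, NA, NA, NA, 20, 22))
```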
Furthermore, it is a dynamic function: **for each gap of the database, the window size of the moving mean changes depending on the gap size.**

```{r echo=TRUE, results='hide', fig.keep='first'}
i<-interpollen(munich_pollen[,c(1,6)], method="movingmean", factor = 2, plot = TRUE)
```

### <font face="verdana">"spline"</font> method

This method carries out the interpolation by tracing a **polynomial function to link both extremes of the gap.** The polynomial function is chosen according to the equation best fitting the days previous and posterior to the gap (second, third, fourth degree...). The number of days on each side of the gap which will be taken into account for calculating the spline regression is specified by the **`ndays` argument**. The smoothness of the adjustment can be specified by the **`spar` argument**.

```{r echo=TRUE, results='hide', fig.keep='first'}
i2<-interpollen(munich_pollen[,c(1,6)], method="spline", ndays=3, spar=0.7, plot = FALSE)
```

**Note**: By changing the `ndays` and `spar` arguments, very different results can be obtained. E.g.:

```{r echo=TRUE, results='hide', fig.keep='first'}
i3<-interpollen(munich_pollen[,c(1,6)], method="spline", ndays=5, spar=0.2, plot = TRUE)
```

### <font face="verdana">"tseries"</font> method

If you have long time series of data, this might be the most suitable method for you. This method **analyses the time series of the pollen/spore database and performs a seasonal-trend decomposition based on LOESS** ([Cleveland et al., 1990](http://www.nniiem.ru/file/news/2016/stl-statistical-model.pdf)). It extracts the **seasonality of the historical database** and uses it to predict the missing data by performing a linear regression with the target year.

```{r echo=TRUE, results='hide', fig.keep='last'}
i4<-interpollen(munich_pollen, method="tseries", plot = TRUE)
```

**Seasonality is represented in grey, real data in black and interpolated data in red.** This method improves with long time series of years.
Strange results may be caused by a lack of years or small sampling periods.

### <font face="verdana">"neighbour"</font> method

**Other nearby stations provided by the user are used to interpolate the missing data of the target station.** First of all, a Spearman correlation is performed between the target station and the neighbour stations to discard the neighbour stations with a correlation coefficient smaller than the **`mincorr` value**. For each gap, a linear regression is performed between the neighbour stations and the target station to determine the equation which converts the pollen concentrations of the neighbour stations into the pollen concentrations of the target station. Only neighbour stations without any missing data during the gap period are taken into account for each gap.

<p class="comment">You can include 4 different databases of neighbour stations with the arguments **`data2`, `data3`, `data4` and `data5`.** **The format of these databases must be the same as the original one and the names of the pollen/spore types must be exactly the same.**</p>

With the **`mincorr` argument** you can specify the minimal correlation coefficient (Spearman correlation) that the neighbour stations must have with the target station to be taken into account for a concrete gap. This process is completely independent for each gap of each pollen type.

# Function <font face="verdana">calculate_ps()</font>

The function **<font size="3" face="verdana">calculate_ps()</font>** is one of the core functions of the AeRobiology package. It was designed to **calculate the main parameters of the pollen season** with regard to phenology and pollen intensity from a historical database of several pollen types. The function can use the most common methods to define the main pollen season.

<div class="alert alert-info">**Please, be patient:** This function has tons of arguments. Some of them are only useful for specific situations. You can use the function without knowing all of them.
Nevertheless, **we decided to include all the possible arguments in order to provide specific tools for the advanced users**.</div>

**This function has several arguments but don't worry, we are going to perform examples later**. The main arguments are:

* **data**: A `data.frame` object including the general database where the calculation of the pollen season must be applied. This data.frame must include a first column in "Date" format and the rest of columns in "numeric" format, belonging to each pollen type by column.
* **method**: A `character` string specifying the method applied to calculate the pollen season and the main parameters. The implemented methods that can be used are: "percentage", "logistic", "moving", "clinical" or "grains". More detailed information about the different methods for defining the pollen season may be consulted [here](https://CRAN.R-project.org/package=AeRobiology) or later in this same document.
* **th.day**: A `numeric` value. The number of days whose pollen concentration is bigger than this threshold is calculated for each year and pollen type. This value will be obtained in the results of the function. The `th.day` argument will be 100 by default.
* **perc**: A `numeric` value ranging 0-100. This argument is valid only for `method = "percentage"`. This value represents the percentage of the total annual pollen included in the pollen season, removing (100-perc)/2% of the total pollen before and after the pollen season. The `perc` argument will be 95 by default.
* **def.season**: A `character` string specifying the method for selecting the best annual period to calculate the pollen season. The pollen seasons may occur within the natural year or between two years. The implemented options that can be used are: "natural", "interannual" or "peak". The `def.season` argument will be "natural" by default.
More detailed information about the different methods for selecting the best annual period to calculate the pollen season may be consulted [here](https://CRAN.R-project.org/package=AeRobiology).
* **reduction**: A `logical` value. This argument is valid only for the "logistic" method. If `FALSE`, the reduction of the pollen data is not applicable. If `TRUE`, a reduction of the peaks above a certain level (`red.level` argument) will be carried out before the definition of the pollen season. The `reduction` argument will be `FALSE` by default. More detailed information about the reduction process may be consulted [here](https://CRAN.R-project.org/package=AeRobiology).
* **red.level**: A `numeric` value ranging 0-1 specifying the percentile used as the level to reduce the peaks of the pollen series before the definition of the pollen season. This argument is valid only for the "logistic" method. The `red.level` argument will be 0.90 by default, specifying the percentile 90.
* **derivative**: A `numeric (integer)` value belonging to the options 4, 5 or 6, specifying the derivative that will be applied to calculate the asymptotes which determine the pollen season using the "logistic" method. This argument is valid only for the "logistic" method. The `derivative` argument will be 5 by default. More information may be consulted [here](https://CRAN.R-project.org/package=AeRobiology).
* **man**: A `numeric (integer)` value specifying the order of the moving average applied to calculate the pollen season using the "moving" method. This argument is valid only for the "moving" method. The `man` argument will be 11 by default.
* **th.ma**: A `numeric` value specifying the threshold used in the "moving" method for defining the beginning and the end of the pollen season. This argument is valid only for the "moving" method. The `th.ma` argument will be 5 by default.
* **n.clinical**: A `numeric (integer)` value specifying the number of days which must exceed a given threshold (`th.pollen` argument) for defining the beginning and the end of the pollen season. This argument is valid only for the "clinical" method. The `n.clinical` argument will be 5 by default.
* **window.clinical**: A `numeric (integer)` value specifying the time window during which the conditions must be evaluated for defining the beginning and the end of the pollen season using the "clinical" method. This argument is valid only for the "clinical" method. The `window.clinical` argument will be 7 by default.
* **window.grains**: A `numeric (integer)` value specifying the time window during which the conditions must be evaluated for defining the beginning and the end of the pollen season using the "grains" method. This argument is valid only for the "grains" method. The `window.grains` argument will be 5 by default.
* **th.pollen**: A `numeric` value specifying the threshold that must be exceeded during a given number of days (`n.clinical` or `window.grains` arguments) for defining the beginning and the end of the pollen season using the "clinical" or "grains" methods. This argument is valid only for the "clinical" or "grains" methods. The `th.pollen` argument will be 10 by default.
* **th.sum**: A `numeric` value specifying the pollen threshold that must be exceeded by the sum of daily pollen during a given number of days (`n.clinical` argument) exceeding a given daily threshold (`th.pollen` argument) for defining the beginning and the end of the pollen season using the "clinical" method. This argument is valid only for the "clinical" method. The `th.sum` argument will be 100 by default.
* **type**: A `character` string specifying the parameters considered according to a specific pollen type for calculating the pollen season using the "clinical" method. The implemented pollen types that may be used are: "birch", "grasses", "cypress", "olive" or "ragweed".
As a result of selecting any of these pollen types, the parameters `n.clinical`, `window.clinical`, `th.pollen` and `th.sum` will be automatically adjusted for the "clinical" method. If no pollen type is specified (`type = "none"`), these parameters must be provided by the user. This argument is valid only for the "clinical" method. The `type` argument will be "none" by default.
* **interpolation**: A `logical` value. If `FALSE` the interpolation of the pollen data is not applicable. If `TRUE` an interpolation of the pollen series will be applied to complete the gaps with no data before the calculation of the pollen season. The `interpolation` argument will be `TRUE` by default. More detailed information about the interpolation method may be consulted [here](https://CRAN.R-project.org/package=AeRobiology).
* **int.method**: A `character` string specifying the method selected to apply the interpolation in order to complete the pollen series. The implemented methods that may be used are: "lineal", "movingmean", "spline" or "tseries". The `int.method` argument will be "lineal" by default.
* **maxdays**: A `numeric (integer)` value specifying the maximum number of consecutive days with missing data that the algorithm is going to interpolate. If the gap is bigger than this value, the gap will not be interpolated. Not valid with `int.method = "tseries"`. The `maxdays` argument will be 30 by default.
* **export.plot**: A `logical` value specifying if a set of plots based on the definition of the pollen season will be saved in the working directory. If `FALSE` graphical results will not be saved. If `TRUE` a pdf file for each pollen type, showing graphically the definition of the pollen season for each studied year, will be saved within the plot_AeRobiology directory created in the working directory. The `export.plot` argument will be `FALSE` by default.
* **export.result**: A `logical` value specifying if an Excel file including all parameters for the definition of the pollen season will be saved in the working directory. If `FALSE` the results will not be exported. If `TRUE` the results will be exported as an xlsx file, including all parameters calculated from the definition of the pollen season, within the table_AeRobiology directory created in the working directory. The `export.result` argument will be `FALSE` by default.
* **plot**: A `logical` value specifying if the plots are generated in the plot history. The `plot` argument will be `TRUE` by default.
* **result**: A `character` string specifying the output of the function. The implemented outputs that may be obtained are: `"table"` and `"list"`. The `result` argument will be `"table"` by default.

<div class="alert alert-info">**IMPORTANT:** if `export.plot=TRUE`, the plots are **EXPORTED** in the "plot_AeRobiology" folder.</div>

## Methods of calculating the Main Pollen Season

This function allows the pollen season to be calculated using **five different methods**, which are described below.
After calculating the **start_date**, **end_date** and **peak_date** of the pollen season, the rest of the **parameters** are calculated:

* **type**: pollen type
* **seasons**: year of the beginning of the season
* **st.dt**: start_date (date)
* **st.jd**: start_date (day of the year)
* **en.dt**: end_date (date)
* **en.jd**: end_date (day of the year)
* **ln.ps**: length of the season
* **sm.tt**: total sum
* **sm.ps**: pollen integral
* **pk.val**: peak value
* **pk.dt**: peak_date (date)
* **pk.jd**: peak_date (day of the year)
* **ln.prpk**: length of the pre_peak period
* **sm.prpk**: pollen integral of the pre_peak period
* **ln.pspk**: length of the post_peak period
* **sm.pspk**: pollen integral of the post_peak period
* **daysth**: number of days with more than 100 pollen grains
* **st.dt.hs**: start_date of the High pollen season (date, only for the clinical method)
* **st.jd.hs**: start_date of the High pollen season (day of the year, only for the clinical method)
* **en.dt.hs**: end_date of the High pollen season (date, only for the clinical method)
* **en.jd.hs**: end_date of the High pollen season (day of the year, only for the clinical method)

<div class="alert alert-info">**IMPORTANT:** If `export.result=TRUE`, these objects will be exported as an xlsx file within the "table_AeRobiology" directory created in your working directory. This Excel file will have a sheet for each pollen type and a last sheet with the legend of all the abbreviations and the method selected to calculate them.
**We strongly recommend to use `export.result=TRUE`.**</div>

Although the results can be exported to the working directory as an xlsx file, **they can also be assigned to an object and visualized**, for example (an extract):

```{r echo = TRUE, results='hide', fig.keep='last', warning=FALSE}
pollen_season <- calculate_ps(munich_pollen)
```

```{r echo = FALSE, fig.keep='all', warning=FALSE}
knitr::kable(pollen_season[24:31, ], format = "html", booktabs = TRUE)
```

The result is also plotted by default (`plot = TRUE`).

```{r echo = TRUE, fig.keep='last', warning=FALSE}
calculate_ps(munich_pollen[,c(1,6)], plot = TRUE)
```

### "percentage" method

This is a commonly used method for defining the pollen season based on the **elimination of a certain percentage at the beginning and the end of the pollen season** ([Nilsson and Persson, 1981](https://www.tandfonline.com/doi/abs/10.1080/00173138109427661); [Andersen, 1991](https://www.tandfonline.com/doi/pdf/10.1080/00173139109427810)). For example, if the pollen season is based on 95% of the total annual pollen (`perc = 95`), the start_date of the pollen season is marked as the day on which 2.5% of the total pollen is registered, and the end_date as the day on which 97.5% of the total pollen is registered.

```{r echo=TRUE, results='hide', fig.keep='first', eval=FALSE}
calculate_ps(munich_pollen[,c(1,6)], method = "percentage", perc = 90, export.result = TRUE)
```

In this case we have calculated the main pollen season based on 90% of the total annual pollen. **Results are stored in the "table_AeRobiology" folder since `export.result=TRUE`**.
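The percentage logic can be pictured with a minimal base-R sketch on a toy daily series of our own (an illustration of the idea, not the package's implementation): with `perc = 90`, 5% of the annual total is trimmed from each end of the cumulative curve.

```r
# Toy sketch of the "percentage" idea (hypothetical counts, not munich_pollen)
pollen <- c(0, 0, 1, 3, 8, 20, 35, 50, 40, 25, 10, 4, 2, 1, 0)
perc   <- 90
cum    <- cumsum(pollen) / sum(pollen) * 100  # cumulative percentage curve
tail_p <- (100 - perc) / 2                    # 5% removed at each end
start  <- which(cum >= tail_p)[1]             # first day reaching 5%
end    <- which(cum >= 100 - tail_p)[1]       # first day reaching 95%
c(start = start, end = end)                   # start = 5, end = 11 here
```

For this toy series the season runs from day 5 to day 11 of the vector.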
You can select different methods of interpolation, or even not interpolate the gaps:

```{r echo=TRUE, results='hide', fig.keep='first'}
calculate_ps(munich_pollen[,c(1,6)], method = "percentage", perc = 90, export.result = FALSE, int.method = "spline")
```

```{r echo=TRUE, results='hide', fig.keep='first'}
calculate_ps(munich_pollen[,c(1,6)], method = "percentage", perc = 75, export.result = FALSE, interpolation = FALSE)
```

<div class="alert alert-info">**IMPORTANT:** if `export.plot=TRUE`, the plots are **EXPORTED** in the "plot_AeRobiology" folder.</div>

### "logistic" method

This method was developed by Ribeiro et al. (2007) and modified by Cunha et al. (2015). It is based on fitting, annually, a **non-linear logistic regression model** to the daily accumulated curve for each pollen type. This logistic function and its **different derivatives** are used to calculate the **start_date** and **end_date** of the pollen season, based on the **asymptotes where pollen amounts stabilize at the beginning and the end of the accumulated curve**. For more information about the method, see [Ribeiro et al. (2007)](https://www.ncbi.nlm.nih.gov/pubmed/18247462) and [Cunha et al. (2015)](https://link.springer.com/article/10.1007%2Fs10453-014-9345-3). **Three different derivatives** may be used (`derivative` argument): 4, 5 or 6, representing **from more to less restrictive criteria** for defining the pollen season. This method may be **complemented with an optional procedure for reducing the peak values** (`reduction = TRUE`), thus avoiding the effect of the great influence of extreme peaks. In this sense, **peak values will be cut below a certain level** that the user may select based on a percentile analysis of the peaks. For example, `red.level = 0.90` will cut all peaks above the 90th percentile.
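The peak reduction can be pictured with a minimal base-R sketch (our own illustration of percentile capping on made-up values, not the package's code):

```r
# Sketch of peak reduction before the logistic fit (illustrative values)
pollen    <- c(2, 5, 7, 120, 9, 6, 300, 8, 4, 3)  # series with extreme peaks
red.level <- 0.90
cap       <- quantile(pollen, red.level)          # the percentile-90 level
reduced   <- pmin(pollen, cap)                    # values above it are cut down
```

Here only the most extreme value (300) exceeds the percentile-90 level and is capped; the rest of the series is untouched.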
```{r echo=TRUE, results='hide', fig.keep='first', warning=FALSE}
pollen_season <- calculate_ps(munich_pollen[,c(1,6)], method = "logistic", derivative = 5, reduction = FALSE)
```

```{r echo = FALSE, fig.keep='all', warning=FALSE}
knitr::kable(pollen_season, format = "html", booktabs = TRUE)
```

In the previous case, the reduction wasn't carried out (`reduction=FALSE`) and all the peaks were conserved. We can cut some peaks and change the `derivative`:

```{r echo=TRUE, results='hide', fig.keep='first', warning=FALSE}
pollen_season <- calculate_ps(munich_pollen[,c(1,6)], method = "logistic", derivative = 6, reduction = TRUE, red.level = 0.8)
```

```{r echo = FALSE, fig.keep='all', warning=FALSE}
knitr::kable(pollen_season, format = "html", booktabs = TRUE)
```

As you can observe, the results are slightly different.

<div class="alert alert-info">**IMPORTANT:** if `export.plot=TRUE`, the plots are **EXPORTED** in the "plot_AeRobiology" folder.</div>

### "clinical" method

This method was proposed by [Pfaar et al. (2017)](https://www.ncbi.nlm.nih.gov/pubmed/27874202). It is based on expert consensus in relation to pollen exposure and its **relationship with allergic symptoms** derived from the literature. Different periods may be defined by this method: the **pollen season**, the **high pollen season** and the **high pollen days**:

1) The **start_date** and **end_date** of the **pollen season** are defined as a **certain number of days (`n.clinical` argument) within a time window (`window.clinical` argument) exceeding a certain pollen threshold (`th.pollen` argument) whose summation is above a certain pollen sum (`th.sum` argument)**. <br> All these parameters are established for each pollen type according to Pfaar et al. (2017), and using the **`type`** argument these parameters may be automatically adjusted for the specific pollen types ("birch", "grasses", "cypress", "olive" or "ragweed").
Furthermore, **the user may change all parameters to create a customized definition of the pollen season**.
2) The **start_date** and **end_date** of the **high pollen season** are defined as **three consecutive days exceeding a certain pollen threshold (`th.day` argument)**.
3) The number of **high pollen days** is also calculated as the days exceeding this pollen threshold (`th.day`).

For more information about the method, see [Pfaar et al. (2017)](https://www.ncbi.nlm.nih.gov/pubmed/27874202).

Running the following example, the main pollen season will be established according to the birch requirements: more than 5 days within a week in which more than 10 pollen grains/m3 are registered and whose sum exceeds 100 pollen grains/m3. The high pollen days are those which exceed 100 pollen grains/m3 (`n.clinical=5, window.clinical=7, th.pollen=10, th.sum=100, th.day=100`).

```{r echo=TRUE, results='hide', fig.keep='first', warning=FALSE}
pollen_season <- calculate_ps(munich_pollen[,c(1,6)], method = "clinical", type = "birch")
```

```{r echo = FALSE, fig.keep='all', warning=FALSE}
knitr::kable(pollen_season, format = "html", booktabs = TRUE)
```

The code above returns the same result as:

```{r echo=TRUE, results='hide', fig.keep='first', warning=FALSE}
pollen_season <- calculate_ps(munich_pollen[,c(1,6)], method = "clinical", n.clinical = 5, window.clinical = 7, th.pollen = 10, th.sum = 100, th.day = 100)
```

```{r echo = FALSE, fig.keep='all', warning=FALSE}
knitr::kable(pollen_season, format = "html", booktabs = TRUE)
```

We have summarized all these parameters under the argument `type = "birch"` to facilitate its application.

<div class="alert alert-info">**IMPORTANT:** if **`export.plot=TRUE`**, the plots are **EXPORTED** in the "plot_AeRobiology" folder.</div>

### "grains" method

This method was proposed by Galan et al. (2001), originally for olive pollen, but it has since been applied to other pollen types.
The **start_date** and **end_date** of the pollen season are defined as a **certain number of days (`window.grains` argument) exceeding a certain pollen threshold (`th.pollen` argument)**. For more information about the method, see [Galan et al. (2001)](https://link.springer.com/article/10.1007/s004840000081).

We want to establish the start of the main pollen season as the first day on which 3 consecutive days register more than 2 pollen grains/m3, and the end as the last day on which these conditions are fulfilled:

```{r echo=TRUE, results='hide', fig.keep='first'}
calculate_ps(munich_pollen[,c(1,6)], method = "grains", window.grains = 3, th.pollen = 2)
```

### "moving" method

<div class="alert alert-info">**This method is proposed for the first time by the authors of this package. We are preparing a research paper explaining the method in detail.**</div>

The definition of the pollen season is based on the application of a **moving average** to the pollen series in order to obtain the general seasonality of the pollen curve, avoiding the great variability of the daily fluctuations. Thus, **the start_date and the end_date will be established when the curve of the moving average reaches a given pollen threshold** (**`th.ma`** argument). The order of the moving average may also be customized by the user (**`man`** argument). By default, `man = 11` and `th.ma = 5`.

<p class="comment">The idea of this method is to be able to calculate the start of the main pollen season when the season has not finished yet. Moreover, it allows the start and end to be established even if there is a huge amount of missing data within the main pollen season. It is similar to the "grains" method, but takes the moving mean into account to avoid daily variability.</p>

You might understand it better by consulting the plots obtained.
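A rough base-R sketch of the idea on a toy series of our own (an illustration, not the package's implementation): smooth with a centred moving mean, then take the first and last days on which the smoothed curve reaches `th.ma`.

```r
# Sketch of the "moving" idea on a toy series
pollen <- c(0, 1, 0, 2, 4, 6, 9, 12, 10, 8, 7, 5, 3, 1, 0, 0)
man    <- 5        # order of the moving average
th.ma  <- 4        # threshold on the smoothed curve
ma     <- stats::filter(pollen, rep(1 / man, man), sides = 2)  # centred moving mean
inside <- which(!is.na(ma) & ma >= th.ma)
c(start = min(inside), end = max(inside))  # day indices of the season
```

Note that the raw series already exceeds the threshold on day 5, but the smoothed curve is what decides the limits, which is what damps the daily fluctuations.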
<div class="alert alert-info">**IMPORTANT:** if **`export.plot=TRUE`**, the plots are **EXPORTED** in the "plot_AeRobiology" folder.</div>

Let's calculate it with a moving average of 7 days. We are going to establish the start and end of the main pollen season when the moving average reaches 4 pollen grains/m3:

```{r echo=TRUE, results='hide', fig.keep='first'}
calculate_ps(munich_pollen[,c(1,6)], method = "moving", man = 7, th.ma = 4)
```

## Advanced examples

### Southern Hemisphere & interannual types

As you may have noticed, we have been using the function from a "European" point of view: the calculations run from 1st January to 31st December. Researchers in the **Southern Hemisphere** are used to working with **interannual pollen seasons**. Don't worry, we haven't forgotten you!

1) You can work **from 1st June to 31st May** by means of the argument **`def.season="interannual"`**:

```{r echo=TRUE, results='hide', fig.keep='first'}
calculate_ps(munich_pollen[,c(1,3)], method = "moving", man = 7, th.ma = 4, def.season = "interannual")
```

<p class="comment">In this method, the season belongs to the first year of the pair of years, i.e.: from June 2017 to May 2018 -> season "2017".</p>

2) You can **center the main pollen season on the average peak day** (182 days before and after the average date of the peak):

```{r echo=TRUE, results='hide', fig.keep='first'}
calculate_ps(munich_pollen[,c(1,6)], method = "percentage", perc=95, def.season = "peak")
```

<p class="comment">In this last method, the season belongs to the year in which the average peak date - 182 days is located, i.e.: if the average peak date is in January 2013, the season is called "2012" in the data.frames.</p>

### Interpolation in detail

Pollen time series frequently have gaps with no data, and this fact can be a problem for the calculation of specific methods for defining the pollen season, even producing incorrect results.
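To make the problem concrete, linear gap filling limited by a `maxdays`-style cap can be sketched in base R (an illustration with `approx()` and a hypothetical `fill_gaps()` helper of our own, not the package's `interpollen()` code):

```r
# Sketch: linear interpolation of gaps, leaving long gaps (> maxdays) as NA
fill_gaps <- function(x, maxdays = 30) {
  filled <- approx(seq_along(x), x, xout = seq_along(x))$y  # linear fill
  runs   <- rle(is.na(x))                   # locate runs of missing days
  ends   <- cumsum(runs$lengths)
  starts <- ends - runs$lengths + 1
  for (i in which(runs$values & runs$lengths > maxdays))
    filled[starts[i]:ends[i]] <- NA         # restore gaps that are too long
  filled
}
fill_gaps(c(1, 2, NA, 4, 5, NA, NA, NA, 9), maxdays = 2)
```

Here the single missing day is filled (day 3 becomes 3), while the three-day gap stays as `NA` because it exceeds `maxdays = 2`.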
In this sense, **by default a linear interpolation is carried out to complete these gaps before defining the pollen season (`interpolation = TRUE`)**. Additionally, the user may select other interpolation methods using the **`int.method` argument**: "lineal", "movingmean", "spline" or "tseries".

Some advanced users may have noticed that you can't use all the arguments of **<font size="3" face="verdana">interpollen()</font>** directly through **<font size="3" face="verdana">calculate_ps()</font>**; you are only able to select the interpolation method. Nevertheless, it is still possible to use them:

1) **Use <font size="3" face="verdana">interpollen()</font>** with all the arguments you want and **store your interpolated database in an object**:

```{r echo=TRUE, results='hide', fig.keep='first'}
CompleteData <- interpollen(munich_pollen, method="spline", ndays=3, spar=0.7, plot = TRUE, maxdays = 3, result = "wide")
```

2) Then, **use "CompleteData" instead of "munich_pollen" (or your database) in the following steps**:

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_ps(CompleteData, pollen.type="Alnus", year=2013, fill.col = "orange", axisname = "AeRobiology custom units")
```

<p class="comment">**Note**: You should select **`interpolation=FALSE`** in the other functions you use after interpolating manually. In some cases it doesn't matter, but suppose you only want to interpolate gaps shorter than 3 days and you did so in the first step: if you don't use `interpolation=FALSE` in the calculate_ps function, a second interpolation will be carried out using the default settings (gaps shorter than 30 days and lineal interpolation). You would then obtain a database with the gaps shorter than 3 days interpolated by the "spline" method and the rest by the "lineal" method.</p>

# Function <font face="verdana">pollen_calendar</font>

This function **calculates the pollen calendar from a historical database** of several pollen types using different designs.
The main arguments are:

* **data**: A `data.frame` with the first column in "Date" format and the rest of the columns in "numeric" format (pollen types).
* **method**: for choosing the method to generate the pollen calendar. The options are "heatplot", "violinplot" and "phenological".
* **n.types**: indicating the number of the most abundant pollen types shown in the pollen calendar.
* **start.month**: ranging 1-12, indicating the number of the month (January-December) when the pollen calendar must begin, see more details [here](https://CRAN.R-project.org/package=AeRobiology).
* **export.plot**: specifying if a plot with the pollen calendar will be saved in the working directory. Other arguments have been incorporated in relation to the format used to export the plot (`export.format`).
* **result**: **"plot"** or **"table"**.

**Data exportation:**

- If `export.plot = TRUE`, the plot displaying the pollen calendar will be exported within the **plot_AeRobiology** directory created in the working directory.
- If `export.plot = TRUE` and `export.format = pdf`, a pdf file with the pollen calendar will be saved within the **plot_AeRobiology** directory created in the working directory. Additional characteristics may be incorporated to the exportation as a pdf file.
- If `export.plot = TRUE` and `export.format = png`, a png file with the pollen calendar will be saved within the **plot_AeRobiology** directory created in the working directory. Additional characteristics may be incorporated to the exportation as a png file.

**Exclusive arguments** for pollen calendars generated as **"heatplot"**:

* **period**: specifying the interval of time considered to generate the pollen calendar. The options are "weekly" and "daily".
* **method.classes**: indicating the method to define the classes used for classifying the average pollen concentrations. The options are "exponential" and "custom".
* **n.classes**: specifying the number of classes that will be used when `method.classes = custom`.
* **classes**: specifying a numeric vector with the desired thresholds to define the different classes.
* **color**: choosing different options such as "green", "red", "blue", "purple" or "black".

**Exclusive arguments** for pollen calendars generated as **"phenological"**:

* **perc1, perc2**: both ranging 0-100. These values represent the percentage of the total annual pollen included in the pollen season, removing (100-percentage)/2 % of the total pollen before and after the pollen season. Two percentages must be specified to define the "main pollination period" (`perc1`) and the "early/late pollination" (`perc2`), based on the "phenological" method proposed by [Werchan et al. (2018)](https://link.springer.com/article/10.1007/s40629-018-0055-1).
* **th.pollen**: specifying the minimum threshold of the average pollen concentration which will be used to generate the pollen calendar. Days below this threshold will not be considered, and days above it will be considered as "possible occurrence", as proposed by [Werchan et al. (2018)](https://link.springer.com/article/10.1007/s40629-018-0055-1). This argument also works for the "violinplot" pollen calendar.

<div class="alert alert-info">By default the database will be interpolated according to the **<font size="3" face="verdana">interpollen</font>** function, and other arguments of that function are also incorporated in the **<font size="3" face="verdana">pollen_calendar</font>** function.</div>

## "heatplot" method

This pollen calendar is constructed based on the **daily or weekly average of pollen concentrations** (depending on the preferences of the user, who may select "daily" or "weekly" as the **`period` argument**). These averages may then be classified into different categories following different methods selected by the user. An example of this pollen calendar may be consulted in [Rojo et al.
(2016)](https://link.springer.com/article/10.1007/s10661-016-5129-2). This method of designing pollen calendars is an **adaptation of the pollen calendar proposed by Spieksma (1991)**, who considered 10-day periods instead of daily or weekly periods.

First, we are going to generate a pollen calendar based on the **heatplot**, designed with the **green** color and constructed with **daily** averages:

```{r echo = TRUE, results='hide',fig.keep='all', warning=FALSE}
pollen_calendar(munich_pollen, method = "heatplot", period = "daily", color = "green", interpolation = FALSE)
```

In all cases, the table of averaged values used by the pollen calendar will be created. This table can be visualized by setting the argument **result = "table"**. For example (an extract):

```{r echo = TRUE, fig.keep='all', warning=FALSE}
average_values <- pollen_calendar(munich_pollen, method = "heatplot", period = "daily", color = "green", interpolation = FALSE, result = "table")
knitr::kable(average_values[82:90, ], format = "html", booktabs = TRUE)
```

By default, the classes for the pollen calendar are defined according to the **exponential** method.
Nevertheless, the classes can be customized through the **`classes` argument**:

```{r echo = TRUE, results='hide',fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "heatplot", period = "daily", color = "red", method.classes = "custom", n.classes = 5, classes = c(5, 25, 50, 200), interpolation = FALSE)
```

In addition, for species whose pollen season occurs between two natural years, the start of the pollen calendar can be selected with the **`start.month` argument**:

```{r echo = TRUE, results='hide',fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "heatplot", period = "daily", color = "purple", method.classes = "custom", n.classes = 5, classes = c(5, 25, 50, 200), start.month = 11, na.remove = FALSE, interpolation = FALSE)
```

<p class="comment">**`NA` (no data) can be removed by using the `na.remove` argument.**</p>

We can also generate a pollen calendar based on the **heatplot** design with **weekly** averages:

```{r echo = TRUE, results='hide',fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "heatplot", period = "weekly", color = "blue", method.classes = "exponential", n.types = 4, y.start = 2011, y.end = 2014, interpolation = FALSE)
```

In this case, we have included other restrictive arguments such as **`n.types`**, limiting the number of pollen types, and **`y.start`** and **`y.end`**, limiting the period to be considered for the pollen calendar.

## "phenological" method

This pollen calendar is based on the phenological definition of the pollen season and adapted from the methodology proposed by [Werchan et al. (2018)](https://link.springer.com/article/10.1007/s40629-018-0055-1). After obtaining the **daily average pollen concentrations** for the most abundant pollen types, different pollination periods are calculated.
The **main pollination period** is calculated according to the percentage defined by the `perc1` argument (selected by the user, 80% by default; red) of the annual total pollen. For example, if `perc1 = 80`, the beginning of the high season is marked when 10% of the annual value is reached, and the end is marked when 90% is reached. The **early/late pollination period** is defined with the `perc2` argument (selected by the user, 99% of the total annual pollen by default; orange), i.e.: the start of this period will be when 0.5% is reached and the end when 99.5% is reached. For this kind of pollen calendar, the `th.pollen` argument defines the **possible occurrence** period, as adapted by [Werchan et al. (2018)](https://link.springer.com/article/10.1007/s40629-018-0055-1), considering the entire period between the first and the last day on which this pollen level is reached (yellow).

```{r echo = TRUE, results='hide',fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "phenological", n.types = 5, y.start = 2011, y.end = 2014, interpolation = FALSE)
```

Furthermore, different criteria can be customized:

```{r echo = TRUE, results='hide',fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "phenological", perc1 = 90, perc2 = 95, th.pollen = 5, interpolation = FALSE)
```

In this last case, the pollen calendar has been generated with more restrictive criteria for `perc1`, `perc2` and `th.pollen`.

## "violinplot" method

This pollen calendar is based on the **pollen intensity** and adapted from the pollen calendar published by [O'Rourke (1990)](https://link.springer.com/article/10.1007/BF02539105). First, the daily averages of the pollen concentrations are calculated, and these averages are then represented using a violin plot graph.
<div class="alert alert-info">The shape of the violin plot represents the pollen intensity of the pollen types in a relative way, i.e.: the values are calculated as **relative measurements** with respect to the most abundant pollen type in annual amounts. Therefore, this pollen calendar shows a relative comparison between the pollen intensities of the pollen types, but **without scales and units**.</div>

```{r echo = TRUE, results='hide', fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "violinplot", y.start = 2012, y.end = 2015, interpolation = FALSE)
```

In addition, **`th.pollen`** can be established, specifying the minimum pollen concentration considered (e.g.: 10 pollen grains/m3):

```{r echo = TRUE, results='hide', fig.keep='all', warning=FALSE}
pollen_calendar(data = munich_pollen, method = "violinplot", th.pollen = 10, na.rm = FALSE, interpolation = FALSE)
```

# Functions <font face="verdana">iplot_pollen()</font> and <font face="verdana">iplot_years()</font>

These functions have been designed for a **quick view** of your data for discussions or interpretations. **Interactive plots are displayed**. They may be interesting for group meetings or the real-time presentation of results. The functions create a pop-up window in which you can select the pollen/spore type or the years you want to plot.

<div class="alert alert-info">**IMPORTANT:** To stop the real-time visualization and continue using the package you must click on the **"Stop" signal**.</div>

The function **<font size="3" face="verdana">iplot_pollen()</font>** plots the pollen data during one season. The arguments are:

* **data**: A `data.frame` object. This data.frame should include a first column in "Date" format and the rest of the columns in "numeric" format, each belonging to a pollen type.
* **year:** An `integer` value specifying the year to display. This is a mandatory argument.
The function **<font size="3" face="verdana">iplot_years()</font>** plots the data of one pollen type across several seasons. The arguments are:

* **data**: A `data.frame` object. This data.frame should include a first column in "Date" format and the rest of the columns in "numeric" format, each belonging to a pollen type.
* **pollen:** A `character` string with the name of the particle to show. This character must match the name of a column in the input database. This is a mandatory argument.

```{r echo = TRUE, results='hide',fig.keep='all', eval=FALSE}
iplot_pollen(munich_pollen, year = 2012)
iplot_years(munich_pollen, pollen = "Betula")
```

<p class="comment">**Note:** We are not able to plot interactive figures in this document. Please, run the code above in your R session.</p>

# Function <font face="verdana">plot_summary()</font>

The function **<font size="3" face="verdana">plot_summary()</font>** plots the pollen data during several seasons, and also plots the averaged pollen season over the study period. It is possible to plot the relative abundance per day and to smooth the pollen season by calculating a moving average. The main arguments are:

* **data**: A `data.frame` object. This data.frame should include a first column in "Date" format and the rest of the columns in "numeric" format, each belonging to a pollen type.
* **pollen:** A `character` string with the name of the particle to show. This character must match the name of a column in the input database. This is a mandatory argument.
* **mave:** An `integer` value specifying the order of the moving average applied to the data. By default, `mave = 1`.
* **normalized:** A `logical` value specifying if the visualization shows real pollen data (`normalized = FALSE`) or the percentage of every day over the whole pollen season (`normalized = TRUE`). By default, `normalized = FALSE`.
* **axisname:** A `character` string specifying the title of the y axis.
By default, `axisname = "Pollen grains / m3"`.
* Others...

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_summary(munich_pollen, pollen = "Betula")
```

In some cases the user may want to reduce the "noise" of the daily values by **calculating moving means** (e.g.: 5-day moving mean; `mave = 5`):

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_summary(munich_pollen, pollen = "Betula", mave = 5)
```

You might also be interested in representing as background the **percentage** each day contributes to the main pollen season (`normalized = TRUE`):

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_summary(munich_pollen, pollen = "Betula", mave = 5, normalized = TRUE)
```

<div class="alert alert-info">The **<font size="3" face="verdana">interpollen()</font> function** is integrated in this function. Consult the "help" section of this function to use it.</div>

# Function <font face="verdana">plot_normsummary()</font>

The function **<font face="verdana">plot_normsummary()</font>** has been designed to plot the amplitude of the pollen data during several seasons: the daily average pollen concentration over the study period, and the maximum and minimum pollen concentrations of each day over the study period. It is possible to plot the relative abundance per day and to smooth the pollen season by calculating a moving average. The main arguments are similar to those of **<font size="3" face="verdana">plot_summary()</font>**, but as a result **you will obtain the max-min range of the study period** instead of the values for every year.

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_normsummary(munich_pollen, pollen = "Betula", color.plot = "red")
```

The **maximum values** are marked in **red** and the **minimum values** in **white**. The **average** is the **black line**.
Of course, you can change the color (`color.plot` argument):

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_normsummary(munich_pollen, pollen = "Betula", color.plot = "green", mave = 5, normalized = TRUE)
```

<div class="alert alert-info">**<font size="3" face="verdana">interpollen()</font> function** is integrated in this function. Consult the "help" section of this function to use it.</div>

# Function <font face="verdana">analyse_trend()</font>

The function **<font size="3" face="verdana">analyse_trend()</font>** has been created to calculate the main seasonal indexes of the pollen season ("Start Date", "Peak Date", "End Date" and "Pollen Integral"), as well as **trends analysis** of those parameters over the seasons. It is a summary dot plot showing the distribution of the main seasonal indexes over the years. The results can also be stored in two folders termed **"plot_AeRobiology"** and **"table_AeRobiology"**, which will be created in your working directory (only if `export.result=TRUE` & `export.plot=TRUE`). You can decide which result is returned by the function by setting the argument **result**: *result = "table"* or *result = "plot"*. The function allows you to decide if you want to interpolate the data or not by the argument **`interpolation`**; it also allows you to select the interpolation method by the argument **`int.method`**. Furthermore, it allows you to select the pollen season definition method by the argument **`method`**, and additional arguments for the function **<font size="3" face="verdana">calculate_ps()</font>**. Some arguments about the visualization are:

* **split**: A `logical` argument. If `split = TRUE`, the plot is separated in two according to the nature of the variables (i.e. dates or pollen concentrations). **This argument was a solution to reduce the scale of the x-axis when the "total pollen" variable has a very high/low slope**. By default, `split = TRUE`.
* **quantil**: A `numeric` value (between 0 and 1) indicating the quantile of data to be displayed in the graphical output of the function. `quantil = 1` would show all the values; however, a lower quantile will exclude the most extreme values of the sample. **This argument was designed after noticing a common problem: when plotting the results with `split=FALSE`, some "outlier" results greatly increased the scale of the x-axis, making the rest of the results unreadable**. Our solution was to create an argument to exclude these outlier results and to be able to observe the main results at an appropriate scale. Furthermore, this argument may be used to separate parameters expressed in different sampling units (e.g. dates and pollen concentrations) by using low vs. high values of the `quantil` argument (e.g. 0.5 vs. 1); alternatively, the `split` argument can be used. By default, `quantil = 0.75`. **This argument only works when `split=FALSE`.**
* **significant**: A `numeric` value indicating the significance level to be considered in the linear trends analysis. This p level is displayed in the graphical output of the function (as a number in the legend and as a black ring in the graphical representation). By default, `significant = 0.05`.

```{r echo = TRUE, results='hide',fig.keep='all'}
analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, result="plot")
```

Let's see what happens if we don't split the graphics:

```{r echo = TRUE, results='hide',fig.keep='all'}
analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, split = FALSE, quantil = 1, result="plot")
```

Now the results are less readable, but this kind of graphical representation may be interesting for some people.
When plotting all the results together, it might be useful to exclude some outliers:

```{r echo = TRUE, results='hide',fig.keep='all'}
analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, split=FALSE, quantil = 0.5, result="plot")
```

As you can appreciate, a lot of points have been omitted. You can also change the significance level as mentioned above. Why don't we try?:

```{r echo = TRUE, results='hide',fig.keep='all'}
analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, significant = 1, result = "plot")
```

Now everything is significant! Have a look at the numbers by setting *result = "table"* (the default output).

```{r echo = TRUE, results='hide',fig.keep='all'}
analyse_trend(munich_pollen, interpolation = FALSE, export.result = FALSE, export.plot = FALSE, significant = 1, result="table")
```

<div class="alert alert-info">**<font size="3" face="verdana">interpollen()</font> function** is integrated in this function. Consult the "help" section of this function to use it.</div>

<div class="alert alert-info">If **result = "plot"**, the function returns a list of objects of class *ggplot2*; if **result = "table"**, the function returns a *data.frame*. By default, **result = "table"**.</div>

# Function <font face="verdana">plot_trend()</font>

The function **<font size="3" face="verdana">plot_trend()</font>** has been created to calculate the main seasonal indexes of the pollen season ("Start Date", "Peak Date", "End Date" and "Pollen Integral") and their trends analysis over the seasons. It produces plots showing the distribution of the main seasonal indexes over the years. The **results are stored in two folders termed "plot_AeRobiology" and "table_AeRobiology"** which will be located in your working directory (only if `export.result=TRUE` & `export.plot=TRUE`).
```{r echo = TRUE, results='hide',fig.keep='all', eval=FALSE}
plot_trend(munich_pollen, interpolation = FALSE, export.plot = TRUE, export.result = TRUE)
```

<div class="alert alert-info">**NOTE:** The plots are not shown in your R environment. Because of the high amount of graphs, they are stored in new folders created in your working directory (look where you have saved the R project and you will find the folders).</div>

<p class="comment">The **confidence interval** only appears if more than 6 dots are plotted. If not, a line crossing all the dots is plotted.</p>

<div class="alert alert-info">**<font size="3" face="verdana">interpollen()</font> function** is integrated in this function. Consult the "help" section of this function to use it.</div>

# Function <font face="verdana">iplot_abundance()</font>

The function **<font size="3" face="verdana">iplot_abundance()</font>** generates a barplot based on the relative abundance in the air (as percentage) of each pollen/spore type with respect to the total amounts. The main arguments are:

* **n.types:** A `numeric` (integer) value specifying the number of the most abundant pollen types that must be represented in the plot of the relative abundance. More detailed information about the selection of the considered pollen types may be consulted [here](https://CRAN.R-project.org/package=AeRobiology). The `n.types` argument will be 15 types by default.
* **y.start/y.end:** A `numeric` (integer) value specifying the period selected to calculate relative abundances of the pollen types (start year - end year). If `y.start` and `y.end` are not specified (`NULL`), the entire database will be used to generate the pollen calendar. The `y.start` and `y.end` arguments will be `NULL` by default.
* **col.bar:** A `character` string specifying the color of the bars to generate the graph showing the relative abundances of the pollen types. The `col.bar` argument will be `"#E69F00"` by default, but any color may be selected.
* **type.plot:** A `character` string specifying the type of plot used to show the relative abundance of the pollen types. The implemented types that may be used are: "static" (generates a static ggplot object) and "dynamic" (generates a dynamic plotly object).
* **exclude:** A `character` vector with the names of the pollen types to be excluded from the plot.
* Others...

```{r echo = TRUE, results='hide',fig.keep='all'}
iplot_abundance(munich_pollen, interpolation = FALSE, export.plot = FALSE, export.result = FALSE)
```

Now we are going to reduce the number of types:

```{r echo = TRUE, results='hide',fig.keep='all'}
iplot_abundance(munich_pollen, interpolation = FALSE, export.plot = FALSE, export.result = FALSE, n.types = 3)
```

We can also select the abundance for only one year and change the color:

```{r echo = TRUE, results='hide',fig.keep='all'}
iplot_abundance(munich_pollen, interpolation = FALSE, export.plot = FALSE, export.result = FALSE, n.types = 3, y.start = 2011, y.end = 2011, col.bar = "#d63131")
```

Furthermore, we can make it interactive:

```{r echo = TRUE, results='hide',fig.keep='all'}
iplot_abundance(munich_pollen, interpolation = FALSE, export.plot = FALSE, export.result = FALSE, n.types = 3, y.start = 2011, y.end = 2011, col.bar = "#d63131", type.plot = "dynamic")
```

<div class="alert alert-info">**<font size="3" face="verdana">interpollen()</font> function** is integrated in this function. Consult the "help" section of this function to use it.</div>

# Function <font face="verdana">iplot_pheno()</font>

The function **<font size="3" face="verdana">iplot_pheno()</font>** generates a boxplot based on phenological parameters (start_dates and end_dates) which are calculated by the estimation of the main parameters of the pollen season.
The main arguments are:

* **data**: A `data.frame` object including the general database where calculation of the pollen season must be applied in order to generate the phenological plot based on the start_dates and end_dates. This data.frame must include a first column in "Date" format and the rest of the columns in "numeric" format, belonging to each pollen type by column.
* **method**: A `character` string specifying the method applied to calculate the pollen season and the main parameters. The implemented methods that can be used are: "percentage", "logistic", "moving", "clinical" or "grains". More detailed information about the different methods for defining the pollen season may be consulted in the `calculate_ps()` function.
* **n.types**: A `numeric` (integer) value specifying the number of the most abundant pollen types that must be represented. More detailed information about the selection of the considered pollen types may be consulted [here](https://CRAN.R-project.org/package=AeRobiology). The `n.types` argument will be 15 by default.
* **type.plot**: A `character` string specifying the type of plot used to show the phenological plot. The implemented types that may be used are: "static" (generates a static ggplot object) and "dynamic" (generates a dynamic plotly object).
* Other arguments related to the **<font size="3" face="verdana">calculate_ps()</font> function**

```{r echo = TRUE, results='hide',fig.keep='all'}
iplot_pheno(munich_pollen, method= "percentage", perc=80, int.method="spline", n.types = 8)
```

We can change the method to establish the main pollen season and the number of pollen types to show:

```{r echo = TRUE, results='hide',fig.keep='all'}
iplot_pheno(munich_pollen, method= "clinical", n.clinical = 3, int.method="spline", n.types = 4)
```

Furthermore, we can make the plot interactive to obtain more information by clicking on each object:

```{r echo = TRUE, results='hide',fig.keep='all', eval=FALSE}
iplot_pheno(munich_pollen, method= "clinical", n.clinical = 3, int.method="spline", n.types = 4, type.plot = "dynamic")
```

# Function <font face="verdana">plot_ps()</font>

The function **<font size="3" face="verdana">plot_ps()</font>** was designed to plot the main pollen season of a single pollen type and year. Some of the arguments are:

* **data**: A `data.frame` object including the general database where interpolation must be performed. This data.frame must include a first column in "Date" format and the rest of the columns in "numeric" format. Each column must contain information of one pollen type. It is not necessary to insert missing gaps; the function will automatically detect them.
* **pollen.type**: A `character` string specifying the name of the pollen type which will be plotted. The name must be exactly the same that appears in the column name. Mandatory argument with no default.
* **year**: A `numeric` (integer) value specifying the season to be plotted. The season does not necessarily fit a natural year. See `calculate_ps` for more details. Mandatory argument with no default.
* **days**: A `numeric` (integer) specifying the number of days beyond each side of the main pollen season that will be represented. The `days` argument will be 30 by default.
* **fill.col**: A `character` string specifying the name of the color to fill the main pollen season in the plot. See the ggplot function for more details. The `fill.col` argument will be "turquoise4" by default. It uses the ggplot color codes.
* **axisname**: A `character` string specifying the title of the y axis. By default, `axisname = expression(paste("Pollen grains / m"^"3"))`
* Others...

<div class="alert alert-info">**<font size="3" face="verdana">calculate_ps</font> function** is integrated in this function. Consult [here](https://CRAN.R-project.org/package=AeRobiology) for more information. **Interpolation is mandatory** for this function. For technical reasons, you must use the interpollen function through the calculate_ps function. Consult calculate_ps for more information.</div>

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_ps(munich_pollen, pollen.type="Alnus", year=2011)
```

If you misspell some pollen type name, the function will tell you:

```{r echo = TRUE, results='hold', error=TRUE}
plot_ps(munich_pollen, pollen.type="Alnuscdscscr", year=2011)
```

**Let's test more arguments:**

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_ps(munich_pollen, pollen.type="Alnus", year=2013, method= "percentage", perc=95, int.method = "lineal")
```

As you have noticed, the arguments `method`, `perc` and `int.method` are from the **<font size="3" face="verdana">calculate_ps</font> function**. What about changing the interpolation method?

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_ps(munich_pollen, pollen.type="Alnus", year=2013, method= "percentage", perc=95, int.method = "movingmean")
```

Do you want a larger scale?

```{r echo = TRUE, results='hide',fig.keep='all'}
plot_ps(munich_pollen, pollen.type="Alnus", year=2013, days = 90)
```

Maybe a different color and y-axis name?
```{r echo = TRUE, results='hide',fig.keep='all'}
plot_ps(munich_pollen, pollen.type="Alnus", year=2013, fill.col = "orange", axisname = "AeRobiology custom units")
```

# Function <font face="verdana">plot_hour()</font>

Please keep in mind that the function *plot_hour()* is only available after package **version 2.0**. The input data must be a `data.frame` object in long format: the first two columns are factors indicating the pollen type and the location; the third and fourth columns are in POSIXct format, showing the date with the hour, where the third column (*from*) is the beginning and the fourth column (*to*) is the end of each concentration measurement; the fifth column shows the concentrations of the different pollen types as numeric.

As an example, we will load a dataset of 3-hourly data from the ePIN network in Bavaria (Germany): `data("POMO_pollen")`. The dataset contains information of 3-hourly concentrations of pollen ("Poaceae" and "Pinus") in the atmosphere of Munich (DEBIED) and Viechtach (DEVIEC) during the year 2018. The data were obtained by the automatic pollen monitor BAA500 and are supplied by the public ePIN Network supported by the Bavarian Government. The ePIN Network was built by Das Bayerische Landesamt für Gesundheit und Lebensmittelsicherheit (LGL) in collaboration with Zentrum Allergie und Umwelt (ZAUM).

```{r echo = FALSE, warning=FALSE, message=FALSE}
library(ggplot2)
library(dplyr)
```

```{r echo = TRUE}
data("POMO_pollen")
```

The function plots pollen data expressed in concentrations with a time resolution higher than 1 day (e.g. hourly, bi-hourly concentrations).
If the argument *result = "plot"*, the function returns a list of objects of class **ggplot2**; if *result = "table"*, the function returns a **data.frame** with the hourly patterns. ```{r echo = TRUE, message=FALSE} plot_hour(POMO_pollen) ``` To display a table we have to set the argument *result = "table"*. ```{r message=FALSE} TO<-plot_hour(POMO_pollen, result ="table") knitr::kable(TO[1:10,], caption = "3-Hourly patterns", row.names = FALSE, digits = 1, format = "html", booktabs = TRUE) ``` We can also split the different stations by setting the argument *locations = TRUE*. ```{r, message=FALSE, echo=TRUE} plot_hour(POMO_pollen, locations = TRUE) ``` # Function <font face="verdana">plot_heathour()</font> An alternative to *plot_hour()* is *plot_heathour()*, which shows a summary of all particles with a heatplot. The input data should have the same format as for *plot_hour()*. ```{r, message=FALSE, echo=TRUE} plot_heathour(POMO_pollen) ``` You can also set the colors by the arguments: *low.col*, *mid.col* and *high.col*. E.g. ```{r, message=FALSE, echo=TRUE} plot_heathour(POMO_pollen, low.col = "darkgreen", mid.col = "moccasin", high.col = "brown") ``` By setting *locations = TRUE* you can split the result by locations: ```{r, message=FALSE, echo=TRUE} plot_heathour(POMO_pollen, locations = TRUE) ```
#' Estimate Aerosol Particle Collection Through Sample Lines #' #' This package provides a method to estimate sampling #' efficiency of sampling systems drawing aerosol particles #' through tubing. #' #' Functions were developed consistent with the approach #' described in Hogue, Mark; Thompson, Martha; Farfan, #' Eduardo; Hadlock, Dennis, (2014), "Hand Calculations for #' Transport of Radioactive Aerosols through Sampling Systems" #' Health Phys 106, 5, S78-S87, <doi:10.1097/HP.0000000000000092>. #' #' To learn how to use AeroSampleR, start with the vignette: #' `browseVignettes(package = "AeroSampleR")` #' @name AeroSampleR NULL
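The individual help pages each model one element; chained in sequence they form a complete system model. The following quick-start sketch assembles the functions exactly as in the per-function examples elsewhere in this package (the probe/bend/tube geometry here is illustrative, not a recommendation):

```r
# Sketch: probe -> 90-degree bend -> straight tube, then a report.
library(AeroSampleR)

df <- particle_dist()                       # default ICRP 66 distribution
params <- set_params_1("D_tube" = 2.54, "Q_lpm" = 100,
                       "T_C" = 25, "P_kPa" = 101.325)
df <- set_params_2(df, params)              # particle size-dependent parameters
df <- probe_eff(df, params, orient = "h")   # element 1: sampling probe
df <- bend_eff(df, params, method = "Zhang", bend_angle = 90,
               bend_radius = 0.1, elnum = 2)
df <- tube_eff(df, params, L = 100, angle_to_horiz = 90, elnum = 3)
report_basic(df, params, dist = "discrete")
```

Each element function appends an efficiency column to `df`; `report_basic()` multiplies them together per particle size.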
#' bend efficiency
#'
#' In order to run this function, first produce a particle distribution
#' with the `particle_dist` function, then produce a parameter set with
#' the `set_params` function. Both of these results must be stored as
#' per examples described in the help set with each.
#'
#' @param df is the particle data set (data frame) established with the
#' `particle_dist` function
#' @param params is the parameter data set for parameters that are not
#' particle size-dependent
#' @param bend_angle bend angle in degrees
#' @param bend_radius bend radius in m
#' @param method choice of models: Pui, McFarland, or Zhang
#' @param elnum element number to provide unique column names
#'
#' @references
#' A. R. McFarland, H. Gong, A. Muyshondt, W. B. Wente, and N. K. Anand
#' Environmental Science & Technology 1997 31 (12), 3371-3377
#' <doi:10.1021/es960975c>
#'
#' Pusheng Zhang, Randy M. Roberts, André Bénard,
#' Computational guidelines and an empirical model for particle deposition
#' in curved pipes using an Eulerian-Lagrangian approach,
#' Journal of Aerosol Science, Volume 53, 2012, Pages 1-20, ISSN 0021-8502,
#' <doi:10.1016/j.jaerosci.2012.05.007>
#'
#' David Y. H. Pui, Francisco Romay-Novas & Benjamin Y. H. Liu (1987)
#' Experimental Study of Particle Deposition in Bends of Circular Cross
#' Section, Aerosol Science and Technology, 7:3, 301-315,
#' <doi:10.1080/02786828708959166>
#'
#' @returns data frame containing original particle distribution with added
#' data for this element
#'
#' @examples
#' df <- particle_dist() # set up particle distribution
#' params <- set_params_1("D_tube" = 2.54, "Q_lpm" = 100,
#' "T_C" = 25, "P_kPa" = 101.325) #example system parameters
#' df <- set_params_2(df, params) #particle size-dependent parameters
#' df <- probe_eff(df, params, orient = 'h') #probe orientation - horizontal
#' df <- bend_eff(df, params, method='Zhang', bend_angle=90,
#' bend_radius=0.1, elnum=3)
#' head(df)
#' @export
#'
bend_eff <- function(df, params, method, bend_angle, bend_radius,
    elnum) {
    angle_rad <- bend_angle * pi/180
    rat_curv <- bend_radius/(params$D_tube/2)
    if (method == "Pui") {
        if (params$Re < 6000) {
            eff_bend <- (1 + (df$Stk/0.171)^(0.452 * df$Stk/0.171 +
                2.242))^-(2 * angle_rad/pi)
        } else {
            eff_bend <- exp(-2.823 * df$Stk * angle_rad)
        }
    }
    if (method == "Zhang") {
        eff_bend <- exp(-0.528 * angle_rad * df$Stk^((2)^(1/rat_curv)) *
            rat_curv^0.5)
    }
    if (method == "McFarland") {
        a <- -0.9526 - 0.05686 * rat_curv
        b <- (-0.297 - 0.0174 * rat_curv)/(1 - 0.07 * rat_curv +
            0.0171 * rat_curv^2)
        c <- -0.306 + 1.895/rat_curv^0.5 - 2/rat_curv
        d <- (0.131 - 0.0132 * rat_curv + 0.000383 * rat_curv^2)/(1 -
            0.129 * rat_curv + 0.0136 * rat_curv^2)
        eff_bend <- 0.01 * exp((4.61 + a * angle_rad * df$Stk)/(1 +
            b * angle_rad * df$Stk + c * angle_rad * df$Stk^2 +
            d * angle_rad^2 * df$Stk))
        # remove very high results when the denominator of the
        # function above approaches zero (vectorized over particles)
        eff_bend[df$Stk < 2.065 & df$Stk > 2.05] <- 0
        # more removal of out-of-range results
        eff_bend[eff_bend >= 1 & df$Stk > 1] <- 0
    }
    eff_bend[eff_bend > 1] <- 1
    df <- cbind(df, eff_bend)
    names(df)[length(df)] <- paste0("eff_bend_", as.character(elnum))
    df
}
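Since all three bend models share the same interface, they can be compared on one geometry by giving each call its own `elnum`. This is only a sketch (the element numbering is arbitrary, and the McFarland branch forces some out-of-range results near Stk ≈ 2 to zero, so the three columns are not expected to agree everywhere):

```r
# Compare the three bend models on the same 90-degree, 0.1 m radius bend.
df <- particle_dist()
params <- set_params_1("D_tube" = 2.54, "Q_lpm" = 100,
                       "T_C" = 25, "P_kPa" = 101.325)
df <- set_params_2(df, params)
df <- bend_eff(df, params, method = "Pui",
               bend_angle = 90, bend_radius = 0.1, elnum = 1)
df <- bend_eff(df, params, method = "McFarland",
               bend_angle = 90, bend_radius = 0.1, elnum = 2)
df <- bend_eff(df, params, method = "Zhang",
               bend_angle = 90, bend_radius = 0.1, elnum = 3)
# Inspect the discrete sizes (1, 5 and 10 micrometers by default)
subset(df, dist == "discrete",
       select = c("D_p", "eff_bend_1", "eff_bend_2", "eff_bend_3"))
```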
#' @title Data from readme file for use in plot examples #' #' @description This data was created by running the readme #' script. It is needed for simple plot examples. #' @format A \code{data.frame} #' \describe{ #' \item{D_p}{particle diameter in micrometers} #' \item{dens}{probability density} #' \item{dist}{either log_norm or discrete} #' \item{C_c}{Cunningham slip correction factor} #' \item{v_ts}{particle terminal velocity} #' \item{Re_p}{Reynold's number for particle} #' \item{Stk}{Stokes' number for particle} #' \item{eff_probe}{aspiration efficiency for probe} #' \item{eff_bend_2}{transport efficiency for the second component, a bend} #' \item{eff_tube_3}{transport efficiency for the third component, a straight tube} #' } "dat_for_plots"
#' Create a particle distribution #' #' Needed as a first step in estimating system efficiency. #' Make the data frame that will be used to estimate efficiency of #' variously sized aerosol particles' transport through the sampling #' system. To create your data, save this data to the global #' environment as shown in the examples. #' #' All inputs are in micron AMAD, meaning: #' the aerodynamic diameter of a particle is the diameter of a #' standard density (1000 kg/m3) sphere that has the same #' gravitational settling velocity as the particle in question. #' @param AMAD default is 5 based on ICRP 66 #' @param log_norm_sd default is 2.5 based on ICRP 66 #' @param log_norm_min default is 0.0005 based on ICRP 66 #' @param log_norm_max default is 100 based on ICRP 66 #' @param discrete_vals default is c(1, 5, 10) #' #' @examples #' df <- particle_dist() # default #' df <- particle_dist(AMAD = 4.4, #' log_norm_sd = 1.8) #' head(df) #' #' @return a data frame containing a lognormally distributed set of #' particles and discrete particle sizes #' #' @export #' particle_dist <- function(AMAD = 5, log_norm_sd = 2.5, log_norm_min = 5e-4, log_norm_max = 100, discrete_vals = c(1, 5, 10)) { n <- 1000 # number of bins - have to be high to meet del target log_int <- (log(log_norm_max) - log(log_norm_min)) / (n - 1) particle_bins <- log_norm_min * exp(0:(n - 1) * log_int) particle_dens <- stats::dlnorm(particle_bins, log(AMAD), log(log_norm_sd)) df <- data.frame("D_p" = particle_bins, "dens" = particle_dens) df$dist <- "log_norm" df <- rbind(df, data.frame("D_p" = discrete_vals, "dens" = rep(1, length(discrete_vals)), "dist" = rep("discrete", length(discrete_vals)))) df }
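A quick way to see what `particle_dist()` returns is to separate the two distributions via the `dist` column. The plotting call below uses ggplot2 (already used elsewhere in this package) and is only a sketch:

```r
df <- particle_dist()  # defaults: AMAD 5, GSD 2.5, 1000 log-spaced bins
table(df$dist)         # 1000 "log_norm" rows plus 3 "discrete" rows

# Plot the log-normal density on a log diameter axis
library(ggplot2)
ggplot(subset(df, dist == "log_norm"), aes(D_p, dens)) +
  geom_line() +
  scale_x_log10(name = "aerodynamic diameter (micrometers)")
```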
#' Probe efficiency
#'
#' In order to run this function, first produce a particle distribution
#' with the `particle_dist` function, then produce a parameter set with
#' the `set_params` function. Both of these results must be stored as
#' per examples described in the help set with each.
#'
#' @param df is the particle data set (data frame) established with the
#' `particle_dist` function
#' @param params is the parameter data set for parameters that are not
#' particle size-dependent
#' @param method is the model for the probe efficiency. Default is
#' 'blunt pipe', based on Su WC and Vincent JH, Towards a general
#' semi-empirical model for the aspiration efficiencies of aerosol samplers
#' in perfectly calm air, Aerosol Science 35 (2004) 1119-1134
#' @param orient orientation of the probe. Options are 'u' for up,
#' 'd' for down, and 'h' for horizontal
#'
#' @returns data frame containing original particle distribution with added
#' data for this element
#'
#' @examples
#' df <- particle_dist() # set up particle distribution
#' params <- set_params_1("D_tube" = 2.54, "Q_lpm" = 100,
#' "T_C" = 25, "P_kPa" = 101.325) #example system parameters
#' df <- set_params_2(df, params) #particle size-dependent parameters
#' df <- probe_eff(df, params, orient = 'u') #probe orientation - draws upward
#' head(df)
#' @export
#'
probe_eff <- function(df, params, orient = "u", method = "blunt pipe") {
    # This function replaces the eff_probe column if it already exists
    stopifnot(`Only blunt pipe method currently available` =
        method == "blunt pipe")
    R <- (df$D_p * 1e-04)^2 * 9.807/(18 * params$viscosity_air *
        params$velocity_air)
    # The Stokes number in Su is different from our base Stk by 1/2
    p <- 2.2 * R^(1.3) * df$Stk/2
    q <- 75 * R^1.7 * df$Stk/2
    # alpha, beta and B are from Ref 6:
    # Thin-walled probe facing downwards: alpha = 1; beta = 0; B = 1
    # Thin-walled probe facing upwards: alpha = 0; beta = 1; B = 1
    # Horizontal thin-walled probe: alpha = 0.8; beta = 0.2; B = 1
    alpha <- dplyr::case_when(orient == "u" ~ 0,
        orient == "d" ~ 1,
        orient == "h" ~ 0.8)
    beta <- dplyr::case_when(orient == "u" ~ 1,
        orient == "d" ~ 0,
        orient == "h" ~ 0.2)
    B <- 1  # for all thin-walled probes
    df$eff_probe <- (1 - 0.8 * (4 * df$Stk/2 * R^(3/2)) +
        0.08 * (4 * df$Stk/2 * R^(3/2))^2 -
        alpha * ((0.5 * R^(1/2)) - R * (B^2 - 1)) -
        beta * (0.12 * R^-0.4 * (exp(-p) - exp(-q)) -
            R^(3/2) * (B^(1/2) - 1)))
    # all the out-of-range results set to zero
    df$eff_probe[which(df$D_p > 75)] <- 0
    df$eff_probe[is.nan(df$eff_probe)] <- 0
    df$eff_probe[which(df$eff_probe < 0)] <- 0  # correct for negative
    df
}
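Because `probe_eff()` replaces a single `eff_probe` column on each call, comparing orientations means extracting that column after each run. A sketch (the side-by-side table layout is mine, not part of the package):

```r
# Compare aspiration efficiency for the three probe orientations.
df <- particle_dist()
params <- set_params_1("D_tube" = 2.54, "Q_lpm" = 100,
                       "T_C" = 25, "P_kPa" = 101.325)
df <- set_params_2(df, params)
eff_up    <- probe_eff(df, params, orient = "u")$eff_probe
eff_down  <- probe_eff(df, params, orient = "d")$eff_probe
eff_horiz <- probe_eff(df, params, orient = "h")$eff_probe
# Side-by-side view for the discrete particle sizes
keep <- df$dist == "discrete"
cbind(df[keep, "D_p", drop = FALSE],
      up = eff_up[keep], down = eff_down[keep], horiz = eff_horiz[keep])
```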
#' report on transport efficiency #' #' In order to run a report, first produce a model of each individual #' element. Start with producing a particle distribution #' with the `particle_dist` function, then produce a parameter set with #' the `set_params` function. Both of these results must be stored as #' per examples described in the help set with each. Next, add elements #' in the sample system until all are complete. #' #' @param df is the particle data set (data frame) established with the #' `particle_dist` function #' @param params is the parameter data set for parameters that are not #' particle size-dependent #' @param dist selects the distribution for the report. Options are #' 'discrete' for discrete particle sizes or 'log' for the log-normal #' distribution of particles that were started with the `particle_dist` #' function. #' #' @returns report of system efficiency #' #' @examples #' df <- particle_dist() # set up particle distribution #' params <- set_params_1("D_tube" = 2.54, "Q_lpm" = 100, #' "T_C" = 25, "P_kPa" = 101.325) #example system parameters #' df <- set_params_2(df, params) #particle size-dependent parameters #' df <- probe_eff(df, params, orient = 'h') #probe orientation - horizontal #' df <- bend_eff(df, params, method='Zhang', bend_angle=90, #' bend_radius=0.1, elnum=3) #' df <- tube_eff(df, params, L = 100, #' angle_to_horiz = 90, elnum = 3) #' report_basic(df, params, dist = 'discrete') #' #' @export report_basic <- function(df, params, dist) { # # housekeeping to avoid no visible binding warnings #. = NULL - # try this if below doesn't work D_p = microns = sys_eff = dens = ambient = bin_eff = sampled = . 
= starts_with = NULL # provide parameter details cat("System Parameters") cat("\n") cat("All values in MKS units, except noted") cat("\n") cat("Notes: D_tube is in m.") cat("\n") cat("Q_lpm is system flow in liters per minute.") cat("\n") cat("velocity_air is the derived air flow velocity in meters per second.") cat("T_K is system temperature in Kelvin.") cat("\n") cat("P_kPa is system pressure in kiloPascals.") cat("\n") cat("Re is the system Reynolds number, a measure of turbulence.") cat("\n") #utils::str(params[c(1, 2, 3, 4, 5, 10)]) data.frame(t(params)) cat("\n") eff_cols <- tidyselect::starts_with("eff_", vars = names(df)) if (dist == "discrete") { # make data frame with just the discrete data df_disc <- df |> dplyr::filter(dist == "discrete") # compute system efficiency and add this column df_disc$sys_eff <- apply(df_disc[, eff_cols], 1, prod) # select columns for the report discrete_report <- df_disc |> dplyr::select(D_p, sys_eff) return(discrete_report) } if (dist == "log") { # make data frame of just the log data df_log <- df |> dplyr::filter(dist == "log_norm") # compute efficiency for each particle size (bin) and add this column df_log$bin_eff <- apply(df_log[, eff_cols], 1, prod) # compute ambient mass-based quantity for each bin df_log$ambient <- df_log$dens * 4/3 * pi * (df_log$D_p/2)^3 * diff(c(0, df_log$D_p)) df_log$sampled <- df_log$ambient * df_log$bin_eff data.frame("activity fraction sampled" = sum(df_log$sampled)/ sum(df_log$ambient)) } }
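For the log-normal distribution, `report_basic()` collapses the per-bin efficiencies into a single mass-weighted number rather than a per-size table. A sketch using the same element chain as the discrete example above:

```r
# Same system as the discrete example, reported against the full
# log-normal distribution: returns "activity fraction sampled".
df <- particle_dist()
params <- set_params_1("D_tube" = 2.54, "Q_lpm" = 100,
                       "T_C" = 25, "P_kPa" = 101.325)
df <- set_params_2(df, params)
df <- probe_eff(df, params, orient = "h")
df <- bend_eff(df, params, method = "Zhang", bend_angle = 90,
               bend_radius = 0.1, elnum = 2)
df <- tube_eff(df, params, L = 100, angle_to_horiz = 90, elnum = 3)
report_basic(df, params, dist = "log")
```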
#' report on cumulative transport system efficiency (discrete particle sizes only) #' #' In order to run a report, first produce a model of each individual #' element. Start with producing a particle distribution #' with the `particle_dist` function, then produce a parameter set with #' the `set_params` function. Both of these results must be stored as #' per examples described in the help set with each. Next, add elements #' in the sample system until all are complete. #' #' @param df is the particle data set - after transport analysis by element #' @param micron selects the particle size (aerodynamic mass activity #' diameter in micrometers). This must be selected from the original #' distribution of particles that were started with the `particle_dist` #' function. #' #' @return A plot of cumulative transport efficiencies is generated in a plot window #' #' @examples #' report_cum_plots(dat_for_plots, micron = 10) #' #' @export #' report_cum_plots <- function(df, micron) { df <- df |> dplyr::filter(dist == "discrete") D_p = microns = sys_eff = dens = ambient = bin_eff = sampled = . = starts_with = everything = element = efficiency = dist = NULL # make a cumulative efficiency set df_effs <- df |> dplyr::filter(D_p == micron) |> dplyr::select(., tidyselect::starts_with("eff_")) df_effs[1, ] <- cumprod(as.numeric(df_effs)) names(df_effs) <- stringr::str_replace(names(df_effs), "eff_", "") # plot by element, by particle size df_effs <- df_effs |> tidyr::pivot_longer(cols = everything(), names_to = "element", values_to = "efficiency") df_effs$element <- factor(df_effs$element, levels = df_effs$element) plt <- ggplot2::ggplot(df_effs, ggplot2::aes(element, efficiency)) + ggplot2::geom_point(size = 3, alpha = 0.5) + ggthemes::theme_calc() + ggthemes::scale_color_gdocs() + ggplot2::guides(x = ggplot2::guide_axis(angle = 90)) + ggplot2::ggtitle("cumulative transport efficiency", subtitle = paste0(micron, " micrometer")) return(plt) }
/scratch/gouwar.j/cran-all/cranData/AeroSampleR/R/report_cum_plots.R
#' report relative masses by particle of a log-normal distribution
#'
#' This function shows the entire table of results by particle diameter.
#'
#' @param df is the particle data set - after transport analysis by element
#'
#' @examples
#' df <- particle_dist() # set up particle distribution
#' params <- set_params_1("D_tube" = 2.54, "Q_lpm" = 100,
#' "T_C" = 25, "P_kPa" = 101.325) #example system parameters
#' df <- set_params_2(df, params) #particle size-dependent parameters
#' df <- probe_eff(df, params, orient = 'h') #probe orientation - horizontal
#' df <- bend_eff(df, params, method='Zhang', bend_angle=90,
#' bend_radius=0.1, elnum=3)
#' df <- tube_eff(df, params, L_cm = 100,
#' angle_to_horiz = 90, elnum = 3)
#' report_log_mass(df)
#'
#' @returns data frame containing mass-based particle fractions in ambient
#' location and in distribution delivered through the system.
#'
#' @export
#'
report_log_mass <- function(df) {
  # quiet R CMD check notes on non-standard evaluation
  D_p = microns = sys_eff = dens = ambient = bin_eff = sampled = . =
    starts_with = everything = element = efficiency = amb_mass =
    sampled_mass = bin_frac_lost = total_frac_lost = dist = NULL
  # make data frame of just the log data
  df_log <- df |> dplyr::filter(dist == "log_norm")
  # compute efficiency for each particle size (bin) and add this column
  df_log$bin_eff <- purrr::pmap_dbl(dplyr::select(df_log,
    tidyselect::starts_with("eff_")), prod)
  # compute ambient mass-based quantity for each bin
  df_log$amb_mass <- df_log$dens * 4/3 * pi * (df_log$D_p/2)^3 *
    diff(c(0, df_log$D_p))
  df_log$sampled_mass <- df_log$amb_mass * df_log$bin_eff
  df_log$bin_frac_lost <- (df_log$amb_mass - df_log$sampled_mass) /
    df_log$amb_mass
  df_log$total_frac_lost <- (df_log$amb_mass - df_log$sampled_mass) /
    sum(df_log$amb_mass)
  dplyr::select(df_log, D_p, dens, bin_eff, amb_mass, sampled_mass,
                bin_frac_lost, total_frac_lost)
}
/scratch/gouwar.j/cran-all/cranData/AeroSampleR/R/report_log_mass.R
#' plots of individual transport system element efficiencies
#'
#' In order to run a report, first produce a model of each individual
#' element. Start with producing a particle distribution
#' with the `particle_dist` function, then produce a parameter set with
#' the `set_params` function. Both of these results must be stored as
#' per examples described in the help set with each. Next, add elements
#' in the sample system until all are complete.
#'
#' @param df is the particle data set - after transport analysis by element
#' @param dist selects the distribution for the report. Options are
#' 'discrete' for discrete particle sizes or 'log' for the log-normal
#' distribution of particles that were started with the `particle_dist`
#' function.
#'
#' @return A plot of transport efficiencies is generated in a plot window
#'
#' @examples
#' report_plots(dat_for_plots, dist = 'discrete')
#'
#' @export
#'
report_plots <- function(df, dist) {
  # quiet R CMD check notes on non-standard evaluation
  D_p = microns = sys_eff = dens = ambient = bin_eff = sampled = . =
    starts_with = everything = element = efficiency = amb_mass =
    rel_activity = location = NULL
  eff_cols <- tidyselect::starts_with("eff_", vars = names(df))
  if (dist == "discrete") {
    # plot by element, by particle size
    df_long <- df |>
      dplyr::filter(dist == "discrete") |>
      dplyr::select(., c(D_p, tidyselect::starts_with("eff_"))) |>
      tidyr::pivot_longer(cols = tidyselect::starts_with("eff_"),
                          names_to = "element",
                          values_to = "efficiency")
    df_long$element <- stringr::str_remove(df_long$element, "eff_")
    df_long$D_p <- as.factor(df_long$D_p)
    # This factor assignment retains the element order
    df_long$element <- factor(df_long$element,
                              levels = unique(df_long$element))
    plt <- ggplot2::ggplot(df_long,
        ggplot2::aes(element, efficiency, color = D_p, shape = D_p)) +
      ggplot2::geom_point(size = 3, alpha = 0.5) +
      ggthemes::scale_color_gdocs() +
      ggplot2::guides(x = ggplot2::guide_axis(angle = 90)) +
      ggplot2::ggtitle("transport efficiency by element")
    return(plt)
  }
  if (dist == "log") {
    # mass weighted plot
    # make data frame of just the log data
    df_log <- df |> dplyr::filter(dist == "log_norm")
    # compute efficiency for each particle size (bin) and add this column
    df_log$bin_eff <- apply(df_log[, eff_cols], 1, prod)
    # compute ambient mass-based quantity for each bin
    df_log$ambient <- df_log$dens * 4/3 * pi * (df_log$D_p/2)^3 *
      diff(c(0, df_log$D_p))
    df_log$sampled <- df_log$ambient * df_log$bin_eff
    df_log$microns <- df_log$D_p
    df_log |>
      dplyr::select(microns, ambient, sampled) |>
      tidyr::pivot_longer(2:3, names_to = "location",
                          values_to = "rel_activity") |>
      ggplot2::ggplot(ggplot2::aes(microns, rel_activity,
                                   color = location)) +
      ggplot2::geom_point() +
      ggplot2::ggtitle("ambient and sampled activity")
  }
}
/scratch/gouwar.j/cran-all/cranData/AeroSampleR/R/report_plots.R
#' Set parameters (not particle size specific)
#'
#' Make a set of parameters that will be used throughout this package.
#' `set_params_1` sets all single parameters.
#' `set_params_2` adds particle-size-dependent parameters to the
#' particle distribution
#'
#' All parameters are to be in MKS units, except as noted.
#'
#' @param D_tube_cm Inside diameter of tubing in cm, no default
#' @param Q_lpm System flow in lpm, no default
#' @param T_C System temperature in Celsius, default is 20
#' @param P_kPa System pressure in kPa (Pa is the MKS unit), default is 101.325
#'
#' @return a data frame with singular parameters
#'
#' @examples
#' params <- set_params_1("D_tube" = 2.54, "Q_lpm" = 100,
#' "T_C" = 25, "P_kPa" = 101.325)
#' t(params)
#'
#' @export
set_params_1 <- function(D_tube_cm, Q_lpm, T_C = 20, P_kPa = 101.325) {
  S <- 110.56 # Sutherland constant, K
  CK <- 273.15
  D_tube <- D_tube_cm / 100
  Q_lpm <- Q_lpm # used for reporting and calculating v
  velocity_air <- Q_lpm / 1000 / 60 / # conversion from lpm to m^3/s
    (pi * (D_tube / 2)^2) # m/s
  T_C <- T_C
  T_K <- T_C + CK
  P_kPa <- P_kPa
  R_u <- 8314.471 # universal gas constant J/kmol-K
  MW_air <- 28.962 # kg/kmol
  k <- 1.3807E-23 # N*m/K Boltzmann's Constant
  g <- 9.807 # m/s^2 gravitational acceleration
  density_par <- 1000 # kg/m^3 AMAD density
  # air density at ntp corrected to system T and P
  density_air <- 1.2041 * ((CK + 20) / (T_K) * (P_kPa / 101.325))
  # Depo_Calc Eq 3:
  viscosity_air <- 1.716E-05 * # ref viscosity N-s/m2
    ((T_K) / 273.11)^1.5 * (273.11 + S) / (T_K + S)
  # Depo_Calc Eq 4: (in microns)
  mfp <- 1e6 * sqrt(pi/8) * (viscosity_air / 0.4987445) *
    sqrt(1 / (density_air * P_kPa * 1000))
  # Depo_Calc Eq 5: Reynolds number for flow (applying only to tube)
  Re <- density_air * velocity_air * D_tube / viscosity_air
  params <- data.frame("D_tube" = D_tube,
                       "Q_lpm" = Q_lpm,
                       "velocity_air" = velocity_air,
                       "T_K" = T_K,
                       "P_kPa" = P_kPa,
                       "density_air" = density_air,
                       "viscosity_air" = viscosity_air,
                       "mfp" = mfp,
                       "density_par" = density_par,
                       "Re" = Re,
                       "k" = k)
  params
}
/scratch/gouwar.j/cran-all/cranData/AeroSampleR/R/set_params_1.R
#' Make a set of particle-size-dependent parameters
#'
#' This set of parameters will be used for evaluation of transport
#' efficiency for particle-size-dependent parameters.
#'
#' No user-selected arguments are needed. Parameters are used in
#' efficiency functions. For each particle diameter, an entry is
#' made in the data frame for the Cunningham slip correction factor,
#' the particle terminal velocity, the particle Reynold's number,
#' and the Stokes factor.
#'
#' `set_params_1` sets all single parameters.
#' `set_params_2` adds particle size-dependent parameters to the
#' particle distribution
#'
#' @param df is the particle data set (data frame) established with the
#' `particle_dist` function
#' @param params is the parameter data set for parameters that are not
#' particle size-dependent
#'
#' @examples
#' df <- particle_dist()
#' params <- set_params_1("D_tube" = 2.54, "Q_lpm" = 100,
#' "T_C" = 25, "P_kPa" = 101.325)
#' df <- set_params_2(df, params)
#' head(df)
#'
#' @return a data frame starting with the submitted particle
#' distribution with additional columns for particle-size-dependent
#' parameters
#'
#' @export
#'
set_params_2 <- function(df, params) {
  # Author notes for documentation in Q-CLC-G-00128
  # Depo_Calc Eq 6: Cunningham Correction Factor
  # Depo_Calc Eq 33: terminal settling velocity
  # Depo_Calc Eq 32: Particle Reynolds number (tube)
  # Depo_Calc Eq 7: Stokes number
  C_c = v_ts = sys_eff = NULL
  df <- df |> dplyr::mutate(
    C_c = 1 + params$mfp / df$D_p *
      (2.34 + 1.05 * exp(-0.39 * df$D_p)),
    v_ts = params$density_par * 9.807 * (1e-6 * df$D_p)^2 * C_c /
      (18 * params$viscosity_air),
    # C_d = 24 / -- skipping due to circular references (Re_p)
    # v_ts_turb = sqrt(4 * params$density_par * 9.807 * (1e-6 * df$D_p)^2 /
    #   (3 * C_d * params$density_air)),
    Re_p = params$density_air * v_ts * 1e-6 * df$D_p /
      params$viscosity_air,
    Stk = C_c * params$density_par * (1e-6 * df$D_p)^2 *
      params$velocity_air /
      (9 * params$viscosity_air * params$D_tube))
  df
}
/scratch/gouwar.j/cran-all/cranData/AeroSampleR/R/set_params_2.R
#' Tube efficiency
#'
#' Computation is consistent with the approach described in Hogue, Mark;
#' Thompson, Martha; Farfan, Eduardo; Hadlock, Dennis, (2014),
#' "Hand Calculations for Transport of Radioactive Aerosols through
#' Sampling Systems" Health Phys 106, 5, S78-S87,
#' <doi:10.1097/HP.0000000000000092>, with the exception that the diffusion
#' deposition mechanism is included.
#'
#' In order to run this function, first produce a particle distribution
#' with the `particle_dist` function, then produce a parameter set with
#' the `set_params` function. Both of these results must be stored as
#' per examples described in the help set with each.
#'
#' @param df is the particle data set (data frame) established with the
#' `particle_dist` function
#' @param params is the parameter data set for parameters that are not
#' particle size-dependent
#' @param L_cm tube length, cm
#' @param angle_to_horiz angle to horizontal in degrees
#' @param elnum element number to provide unique column names
#'
#' @returns data frame containing original particle distribution with added
#' data for this element
#'
#' @examples
#' # Example output is a sample of the full particle data set.
#'
#' # laminar flow (Reynolds number < 2100)
#'
#' df <- particle_dist() # distribution
#' params <- set_params_1("D_tube" = 2.54, "Q_lpm" = 20,
#' "T_C" = 25, "P_kPa" = 101.325) #example system parameters
#' df <- set_params_2(df, params) #particle size-dependent parameters
#' df <- probe_eff(df, params, orient = 'h') #probe orientation - horizontal
#' df <- tube_eff(df, params, L_cm = 100,
#' angle_to_horiz = 90, elnum = 2)
#' (df[sort(sample(1:1000, 10)), ])
#'
#' # turbulent flow (Reynolds number > 4000)
#'
#' df <- particle_dist() # distribution
#' params <- set_params_1("D_tube" = 2.54, "Q_lpm" = 100,
#' "T_C" = 25, "P_kPa" = 101.325) #example system parameters
#' df <- set_params_2(df, params) #particle size-dependent parameters
#' df <- probe_eff(df, params, orient = 'h') #probe orientation - horizontal
#' df <- tube_eff(df, params, L_cm = 100,
#' angle_to_horiz = 90, elnum = 2)
#' (df[sort(sample(1:1000, 10)), ])
#'
#' # midrange flow (Reynolds number > 2100 and < 4000)
#'
#' df <- particle_dist() # distribution
#' params <- set_params_1("D_tube" = 2.54, "Q_lpm" = 60,
#' "T_C" = 25, "P_kPa" = 101.325) #example system parameters
#' df <- set_params_2(df, params) #particle size-dependent parameters
#' df <- probe_eff(df, params, orient = 'h') #probe orientation - horizontal
#' df <- tube_eff(df, params, L_cm = 100,
#' angle_to_horiz = 90, elnum = 2)
#' (df[sort(sample(1:1000, 10)), ])
#'
#' @export
#'
tube_eff <- function(df, params, L_cm, angle_to_horiz, elnum) {
  # convert angle from degrees to radians
  angle_to_horiz_radians <- angle_to_horiz * pi/180
  L <- L_cm / 100 # L is in meters
  # assign some factors for use as needed:
  # diffusion coefficient
  Dc <- params$k * params$T_K * df$C_c /
    (3 * pi * params$viscosity_air * 1e-06 * df$D_p)
  # Schmidt number
  Sc <- params$viscosity_air / (params$density_air * Dc)
  # diffusion time
  t_diffus <- pi * Dc * L / (params$Q_lpm/1000/60)
  # Sherwood number (ratio of the convective mass transfer to the rate
  # of diffusive mass transport)
  ifelse(params$Re < 2100,
         Sh <- 3.66 + 0.2672 / (t_diffus + 1.0079 * t_diffus^(1/3)),
         Sh <- 0.0118 * params$Re^(7/8) * Sc^(1/3))
  # efficiency after THERMAL DIFFUSION deposition
  eff_therm <- exp(-t_diffus * Sh)
  #
  t_prime <- L * df$v_ts / (params$velocity_air * params$D_tube) *
    cos(angle_to_horiz_radians)
  #
  t_plus <- 0.0395 * df$Stk * params$Re^(3/4)
  #
  V_plus <- 6e-04 * t_plus^2 + 2e-08 * params$Re
  # particle deposition velocity
  Vt <- V_plus * params$velocity_air / 5.03 * params$Re^(-1/8)
  # efficiency after TURBULENT deposition
  eff_turb <- exp(-pi * t_prime * L * Vt / (params$Q_lpm/1000/60))
  #
  K <- 3/4 * t_prime
  # efficiency after GRAVITATIONAL SETTLING
  # laminar
  eff_grav_lam <- 1 - (2/pi) * (2 * K * sqrt(1 - K^(2/3)) -
    K^(1/3) * sqrt(1 - K^(2/3)) + asin(K^(1/3)))
  eff_grav_lam[is.na(eff_grav_lam)] <- 0
  #
  Z <- 4 * t_prime / pi
  # efficiency after GRAVITATIONAL SETTLING
  # turbulent
  eff_grav_turb <- exp(-Z)
  # efficiency with laminar flow
  lam <- eff_grav_lam * eff_therm
  # efficiency with turbulent flow
  turb <- eff_turb * eff_therm * eff_grav_turb
  # in between, use lower of the two
  lam_min <- lam < turb
  mixed <- dplyr::case_when(
    lam_min == TRUE ~ lam,
    TRUE ~ turb
  )
  eff_tube <- dplyr::case_when(
    # laminar flow
    params$Re < 2100 ~ lam,
    # turbulent flow
    params$Re > 4000 ~ turb,
    # not clearly laminar or turbulent
    TRUE ~ mixed)
  # add a column for latest efficiency
  df <- cbind(df, eff_tube)
  # rename eff_tube to provide unique column name
  names(df)[length(df)] <- paste0("eff_tube_", as.character(elnum))
  df
}
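The laminar settling expression above has two easy-to-check limits: no settling (K = 0) gives an efficiency of 1, and K = 1 gives 0. A base-R sanity check of just that formula (the K values are illustrative, not package output):

```r
# Laminar gravitational settling efficiency, as used in tube_eff(),
# with K = (3/4) * t_prime
eff_grav_lam <- function(K) {
  1 - (2/pi) * (2 * K * sqrt(1 - K^(2/3)) -
    K^(1/3) * sqrt(1 - K^(2/3)) + asin(K^(1/3)))
}
eff_grav_lam(0)   # no settling: efficiency 1
eff_grav_lam(1)   # complete deposition: efficiency 0 (to rounding)
```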
/scratch/gouwar.j/cran-all/cranData/AeroSampleR/R/tube_eff.R
## ----setup, include = FALSE---------------------------------------------------
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)

## ----echo=FALSE, message=FALSE, warning=FALSE---------------------------------
library(AeroSampleR)
library(ggplot2)
library(dplyr)
library(flextable)

## ----echo=FALSE, message=FALSE, warning=FALSE, comment = ">"------------------
sys_df <- structure(list(
  el_num = c("1", "2", "3", "4"),
  el_type = c("probe", "tube", "bend", "tube"),
  length_cm = c(NA, 111.76, NA, 146.05),
  angle_to_horiz = c(NA, 90, NA, 0),
  orient = c("u", NA, NA, NA),
  bend_angle = c(NA, NA, 90, NA),
  bend_rad_cm = c(NA, NA, 12.7, NA)),
  row.names = c(NA, -4L),
  class = c("tbl_df", "tbl", "data.frame"))
cat("\n")

## ----echo=FALSE, message=FALSE, warning=FALSE, comment = ">"------------------
ft <- flextable(sys_df)
ft <- colformat_double(ft, digits = 0)
ft

## ----echo=TRUE, eval=FALSE----------------------------------------------------
# sys_df <- read.table(
#   file = "c:/work/system.txt",
#   header = TRUE
# )

## ----echo=TRUE, eval=FALSE----------------------------------------------------
# sys_df <- readxl::read_xlsx(path = "c:/work/system.xlsx",
#   sheet = "Sheet1", #default - update if needed
#   range = "A1:G5", #put in entire range
#   col_types = c("numeric",
#                 "text",
#                 "numeric",
#                 "numeric",
#                 "text",
#                 "numeric",
#                 "numeric")
# )

## ----echo=TRUE, message=TRUE, warning=FALSE-----------------------------------
df <- particle_dist() #Default

## ----echo=FALSE, message=TRUE, warning=FALSE, fig.width=5, fig.height= 3------
df |> filter(dist == "log_norm") |>
  ggplot(aes(D_p, dens)) +
  geom_point(color = "blue") +
  ggtitle("distribution of lognormal particle sizes")
df |> filter(dist == "log_norm") |>
  mutate("activity" = D_p ^3 * dens) |>
  ggplot(aes(D_p, activity)) +
  geom_point(color = "blue") +
  ggtitle("relative activity by particle size",
          subtitle = "diameter cubed times density")

## ----echo=TRUE, message=TRUE, warning=FALSE-----------------------------------
# In this example the tubing wall is 1.65 mm thick.
params <- set_params_1("D_tube" = 2.54 - (2 * 0.165), #1 inch tube diameter
                       "Q_lpm" = 2 * 28.3, #2 cfm converted to lpm
                       "T_C" = 25,
                       "P_kPa" = 101.325)

## ----echo=TRUE, message=TRUE, warning=FALSE-----------------------------------
df <- set_params_2(df, params)

## ----echo=TRUE, message=TRUE, warning=FALSE-----------------------------------
df <- probe_eff(df, params, orient = sys_df$orient[1])

## ----echo=TRUE, message=TRUE, warning=FALSE-----------------------------------
df <- tube_eff(df, params,
               L_cm = sys_df$length_cm[2],
               angle_to_horiz = sys_df$angle_to_horiz[2],
               elnum = sys_df$el_num[2])

## ----echo=TRUE, message=TRUE, warning=FALSE-----------------------------------
df <- bend_eff(df, params, method = "Zhang",
               bend_angle = sys_df$bend_angle[3],
               bend_radius = sys_df$bend_rad_cm[3] / 100,
               elnum = sys_df$el_num[3])

## ----echo=TRUE, message=TRUE, warning=FALSE-----------------------------------
df <- tube_eff(df, params,
               L_cm = sys_df$length_cm[4],
               angle_to_horiz = sys_df$angle_to_horiz[4],
               elnum = sys_df$el_num[4])

## ----echo=TRUE, message=TRUE, warning=FALSE-----------------------------------
tail(df)

## ----echo=TRUE, message=FALSE, warning=FALSE, fig.width=5, fig.height= 3, fig.align = 'center'----
params[, 7] <- formatC(params[, 7], digits = 2, format = "e")
params[, 8] <- formatC(params[, 8], digits = 2, format = "e")
params[, 11] <- formatC(params[, 11], digits = 2, format = "e")
params[, 3] <- formatC(params[, 3], digits = 4)
params[, 10] <- formatC(params[, 10], digits = 4)
ft <- flextable(params)
ft <- set_caption(ft, "system parameters")
ft

## ----echo=TRUE, message=FALSE, warning=FALSE, fig.width=5, fig.height= 3, fig.align = 'center'----
ft <- flextable(report_basic(df, params, "discrete"))
ft <- colformat_double(ft, digits = 3)
ft <- set_caption(ft, "results for discrete particle diameters")
ft
report_plots(df, "discrete")
report_cum_plots(df, 1)
report_cum_plots(df, 5)
report_cum_plots(df, 10)
ft <- flextable(report_basic(df, params, "log"))
ft <- colformat_double(ft, digits = 3)
ft <- set_caption(ft, "results for log distribution of particle diameters")
ft

## ----echo=TRUE, message=TRUE, warning=FALSE, fig.width=5, fig.height= 3, fig.align = 'center'----
df_log <- report_log_mass(df)[sort(sample(1:1000, 10)), ]
# need to make format changes so that flextable will show scientific notation
df_log[, 1] <- formatC(df_log[, 1], digits = 4)
df_log[, 2] <- formatC(df_log[, 2], digits = 2, format = "e")
df_log[, 3] <- formatC(df_log[, 3], digits = 2, format = "e")
df_log[, 4] <- formatC(df_log[, 4], digits = 2, format = "e")
df_log[, 5] <- formatC(df_log[, 5], digits = 2, format = "e")
df_log[, 6] <- formatC(df_log[, 6], digits = 2, format = "e")
df_log[, 7] <- formatC(df_log[, 7], digits = 2, format = "e")
ft <- flextable(df_log)
ft <- colformat_double(ft, digits = 3)
ft <- set_caption(ft, "results for a random sample of 10 of the 1000 particle diameters from the log set")
ft

## ----echo=TRUE, message=TRUE, warning=FALSE, fig.width=5, fig.height= 3, fig.align = 'center'----
report_plots(df, "log")
/scratch/gouwar.j/cran-all/cranData/AeroSampleR/inst/doc/AeroSampleR_vignette.R
---
title: "Using AeroSampleR to model aerosol sampling efficiency"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Using AeroSampleR to model aerosol sampling efficiency}
  %\VignetteEncoding{UTF-8}
  %\VignetteEngine{knitr::rmarkdown}
editor_options:
  chunk_output_type: console
markdown:
  wrap: 72
---

```{=html}
<style>
body { text-align: justify}
</style>
```

```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

## AeroSampleR Introduction

Air sampling systems are used in many applications that require the
monitoring of hazardous airborne particles. When tubing is used to
collect aerosol particles, some of the particles are lost along the way.
The efficiency of the system depends on the particle sizes and the
tubing configuration.

This version of AeroSampleR provides sampling efficiency for a limited
set of system elements. Sampling systems always include a probe, after
which are combinations of straight tubing and bends. Some systems also
include expansion or contraction elements, or sample splitters; these
components are not covered in this version of AeroSampleR. The probe
model is limited to a simple open-ended pipe in still air.

AeroSampleR relies on the concept of activity median aerodynamic
diameter (AMAD), which accounts for particle density and shape, leaving
equivalent spherical water droplets as the modeling targets. Efficiency
functions are based predominantly on testing with aerosol particles
through stainless steel tubing. The
[Zhang](https://doi.org/10.1016/j.jaerosci.2012.05.007),
[McFarland](https://doi.org/10.1021/es960975c), and
[Pui](https://doi.org/10.1080/02786828708959166) bend models are used in
this package.

The aerosol transport models are based on tests on clean systems. This
package is designed primarily for new tubing designs. If a system is not
maintained clean and free of condensation, there can be no expectation
that sampling efficiency models will be accurate.
```{r echo=FALSE, message=FALSE, warning=FALSE}
library(AeroSampleR)
library(ggplot2)
library(dplyr)
library(flextable)
```

```{r echo=FALSE, message=FALSE, warning=FALSE, comment = ">"}
sys_df <- structure(list(
  el_num = c("1", "2", "3", "4"),
  el_type = c("probe", "tube", "bend", "tube"),
  length_cm = c(NA, 111.76, NA, 146.05),
  angle_to_horiz = c(NA, 90, NA, 0),
  orient = c("u", NA, NA, NA),
  bend_angle = c(NA, NA, 90, NA),
  bend_rad_cm = c(NA, NA, 12.7, NA)),
  row.names = c(NA, -4L),
  class = c("tbl_df", "tbl", "data.frame"))
cat("\n")
```

### example data

The first task in evaluating a system is to set up a table that includes
all the "elements" with the following column headers:

- `el_num` sequential number of the element
- `el_type` starting with "probe", followed by "tube" and "bend" elements
- `length_cm` length of tubes in centimeters. Leave blank for probes and
  bends.
- `angle_to_horiz` degree of slope of straight tube elements. Leave blank
  for probes and bends.
- `orient` orientation of the probe. Options are "u" for up, "d" for down
  and "h" for horizontal
- `bend_angle` how many degrees a bend turns the sample. Typically, 90 or
  45. Leave blank for probes and tubes.
- `bend_rad_cm` the bend radius in centimeters. Leave blank for probes
  and tubes.

```{r echo=FALSE, message=FALSE, warning=FALSE, comment = ">"}
ft <- flextable(sys_df)
ft <- colformat_double(ft, digits = 0)
ft
```

### 1) Get the system data

You need to get this data into R and call it `sys_df`, the "system" data
frame. There are many options on how to do this. Below are two examples.
You will have to provide the path and the file name, but for the
example, we'll show the file to be called system.txt or system.xlsx in
`c:/work`.

a. Use base R (the `utils` package that is loaded with base R) to read a
text file:

```{r echo=TRUE, eval=FALSE}
sys_df <- read.table(
  file = "c:/work/system.txt",
  header = TRUE
)
```

b.
Use the readxl package to read a spreadsheet of the 'xlsx' format:

```{r echo=TRUE, eval=FALSE}
sys_df <- readxl::read_xlsx(path = "c:/work/system.xlsx",
  sheet = "Sheet1", #default - update if needed
  range = "A1:G5", #put in entire range
  col_types = c("numeric",
                "text",
                "numeric",
                "numeric",
                "text",
                "numeric",
                "numeric")
)
```

### 2) Create particle distribution with `particle_dist()`

This function provides a logarithmic distribution of 1000 particle sizes
and an additional set of discrete particles. By default, the
logarithmically-distributed particles have an AMAD of 5 and a lognormal
standard deviation of 2.5, consistent with ICRP 66. The discrete
particles are 1, 5, and 10 micrometers AMAD.

```{r echo=TRUE, message=TRUE, warning=FALSE}
df <- particle_dist() #Default
```

```{r echo=FALSE, message=TRUE, warning=FALSE, fig.width=5, fig.height= 3}
df |> filter(dist == "log_norm") |>
  ggplot(aes(D_p, dens)) +
  geom_point(color = "blue") +
  ggtitle("distribution of lognormal particle sizes")
df |> filter(dist == "log_norm") |>
  mutate("activity" = D_p ^3 * dens) |>
  ggplot(aes(D_p, activity)) +
  geom_point(color = "blue") +
  ggtitle("relative activity by particle size",
          subtitle = "diameter cubed times density")
```

### 3) Set up the parameters for tube size, flow rate, temperature, and pressure.

These parameters are not particle dependent and so can be kept in a
small separate data frame.

- D_tube is the inner diameter of the tube {cm}
- Q_lpm is the flow rate of air {lpm}
- T_C is the system temperature {Celsius}
- P_kPa is the pressure of the system {kPa}

```{r echo=TRUE, message=TRUE, warning=FALSE}
# In this example the tubing wall is 1.65 mm thick.
params <- set_params_1("D_tube" = 2.54 - (2 * 0.165), #1 inch tube diameter
                       "Q_lpm" = 2 * 28.3, #2 cfm converted to lpm
                       "T_C" = 25,
                       "P_kPa" = 101.325)
```

Next, we compute the particle size-dependent parameters. These include
factors for transport efficiency computation.
- Cunningham Correction Factor {C_c}
- terminal settling velocity {v_ts}
- Particle Reynolds number (tube) {Re_p}
- Stokes number {Stk}

```{r echo=TRUE, message=TRUE, warning=FALSE}
df <- set_params_2(df, params)
```

At this point, our main particle distribution data frame has been
modified with computed factors for use in the transport efficiency
models, row by row.

### 4) Next, we compute the efficiency, element by element in transport order.

We have only four elements in this example and we will evaluate them
with `probe_eff()`, `tube_eff()`, `bend_eff()`, and lastly `tube_eff()`
again. This will add columns to our particle data frame.

Calculate the efficiency of the probe via `probe_eff()` and add it to a
new data frame. The orient argument sets the orientation of the probe.
"u" means the probe is vertically upward. "d" is for a vertically
downward facing probe. "h" is for a probe in a side configuration. The
probe is in the first row, so we use `[1]` to identify the orient
parameter.

```{r echo=TRUE, message=TRUE, warning=FALSE}
df <- probe_eff(df, params, orient = sys_df$orient[1])
```

Calculate the efficiency of the first tube with the `tube_eff()`
function. The tube length in centimeters (`length_cm`) and the angle of
the tube to the horizontal (`angle_to_horiz`) are specified here, taken
from row 2 of the element table. The function returns a column of
efficiencies for each distribution.

```{r echo=TRUE, message=TRUE, warning=FALSE}
df <- tube_eff(df, params,
               L_cm = sys_df$length_cm[2],
               angle_to_horiz = sys_df$angle_to_horiz[2],
               elnum = sys_df$el_num[2])
```

Calculate the efficiency of the bend. Here, we'll take the Zhang model
option. Bend efficiency is found via the `bend_eff()` function and is
where you choose one of three different bend models {Zhang, McFarland,
or Pui}. The bend angle and element number are also listed in the
function.
```{r echo=TRUE, message=TRUE, warning=FALSE}
df <- bend_eff(df, params, method = "Zhang",
               bend_angle = sys_df$bend_angle[3],
               bend_radius = sys_df$bend_rad_cm[3] / 100,
               elnum = sys_df$el_num[3])
```

Finally, we'll calculate transport efficiency through the last tube
element.

```{r echo=TRUE, message=TRUE, warning=FALSE}
df <- tube_eff(df, params,
               L_cm = sys_df$length_cm[4],
               angle_to_horiz = sys_df$angle_to_horiz[4],
               elnum = sys_df$el_num[4])
```

### At this point, the transport efficiencies have been built into the data frame.

Let's have a look at the bottom few rows. *It doesn't all fit
horizontally, so match the top portion and the bottom portion by the row
number.*

```{r echo=TRUE, message=TRUE, warning=FALSE}
tail(df)
```

### 5) Generate reports with `report_basic`, `report_plots` and `report_cum_plots`

The `report_basic` function provides total system efficiency for either
all of the logarithmically distributed particles or all of the discrete
particle sizes. The `report_plots` function shows individual element
efficiency. The `report_cum_plots` function shows cumulative efficiency
through the system. This plot takes efficiency data from the rows of the
data frame, so it only works for individually selected particle sizes.

We'll show the parameter set first, so that the output message on the
basic report, regarding units, makes sense.
```{r echo=TRUE, message=FALSE, warning=FALSE, fig.width=5, fig.height= 3, fig.align = 'center'}
params[, 7] <- formatC(params[, 7], digits = 2, format = "e")
params[, 8] <- formatC(params[, 8], digits = 2, format = "e")
params[, 11] <- formatC(params[, 11], digits = 2, format = "e")
params[, 3] <- formatC(params[, 3], digits = 4)
params[, 10] <- formatC(params[, 10], digits = 4)
ft <- flextable(params)
ft <- set_caption(ft, "system parameters")
ft
```

```{r echo=TRUE, message=FALSE, warning=FALSE, fig.width=5, fig.height= 3, fig.align = 'center'}
ft <- flextable(report_basic(df, params, "discrete"))
ft <- colformat_double(ft, digits = 3)
ft <- set_caption(ft, "results for discrete particle diameters")
ft
report_plots(df, "discrete")
report_cum_plots(df, 1)
report_cum_plots(df, 5)
report_cum_plots(df, 10)
ft <- flextable(report_basic(df, params, "log"))
ft <- colformat_double(ft, digits = 3)
ft <- set_caption(ft, "results for log distribution of particle diameters")
ft
```

### 6) Optional extra reports for the logarithmically distributed particle set

The `report_log_mass` function provides details on every particle size.
Since there are 1000 data points, the full output is probably not
suitable for a typical report. The report provides the following columns
of output:

- D_p = the particle size in micrometers (microns is shorter, but is
  considered superseded by micrometers)
- dens = relative probability of a particle being in this size bin
- bin_eff = the overall system efficiency for a particle of this size
- amb_mass = the probability of the particle multiplied by the mass of a
  spherical particle with the size given and density of 1 g per ml. This
  is the relative mass of this particle size in the ambient air being
  sampled.
- sampled_mass = the relative mass that made it through the sampling
  system with this particle size
- bin_frac_lost = ambient mass in this bin minus the sampled mass in the
  bin, divided by the ambient mass
- total_frac_lost = ambient mass in this bin minus the sampled mass in
  the bin, divided by the sum of the ambient mass

A random selection of ten of the 1000 rows is provided below:

```{r echo=TRUE, message=TRUE, warning=FALSE, fig.width=5, fig.height= 3, fig.align = 'center'}
df_log <- report_log_mass(df)[sort(sample(1:1000, 10)), ]
# need to make format changes so that flextable will show scientific notation
df_log[, 1] <- formatC(df_log[, 1], digits = 4)
df_log[, 2] <- formatC(df_log[, 2], digits = 2, format = "e")
df_log[, 3] <- formatC(df_log[, 3], digits = 2, format = "e")
df_log[, 4] <- formatC(df_log[, 4], digits = 2, format = "e")
df_log[, 5] <- formatC(df_log[, 5], digits = 2, format = "e")
df_log[, 6] <- formatC(df_log[, 6], digits = 2, format = "e")
df_log[, 7] <- formatC(df_log[, 7], digits = 2, format = "e")
ft <- flextable(df_log)
ft <- colformat_double(ft, digits = 3)
ft <- set_caption(ft, "results for a random sample of 10 of the 1000 particle diameters from the log set")
ft
```

The particle mass modeled in the ambient air and sampled through the air
sampling system is shown with the function `report_plots`.

```{r echo=TRUE, message=TRUE, warning=FALSE, fig.width=5, fig.height= 3, fig.align = 'center'}
report_plots(df, "log")
```
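The loss-fraction columns above follow directly from the definitions: since the sampled mass is the ambient mass times the bin efficiency, `bin_frac_lost` reduces to one minus the bin efficiency. A few lines of base R make the arithmetic concrete (the bin masses and efficiencies below are made-up illustrative numbers, not output from this vignette):

```r
# Hypothetical ambient bin masses and per-bin system efficiencies
amb <- c(2.1e-3, 5.4e-3, 1.2e-2)
eff <- c(0.98, 0.91, 0.72)

samp <- amb * eff                          # sampled mass per bin
bin_frac_lost   <- (amb - samp) / amb      # reduces to 1 - eff
total_frac_lost <- (amb - samp) / sum(amb) # per-bin share of total loss

# summing total_frac_lost gives the overall mass fraction lost
sum(total_frac_lost)
```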
/scratch/gouwar.j/cran-all/cranData/AeroSampleR/inst/doc/AeroSampleR_vignette.Rmd
---
title: "Using AeroSampleR to model aerosol sampling efficiency"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Using AeroSampleR to model aerosol sampling efficiency}
  %\VignetteEncoding{UTF-8}
  %\VignetteEngine{knitr::rmarkdown}
editor_options:
  chunk_output_type: console
markdown:
  wrap: 72
---

```{=html}
<style>
body { text-align: justify}
</style>
```

```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

## AeroSampleR Introduction

Air sampling systems are used in many applications that require the
monitoring of hazardous airborne particles. When tubing is used to
collect aerosol particles, some of the particles are lost along the way.
The efficiency of the system depends on the particle sizes and the
tubing configuration.

This version of AeroSampleR provides sampling efficiency for a limited
set of system elements. Sampling systems always include a probe, after
which are combinations of straight tubing and bends. Some systems also
include expansion or contraction elements, or sample splitters; these
components are not covered in this version of AeroSampleR. The probe
model is limited to a simple open-ended pipe in still air.

AeroSampleR relies on the concept of activity median aerodynamic
diameter (AMAD), which accounts for particle density and shape, leaving
equivalent spherical water droplets as the modeling targets. Efficiency
functions are based predominantly on testing with aerosol particles
through stainless steel tubing. The
[Zhang](https://doi.org/10.1016/j.jaerosci.2012.05.007),
[McFarland](https://doi.org/10.1021/es960975c), and
[Pui](https://doi.org/10.1080/02786828708959166) bend models are used in
this package.

The aerosol transport models are based on tests on clean systems. This
package is designed primarily for new tubing designs. If a system is not
maintained clean and free of condensation, there can be no expectation
that sampling efficiency models will be accurate.
```{r echo=FALSE, message=FALSE, warning=FALSE}
library(AeroSampleR)
library(ggplot2)
library(dplyr)
library(flextable)
```

```{r echo=FALSE, message=FALSE, warning=FALSE, comment = ">"}
sys_df <- structure(list(
  el_num = c("1", "2", "3", "4"),
  el_type = c("probe", "tube", "bend", "tube"),
  length_cm = c(NA, 111.76, NA, 146.05),
  angle_to_horiz = c(NA, 90, NA, 0),
  orient = c("u", NA, NA, NA),
  bend_angle = c(NA, NA, 90, NA),
  bend_rad_cm = c(NA, NA, 12.7, NA)),
  row.names = c(NA, -4L),
  class = c("tbl_df", "tbl", "data.frame"))
cat("\n")
```

### Example data

The first task in evaluating a system is to set up a table that includes all the "elements" with the following column headers:

- `el_num` sequential number of the element
- `el_type` starting with "probe", followed by "tube" and "bend" elements
- `length_cm` length of tubes in centimeters. Leave blank for probes and bends.
- `angle_to_horiz` degree of slope of straight tube elements. Leave blank for probes and bends.
- `orient` orientation of the probe. Options are "u" for up, "d" for down and "h" for horizontal.
- `bend_angle` how many degrees a bend turns the sample. Typically, 90 or 45. Leave blank for probes and tubes.
- `bend_rad_cm` the bend radius in centimeters. Leave blank for probes and tubes.

```{r echo=FALSE, message=FALSE, warning=FALSE, comment = ">"}
ft <- flextable(sys_df)
ft <- colformat_double(ft, digits = 0)
ft
```

### 1) Get the system data

You need to get this data into R and call it `sys_df`, the "system" data frame. There are many options for how to do this; below are two examples. You will have to provide the path and the file name, but for the example, we'll show the file to be called system.txt or system.xlsx in `c:/work`.

a. Use base R (the `utils` package that is loaded with base R) to read a text file:

```{r echo=TRUE, eval=FALSE}
sys_df <- read.table(
  file = "c:/work/system.txt",
  header = TRUE
)
```

b.
Use the readxl package to read a spreadsheet of the 'xlsx' format:

```{r echo=TRUE, eval=FALSE}
sys_df <- readxl::read_xlsx(path = "c:/work/system.xlsx",
  sheet = "Sheet1", #default - update if needed
  range = "A1:G5", #put in entire range
  col_types = c("numeric", "text", "numeric", "numeric",
                "text", "numeric", "numeric")
)
```

### 2) Create particle distribution with `particle_dist()`

This function provides a lognormal distribution of 1000 particle sizes and an additional set of discrete particles. By default, the lognormally distributed particles have an AMAD of 5 micrometers and a lognormal standard deviation of 2.5, consistent with ICRP 66. The discrete particles are 1, 5, and 10 micrometers AMAD.

```{r echo=TRUE, message=TRUE, warning=FALSE}
df <- particle_dist() #Default
```

```{r echo=FALSE, message=TRUE, warning=FALSE, fig.width=5, fig.height= 3}
df |> filter(dist == "log_norm") |>
  ggplot(aes(D_p, dens)) +
  geom_point(color = "blue") +
  ggtitle("distribution of lognormal particle sizes")

df |> filter(dist == "log_norm") |>
  mutate("activity" = D_p ^3 * dens) |>
  ggplot(aes(D_p, activity)) +
  geom_point(color = "blue") +
  ggtitle("relative activity by particle size",
          subtitle = "diameter cubed times density")
```

### 3) Set up the parameters for tube size, flow rate, temperature, and pressure.

These parameters are not particle dependent and so can be kept in a small separate data frame.

- D_tube is the inner diameter of the tube {cm}
- Q_lpm is the flow rate of air {lpm}
- T_C is the system temperature {Celsius}
- P_kPa is the pressure of the system {kPa}

```{r echo=TRUE, message=TRUE, warning=FALSE}
# In this example the tubing wall is 1.65 mm thick.
params <- set_params_1("D_tube" = 2.54 - (2 * 0.165), #1 inch OD tube
                       "Q_lpm" = 2 * 28.3, #2 cfm converted to lpm
                       "T_C" = 25,
                       "P_kPa" = 101.325)
```

Next, we compute the particle size-dependent parameters. These include factors for transport efficiency computation.
- Cunningham Correction Factor {C_c}
- terminal settling velocity {v_ts}
- Particle Reynolds number (tube) {Re_p}
- Stokes number {Stk}

```{r echo=TRUE, message=TRUE, warning=FALSE}
df <- set_params_2(df, params)
```

At this point, our main particle distribution data frame has been modified with computed factors for use in the transport efficiency models, row by row.

### 4) Next, we compute the efficiency, element by element in transport order.

We have only four elements in this example and we will evaluate them with `probe_eff()`, `tube_eff()`, `bend_eff()`, and lastly `tube_eff()` again. This will add columns to our particle data frame.

Calculate the efficiency of the probe via `probe_eff()` and add it to a new data frame. The orient argument sets the orientation of the probe. "u" means the probe is vertically upward. "d" is for a vertically downward facing probe. "h" is for a probe in a side configuration. The probe is in the first row, so we use `[1]` to identify the orient parameter.

```{r echo=TRUE, message=TRUE, warning=FALSE}
df <- probe_eff(df, params, orient = sys_df$orient[1])
```

Calculate the efficiency of the first tube. Tube efficiency is found using the `tube_eff()` function. The length, stored in `length_cm`, is converted from centimeters to meters, and the angle of the tube from horizontal (`angle_to_horiz`) is supplied here. The function adds an efficiency column for each distribution to the particle data frame.

```{r echo=TRUE, message=TRUE, warning=FALSE}
df <- tube_eff(df, params,
               L = sys_df$length_cm[2] / 100,
               angle_to_horiz = sys_df$angle_to_horiz[2],
               elnum = sys_df$el_num[2])
```

Calculate the efficiency of the bend. Here, we'll take the Zhang model option. Bend efficiency is found via the `bend_eff()` function, which is where you choose one of the three bend models {Zhang, McFarland, or Pui}. The bend angle and element number are also supplied to the function.
```{r echo=TRUE, message=TRUE, warning=FALSE}
df <- bend_eff(df, params, method = "Zhang",
               bend_angle = sys_df$bend_angle[3],
               bend_radius = sys_df$bend_rad_cm[3] / 100,
               elnum = sys_df$el_num[3])
```

Finally, we'll calculate transport efficiency through the last tube element.

```{r echo=TRUE, message=TRUE, warning=FALSE}
df <- tube_eff(df, params,
               L = sys_df$length_cm[4] / 100,
               angle_to_horiz = sys_df$angle_to_horiz[4],
               elnum = sys_df$el_num[4])
```

### At this point, the transport efficiencies have been built into the data frame.

Let's have a look at the bottom few rows. *It doesn't all fit horizontally, so match the top portion and the bottom portion by the row number.*

```{r echo=TRUE, message=TRUE, warning=FALSE}
tail(df)
```

### 5) Generate reports with `report_basic`, `report_plots` and `report_cum_plots`

The `report_basic` function provides total system efficiency for either all of the logarithmically distributed particles or all of the discrete particle sizes. The `report_plots` function shows individual element efficiency. The `report_cum_plots` function shows cumulative efficiency through the system. This plot takes efficiency data from the rows of the data frame; it therefore only works for individually selected particle sizes.

We'll show the parameter set first, so that the output message on the basic report, regarding units, makes sense.
```{r echo=TRUE, message=FALSE, warning=FALSE, fig.width=5, fig.height= 3, fig.align = 'center'}
params[, 7] <- formatC(params[, 7], digits = 2, format = "e")
params[, 8] <- formatC(params[, 8], digits = 2, format = "e")
params[, 11] <- formatC(params[, 11], digits = 2, format = "e")
params[, 3] <- formatC(params[, 3], digits = 4)
params[, 10] <- formatC(params[, 10], digits = 4)
ft <- flextable(params)
ft <- set_caption(ft, "system parameters")
ft
```

```{r echo=TRUE, message=FALSE, warning=FALSE, fig.width=5, fig.height= 3, fig.align = 'center'}
ft <- flextable(report_basic(df, params, "discrete"))
ft <- colformat_double(ft, digits = 3)
ft <- set_caption(ft, "results for discrete particle diameters")
ft

report_plots(df, "discrete")

report_cum_plots(df, 1)
report_cum_plots(df, 5)
report_cum_plots(df, 10)

ft <- flextable(report_basic(df, params, "log"))
ft <- colformat_double(ft, digits = 3)
ft <- set_caption(ft, "results for log distribution of particle diameters")
ft
```

### 6) Optional extra reports for the logarithmically distributed particle set

The `report_log_mass` function provides details on every particle size. Since there are 1000 data points, the full output is probably not suitable for a typical report. The report provides the following columns of output:

- microns = the particle size in micrometers (microns is shorter, but is considered superseded by micrometers)
- probs = relative probability of a particle being in this size bin
- bin_eff = the overall system efficiency for a particle of this size
- amb_mass = the probability of the particle multiplied by the mass of a spherical particle with the size given and density of 1 g per ml. This is the relative mass of this particle size in the ambient air being sampled.
- sampled_mass = the relative mass that made it through the sampling system with this particle size
- bin_frac_lost = ambient mass in this bin minus the sampled mass in the bin, divided by the ambient mass
- total_frac_lost = ambient mass in this bin minus the sampled mass in the bin, divided by the sum of the ambient mass

A random selection of ten of the 1000 rows is provided below:

```{r echo=TRUE, message=TRUE, warning=FALSE, fig.width=5, fig.height= 3, fig.align = 'center'}
df_log <- report_log_mass(df)[sort(sample(1:1000, 10)), ]

# need to make format changes so that flextable will show scientific notation
df_log[, 1] <- formatC(df_log[, 1], digits = 4)
df_log[, 2] <- formatC(df_log[, 2], digits = 2, format = "e")
df_log[, 3] <- formatC(df_log[, 3], digits = 2, format = "e")
df_log[, 4] <- formatC(df_log[, 4], digits = 2, format = "e")
df_log[, 5] <- formatC(df_log[, 5], digits = 2, format = "e")
df_log[, 6] <- formatC(df_log[, 6], digits = 2, format = "e")
df_log[, 7] <- formatC(df_log[, 7], digits = 2, format = "e")

ft <- flextable(df_log)
ft <- colformat_double(ft, digits = 3)
ft <- set_caption(ft, "results for a random sample of 10 of the 1000 particle diameters from the log set")
ft
```

The particle mass modeled in the ambient air and sampled through the air sampling system is shown with the function `report_plots`.

```{r echo=TRUE, message=TRUE, warning=FALSE, fig.width=5, fig.height= 3, fig.align = 'center'}
report_plots(df, "log")
```
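The particle size-dependent factors computed in step 3 can be cross-checked against standard aerosol-mechanics formulas. Below is a hedged base-R sketch of the Cunningham slip correction factor and the terminal settling velocity of a unit-density sphere; the constants (mean free path of 0.0665 micrometers, air viscosity of 1.81e-5 Pa s) are textbook room-condition assumptions for illustration and are not taken from AeroSampleR's internals.

```r
# Cunningham slip correction, standard aerosol-mechanics form
# (assumed constants, not AeroSampleR internals)
cunningham <- function(d_um, mfp_um = 0.0665) {
  kn <- 2 * mfp_um / d_um                  # Knudsen number
  1 + kn * (1.257 + 0.4 * exp(-1.1 / kn))
}

# terminal settling velocity of a unit-density (1000 kg/m^3) sphere, m/s
v_ts <- function(d_um, rho = 1000, mu = 1.81e-5, g = 9.81) {
  d_m <- d_um * 1e-6                       # diameter in metres
  rho * d_m^2 * g * cunningham(d_um) / (18 * mu)
}

cunningham(0.1)  # slip correction matters most for small particles
cunningham(10)   # close to 1 for a 10 micrometer particle
v_ts(10)         # roughly 3e-3 m/s
```

For a 10 micrometer droplet this gives a settling velocity of roughly 3 mm/s, the order of magnitude expected in the Stokes regime.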
/scratch/gouwar.j/cran-all/cranData/AeroSampleR/vignettes/AeroSampleR_vignette.Rmd
#' Aggregate numeric, Date and categorical variables
#'
#' The \code{Aggregate} function (not to be confused with \code{aggregate}) prepares a data frame or data table for merging by computing the sum, mean and variance of all continuous (integer and numeric) variables by a given variable. For all categorical variables (character and factor), it creates dummies and subsequently computes the sum and the mode by a given variable. For all Date variables, it computes the recency and duration by a given variable with respect to an end date variable. For computational speed, all the calculations are done with \code{data.table}. This function aims at maximum information extraction with a minimum amount of code.
#'
#' @param x A data frame or data table. Categorical variables have to be of type character or factor and continuous variables have to be of type integer or numeric. Date variables should be in the Date format.
#' @param by A character string specifying the variable on which to aggregate the results. Note that 'by' should be a variable of the table 'x'.
#' @param end_ind A Date object, or something which can be coerced by \code{as.Date(origin, ...)} to such an object. If not specified, we take \code{Sys.Date()} as the end date.
#' @param format A character string. If not specified, the ISO 8601 international standard which expresses a day "\%Y-\%m-\%d" is taken.
#' @param tibble Should the output be a tibble, data frame or data table? By default, the function returns a data frame or data table depending on the input. To return a tibble, the user must set tibble = TRUE.
#' @param verbose Indicator used to show the progress.
#' @param object Parameter related to the \code{dummy} function. See ?\code{dummy} for more information.
#' @param p Parameter related to the \code{dummy} function. See ?\code{dummy} for more information.
#'
#' @return A data frame, data table or tibble with the aforementioned variables aggregated by the given 'by' variable.
If the input is a data frame, a data frame is returned else a data table is returned. #' #' @author Authors: Matthias Bogaert, Michel Ballings, Dirk Van den Poel, Maintainer: \email{matthias.bogaert@@UGent.be} #' @examples #' # Example #' # Create some data #' data <- data.frame(V1=sample(as.factor(c('yes','no')), 200000, TRUE), #' V2=sample(as.character(c(1,2,3,4,5)),200000, TRUE), #' V3=sample(1:20000,200000, TRUE), #' V4=sample(300:1000, 200000, TRUE), #' V5 = sample(as.Date(as.Date('2014-12-09'):Sys.Date()-1, #' origin = "1970-01-01"),200000,TRUE), #' ID=sample(x = as.character(1:4), size = 200000, replace = TRUE)) #' #' Aggregate(x=data,by='ID') #' #' # Examples of how to use the object and p argument. See dummy and categories function for details. #' # Aggregate(x=data,by='ID',object=categories(data)) #' # Aggregate(x=data,by='ID',p=2) #' @import stats methods data.table NCmisc #' @importFrom utils tail #' @importFrom tibble as_tibble #' @export # From now on the x parameter needs to be a data frame/data table WITH the by column. # From now on the by parameter needs to be the name of the by column in quotes # From now on all the analyses are done in data.table and then transformed to data.frame or data.table # Allow print out in a tibble by setting tibble = TRUE. 
Default = FALSE Aggregate <- function (x, by, end_ind = Sys.Date(), format = '%Y-%m-%d', tibble = FALSE, verbose = TRUE, object = NULL, p = "all") { options(stringsAsFactors = FALSE) Mode <- function(x) { ux <- unique(x) ux[which.max(tabulate(match(x, ux)))] } # problem handling if (!any(sapply(by,is.character))) stop('by-variable must be a character!') if (class(end_ind) != 'Date') stop('The end_ind should be of class Date') if (!any(class(x) %in% c("data.table", "data.frame")))stop("x needs to be either a data.frame or data.table") if (all(class(x) == "data.frame")) cl <- "data.frame" else cl <- "data.table" if (!any(class(x) == "data.table")) setDT(x) ###NEW: Assume x is data table for speed issues categoricals <- (sapply(x, is.factor) | sapply(x, is.character)) & colnames(x) != by ###NEW if (any(categoricals == TRUE)) { if(verbose == TRUE) cat('Calculating categorical variables ... \n') if(!is.null(object)) { ind <- which(names(object) == by) object <- object[-ind] } dummies_df <- dummy(x = x[,categoricals,with=FALSE],num = TRUE, ref = TRUE, object = object, p = p) ### Works with the new dummy function dummies_df <- cbind(x[,by, with = FALSE], dummies_df) dummies_df <- dummies_df[,as.list(unlist(lapply(.SD, function(x) list(sum=sum(x), mode=Mode(x))))), by=eval(by)]###NEW names(dummies_df) <- gsub('[.]','_',names(dummies_df)) } numerics <- sapply(x, is.numeric) | sapply (x, is.integer) if (any(numerics==TRUE)) { if(verbose == TRUE) cat('Calculating numerical variables ... \n') numerics <- numerics | colnames(x) == by numerics_df <- x[, numerics, with = FALSE] numerics_df <- numerics_df[,as.list(unlist(lapply(.SD, function(x) list(sum=sum(x), mean=mean(x), var=var(x))))), by=eval(by)] names(numerics_df) <- gsub('[.]','_',names(numerics_df)) } dates <- sapply(x,function(z) is(z,"Date")) if(any(dates == TRUE)) { if(verbose == TRUE) cat('Calculating date variables ... 
\n') end_ind <- as.Date(end_ind, format = format) dates <- dates | colnames(x) == by dates_df <- x[,dates, with = FALSE] dates_df <- dates_df[, as.list(unlist(lapply(.SD, function (x) list(duration = end_ind - min(x), recency = end_ind - max(x) )))), by = eval(by)] names(dates_df) <- gsub('[.]','_',names(dates_df)) } if (any(dates == TRUE) && any(categoricals == TRUE) && any(numerics == TRUE)) { mergelist <- list(dummies_df, numerics_df, dates_df) final <- Reduce(function(x,y) merge(x,y, by = by), mergelist) if (cl == 'data.frame') final <- data.frame(final) else final <- data.table (final) } else if (any(categoricals == TRUE) && any(numerics == TRUE)) { final <- merge(dummies_df, numerics_df, by = by) if (cl == 'data.frame') final <- data.frame(final) else final <- data.table (final) } else if (any(categoricals == TRUE) && any(dates == TRUE)) { final <- merge(dummies_df, dates_df, by = by) if (cl == 'data.frame') final <- data.frame(final) else final <- data.table (final) } else if (any(dates == TRUE) && any(numerics == TRUE)) { final <- merge(numerics_df, dates_df, by = by) if (cl == 'data.frame') final <- data.frame(final) else final <- data.table (final) } else if (any(categoricals == TRUE)) { if (cl == 'data.frame') final <- data.frame(dummies_df) else final <- data.table (dummies_df) } else if (any(numerics == TRUE)) { if (cl == 'data.frame') final <- data.frame(numerics_df) else final <- data.table (numerics_df) } else if (any(dates == TRUE)) { if (cl == 'data.frame') final <- data.frame(dates_df) else final <- data.table (dates_df) } if(tibble) as_tibble(final) else final }
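As a minimal illustration of what \code{Aggregate} computes for numeric columns, the same sum/mean/variance-by-group summary can be sketched in base R (the real function uses \code{data.table} for speed and also handles categorical and Date columns; the toy data below are made up):

```r
# toy data: one numeric column V1, grouped by ID
df <- data.frame(ID = c("a", "a", "b", "b"),
                 V1 = c(1, 3, 2, 6))

# per-group sum, mean and (sample) variance, base R only
agg <- do.call(rbind, lapply(split(df$V1, df$ID), function(v)
  data.frame(V1_sum = sum(v), V1_mean = mean(v), V1_var = var(v))))

agg
# a: sum 4, mean 2, var 2; b: sum 8, mean 4, var 8
```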
/scratch/gouwar.j/cran-all/cranData/AggregateR/R/Aggregate.R
#' Extraction of Categorical Values as a Preprocessing Step for Making Dummy Variables
#'
#' \code{categories} stores all the categorical values that are present in the factors and character vectors of a data frame. Numeric and integer vectors are ignored. It is a preprocessing step for the \code{dummy} function. This function is appropriate for settings in which the user only wants to compute dummies for the categorical values that were present in another data set. This is especially useful in predictive modeling, when the new (test) data has more or other categories than the training data.
#'
#' @param x data frame or data table containing factors or character vectors that need to be transformed to dummies. Numerics, dates and integers will be ignored.
#' @param p select the top p values in terms of frequency. Either "all" (all categories in all variables), an integer scalar (top p categories in all variables), or a vector of integers (number of top categories per variable in order of appearance).
#' @examples #' #create toy data #' (traindata <- data.frame(var1=as.factor(c("a","b","b","c")), #' var2=as.factor(c(1,1,2,3)), #' var3=c("val1","val2","val3","val3"), #' stringsAsFactors=FALSE)) #' (newdata <- data.frame(var1=as.factor(c("a","b","b","c","d","d")), #' var2=as.factor(c(1,1,2,3,4,5)), #' var3=c("val1","val2","val3","val3","val4","val4"), #' stringsAsFactors=FALSE)) #' #' categories(x=traindata,p="all") #' categories(x=traindata,p=2) #' categories(x=traindata,p=c(2,1,3)) #' @seealso \code{\link{dummy}} #' @return A list containing the variable names and the categories #' @author Authors: Michel Ballings, and Dirk Van den Poel, Maintainer: \email{Michel.Ballings@@GMail.com} #' @export categories <- function(x,p="all"){ colnames(x) <- make.names(colnames(x),TRUE) categoricals <- which(sapply(x,function(x) is.factor(x) || is.character(x))) if (!any(class(x)=="data.table")){ #if data.frame x <- x[,categoricals, drop=FALSE] } else if (any(class(x)=="data.table")){ #if data.table x <- x[,categoricals, with=FALSE] } cats <- sapply(1:ncol(x),function(z) { if (!any(class(x)=="data.table")){ #if data.frame cats <- table(x[,z]) } else if (any(class(x)=="data.table")){ #if data.table cats <- table(x[,z, with=FALSE]) } if(is.numeric(p) && length(p) == 1) { names(sort(cats,decreasing=TRUE)[1:if(length(cats) <= p) length(cats) else p]) } else if (is.numeric(p) && length(p) > 1) { names(sort(cats,decreasing=TRUE)[1:if(length(cats) <= p[z]) length(cats) else p[z]]) } else if (p=="all") { names(cats) } },simplify=FALSE) names(cats) <- names(x) return(cats) }
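The heart of \code{categories} is selecting the top-p levels of a vector by frequency, which is also how the function body above does it. A minimal base-R sketch of that step:

```r
# top-p categories of a character vector by frequency
v <- c("a", "b", "b", "c", "b", "a")
tab <- sort(table(v), decreasing = TRUE)   # b=3, a=2, c=1
top2 <- names(tab)[seq_len(min(2, length(tab)))]
top2   # "b" "a"
```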
/scratch/gouwar.j/cran-all/cranData/AggregateR/R/categories.R
#' Fast-automatic Dummy Variable Creation with Support for Predictive Contexts
#'
#' \code{dummy} creates dummy variables of all the factors and character vectors in a data frame or data table. It also supports settings in which the user only wants to compute dummies for the categorical values that were present in another data set. This is especially useful in the context of predictive modeling, in which the new (test) data has more or other categories than the training data. For computational speed, the code is written in \code{data.table}.
#'
#' @param x a data frame or data table containing at least one factor or character vector
#' @param p Only relevant if object is NULL. Select the top p values in terms of frequency. Either "all" (all categories in all variables), an integer scalar (top p categories in all variables), or a vector of integers (number of top categories per variable in order of appearance).
#' @param object output of the \code{categories} function. This parameter is to be used when dummies should be created only of categories present in another data set (e.g., training set)
#' @param num should the dummies be of class numeric (TRUE) or factor (FALSE). Setting this to TRUE will speed up execution considerably.
#' @param verbose logical. Used to show progress. Does not work when \code{parallel="variable"}.
#' @param ref logical. Only relevant when x is a data.table. If TRUE x will be overwritten by the dummy output (called transformed x), and a reference (i.e., not a copy) to the transformed x will be returned invisibly. If FALSE, x will be left untouched, and the output will be returned as usual. The difference between ref=TRUE and ref=FALSE is that the former uses less memory equal to the amount of the original x (not transformed x). If ref=TRUE only the transformed x survives the function. If ref=FALSE both the original x and the output (equal in size to the transformed x) will survive.
The difference is hence the size of the original x, and therefore ref=TRUE is more memory efficient. #' @examples #' #create toy data #' (traindata <- data.frame(var1=as.factor(c("a","b","b","c")), #' var2=as.factor(c(1,1,2,3)), #' var3=c("val1","val2","val3","val3"), #' stringsAsFactors=FALSE)) #' (newdata <- data.frame(var1=as.factor(c("a","b","b","c","d","d")), #' var2=as.factor(c(1,1,2,3,4,5)), #' var3=c("val1","val2","val3","val3","val4","val4"), #' stringsAsFactors=FALSE)) #' #create dummies of training set #' (dummies_train <- dummy(x=traindata)) #' #create dummies of new set #' (dummies_new <- dummy(x=newdata)) #' #' #how many new dummy variables should not have been created? #' sum(! colnames(dummies_new) %in% colnames(dummies_train)) #' #' #create dummies of new set using categories found in training set #' (dummies_new <- dummy(x=newdata,object=categories(traindata,p="all"))) #' #' #how many new dummy variables should not have be created? #' sum(! colnames(dummies_new) %in% colnames(dummies_train)) #' #' #' #create dummies of training set, #' #using the top 2 categories of all variables found in the training data #' dummy(x=traindata,p=2) #' #' #create dummies of training set, #' #using respectively the top 2,3 and 1 categories of the three #' #variables found in training data #' dummy(x=traindata,p=c(2,3,1)) #' #' #create all dummies of training data #' dummy(x=traindata) #' #' \dontrun{ #' ####################### #' #example ref parameter #' #' #ref=TRUE, example 1 #' (DT = data.table(a=c("a","b"),b=c("c","c"))) #' dummy(DT,ref=TRUE) #' DT[] #DT has changed #' #' #ref=TRUE, example 2 #' #uses exactly same amount of memory as example 1 #' (DT = data.table(a=c("a","b"),b=c("c","c"))) #' d1 <- dummy(DT,ref=TRUE) #' DT[] #DT has changed #' d1[] #d1 is a reference (not a copy) to DT #' #' #ref=FALSE, example 3 #' #example 1 and 2 are more memory efficient than example 3 #' (DT = data.table(a=c("a","b"),b=c("c","c"))) #' d2 <- dummy(DT, ref=FALSE) #' DT[] 
#DT has not changed
#' d2[]
#' # deleting DT after dummy finishes would result in the same final
#' # memory footprint as example 1 and 2, except that in example 3
#' # memory usage is higher when dummy is being executed, and this may be
#' # problematic when DT is large.
#' }
#' @seealso \code{\link{categories}}
#' @return A data frame or data table containing dummy variables. If ref=TRUE then the output will be invisible and x will contain the output. NOTE: data.table currently has a print bug. In some cases the output does not print. Running the output object multiple times or running it once with [] appended will make it print. In either case, the output will be produced. str() also always works.
#' @author Authors: Michel Ballings, and Dirk Van den Poel, Maintainer: \email{Michel.Ballings@@GMail.com}
#' @export
dummy <- function(x, p="all", object=NULL, num=TRUE, verbose=FALSE, ref=FALSE){
  if(all(class(x)=="data.frame") || ref==FALSE) {
    # for data.frame
    # will break reference if used on data.table
    # (i.e., operate on object by value)
    colnames(x) <- make.names(colnames(x),TRUE)
  } else {
    # for data.table when we want to keep
    # the reference (i.e., operate on object by reference)
    # (note that the solution for data.frame
    # would work too, but it would break the reference)
    setnames(x,names(x),make.names(names(x),TRUE))
  }
  if(is.null(object)) object <- categories(x,p=p)
  ans <- list()
  len <- length(object)
  #error handling
  if (!any(class(x) %in% c("data.table","data.frame"))) stop("x needs to be either a data.frame or data.table")
  #store class of x at input
  if(all(class(x)=="data.frame")) cl <- "data.frame" else cl <- "data.table"
  #change to data.table if x was a data.frame at input
  if (!any(class(x)=="data.table")) setDT(x)
  if (verbose) cat("Start\n")
  for (i in 1:len){
    #for each value an ifelse
    ii <- 0
    envir <- environment()
    for (z in object[[i]]){
      if (num==FALSE) {
        x[, make.names(paste0(names(object)[i],"_",z),TRUE) := as.factor(ifelse(get(names(object)[i]) ==
get("z",envir= envir),1,0))] } else if (num==TRUE) { x[, make.names(paste0(names(object)[i],"_",z),TRUE) := ifelse(get(names(object)[i]) == get("z",envir= envir),1,0)] } if (verbose) { if (ii != 0 && w==z) ii <- 0 ii <- ii + 1 cat(" ",round((ii*100)/length(object[[i]]),0),"% of categories processed \n") } w <- z } if (verbose) cat(round((i*100)/len,0),"% of variables processed \n") } #remove original categoricals for (i in 1:len){ x[ , names(object)[i]:= NULL ] } #change back to data.frame if x was a data.frame at input if (cl=="data.frame") setDF(x) #use x[] instead of x to make sure it prints if (any(class(x)=="data.table") && ref==TRUE){ return(invisible(x)) } else { return(x) } }
/scratch/gouwar.j/cran-all/cranData/AggregateR/R/dummy.R
#' @title agrInt2alpha #' @description Function agrInt2alpha calculates discordance rate (alpha) using clinically meaningful limit. #' @author Jialin Xu, Jason Liao #' @export #' @rdname agrInt2alpha #' @param clin.limit Clinically meaningful lower and upper limit #' @param n Sample size #' @param sigmae Variance estimate of residual from measurement error model #' @return Discordance rate #' #' @details Function agrInt2alpha calculates discordance rate (alpha) using clinically meaningful limit. #' #' @examples #' agrInt2alpha(clin.limit=c(-15, 15), n=52, sigmae=46.09245) #' @references Jason J. Z. Liao, Quantifying an Agreement Study, Int. J. Biostat. 2015; 11(1): 125-133 agrInt2alpha=function(clin.limit, n, sigmae){ if (length(clin.limit)!=2 | !is.numeric(clin.limit)) stop("Error: clin.limit has to be numeric vector of length 2") if (!is.numeric(n) | n<0 | length(n)!=1 | !is.numeric(sigmae) | sigmae<0 | length(sigmae)!=1 ) stop("Error: n and sigmae should be numeric values greater than 0, of length 1") width=diff(clin.limit) qt=width/2/sqrt(2*sigmae) result=2*(1-pt(qt, n-1)) return(result) }
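A worked check of the formula above using the numbers from the roxygen example: with clinical limits of -15 and 15, n = 52 and sigmae = 46.09245, the discordance rate is 2 * P(T(n-1) > (width/2)/sqrt(2*sigmae)), computed here with base R only.

```r
# reproduce agrInt2alpha(clin.limit=c(-15, 15), n=52, sigmae=46.09245)
width <- diff(c(-15, 15))            # 30
q     <- width / 2 / sqrt(2 * 46.09245)   # about 1.562
alpha <- 2 * (1 - pt(q, 52 - 1))     # two-sided t tail, 51 df
round(alpha, 3)
```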
/scratch/gouwar.j/cran-all/cranData/AgreementInterval/R/argInt2alpha.R
#' @title aiAdj #' @description Function aiAdj calculates bias-adjusted average interval from ai object #' @author Jialin Xu, Jason Liao #' @export #' @rdname aiAdj #' @param object ai object from ai function #' @param x A numeric value or a vector of numeric values to calculate bias-adjusted average interval for #' @return bias-adjusted and total-adjusted average interval for each value in \code{x} #' #' @details Function aiAdj uses proportional bias per \code{x} unit, Liao's average interval, #' Liao's average interval adjusted for fixed bias to calculate bias-adjusted and total-adjusted average interval. #' #' @examples #' ans <- ai(x=IPIA$Tomography, y=IPIA$Urography) #' aiAdj(object=ans, x=1) #' aiAdj(object=ans, x=c(1, 2)) #' @references Jason J. Z. Liao, Quantifying an Agreement Study, Int. J. Biostat. 2015; 11(1): 125-133 aiAdj=function(object, x){ if (class(object)!="ai"){ msg="Please input ai object running through improvedBA function" stop(msg) } if (!is.numeric(x) | !is.vector(x)) stop("Error: please enter a numeric vector of x for adjusted average interval estimation.") result=object cnames=c("x", paste("Liao.AI.Adj.Propo.", result$conf.level, c("LL", "UL"), sep=""), paste("Liao.AI.Adj.Total.", result$conf.level, c("LL", "UL"), sep="") ) pred=data.frame(matrix(NA, length(x), 5)) if (length(x)>1) { pred[, 1]=x pred[, 2]=result$intervalEst["Liao.AI", 1] + result$biasEst["Propo.Bias.at.x=1", "Est"]*x pred[, 3]=result$intervalEst["Liao.AI", 2] + result$biasEst["Propo.Bias.at.x=1", "Est"]*x pred[, 4]=result$intervalEst["Liao.AI.Adj.Fixed", 1] + result$biasEst["Propo.Bias.at.x=1", "Est"]*x pred[, 5]=result$intervalEst["Liao.AI.Adj.Fixed", 2] + result$biasEst["Propo.Bias.at.x=1", "Est"]*x colnames(pred)=cnames } if (length(x)==1) { pred[1]=x pred[2]=result$intervalEst["Liao.AI", 1] + result$biasEst["Propo.Bias.at.x=1", "Est"]*x pred[3]=result$intervalEst["Liao.AI", 2] + result$biasEst["Propo.Bias.at.x=1", "Est"]*x 
pred[4]=result$intervalEst["Liao.AI.Adj.Fixed", 1] + result$biasEst["Propo.Bias.at.x=1", "Est"]*x pred[5]=result$intervalEst["Liao.AI.Adj.Fixed", 2] + result$biasEst["Propo.Bias.at.x=1", "Est"]*x names(pred)=cnames } return(pred) }
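The adjustment \code{aiAdj} applies is a linear shift of the interval limits by the proportional bias times x, as in the assignments above. A sketch of the arithmetic with made-up numbers (the limits and bias below are hypothetical, not taken from the IPIA example):

```r
# hypothetical average-interval limits and proportional bias per x unit
ll <- -10; ul <- 10
prop_bias <- 0.2
x <- 5

# bias-adjusted interval at x = 5: both limits shift by prop_bias * x
adj <- c(ll, ul) + prop_bias * x
adj   # -9 11
```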
/scratch/gouwar.j/cran-all/cranData/AgreementInterval/R/calAIAdj.R
#' IPIA measures from 52 kidneys #' #' A dataset containing inferior pelvic infundibular angle (IPIA) dataset measured by urography and tomography on n=52 kidneys. The variables are as follows: #' #' @format A data frame with 52 rows and 3 variables: #' \itemize{ #' \item id: sample ids #' \item Urography: IPIA data evaluated by means of computerized urography #' \item Tomography: IPIA data evaluated by means of computerized tomography #' } #' @references Luiz RR, Costa AJL, Kale PL, Werneck GL. Assessment of agreement of a quantitative variable: a new graphical approach. J Clin Epidemiol 2003; 56:963-7. "IPIA"
/scratch/gouwar.j/cran-all/cranData/AgreementInterval/R/data.R
#' @title ai
#' @description Calculate Agreement Interval of Two Measurement Methods and quantify the agreement
#' @author Jialin Xu, Jason Liao
#' @export
#' @rdname ai
#' @param x A continuous numeric vector from measurement method 1
#' @param y A continuous numeric vector from measurement method 2, the same length as x.
#' @param lambda Reliability ratio of x vs y. default 1.
#' @param alpha Discordance rate to estimate confidence interval
#' @param clin.limit Clinically meaningful limit (optional)
#' @return Function ai returns an object of class "ai".
#'
#' An object of class "ai" is a list containing the following components:
#'
#' alpha: Alpha input for confidence interval estimates
#'
#' n: Sample size
#'
#' conf.level: Confidence level calculated from alpha
#'
#' lambda: Reliability ratio input of x vs y
#'
#' summaryStat: Summary statistics of input data
#'
#' sigma.e: Random error estimates
#'
#' indexEst: Agreement estimates (CI) based on index approaches
#'
#' intervalEst: Agreement estimates (CI) based on interval approaches
#'
#' biasEst: Bias estimate
#'
#' intercept: Intercept of linear regression line from measurement error model
#'
#' slope: Slope of linear regression line from measurement error model
#'
#' x.name: x variable name extracted from input, used for plotting
#'
#' y.name: y variable name extracted from input, used for plotting
#'
#' tolProb.cl: Tolerance probability calculated based on optional clinically meaningful limit
#'
#' k.cl: Number of discordance pairs based on optional clinically meaningful limit
#'
#' alpha.cl: Discordance rate based on clinically meaningful limit
#'
#' @details This is the function to calculate the agreement interval (confidence interval) of two continuous numeric vectors from two measurement methods on the same samples. Note that this function only works for the scenario with two evaluators, for example, comparing the concordance between two evaluators. We are working on the scenario with more than two evaluators.
#' The two numerical vectors are \code{x} and \code{y}. It also provides commonly used measures based on index approaches, #' for example, Pearson's correlation coefficient, the intraclass correlation coefficient (ICC), #' the concordance correlation coefficient (Lin's CCC), and the improved CCC (Liao's ICCC). #' #' @examples #' ai(x=1:4, y=c(1, 1, 2, 4)) #' a <- c(1, 2, 3, 4, 7) #' b <- c(1, 3, 2, 5, 3) #' ai(x=a, y=b) #' ai(x=IPIA$Tomography, y=IPIA$Urography) #' ai(x=IPIA$Tomography, y=IPIA$Urography, clin.limit=c(-15, 15)) #' @importFrom stats aov cor pt qnorm qt var #' @importFrom psych ICC #' @references Luiz RR, Costa AJL, Kale PL, Werneck GL. Assessment of agreement of a quantitative variable: a new graphical approach. J Clin Epidemiol 2003; 56:963-7. #' @references Jason J. Z. Liao, Quantifying an Agreement Study, Int. J. Biostat. 2015; 11(1): 125-133 #' @references Shrout, Patrick E. and Fleiss, Joseph L. Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin, 1979, 86, 420-428. #' @references Lin L-K., A Concordance Correlation Coefficient to Evaluate Reproducibility. Biometrics 1989; 45:255-68 #' @references Liao JJ. An Improved Concordance Correlation Coefficient. Pharm Stat 2003; 2:253-61 #' @references Nicole Jill-Marie Blackman, Reproducibility of Clinical Data I: Continuous Outcomes, Pharm Stat 2004; 3:99-108 #' # This is the main function to calculate the agreement interval and create an "ai" object # based on continuous numeric vectors x and y of the same length. # # Author: Jialin Xu, [email protected] # # Reference: Jason J. Z. Liao, Quantifying an Agreement Study, Int. J. Biostat. 
2015; 11(1): 125-133 ai<- function(x,y,lambda=1, alpha=0.05, clin.limit=NA) { # # x and y are two observations with error ## lambda is the reliability ratio of x vs y, usually it is 1 # the approach is from JL 2011 paper if (sum(is.na(clin.limit))==0 & (length(clin.limit)!=2 | !is.numeric(clin.limit))) stop("Error: clin.limit has to be numeric vector of length 2") x.name=deparse(substitute(x)) y.name=deparse(substitute(y)) if (grepl(x=x.name, pattern="\\$")){ x.name=strsplit(x=x.name, split="\\$")[[1]][2] } if (grepl(x=y.name, pattern="\\$")){ y.name=strsplit(x=y.name, split="\\$")[[1]][2] } # first, use error in variable model to estimate the parameters # formulas are from Casella and Berger book ## Check the class and length of x, y if (!is.numeric(x) | !is.vector(x) | !is.numeric(y) | !is.vector(y)) stop("Error: Please enter x and y as numeric vectors") if (length(x) != length(y) | length(x)<3) stop("Error: Please enter paired data between x and y with the same length (at least 3) ") if (length(lambda)>1 | lambda<=0 | !is.numeric(lambda)) stop("Error: Please enter reliability ratio between x and y as lambda, a positive value. 
(default 1)") if (length(alpha)>1 | alpha<=0 | alpha>1 | !is.numeric(alpha)) stop("Error: Please enter discordance rate, a value between 0 and 1, default 0.05") n <- length(x) x.mean <- mean(x) x.std <- sqrt(var(x)) y.mean <- mean(y) y.std <- sqrt(var(y)) newx <- x - x.mean newy <- y - y.mean s12 <- sum(newx * newy) s11 <- sum(newx * newx) s22 <- sum(newy * newy) tem <- (s11 - lambda * s22)^2 + 4 * lambda * s12^2 slope <- ( - (s11 - lambda * s22) + sqrt(tem))/(2 * lambda * s12) int <- y.mean - x.mean * slope # # now construct variance # sigmab <- ((1 + lambda * slope^2)^2 * (s11 * s22 - s12^2))/tem sigmae <- (s22 - slope * s12)/n sigmaa <- sigmae/n + sigmab *x.mean^2 #CI of bias ci.fixed <- int + c(-1,1)*qnorm(1-alpha/2)*sqrt(sigmaa)/sqrt(n) slope.adj <- slope - 1 ci.proportion <- slope.adj + c(-1,1)*qnorm(1-alpha/2)*sqrt(sigmab)/sqrt(n) #agreement interval ai <- c(-1,1)*qt(1-alpha/2,n-1)*sqrt(2*sigmae) ai.fixed.adjusted <- int + ai # Bland-Altman's limit of agreement loa <- mean(y-x) + c(-1,1)*qnorm(1-alpha/2)*sqrt(var(y-x)) #correlation between x and y #Pearson CC cor.xy <- cor(x, y) # ICC #require(psych) x1=data.frame(x=x, y=y) ICC.ans=ICC(x=x1, alpha=alpha)$results cor.icc=unlist( subset(ICC.ans, ICC.ans$type=="ICC1", select="ICC") ) ci.cor.icc=unlist( subset(ICC.ans, ICC.ans$type=="ICC1", select=c("lower bound", "upper bound")) ) # Lin's CCC cor.lin <- (var(x)+var(y)-var(x-y))/(var(x)+var(y)+mean(x-y)*mean(x-y)) # new CCC cor.new <- cor(x,y)*(4*sqrt(var(x)*var(y))-cor(x,y)*(var(x)+var(y)))/ ((2-cor(x,y))*(var(x)+var(y))+mean(x-y)*mean(x-y)) #CI of Pearson's CC #z value z.xy <- log((1+cor.xy)/(1-cor.xy))/2 #std of z value var.z.xy <- 1/sqrt(n - 3) #CI of z value ci.z.xy <- z.xy + c(-1,1)*1.96*var.z.xy #CI of Pearson's CC ci.cor.xy <- (exp(2*ci.z.xy) - 1) / (exp(2*ci.z.xy) + 1) #CI of ICC #var.icc.xy<-2*(2*n-1)*(1-cor.icc^2)^2/(2*2*n*(n-1)) # the formula from eqn (13) of Blackman (2004a) #var.z.icc<-var.icc.xy/((1-cor.icc^2)^2) 
#ci.cor.zicc<-0.5*log((1+cor.icc)/(1-cor.icc))+c(-1,1)* 1.96*sqrt(var.z.icc) #ci.cor.icc<-(exp(2*ci.cor.zicc) - 1) / (exp(2*ci.cor.zicc) + 1) # CI of Lin's CCC diff12 <- (mean(x) - mean(y)) / sqrt(x.std * y.std) var.cc.lin <- (1 - cor.xy^2)*cor.lin^2/((1-cor.lin^2)*cor.xy^2) + 4*cor.lin^3*(1-cor.lin)*diff12^2/(cor.xy*(1-cor.lin^2)^2) - 2*cor.lin^4*diff12^4/(cor.xy^2*(1-cor.lin^2)^2) # log Lin's CCC log.cor.lin <- log((1+cor.lin)/(1-cor.lin))/2 # CI of log Lin's CCC ci.log.cor.lin <- log.cor.lin + c(-1,1)*1.96*sqrt(var.cc.lin / (length(x) - 2)) # CI of Lin's CCC ci.cor.lin <- (exp(2*ci.log.cor.lin) - 1) / (exp(2*ci.log.cor.lin) + 1) # CI of Liao's CCC s12 <- var(x) s22 <- var(y) rho <- cor(x, y) rc <- cor.new diff <- mean(y) - mean(x) ratio <- sqrt(s22 / s12) up <- 4 * ratio - rho * (1 + ratio^2) dow <- (2 - rho) * (1 + ratio^2) + diff^2/s12 do <- s12*s22*((2 - rho) * (s12 + s22) + diff^2) tem <- sqrt(s12*s22) # log-scaled new CCC z <- 0.5*log((1+rc)/(1-rc)) # now construct variance v1 <- rho*tem*(4*s22-rho*tem)/do-rc^2/s12-rc*s22*(2*s12+0.5*rho*s12+1.5*rho*s22)/do v2 <- rho*tem*(4*s12-rho*tem)/do-rc^2/s22-rc*s12*(2*s22+0.5*rho*s22+1.5*rho*s12)/do v3 <- 2*tem*(2*sqrt(s12*s22)-rho*s22-rho*s12)/do+rc*tem*(s12+s22)/do v4 <- -rc*2*tem^2*diff/do vz <- 2*(s12*v1)^2+4*rho^2*s12*s22*v1*v2+4*rho*tem*s12*v1*v3+2*(s22*v2)^2+4*rho*tem*s22*v2*v3+(1+rho^2)*s12*s22*v3*v3+(s12+s22-rho*tem)*v4*v4 n <- length(x) bound <- qnorm(1 - 0.05/2) * sqrt(vz/n) #/(1-rc^2) #need to check this out part with ccc lin # CI for log-scaled new CCC zbound <- z+c(-1,1)*bound # CI for Liao's CCC ci.new.ccc <- (exp(2*zbound)-1)/(exp(2*zbound)+1) ci.cor.xy <- (exp(2*ci.z.xy) - 1) / (exp(2*ci.z.xy) + 1) conf.level=round((1-alpha)*100, 3) indexEst=data.frame(matrix(NA, 4, 3)) indexEst[1, ]=c(cor.xy, ci.cor.xy) indexEst[2, ]=c(cor.icc, ci.cor.icc) indexEst[3, ]=c(cor.lin, ci.cor.lin) indexEst[4, ]=c(cor.new, ci.new.ccc) colnames(indexEst)=c("Est", paste(conf.level, c("LL", "UL"), sep="")) 
rownames(indexEst)=c("Pearson", "ICC", "Lin.CCC", "Liao.ICCC") fixed.bias=data.frame(matrix(c(int, ci.fixed), 1, 3)) colnames(fixed.bias)=c("Est", paste(conf.level, c("LL", "UL"), sep="")) rownames(fixed.bias)="Fixed bias" propo.bias=data.frame(matrix(c(slope.adj, ci.proportion), 1, 3)) colnames(propo.bias)=c("Est", paste(conf.level, c("LL", "UL"), sep="")) rownames(propo.bias)="Proportional bias at x=1" summaryStat=data.frame(matrix(NA, 2, 3)) summaryStat[1, ]=c(n, x.mean, x.std) summaryStat[2, ]=c(n, y.mean, y.std) colnames(summaryStat)=c("n", "mean", "sd") rownames(summaryStat)=c("x", "y") intervalEst=data.frame(matrix(NA, 5, 2)) intervalEst[1, ]=loa intervalEst[2, ]=ai intervalEst[3, ]=ai.fixed.adjusted intervalEst[4, ]=ai+slope.adj*x.mean intervalEst[5, ]=ai.fixed.adjusted+slope.adj*x.mean colnames(intervalEst)=paste(conf.level, c("LL", "UL"), sep="") rownames(intervalEst)=c("Bland-Altman.LOA", "Liao.AI", "Liao.AI.Adj.Fixed", "Liao.AI.Adj.Propo.xMean", "Liao.AI.Adj.Total.xMean") biasEst=data.frame(matrix(NA, 2, 3)) biasEst[1, ]=fixed.bias biasEst[2, ]=propo.bias colnames(biasEst)=c("Est", paste(conf.level, c("LL", "UL"), sep="")) rownames(biasEst)=c("Fixed.Bias", "Propo.Bias.at.x=1") cutoff=intervalEst["Liao.AI", ] k=sum( (y-x)<min(cutoff) | (y-x)>max(cutoff) ) tol.prob=tolProb(n=n, k=k, alpha=alpha) if (sum(is.na(clin.limit))==0){ alpha.cl=agrInt2alpha(clin.limit=clin.limit, n=n, sigmae=sigmae) k.cl=sum( (y-x)<min(clin.limit) | (y-x)>max(clin.limit) ) tol.prob.cl=tolProb(n=n, k=k.cl, alpha=alpha.cl) } if (sum(is.na(clin.limit))==0){ result=list( x=x, y=y, x.name=x.name, 
y.name=y.name, alpha=alpha, n=n, conf.level=conf.level, lambda=lambda, summaryStat=summaryStat, sigma.e=sigmae, indexEst=indexEst, intervalEst=intervalEst, biasEst=biasEst, intercept=int, slope=slope, tolProb=tol.prob, k=k, clin.limit=clin.limit ) } class(result)="ai" # output the results cat(x.name, " and ", y.name, "agree with each other at discordance rate of", alpha, "with tolerance probability of", round(tol.prob, 2), "from sample size n=", n, ".\n") if (sum(is.na(clin.limit))==0){ cat("Given clinical relevant limit,", x.name, " and ", y.name, "agree with each other at discordance rate of", round(alpha.cl, 3), "with tolerance probability of", round(tol.prob.cl, 2), "from the same sample size n=", n, ".\n") } cat("\n") return(result) }
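# The confidence interval for Pearson's correlation inside ai() is built with
# Fisher's z-transform. Below is a minimal standalone sketch of that one step
# (the data vectors are illustrative; atanh()/tanh() are the base-R equivalents
# of the log/exp forms used above, and qnorm(0.975) replaces the hard-coded 1.96):

```r
# Fisher z-transform CI for Pearson's r, mirroring the steps in ai():
# transform r to z = atanh(r), attach standard error 1/sqrt(n - 3),
# then back-transform the normal-theory interval with tanh().
x <- c(1, 2, 3, 4, 7)
y <- c(1, 3, 2, 5, 3)
n <- length(x)
r <- cor(x, y)
z <- atanh(r)                        # same as log((1 + r)/(1 - r))/2
se.z <- 1 / sqrt(n - 3)
ci.z <- z + c(-1, 1) * qnorm(0.975) * se.z
ci.r <- tanh(ci.z)                   # same as (exp(2*z) - 1)/(exp(2*z) + 1)
ci.r
```

# With only n = 5 pairs the interval is wide, which is why ai() reports the
# interval alongside the point estimate rather than the estimate alone.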
/scratch/gouwar.j/cran-all/cranData/AgreementInterval/R/improvedBA_3jx.R
#' @title plot.ai #' @description The plot method for ai objects #' @author Jialin Xu, Jason Liao #' @export #' @rdname plot.ai #' @param x ai object from ai function #' @param clin.limit Clinically meaningful lower and upper limit #' @param which Index parameter to control which plot to output; by default, all four plots will be output. #' @param ... Additional arguments passed to the round function to control the number of decimals in the display. #' @return Function plot.ai returns 2 by 2 plots (see Details) #' #' @details The four plots include 1) scatterplot of the raw data with the regression line from the measurement error model, #' 2) difference between the two measurement methods with the original agreement interval determined by alpha and the clinically meaningful lower and upper limit, #' 3) difference between the two measurement methods with the agreement interval adjusted for fixed bias, #' as well as 4) sorted difference between the two measurement methods with the agreement interval adjusted for total bias. #' #' @examples #' a <- c(1, 2, 3, 4, 7) #' b <- c(1, 3, 2, 5, 3) #' ans <- ai(x=a, y=b) #' plot(x=ans) #' plot(x=ans, clin.limit=c(-5, 5)) #' @importFrom graphics abline legend lines par plot #' @references Jason J. Z. Liao, Quantifying an Agreement Study, Int. J. Biostat. 2015; 11(1): 125-133 # This is the plot function to plot the original data and agreement interval limits from an "ai" object. # # Author: Jialin Xu, Jason Liao # # Reference: Jason J. Z. Liao, Quantifying an Agreement Study, Int. J. Biostat. 
2015; 11(1): 125-133 plot.ai=function(x, clin.limit=NA, which=1:4, ...){ if (!inherits(x, "ai")){ msg="Please input an ai object created by the ai function" stop(msg) } result=x range.y= range( result$intervalEst["Liao.AI", 1], clin.limit, result$y-result$x, result$intervalEst["Liao.AI.Adj.Fixed", 1],result$y-result$x, clin.limit-result$intercept, result$intercept+(result$slope-1)*result$x[order(result$x)]+result$intervalEst["Liao.AI.Adj.Fixed", 1], result$intercept+(result$slope-1)*result$x[order(result$x)]+result$intervalEst["Liao.AI.Adj.Fixed", 2], na.rm=TRUE ) ######## NEED TO REVISE based on current output object if (sum(!is.na(clin.limit))>0 | sum(!is.na(x$clin.limit))>0){ if (sum(!is.na(clin.limit))==0) clin.limit=x$clin.limit leg.tex2=paste("Clin. limit: (", round(clin.limit[1], ...), ", ", round(clin.limit[2], ...), ")", sep="") leg.tex2b=paste("Clin. limit: (", round(clin.limit[1]-result$intercept, ...), ", ", round(clin.limit[2]-result$intercept, ...), ")", sep="") lty2=2 } if (length(which)>4 | sum(which>4)>0) { stop("Please specify plot numbers between 1 and 4") } if (length(which)==4) pframe=c(2, 2) if (length(which)==3) pframe=c(1, 3) if (length(which)==2) pframe=c(1, 2) if (length(which)==1) pframe=c(1, 1) x.name=result$x.name y.name=result$y.name par(mfrow=pframe) if (sum(which==1)>0){ plot.main="Raw" plot(result$x,result$y, xlab=x.name, ylab=y.name, main=plot.main) abline(0,1,lty=1) abline(result$intercept,result$slope,lty=2) legend(x="bottomright", legend=c("U=T", "model"), lty=c(1, 2), bty="n") } if (sum(which==2)>0){ plot.main="Agreement Interval" plot(1:result$summaryStat["x", "n"],result$y-result$x, xlab="Samples",ylab=paste("Diff. (", y.name, " - ", x.name, ")", sep=""),pch=1,main=plot.main, ylim=range.y) abline(h=0,lty=3) abline(h=result$intervalEst["Liao.AI", ],lty=1) # discordance rate alpha=0.05 leg.tex1=paste("(", round(result$intervalEst["Liao.AI", 1], ...), ", ", round(result$intervalEst["Liao.AI", 2], ...), ")", sep="") lty1=1 if (sum(!is.na(clin.limit))>0){ abline(h=clin.limit, lty=2) # clinically meaningful limit legend(x="bottomleft", legend=c(leg.tex1, leg.tex2), lty=c(lty1, lty2), bty="n") } else { legend(x="bottomleft", legend=leg.tex1, lty=lty1, bty="n") } } if (sum(which==3)>0){ plot.main="Agreement Interval Adjusted for Fixed Bias" leg.tex1=paste("(", round(result$intervalEst["Liao.AI.Adj.Fixed", 1], ...), ", ", round(result$intervalEst["Liao.AI.Adj.Fixed", 2], ...), ")", sep="") lty1=1 if (sum(!is.na(clin.limit))>0) { leg.tex=c(leg.tex1, leg.tex2b) lty=c(lty1, lty2) } else { leg.tex=leg.tex1 lty=lty1 } plot(1:result$summaryStat["x", "n"],result$y-result$x, xlab="Samples",ylab=paste("Diff. (", y.name, " - ", x.name, ")", sep=""),pch=1,main=plot.main, ylim=range.y) abline(h=result$intercept, lty=3) abline(h=result$intervalEst["Liao.AI.Adj.Fixed", ], lty=1) if (sum(!is.na(clin.limit))>0) { abline(h=clin.limit-result$intercept, lty=2) } legend(x="bottomleft", legend=leg.tex, lty=lty, bty="n") } if (sum(which==4)>0){ plot.main="Difference vs. sorted samples" plot(1:result$summaryStat["x", "n"],(result$y-result$x)[(1:result$summaryStat["x", "n"])[order(result$x)]], xlab="Sorted Samples",ylab=paste("Diff. (", y.name, " - ", x.name, ")", sep=""),pch=1,main=plot.main, ylim=range.y) lines(1:result$n,result$intercept+(result$slope-1)*result$x[order(result$x)],lty=2) lines(1:result$n,result$intercept+(result$slope-1)*result$x[order(result$x)]+result$intervalEst["Liao.AI.Adj.Fixed", 1],lty=1) lines(1:result$n,result$intercept+(result$slope-1)*result$x[order(result$x)]+result$intervalEst["Liao.AI.Adj.Fixed", 2],lty=1) } }
/scratch/gouwar.j/cran-all/cranData/AgreementInterval/R/plot.ai.R
#' @title summary.ai #' @description The summary method for ai objects #' @author Jialin Xu, Jason Liao #' @export #' @rdname summary.ai #' @param object ai object from ai function #' @param ... additional arguments affecting the summary produced #' @return Function summary.ai prints out key summaries on screen #' #' @examples #' a <- c(1, 2, 3, 4, 7) #' b <- c(1, 3, 2, 5, 3) #' ans <- ai(x=a, y=b) #' summary(ans) #' @references Jason J. Z. Liao, Quantifying an Agreement Study, Int. J. Biostat. 2015; 11(1): 125-133 # This is the summary function to summarize and print out the original data from an "ai" object. # # Author: Jialin Xu, [email protected] # # Reference: Jason J. Z. Liao, Quantifying an Agreement Study, Int. J. Biostat. 2015; 11(1): 125-133 summary.ai=function(object, ...){ #summary.ai=function(object, digits=3){ digits=3 x=object if (!inherits(x, "ai")){ msg="Please input an ai object created by the ai function" stop(msg) } result=x cat(" ------- Summary --------\n") cat(result$x.name, " and ", result$y.name, "agree with each other at discordance rate of", result$alpha, "with tolerance probability of", round(result$tolProb, digits=digits), "from sample size n=", result$n, ".\n") if (sum(is.na(result$clin.limit))==0){ cat("Given the clinically relevant limit,", result$x.name, " and ", result$y.name, "agree with each other at discordance rate of", round(result$alpha.cl, digits=digits), "with tolerance probability of", round(result$tolProb.cl, digits=digits), "from the same sample size n=", result$n, ".\n") } cat("\n") cat(" ------- Summary Statistics -------", "\n") print(round(result$summaryStat, digits=digits)); cat("Sigma e = ", round(result$sigma.e, digits=digits), "\n") cat("\n------- Concordance Assessment Using Index Approaches -------", "\n") print(round(result$indexEst, digits=digits)) cat("\n------- Concordance Assessment Using Interval Approaches -------", "\n") print(round(result$intervalEst, digits=digits)) cat("\n------- Bias Estimates -------", "\n") 
print(round(result$biasEst, digits=digits)) cat("\n------- End of summary -------\n\n") }
/scratch/gouwar.j/cran-all/cranData/AgreementInterval/R/summary.ai.R
#' @title tolProb #' @description Function tolProb calculates the tolerance probability based on #' sample size (n), number of discordance pairs (k) and discordance rate (alpha). #' @author Jialin Xu, Jason Liao #' @export #' @rdname tolProb #' @param n Sample size #' @param k Number of discordance pairs; discordance pairs are defined as samples with a difference greater than the agreement interval #' @param alpha Discordance rate, default 0.05. #' @return Tolerance probability #' #' @details Function tolProb calculates the tolerance probability based on sample size (n), number of discordance pairs (k) and discordance rate (alpha). #' Its value is the largest \eqn{\beta} for which the following inequality holds: #' \deqn{1-\sum_{i=0}^{k} {n \choose i} (1-\alpha)^{n-i} \alpha^{i} \ge \beta} #' #' #' @examples #' tolProb(n=52, k=5, alpha=0.05) #' tolProb(n=52, k=0, alpha=0.05) #' @references Jason J. Z. Liao, Quantifying an Agreement Study, Int. J. Biostat. 2015; 11(1): 125-133 ## Function to calculate the tolerance probability based on sample size (n) and number of paired data out of limit (k) tolProb=function(n, k, alpha=0.05){ if (length(n)!=1) stop("Please enter n as one single integer value") if (length(k)!=1) stop("Please enter k as one single integer value") if (!is.numeric(n) | length(n)!=1 | !is.numeric(k) | length(k)!=1 | n<k) stop("Please enter n and k as integers and make sure n>=k") if (length(alpha)!=1 | !is.numeric(alpha)) stop("Please enter a numeric value of alpha between 0 and 1, default 0.05") i=0:k x1=choose(n, i) x2=(1-alpha)^(n-i) x3=alpha^i x0=sum(x1*x2*x3) beta=1-x0 return(beta) }
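# Since the sum in the formula above is the lower tail of a Binomial(n, alpha)
# distribution, the same quantity can be cross-checked in one line with base
# R's pbinom (the name tolProb2 is only for this illustration):

```r
# The tolerance probability equals the upper-tail binomial probability
# P(X > k) with X ~ Binomial(n, alpha): the chance of observing more
# than k discordant pairs among n samples at discordance rate alpha.
tolProb2 <- function(n, k, alpha = 0.05) {
  pbinom(k, size = n, prob = alpha, lower.tail = FALSE)
}
tolProb2(n = 52, k = 5, alpha = 0.05)
tolProb2(n = 52, k = 0, alpha = 0.05)
```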
/scratch/gouwar.j/cran-all/cranData/AgreementInterval/R/tolProb.R
#' Analysis: Randomized block design by glm #' @description Statistical analysis of experiments conducted in a randomized block design using a generalized linear model. It performs the deviance analysis and the effect is tested by a chi-square test. Multiple comparisons are adjusted by Tukey. #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @param trat Numerical or complex vector with treatments #' @param block Numerical or complex vector with blocks #' @param response Numerical vector containing the response of the experiment. Use cbind(resp, n-resp) for binomial or quasibinomial family. #' @param glm.family distribution family considered (\emph{default} is binomial) #' @param quali Defines whether the factor is quantitative or qualitative (\emph{default} is qualitative) #' @param alpha.f Level of significance of the F test (\emph{default} is 0.05) #' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05) #' @param geom Graph type (columns, boxes or segments) #' @param theme ggplot2 theme (\emph{default} is theme_classic()) #' @param sup Number of units above the standard deviation or average bar on the graph #' @param ylab Variable response name (Accepts the \emph{expression}() function) #' @param xlab Treatments name (Accepts the \emph{expression}() function) #' @param textsize Font size #' @param labelsize Label size #' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat") #' @param angle x-axis scale text rotation #' @param family Font family #' @param dec Number of cells #' @param addmean Plot the average value on the graph (\emph{default} is TRUE) #' @param errorbar Plot the standard deviation bar on the graph (In the case of a segment and column graph) - \emph{default} is TRUE #' @param posi Legend position #' @param point Defines whether to plot mean ("mean"), mean 
with standard deviation ("mean_sd" - \emph{default}) or mean with standard error (\emph{default} - "mean_se"). #' @param angle.label label angle #' @export #' @examples #' data("aristolochia") #' attach(aristolochia) #' # Assuming the same aristolochia data set, but considering randomized blocks #' bloco=rep(paste("B",1:16),5) #' resp=resp/2 #' DBC.glm(trat,bloco, cbind(resp,50-resp), glm.family="binomial") DBC.glm=function(trat, block, response, glm.family="binomial", quali=TRUE, alpha.f=0.05, alpha.t=0.05, geom="bar", theme=theme_classic(), sup=NA, ylab="Response", xlab="", fill="lightblue", angle=0, family="sans", textsize=12, labelsize=5, dec=3, addmean=TRUE, errorbar=TRUE, posi="top", point="mean_sd", angle.label=0){ if(angle.label==0){hjust=0.5}else{hjust=0} requireNamespace("emmeans") # requireNamespace("stringr") requireNamespace("multcomp") requireNamespace("crayon") requireNamespace("ggplot2") trat=as.factor(trat) block=as.factor(block) resp=response if(glm.family=="binomial" | glm.family=="quasibinomial"){ if(glm.family=="binomial"){a = glm(resp ~ trat+block,family = "binomial")} if(glm.family=="quasibinomial"){a = glm(resp ~ trat+block,family = "quasibinomial")} anava1 = anova(a, test="Chisq") anava2 = summary(a) anava=rbind("Null deviance" = anava1$`Resid. Dev`[1], "Df Null deviance" = round(anava1$`Resid. Df`[1]), "-----"=NA, "Treatment effects"=NA, "Residual deviance" = anava1$`Resid. Dev`[2], "Df residual deviance" = round(anava1$`Resid. Df`[2],0), "p-value(Chisq)" = anava1$`Pr(>Chi)`[2], "-----"=NA, "Block effects"=NA, "Residual deviance" = anava1$`Resid. Dev`[3], "Df residual deviance" = round(anava1$`Resid. 
Df`[3],0), "p-value(Chisq)" = anava1$`Pr(>Chi)`[3], "-----"=NA, AIC = anava2$aic) colnames(anava)="" if(is.na(sup==TRUE)){sup=0.1*mean(a$fitted.values*100)} cat(green(bold("\n\n-----------------------------------------------------------------\n"))) cat(green(bold("Analysis of deviance"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(as.matrix(round(anava,3)),na.print = "",quote=F) cat("\n\n") message(if (anava1$`Pr(>Chi)`[2]<alpha.f){ black("As the calculated p-value is less than the significance level (alpha.f), the hypothesis H0 of equality of means is rejected. Therefore, at least two treatments differ")} else {"As the calculated p-value is greater than the significance level (alpha.f), H0 is not rejected"}) cat(green(bold("\n\n-----------------------------------------------------------------\n"))) if(quali==TRUE){cat(green(bold("Multiple Comparison Test")))}else{cat(green(bold("Regression")))} cat(green(bold("\n-----------------------------------------------------------------\n"))) if(quali==TRUE){ letra <- cld(regrid(emmeans(a, "trat", alpha=alpha.t)), Letters=letters,reversed=TRUE,adjusted="tukey") rownames(letra)=letra$trat letra=letra[unique(as.character(trat)),] out=letra[,-4] out[,c(2,3,4,5)]=round(out[,c(2,3,4,5)],2) print(out) prob=letra$prob*100 superior=letra$asymp.UCL*100 inferior=letra$asymp.LCL*100 # grupo=str_trim(letra$.group) grupo=gsub(" ","",letra$.group) trat=letra$trat desvio=letra$asymp.UCL-letra$prob media=prob trats=trat dadosm=data.frame(trat,prob,superior,inferior,grupo,desvio, media,trats) if(addmean==TRUE){dadosm$letra=paste(format(round(prob,3),digits = dec), grupo)} if(addmean==FALSE){dadosm$letra=grupo} letra=dadosm$letra if(geom=="bar"){ grafico=ggplot(dadosm,aes(x=trat,y=prob)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=trat),color=1)}else{grafico=grafico+ geom_col(aes(fill=trat),fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=superior+sup,label=letra), 
family=family,size=labelsize, angle=angle.label, hjust=hjust)} if(errorbar==FALSE){grafico=grafico+geom_text(aes(y=prob+sup,label=letra),family=family, angle=angle.label, size=labelsize,hjust=hjust)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm,aes(ymin=inferior, ymax=superior,color=1), color="black",width=0.3)}} if(geom=="point"){grafico=ggplot(dadosm,aes(x=trat, y=prob)) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=superior+sup, label=letra), family=family,angle=angle.label,size=labelsize, hjust=hjust)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=prob+sup,label=letra),size=labelsize,family=family,angle=angle.label, hjust=hjust)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=inferior, ymax=superior,color=1), color="black",width=0.3)} if(fill=="trat"){grafico=grafico+ geom_point(aes(color=trat),size=5)} else{grafico=grafico+ geom_point(aes(color=trat), color="black", fill=fill,shape=21,size=5)}} grafico=grafico+ theme+ ylab(ylab)+ xlab(xlab)+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), legend.position = "none") if(angle !=0){grafico=grafico+ theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} grafico=as.list(grafico) print(grafico) } } if(glm.family=="poisson" | glm.family=="quasipoisson"){ if(glm.family=="poisson"){a = glm(resp ~ trat+block,family = "poisson")} if(glm.family=="quasipoisson"){a = glm(resp ~ trat+block,family = "quasipoisson")} anava1 = anova(a, test="Chisq") anava2 = summary(a) anava=rbind("Null deviance" = anava1$`Resid. Dev`[1], "Df Null deviance" = round(anava1$`Resid. Df`[1]), "-----"=NA, "Treatment effects"=NA, "Residual deviance" = anava1$`Resid. Dev`[2], "Df residual deviance" = round(anava1$`Resid. 
Df`[2]), "p-value(Chisq)" = anava1$`Pr(>Chi)`[2], "-----"=NA, "Block effects"=NA, "Residual deviance" = anava1$`Resid. Dev`[3], "Df residual deviance" = round(anava1$`Resid. Df`[3]), "p-value(Chisq)" = anava1$`Pr(>Chi)`[3], "-----"=NA, AIC = anava2$aic) colnames(anava)="" if(is.na(sup==TRUE)){sup=0.1*mean(a$fitted.values)} cat(green(bold("\n\n-----------------------------------------------------------------\n"))) cat(green(bold("Analysis of deviance"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(as.matrix(round(anava,3)),na.print = "",quote=F) cat("\n\n") message(if (anava1$`Pr(>Chi)`[2]<alpha.f){ black("As the calculated p-value is less than the significance level (alpha.f), the hypothesis H0 of equality of means is rejected. Therefore, at least two treatments differ")} else {"As the calculated p-value is greater than the significance level (alpha.f), H0 is not rejected"}) cat(green(bold("\n\n-----------------------------------------------------------------\n"))) if(quali==TRUE){cat(green(bold("Multiple Comparison Test")))}else{cat(green(bold("Regression")))} cat(green(bold("\n-----------------------------------------------------------------\n"))) if(quali==TRUE){ letra <- cld(regrid(emmeans(a, "trat", alpha=alpha.t,adjusted="tukey")), Letters=letters,reversed=TRUE) rownames(letra)=letra$trat letra=letra[unique(as.character(trat)),] out=letra[,-4] out[,c(2,3,4,5)]=round(out[,c(2,3,4,5)],2) print(out) rate=letra$rate superior=letra$asymp.UCL inferior=letra$asymp.LCL # grupo=str_trim(letra$.group) grupo=gsub(" ","",letra$.group) trat=letra$trat desvio=letra$asymp.UCL-letra$rate media=rate trats=trat dadosm=data.frame(trat,rate,superior,inferior, grupo,desvio, media,trats) if(addmean==TRUE){dadosm$letra=paste(format(rate,digits = dec), grupo)} if(addmean==FALSE){dadosm$letra=grupo} letra=dadosm$letra if(geom=="bar"){ grafico=ggplot(dadosm,aes(x=trat,y=rate)) if(fill=="trat"){grafico=grafico+ 
geom_col(aes(fill=trat),color=1)}else{grafico=grafico+ geom_col(aes(fill=trat),fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=superior+sup,label=letra), family=family,size=labelsize, angle=angle.label, hjust=hjust)} if(errorbar==FALSE){grafico=grafico+geom_text(aes(y=rate+sup,label=letra),family=family, angle=angle.label,size=labelsize, hjust=hjust)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm,aes(ymin=inferior, ymax=superior,color=1), color="black",width=0.3)}} if(geom=="point"){grafico=ggplot(dadosm,aes(x=trat, y=rate)) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=superior+sup, label=letra), family=family,size=labelsize,angle=angle.label, hjust=hjust)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=rate+sup,label=letra),family=family,size=labelsize,angle=angle.label, hjust=hjust)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=inferior, ymax=superior,color=1), color="black",width=0.3)} if(fill=="trat"){grafico=grafico+ geom_point(aes(color=trat),size=5)} else{grafico=grafico+ geom_point(aes(color=trat), color="black", fill=fill,shape=21,size=5)}} grafico=grafico+ theme+ ylab(ylab)+ xlab(xlab)+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), legend.position = "none") if(angle !=0){grafico=grafico+ theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} grafico=as.list(grafico) print(grafico) } } }
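# The deviance table that DBC.glm assembles comes from sequential chi-square
# tests on a binomial glm with treatment and block terms. Below is a minimal
# base-R sketch of that core step on simulated germination counts (illustrative
# data; the emmeans/Tukey letter display is omitted):

```r
# Deviance analysis for a randomized block design with a binomial glm:
# cbind(successes, failures) response, treatment and block as factors,
# effects tested sequentially by chi-square, as in DBC.glm().
set.seed(1)
trat  <- factor(rep(c("A", "B", "C", "D"), each = 4))
block <- factor(rep(1:4, times = 4))
germ  <- rbinom(16, size = 25, prob = rep(c(0.4, 0.5, 0.6, 0.8), each = 4))
fit   <- glm(cbind(germ, 25 - germ) ~ trat + block, family = binomial)
anova(fit, test = "Chisq")   # sequential deviance table with a Pr(>Chi) column
```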
/scratch/gouwar.j/cran-all/cranData/AgroR/R/DBCglm_function.R
#' Analysis: Completely randomized design by glm #' @description Statistical analysis of experiments conducted in a completely randomized design using a generalized linear model. It performs the deviance analysis and the effect is tested by a chi-square test. Multiple comparisons are adjusted by Tukey. #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @param trat Numerical or complex vector with treatments #' @param response Numerical vector containing the response of the experiment. Use cbind(resp, n-resp) for binomial or quasibinomial family. #' @param glm.family distribution family considered (\emph{default} is binomial) #' @param quali Defines whether the factor is quantitative or qualitative (\emph{default} is qualitative) #' @param alpha.f Level of significance of the F test (\emph{default} is 0.05) #' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05) #' @param geom Graph type (columns, boxes or segments) #' @param theme ggplot2 theme (\emph{default} is theme_classic()) #' @param sup Number of units above the standard deviation or average bar on the graph #' @param ylab Variable response name (Accepts the \emph{expression}() function) #' @param xlab Treatments name (Accepts the \emph{expression}() function) #' @param textsize Font size #' @param labelsize Label size #' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat") #' @param angle x-axis scale text rotation #' @param family Font family #' @param dec Number of cells #' @param addmean Plot the average value on the graph (\emph{default} is TRUE) #' @param errorbar Plot the standard deviation bar on the graph (In the case of a segment and column graph) - \emph{default} is TRUE #' @param posi Legend position #' @param point Defines whether to plot mean ("mean"), mean with standard deviation ("mean_sd" - 
\emph{default}) or mean with standard error (\emph{default} - "mean_se"). #' @param angle.label label angle #' @importFrom emmeans regrid #' @export #' @examples #' data("aristolochia") #' attach(aristolochia) #' #============================= #' # Use the DIC function #' #============================= #' DIC(trat, resp) #' #' #============================= #' # Use the DIC function noparametric #' #============================= #' DIC(trat, resp, test="noparametric") #' #' #============================= #' # Use the DIC.glm function #' #============================= #' #' resp=resp/4 # total germinated seeds #' #' # the value 25 is the total of seeds in the repetition #' DIC.glm(trat, cbind(resp,25-resp), glm.family="binomial") DIC.glm=function(trat, response, glm.family="binomial", quali=TRUE, alpha.f=0.05, alpha.t=0.05, geom="bar", theme=theme_classic(), sup=NA, ylab="Response", xlab="", fill="lightblue", angle=0, family="sans", textsize=12, labelsize=5, dec=3, addmean=TRUE, errorbar=TRUE, posi="top", point="mean_sd", angle.label=0){ if(angle.label==0){hjust=0.5}else{hjust=0} requireNamespace("emmeans") # requireNamespace("stringr") requireNamespace("multcomp") requireNamespace("crayon") requireNamespace("ggplot2") trat=as.factor(trat) resp=response if(glm.family=="binomial" | glm.family=="quasibinomial"){ if(glm.family=="binomial"){a = glm(resp ~ trat,family = "binomial")} if(glm.family=="quasibinomial"){a = glm(resp ~ trat,family = "quasibinomial")} anava1 = anova(a, test="Chisq") anava2 = summary(a) anava=rbind("Null deviance" = anava1$`Resid. Dev`[1], "Df Null deviance" = round(anava1$`Resid. Df`[1]), "-----"=NA, "Residual deviance" = anava1$`Resid. Dev`[2], "Df residual deviance" = round(anava1$`Resid. 
Df`[2]), "p-value(Chisq)" = anava1$`Pr(>Chi)`[2], "-----"=NA, AIC = anava2$aic) colnames(anava)="" if(is.na(sup)==TRUE){sup=0.1*mean(a$fitted.values*100)} cat(green(bold("\n\n-----------------------------------------------------------------\n"))) cat(green(bold("Analysis of deviance"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(as.matrix(round(anava,3)),na.print = "",quote=FALSE) cat("\n\n") message(if (anava1$`Pr(>Chi)`[2]<alpha.f){ black("As the calculated p-value is less than the 5% significance level, the hypothesis H0 of equality of means is rejected. Therefore, at least two treatments differ")} else {"As the calculated p-value is greater than the 5% significance level, H0 is not rejected"}) cat(green(bold("\n\n-----------------------------------------------------------------\n"))) if(quali==TRUE){cat(green(bold("Multiple Comparison Test")))}else{cat(green(bold("Regression")))} cat(green(bold("\n-----------------------------------------------------------------\n"))) if(quali==TRUE){ letra <- cld(regrid(emmeans(a, "trat", alpha=alpha.t)), Letters=letters,reversed=TRUE,adjusted="tukey") rownames(letra)=letra$trat letra=letra[unique(as.character(trat)),] out=letra[,-4] out[,c(2,3,4,5)]=round(out[,c(2,3,4,5)],2) print(out) prob=letra$prob*100 superior=letra$asymp.UCL*100 inferior=letra$asymp.LCL*100 # grupo=str_trim(letra$.group) grupo=gsub(" ","",letra$.group) trat=letra$trat dadosm=data.frame(trat,prob,superior,inferior,grupo) if(addmean==TRUE){dadosm$letra=paste(format(round(prob,3),digits = dec), grupo)} if(addmean==FALSE){dadosm$letra=grupo} letra=dadosm$letra if(geom=="bar"){ grafico=ggplot(dadosm,aes(x=trat,y=prob)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=trat),color=1)}else{grafico=grafico+ geom_col(aes(fill=trat),fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=superior+sup,label=letra), family=family,size=labelsize, angle=angle.label, hjust=hjust)} 
if(errorbar==FALSE){grafico=grafico+geom_text(aes(y=prob+sup,label=letra),family=family,size=labelsize, angle=angle.label, hjust=hjust)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm,aes(ymin=inferior, ymax=superior,color=1), color="black",width=0.3)}} if(geom=="point"){grafico=ggplot(dadosm,aes(x=trat, y=prob)) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=superior+sup, label=letra),size=labelsize, family=family,angle=angle.label, hjust=hjust)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=prob+sup,label=letra),family=family,size=labelsize,angle=angle.label, hjust=hjust)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=inferior, ymax=superior,color=1), color="black",width=0.3)} if(fill=="trat"){grafico=grafico+ geom_point(aes(color=trat),size=5)} else{grafico=grafico+ geom_point(aes(color=trat), color="black", fill=fill,shape=21,size=5)}} grafico=grafico+ theme+ ylab(ylab)+ xlab(xlab)+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), legend.position = "none") if(angle !=0){grafico=grafico+ theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} grafico=as.list(grafico) print(grafico) } } if(glm.family=="poisson" | glm.family=="quasipoisson"){ if(glm.family=="poisson"){a = glm(resp ~ trat,family = "poisson")} if(glm.family=="quasipoisson"){a = glm(resp ~ trat,family = "quasipoisson")} anava1 = anova(a, test="Chisq") anava2 = summary(a) anava=rbind("Null deviance" = anava1$`Resid. Dev`[1], "Df Null deviance" = round(anava1$`Resid. Df`[1]), "-----"=NA, "Residual deviance" = anava1$`Resid. Dev`[2], "Df residual deviance" = round(anava1$`Resid. 
Df`[2]), "p-value(Chisq)" = anava1$`Pr(>Chi)`[2], "-----"=NA, AIC = anava2$aic) colnames(anava)="" if(is.na(sup)==TRUE){sup=0.1*mean(a$fitted.values)} cat(green(bold("\n\n-----------------------------------------------------------------\n"))) cat(green(bold("Analysis of deviance"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(as.matrix(round(anava,3)),na.print = "",quote=FALSE) cat("\n\n") message(if (anava1$`Pr(>Chi)`[2]<alpha.f){ black("As the calculated p-value is less than the 5% significance level, the hypothesis H0 of equality of means is rejected. Therefore, at least two treatments differ")} else {"As the calculated p-value is greater than the 5% significance level, H0 is not rejected"}) cat(green(bold("\n\n-----------------------------------------------------------------\n"))) if(quali==TRUE){cat(green(bold("Multiple Comparison Test")))}else{cat(green(bold("Regression")))} cat(green(bold("\n-----------------------------------------------------------------\n"))) if(quali==TRUE){ letra <- cld(regrid(emmeans(a, "trat", alpha=alpha.t,adjusted="tukey")), Letters=letters,reversed=TRUE) rownames(letra)=letra$trat letra=letra[unique(as.character(trat)),] out=letra[,-4] out[,c(2,3,4,5)]=round(out[,c(2,3,4,5)],2) print(out) rate=letra$rate superior=letra$asymp.UCL inferior=letra$asymp.LCL # grupo=str_trim(letra$.group) grupo=gsub(" ","",letra$.group) desvio=letra$asymp.UCL-letra$rate trat=letra$trat media=rate trats=trat dadosm=data.frame(trat,rate,superior,inferior, grupo,desvio, media,trats) if(addmean==TRUE){dadosm$letra=paste(format(round(rate,3),digits = dec), grupo)} if(addmean==FALSE){dadosm$letra=grupo} letra=dadosm$letra if(geom=="bar"){ grafico=ggplot(dadosm,aes(x=trat,y=rate)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=trat),color=1)}else{grafico=grafico+ geom_col(aes(fill=trat),fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=superior+sup,label=letra), 
family=family,size=labelsize, angle=angle.label, hjust=hjust)} if(errorbar==FALSE){grafico=grafico+geom_text(aes(y=rate+sup,label=letra),family=family,size=labelsize, angle=angle.label, hjust=hjust)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm,aes(ymin=inferior, ymax=superior,color=1), color="black",width=0.3)}} if(geom=="point"){grafico=ggplot(dadosm,aes(x=trat, y=rate)) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=superior+sup, label=letra),size=labelsize, family=family,angle=angle.label, hjust=hjust)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=rate+sup,label=letra),size=labelsize,family=family,angle=angle.label, hjust=hjust)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=inferior, ymax=superior,color=1), color="black",width=0.3)} if(fill=="trat"){grafico=grafico+ geom_point(aes(color=trat),size=5)} else{grafico=grafico+ geom_point(aes(color=trat), color="black", fill=fill,shape=21,size=5)}} grafico=grafico+ theme+ ylab(ylab)+ xlab(xlab)+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), legend.position = "none") if(angle !=0){grafico=grafico+ theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} grafico=as.list(grafico) print(grafico) } } }
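# Usage sketch (not part of the package, kept commented out so it does not run
# at load time): a minimal, hypothetical call of DIC.glm with count data and the
# Poisson family. The vectors 'trat' and 'insects' below are invented solely for
# illustration; any real analysis should use the user's own data.
# trat    <- rep(c("T1", "T2", "T3", "T4"), each = 5)
# insects <- c(12, 15, 11, 14, 13, 8, 7, 9, 6, 8,
#              20, 22, 19, 21, 23, 4, 5, 3, 4, 6)
# DIC.glm(trat, insects, glm.family = "poisson")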
#' Analysis: DBC experiments in double factorial #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @description Analysis of an experiment conducted in a randomized block design in a double factorial scheme using analysis of variance of fixed effects. #' @param f1 Numeric or complex vector with factor 1 levels #' @param f2 Numeric or complex vector with factor 2 levels #' @param block Numerical or complex vector with blocks #' @param response Numerical vector containing the response of the experiment. #' @param norm Error normality test (\emph{default} is Shapiro-Wilk) #' @param homog Homogeneity test of variances (\emph{default} is Bartlett) #' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott and Duncan) #' @param quali Defines whether the factor is quantitative or qualitative (\emph{qualitative}) #' @param names.fat Name of factors #' @param alpha.f Level of significance of the F test (\emph{default} is 0.05) #' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05) #' @param transf Applies data transformation (default is 1; for log consider 0; `angular` for angular transformation) #' @param constant Add a constant for transformation (enter value) #' @param grau Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with two elements. #' @param grau12 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f1 x f2 and qualitative factor 2 and quantitative factor 1. #' @param grau21 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f2 and qualitative factor 1 and quantitative factor 2. 
#' @param geom Graph type (columns or segments (For simple effect only)) #' @param theme ggplot2 theme (\emph{default} is theme_classic()) #' @param ylab Variable response name (Accepts the \emph{expression}() function) #' @param xlab Treatments name (Accepts the \emph{expression}() function) #' @param xlab.factor Provide a vector with two observations referring to the x-axis name of factors 1 and 2, respectively, when there is an isolated effect of the factors. This argument uses `parse`. #' @param posi Legend position #' @param legend Legend title name #' @param ylim y-axis scale #' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat") #' @param angle x-axis scale text rotation #' @param textsize font size #' @param labelsize label size #' @param dec number of decimal places #' @param family font family #' @param addmean Plot the average value on the graph (\emph{default} is TRUE) #' @param errorbar Plot the standard deviation bar on the graph (In the case of a segment and column graph) - \emph{default} is TRUE #' @param CV Plotting the coefficient of variation and p-value of Anova (\emph{default} is TRUE) #' @param sup Number of units above the standard deviation or average bar on the graph #' @param color Column chart color (\emph{default} is "rainbow") #' @param point Defines whether to plot all points ("all"), the mean ("mean"), the mean with standard deviation (\emph{default} - "mean_sd") or the mean with standard error ("mean_se") when quali = FALSE. For quali = TRUE, `mean_sd` and `mean_se` define which information is displayed in the error bar. #' @param angle.label label angle #' @param width.column Width of the columns when geom="bar" #' @param width.bar Width of the error bars #' @note The order of the chart follows the alphabetical pattern. Please use `scale_x_discrete` from package ggplot2, `limits` argument to reorder x-axis. The bars of the column and segment graphs are standard deviation. 
#' @return The table of analysis of variance, the test of normality of errors (Shapiro-Wilk, Lilliefors, Anderson-Darling, Cramer-von Mises, Pearson and Shapiro-Francia), the test of homogeneity of variances (Bartlett or Levene), the Durbin-Watson test of independence of errors, the test of multiple comparisons (Tukey, LSD, Scott-Knott or Duncan) or adjustment of polynomial regression models up to degree 3, in the case of quantitative treatments. The column chart for qualitative treatments is also returned. #' @note The function does not perform multiple regression in the case of two quantitative factors. #' @note In the final output when transformation (transf argument) is different from 1, the columns resp and respo in the mean test are returned, indicating transformed and non-transformed mean, respectively. #' @keywords DBC #' @keywords Factorial #' @import ggplot2 #' @importFrom crayon green #' @importFrom crayon bold #' @importFrom crayon italic #' @importFrom crayon red #' @importFrom crayon blue #' @import stats #' @seealso \link{FAT2DBC.ad} #' @references #' #' Principles and Procedures of Statistics: A Biometrical Approach. Steel, Torrie and Dickey. Third Edition, 1997 #' #' Multiple Comparisons: Theory and Methods. Department of Statistics, The Ohio State University. USA, 1996. Jason C. Hsu. Chapman Hall/CRC. #' #' Practical Nonparametric Statistics. W.J. Conover, 1999 #' #' Ramalho M.A.P., Ferreira D.F., Oliveira A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA. #' #' Scott R.J., Knott M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512. #' #' Mendiburu, F., and de Mendiburu, M. F. (2019). Package ‘agricolae’. R Package, Version, 1-2. 
#' #' @export #' @examples #' #' #================================================ #' # Example cloro #' #================================================ #' library(AgroR) #' data(cloro) #' attach(cloro) #' FAT2DBC(f1, f2, bloco, resp, ylab="Number of nodules", legend = "Stages") #' FAT2DBC(f1, f2, bloco, resp, mcomp="sk", ylab="Number of nodules", legend = "Stages") #' #================================================ #' # Example covercrops #' #================================================ #' library(AgroR) #' data(covercrops) #' attach(covercrops) #' FAT2DBC(A, B, Bloco, Resp, ylab=expression("Yield"~(Kg~"100 m"^2)), #' legend = "Cover crops") #' FAT2DBC(A, B, Bloco, Resp, mcomp="sk", ylab=expression("Yield"~(Kg~"100 m"^2)), #' legend = "Cover crops") FAT2DBC=function(f1, f2, block, response, norm = "sw", homog = "bt", alpha.f = 0.05, alpha.t = 0.05, quali = c(TRUE, TRUE), names.fat=c("F1", "F2"), mcomp = "tukey", grau=c(NA,NA), grau12=NA, # F1/F2 grau21=NA, # F2/F1 transf = 1, constant = 0, geom = "bar", theme = theme_classic(), ylab = "Response", xlab = "", xlab.factor=c("F1","F2"), legend = "Legend", fill = "lightblue", angle = 0, textsize = 12, labelsize=4, dec = 3, width.column=0.9, width.bar=0.3, family = "sans", point = "mean_sd", addmean = TRUE, errorbar = TRUE, CV = TRUE, sup = NA, color = "rainbow", posi = "right", ylim = NA, angle.label=0){ if(angle.label==0){hjust=0.5}else{hjust=0} requireNamespace("crayon") requireNamespace("ggplot2") requireNamespace("nortest") if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}} # if(transf==1){resp=response+constant}else{resp=((response+constant)^transf-1)/transf} if(transf==0){resp=log(response+constant)} if(transf==0.5){resp=sqrt(response+constant)} if(transf==-0.5){resp=1/sqrt(response+constant)} if(transf==-1){resp=1/(response+constant)} if(transf=="angular"){resp=asin(sqrt((response+constant)/100))} 
ordempadronizado=data.frame(f1,f2,block,resp,response) resp1=resp organiz=data.frame(f1,f2,block,resp,response) organiz=organiz[order(organiz$block),] organiz=organiz[order(organiz$f2),] organiz=organiz[order(organiz$f1),] f1=organiz$f1 f2=organiz$f2 block=organiz$block resp=organiz$resp response=organiz$response fator1=f1 fator2=f2 fator1a=fator1 fator2a=fator2 bloco=block if(is.na(sup==TRUE)){sup=0.1*mean(response)} Fator1=factor(fator1, levels = unique(fator1)) Fator2=factor(fator2, levels = unique(fator2)) bloco=as.factor(bloco) nv1 <- length(summary(Fator1)) nv2 <- length(summary(Fator2)) lf1 <- levels(Fator1) lf2 <- levels(Fator2) fatores <- data.frame(Fator1, Fator2) graph=data.frame(Fator1,Fator2,resp) a=anova(aov(resp~Fator1*Fator2+bloco)) ab=anova(aov(response~Fator1*Fator2+bloco)) b=aov(resp~Fator1*Fator2+bloco) anava=a colnames(anava)=c("GL","SQ","QM","Fcal","p-value") bres=aov(resp~as.factor(f1)*as.factor(f2)+as.factor(block), data = ordempadronizado) respad=bres$residuals/sqrt(a$`Mean Sq`[5]) out=respad[respad>3 | respad<(-3)] out=names(out) out=if(length(out)==0)("No discrepant point")else{out} if(norm=="sw"){norm1 = shapiro.test(b$res)} if(norm=="li"){norm1=lillie.test(b$residuals)} if(norm=="ad"){norm1=ad.test(b$residuals)} if(norm=="cvm"){norm1=cvm.test(b$residuals)} if(norm=="pearson"){norm1=pearson.test(b$residuals)} if(norm=="sf"){norm1=sf.test(b$residuals)} trat=as.factor(paste(Fator1,Fator2)) c=aov(resp~trat+bloco) if(homog=="bt"){ homog1 = bartlett.test(b$res ~ trat) statistic=homog1$statistic phomog=homog1$p.value method=paste("Bartlett test","(",names(statistic),")",sep="") } if(homog=="levene"){ homog1 = levenehomog(c$res~trat)[1,] statistic=homog1$`F value`[1] phomog=homog1$`Pr(>F)`[1] method="Levene's Test (center = median)(F)" names(homog1)=c("Df", "statistic","p.value")} indep = dwtest(b) Ids=ifelse(respad>3 | respad<(-3), "darkblue","black") residplot=ggplot(data=data.frame(respad,Ids),aes(y=respad,x=1:length(respad)))+ 
geom_point(shape=21,color="gray",fill="gray",size=3)+ labs(x="",y="Standardized residuals")+ geom_text(x=1:length(respad),label=1:length(respad), color=Ids,size=labelsize)+ scale_x_continuous(breaks=1:length(respad))+ theme_classic()+theme(axis.text.y = element_text(size=textsize), axis.text.x = element_blank())+ geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Normality of errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""), Statistic=norm1$statistic, "p-value"=norm1$p.value) rownames(normal)="" print(normal) cat("\n") message(if(norm1$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors do not follow a normal distribution"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Homogeneity of Variances"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) homoge=data.frame(Method=method, Statistic=statistic, "p-value"=phomog) rownames(homoge)="" print(homoge) cat("\n") message(if(homog1$p.value[1]>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, the variances are not homogeneous"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Independence from errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) indepe=data.frame(Method=paste(indep$method,"(", names(indep$statistic),")",sep=""), Statistic=indep$statistic, "p-value"=indep$p.value) rownames(indepe)="" print(indepe) cat("\n") message(if(indep$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors are not independent"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Additional Information"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(paste("\nCV (%) = ",round(sqrt(a$`Mean Sq`[5])/mean(resp,na.rm=TRUE)*100,2))) cat(paste("\nMean = ",round(mean(response,na.rm=TRUE),4))) cat(paste("\nMedian = ",round(median(response,na.rm=TRUE),4))) cat("\nPossible outliers = ", out) cat("\n") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Analysis of Variance"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) anava1=as.matrix(data.frame(anava)) colnames(anava1)=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)" ) rownames(anava1)=c(names.fat[1],names.fat[2],"Block", paste(names.fat[1],"x",names.fat[2]),"Residuals") print(anava1,na.print = "") cat("\n") if(transf==1 && norm1$p.value<0.05 | transf==1 && indep$p.value<0.05 | transf==1 &&homog1$p.value<0.05){ message("\n Your analysis is not valid, suggests using a non-parametric test and try to transform the data\n")}else{} if(transf != 1 && norm1$p.value<0.05 | transf!=1 && 
indep$p.value<0.05 | transf!=1 && homog1$p.value<0.05){ message("\n Your analysis is not valid\n")}else{} if (a$`Pr(>F)`[4] > alpha.f){ cat(green(bold("-----------------------------------------------------------------\n"))) cat(green(bold("No significant interaction"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) fatores <- data.frame(Fator1 = factor(fator1), Fator2 = factor(fator2)) fatoresa <- data.frame(Fator1 = fator1a, Fator2 = fator2a) graficos=list(1,2,3) for (i in 1:2) {if (a$`Pr(>F)`[i] <= alpha.f){ cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(bold(names.fat[i])) cat(green(bold("\n-----------------------------------------------------------------\n"))) if(quali[i]==TRUE){ if(mcomp=="tukey"){ letra <- TUKEY(b, colnames(fatores[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ letra <- LSD(b, colnames(fatores[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fatores[i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = a$Df[5], nrep = nrep, QME = a$`Mean Sq`[5], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ letra <- duncan(b, colnames(fatores[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(italic(paste("Multiple 
Comparison Test:",teste,"\n")))) print(letra1) ordem=unique(as.vector(unlist(fatores[i]))) #===================================================== if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[ordem]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[ordem]} dadosm=data.frame(letra1[ordem,], media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[ordem], desvio=desvio) dadosm$trats=factor(rownames(dadosm),levels = ordem) dadosm$limite=dadosm$media+dadosm$desvio lim.y=dadosm$limite[which.max(abs(dadosm$limite))] if(is.na(ylim[1])==TRUE && lim.y<0){ylim=c(1.5*lim.y,0)} if(is.na(ylim[1])==TRUE && lim.y>0){ylim=c(0,1.5*lim.y)} if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio trats=dadosm$trats letra=dadosm$letra if(geom=="bar"){grafico=ggplot(dadosm,aes(x=trats, y=media)) if(fill=="trat"){grafico=grafico+geom_col(aes(fill=trats), color=1,width = width.column)} else{grafico=grafico+geom_col(aes(fill=trats),fill=fill,color=1,width = width.column)} grafico=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[i])) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+geom_text(aes(y=media+sup, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm,aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black", width=width.bar)} if(angle !=0){grafico=grafico+ theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} grafico=grafico+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = 
element_text(size=textsize,color="black",family=family), legend.position = "none")} if(geom=="point"){grafico=ggplot(dadosm, aes(x=trats, y=media)) if(fill=="trat"){grafico=grafico+ geom_point(aes(color=trats),size=5)} else{grafico=grafico+ geom_point(aes(color=trats),fill="gray",pch=21,color="black",size=5)} grafico=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[i])) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra,size=labelsize), family=family,angle=angle.label, hjust=hjust)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm,aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black", width=width.bar)} if(angle !=0){grafico=grafico+ theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} grafico=grafico+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family), legend.position = "none")} if(CV==TRUE){grafico=grafico+labs(caption=paste("p-value ", if(a$`Pr(>F)`[i]<0.0001){paste("<", 0.0001)} else{paste("=", round(a$`Pr(>F)`[i],4))},"; CV = ", round(abs(sqrt(a$`Mean Sq`[5])/mean(resp))*100,2),"%"))} grafico=grafico if(color=="gray"){grafico=grafico+scale_fill_grey()} # print(grafico) cat("\n\n") } if(quali[i]==FALSE){ dose=as.vector(unlist(fatoresa[i])) grafico=polynomial(dose, resp, grau = grau[i], ylab=ylab, xlab=parse(text = xlab.factor[i]), posi=posi, theme=theme, textsize=textsize, point=point, family=family, SSq=ab$`Sum Sq`[5], DFres = ab$Df[5]) grafico=grafico[[1]]} graficos[[i+1]]=grafico}} graficos[[1]]=residplot if(a$`Pr(>F)`[1]>=alpha.f && a$`Pr(>F)`[2] <alpha.f){ cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green("Isolated factors 1 
not significant")) cat(green(bold("\n-----------------------------------------------------------------\n"))) d1=data.frame(tapply(response,fator1,mean, na.rm=TRUE)) colnames(d1)="Mean" print(d1) } if(a$`Pr(>F)`[1]<alpha.f && a$`Pr(>F)`[2] >=alpha.f){ cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green("Isolated factors 2 not significant")) cat(green(bold("\n-----------------------------------------------------------------\n"))) d1=data.frame(tapply(response,fator2,mean, na.rm=TRUE)) colnames(d1)="Mean" print(d1) } if(a$`Pr(>F)`[1]>=alpha.f && a$`Pr(>F)`[2] >=alpha.f){ cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green("Isolated factors not significant")) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(tapply(response,list(fator1,fator2),mean, na.rm=TRUE))} } if (a$`Pr(>F)`[4] <= alpha.f) { cat(green(bold("-----------------------------------------------------------------\n"))) cat(green(bold("\nSignificant interaction: analyzing the interaction\n"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat("\n-----------------------------------------------------------------\n") cat("Analyzing ", names.fat[1], " inside of each level of ",names.fat[2]) cat("\n-----------------------------------------------------------------\n") des1<-aov(resp~ bloco + Fator2/Fator1) l1<-vector('list',nv2) names(l1)<-names(summary(Fator2)) v<-numeric(0) for(j in 1:nv2) { for(i in 0:(nv1-2)) v<-cbind(v,i*nv2+j) l1[[j]]<-v v<-numeric(0) } rn<-numeric(0) for (j in 1:nv2) { rn <- c(rn, paste(paste(names.fat[1], ":", names.fat[2], sep = ""), lf2[j])) } des1.tab<-summary(des1,split=list('Fator2:Fator1'=l1))[[1]] rownames(des1.tab)=c("Block",names.fat[2], paste(names.fat[1],"x",names.fat[2],"+",names.fat[1]), paste(" ",rn),"Residuals") print(des1.tab) desdobramento1=des1.tab if(quali[1]==TRUE & quali[2]==TRUE){ if 
(mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati, a$Df[5], a$`Mean Sq`[5], alpha.t) if(transf !=1){tukey$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico[[i]]=tukey$groups[as.character(unique(trati)),2] ordem[[i]]=rownames(tukey$groups[as.character(unique(trati)),]) } letra=unlist(tukeygrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra } if (mcomp == "lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati, a$Df[5], a$`Mean Sq`[5], alpha.t) if(transf !=1){lsd$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico[[i]]=lsd$groups[as.character(unique(trati)),2] ordem[[i]]=rownames(lsd$groups[as.character(unique(trati)),]) } letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra } if (mcomp == "duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati, a$Df[5], a$`Mean Sq`[5], alpha.t) if(transf !=1){duncan$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico[[i]]=duncan$groups[as.character(unique(trati)),2] ordem[[i]]=rownames(duncan$groups[as.character(unique(trati)),]) } letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra} if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] 
medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
sk=scottknott(means = medias, df1 = a$Df[5], nrep = nrep,
              QME = a$`Mean Sq`[5], alpha = alpha.t)
sk=data.frame(respi=medias,groups=sk)
if(transf !=1){sk$respo=tapply(response[Fator2 == lf2[i]],
                               trati,mean, na.rm=TRUE)[rownames(sk)]}
skgrafico[[i]]=sk[levels(trati),2]
ordem[[i]]=rownames(sk[levels(trati),])}
letra=unlist(skgrafico)
datag=data.frame(letra,ordem=unlist(ordem))
datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
datag=datag[order(datag$ordem),]
letra=datag$letra}}
cat("\n-----------------------------------------------------------------\n")
cat("Analyzing ", names.fat[2], " inside of the level of ",names.fat[1])
cat("\n-----------------------------------------------------------------\n")
cat("\n")
des2<-aov(resp~ bloco + Fator1/Fator2)
l2<-vector('list',nv1)
names(l2)<-names(summary(Fator1))
v<-numeric(0)
for(j in 1:nv1) {
  for(i in 0:(nv2-2)) v<-cbind(v,i*nv1+j)
  l2[[j]]<-v
  v<-numeric(0)}
rn<-numeric(0)
for (i in 1:nv1) {
  rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[1], sep = ""), lf1[i]))}
des2.tab<-summary(des2,split=list('Fator1:Fator2'=l2))[[1]]
rownames(des2.tab)=c("Block",names.fat[1],
                     paste(names.fat[1],"x",names.fat[2],"+",names.fat[2]),
                     paste(" ",rn),"Residuals")
print(des2.tab)
desdobramento2=des2.tab
if(quali[1]==TRUE && quali[2]==TRUE){
  if (mcomp == "tukey"){
    tukeygrafico1=c()
    for (i in 1:nv1) {
      trati=fatores[, 2][Fator1 == lf1[i]]
      respi=resp[Fator1 == lf1[i]]
      tukey=TUKEY(respi,trati,a$Df[5],a$`Mean Sq`[5],alpha.t)
      if(transf !=1){tukey$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
      tukeygrafico1[[i]]=tukey$groups[as.character(unique(trati)),2]}
    letra1=unlist(tukeygrafico1)
    letra1=toupper(letra1)}
  if (mcomp == "lsd"){
    lsdgrafico1=c()
    for (i in 1:nv1) {
      trati=fatores[, 2][Fator1 == lf1[i]]
      respi=resp[Fator1 == lf1[i]]
      lsd=LSD(respi,trati,a$Df[5],a$`Mean Sq`[5],alpha.t)
      if(transf !=1){lsd$groups$respo=tapply(response[Fator1 == lf1[i]], trati,mean, na.rm=TRUE)[rownames(lsd$groups)]}
      lsdgrafico1[[i]]=lsd$groups[as.character(unique(trati)),2]}
    letra1=unlist(lsdgrafico1)
    letra1=toupper(letra1)}
  if (mcomp == "duncan"){
    duncangrafico1=c()
    for (i in 1:nv1) {
      trati=fatores[, 2][Fator1 == lf1[i]]
      respi=resp[Fator1 == lf1[i]]
      duncan=duncan(respi,trati,a$Df[5],a$`Mean Sq`[5],alpha.t)
      if(transf !=1){duncan$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
      duncangrafico1[[i]]=duncan$groups[as.character(unique(trati)),2]}
    letra1=unlist(duncangrafico1)
    letra1=toupper(letra1)}
  if (mcomp == "sk"){
    skgrafico1=c()
    for (i in 1:nv1) {
      trati=fatores[, 2][Fator1 == lf1[i]]
      trati=factor(trati,levels = unique(trati))
      respi=resp[Fator1 == lf1[i]]
      nrep=table(trati)[1]
      medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
      sk=scottknott(means = medias, df1 = a$Df[5], nrep = nrep,
                    QME = a$`Mean Sq`[5], alpha = alpha.t)
      sk=data.frame(respi=medias,groups=sk)
      if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati, mean, na.rm=TRUE)[rownames(sk)]}
      skgrafico1[[i]]=sk[levels(trati),2]}
    letra1=unlist(skgrafico1)
    letra1=toupper(letra1)}}
if(quali[1]==FALSE && color=="gray"| quali[2]==FALSE && color=="gray"){
  if(quali[2]==FALSE){
    teste=if(mcomp=="tukey"){"Tukey HSD"}else{
      if(mcomp=="sk"){"Scott-Knott"}else{
        if(mcomp=="lsd"){"LSD-Fischer"}else{
          if(mcomp=="duncan"){"Duncan"}}}}
    cat(green(italic(paste("Multiple Comparison Test:",teste,"\n"))))
    if (mcomp == "tukey"){
      for (i in 1:nv2) {
        trati=fatores[, 1][Fator2 == lf2[i]]
        respi=resp[Fator2 == lf2[i]]
        tukey=TUKEY(respi,trati, a$Df[5], a$`Mean Sq`[5], alpha.t)
        if(transf !=1){tukey$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(tukey$groups)]}
        cat("\n----------------------\n")
        cat("Multiple comparison of F1 within level",lf2[i],"of F2")
        cat("\n----------------------\n")
print(tukey$groups)}}
if (mcomp == "lsd"){
  for (i in 1:nv2) {
    trati=fatores[, 1][Fator2 == lf2[i]]
    respi=resp[Fator2 == lf2[i]]
    lsd=LSD(respi,trati, a$Df[5], a$`Mean Sq`[5], alpha.t)
    if(transf !=1){lsd$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(lsd$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F1 within level",lf2[i],"of F2")
    cat("\n----------------------\n")
    print(lsd$groups)}}
if (mcomp == "duncan"){
  for (i in 1:nv2) {
    trati=fatores[, 1][Fator2 == lf2[i]]
    respi=resp[Fator2 == lf2[i]]
    duncan=duncan(respi,trati, a$Df[5], a$`Mean Sq`[5], alpha.t)
    if(transf !=1){duncan$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(duncan$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F1 within level",lf2[i],"of F2")
    cat("\n----------------------\n")
    print(duncan$groups)}}
if (mcomp == "sk"){
  for (i in 1:nv2) {
    trati=fatores[, 1][Fator2 == lf2[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator2 == lf2[i]]
    nrep=table(trati)[1]
    medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
    sk=scottknott(means = medias, df1 = a$Df[5], nrep = nrep,
                  QME = a$`Mean Sq`[5], alpha = alpha.t)
    sk=data.frame(respi=medias,groups=sk)
    if(transf !=1){sk$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(sk)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F1 within level",lf2[i],"of F2")
    cat("\n----------------------\n")
    print(sk)}}}
if(quali[2]==FALSE){
  Fator2a=fator2a
  grafico=polynomial2(Fator2a, response, Fator1, grau = grau21,
                      ylab=ylab, xlab=xlab, theme=theme, point=point,
                      posi=posi, ylim=ylim, textsize=textsize, family=family,
                      SSq=ab$`Sum Sq`[5], DFres = ab$Df[5])
  if(quali[1]==FALSE & quali[2]==FALSE){graf=list(grafico,NA)}}
if(quali[1]==FALSE){
  if (mcomp == "tukey"){
    for (i in 1:nv1) {
      trati=fatores[, 2][Fator1 == lf1[i]]
respi=resp[Fator1 == lf1[i]]
tukey=TUKEY(respi,trati,a$Df[5],a$`Mean Sq`[5],alpha.t)
if(transf !=1){tukey$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F2 within level",lf1[i],"of F1")
cat("\n----------------------\n")
print(tukey$groups)}}
if (mcomp == "lsd"){
  for (i in 1:nv1) {
    trati=fatores[, 2][Fator1 == lf1[i]]
    respi=resp[Fator1 == lf1[i]]
    lsd=LSD(respi,trati,a$Df[5],a$`Mean Sq`[5],alpha.t)
    if(transf !=1){lsd$groups$respo=tapply(response[Fator1 == lf1[i]], trati,mean, na.rm=TRUE)[rownames(lsd$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F2 within level",lf1[i],"of F1")
    cat("\n----------------------\n")
    print(lsd$groups)}}
if (mcomp == "duncan"){
  for (i in 1:nv1) {
    trati=fatores[, 2][Fator1 == lf1[i]]
    respi=resp[Fator1 == lf1[i]]
    duncan=duncan(respi,trati,a$Df[5],a$`Mean Sq`[5],alpha.t)
    if(transf !=1){duncan$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F2 within level",lf1[i],"of F1")
    cat("\n----------------------\n")
    print(duncan$groups)}}
if (mcomp == "sk"){
  for (i in 1:nv1) {
    trati=fatores[, 2][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    nrep=table(trati)[1]
    medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
    sk=scottknott(means = medias, df1 = a$Df[5], nrep = nrep,
                  QME = a$`Mean Sq`[5], alpha = alpha.t)
    sk=data.frame(respi=medias,groups=sk)
    if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati, mean, na.rm=TRUE)[rownames(sk)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F2 within level",lf1[i],"of F1")
    cat("\n----------------------\n")
    print(sk)}}}
if(quali[1]==FALSE){
  Fator1a=fator1a
  grafico=polynomial2(Fator1a, response, Fator2, grau = grau12,
ylab=ylab, xlab=xlab, theme=theme, posi=posi, point=point, textsize=textsize, family=family, ylim=ylim, SSq=ab$`Sum Sq`[5], DFres = ab$Df[5]) if(quali[1]==FALSE & quali[2]==FALSE){ graf[[2]]=grafico grafico=graf} } } if(quali[1]==FALSE && color=="rainbow"| quali[2]==FALSE && color=="rainbow"){ if(quali[2]==FALSE){ teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(italic(paste("Multiple Comparison Test:",teste)))) if (mcomp == "tukey"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati, a$Df[5], a$`Mean Sq`[5], alpha.t) if(transf !=1){tukey$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(tukey$groups) } } if (mcomp == "lsd"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati, a$Df[5], a$`Mean Sq`[5], alpha.t) if(transf !=1){lsd$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(lsd$groups) } } if (mcomp == "duncan"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati, a$Df[5], a$`Mean Sq`[5], alpha.t) if(transf !=1){duncan$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(duncan$groups) }} if (mcomp == "sk"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] 
nrep=table(trati)[1]
medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
sk=scottknott(means = medias, df1 = a$Df[5], nrep = nrep,
              QME = a$`Mean Sq`[5], alpha = alpha.t)
sk=data.frame(respi=medias,groups=sk)
if(transf !=1){sk$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(sk)]}
cat("\n----------------------\n")
cat("Multiple comparison of F1 within level",lf2[i],"of F2")
cat("\n----------------------\n")
print(sk)}}}
if(quali[2]==FALSE){
  Fator2a=fator2a
  grafico=polynomial2_color(Fator2a, response, Fator1, grau = grau21,
                            ylab=ylab, xlab=xlab, theme=theme, point=point,
                            posi=posi, ylim=ylim, textsize=textsize, family=family,
                            SSq=ab$`Sum Sq`[5], DFres = ab$Df[5])
  if(quali[1]==FALSE & quali[2]==FALSE){graf=list(grafico,NA)}}
if(quali[1]==FALSE){
  if (mcomp == "tukey"){
    for (i in 1:nv1) {
      trati=fatores[, 2][Fator1 == lf1[i]]
      respi=resp[Fator1 == lf1[i]]
      tukey=TUKEY(respi,trati,a$Df[5],a$`Mean Sq`[5],alpha.t)
      if(transf !=1){tukey$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
      cat("\n----------------------\n")
      cat("Multiple comparison of F2 within level",lf1[i],"of F1")
      cat("\n----------------------\n")
      print(tukey$groups)}}
  if (mcomp == "lsd"){
    for (i in 1:nv1) {
      trati=fatores[, 2][Fator1 == lf1[i]]
      respi=resp[Fator1 == lf1[i]]
      lsd=LSD(respi,trati,a$Df[5],a$`Mean Sq`[5],alpha.t)
      if(transf !=1){lsd$groups$respo=tapply(response[Fator1 == lf1[i]], trati,mean, na.rm=TRUE)[rownames(lsd$groups)]}
      cat("\n----------------------\n")
      cat("Multiple comparison of F2 within level",lf1[i],"of F1")
      cat("\n----------------------\n")
      print(lsd$groups)}}
  if (mcomp == "duncan"){
    for (i in 1:nv1) {
      trati=fatores[, 2][Fator1 == lf1[i]]
      respi=resp[Fator1 == lf1[i]]
      duncan=duncan(respi,trati,a$Df[5],a$`Mean Sq`[5],alpha.t)
      if(transf !=1){duncan$groups$respo=tapply(response[Fator1 ==
lf1[i]],trati,mean)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(duncan$groups) }} if (mcomp == "sk"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = a$Df[5], nrep = nrep, QME = a$`Mean Sq`[5], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) # sk=sk(respi,trati,a$Df[5], a$`Sum Sq`[5],alpha.t) if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati, mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(sk) } } } if(quali[1]==FALSE){ Fator1a=fator1a#as.numeric(as.character(Fator1)) grafico=polynomial2_color(Fator1a, response, Fator2, grau = grau12, color=color, ylab=ylab, xlab=xlab, theme=theme, point=point, posi=posi, ylim=ylim, textsize=textsize, family=family, SSq=ab$`Sum Sq`[5], DFres = ab$Df[5]) if(quali[1]==FALSE & quali[2]==FALSE){ graf[[2]]=grafico grafico=graf} } } if(quali[1]==TRUE & quali[2]==TRUE){ media=tapply(response,list(Fator1,Fator2), mean, na.rm=TRUE) # desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE) if(point=="mean_sd"){desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE)/ sqrt(tapply(response,list(Fator1,Fator2), length))} graph=data.frame(f1=rep(rownames(media),length(colnames(media))), f2=rep(colnames(media),e=length(rownames(media))), media=as.vector(media), desvio=as.vector(desvio)) limite=graph$media+graph$desvio lim.y=limite[which.max(abs(limite))] if(is.na(ylim[1])==TRUE && lim.y<0){ylim=c(1.5*lim.y,0)} if(is.na(ylim[1])==TRUE && lim.y>0){ylim=c(0,1.5*lim.y)} rownames(graph)=paste(graph$f1,graph$f2) 
graph=graph[paste(rep(unique(Fator1),e=length(colnames(media))), rep(unique(Fator2),length(rownames(media)))),] graph$letra=letra graph$letra1=letra1 graph$f1=factor(graph$f1,levels = unique(Fator1)) graph$f2=factor(graph$f2,levels = unique(Fator2)) if(addmean==TRUE){graph$numero=paste(format(graph$media,digits = dec), graph$letra,graph$letra1, sep="")} if(addmean==FALSE){graph$numero=paste(graph$letra,graph$letra1, sep="")} f1=graph$f1 f2=graph$f2 media=graph$media desvio=graph$desvio numero=graph$numero colint=ggplot(graph, aes(x=f1, y=media, fill=f2))+ geom_col(position = "dodge",color="black",width = width.column)+ ylab(ylab)+xlab(xlab)+ylim(ylim)+ theme if(errorbar==TRUE){colint=colint+ geom_errorbar(data=graph, aes(ymin=media-desvio, ymax=media+desvio), width=width.bar,color="black", position = position_dodge(width=width.column))} if(errorbar==TRUE){colint=colint+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=numero), position = position_dodge(width=width.column),family = family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){colint=colint+ geom_text(aes(y=media+sup,label=numero), position = position_dodge(width=width.column),family = family,angle=angle.label, hjust=hjust,size=labelsize)} colint=colint+theme(text=element_text(size=textsize, family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), legend.text = element_text(family = family), legend.title = element_text(family = family), legend.position = posi)+labs(fill=legend) if(CV==TRUE){colint=colint+labs(caption=paste("p-value ", if(a$`Pr(>F)`[4]<0.0001){paste("<", 0.0001)} else{paste("=", round(a$`Pr(>F)`[4],4))},"; CV = ", round(abs(sqrt(a$`Mean Sq`[5])/mean(resp))*100,2),"%"))} if(angle !=0){colint=colint+theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} if(color=="gray"){colint=colint+scale_fill_grey()} print(colint) grafico=colint 
letras=paste(graph$letra,graph$letra1,sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1))))) rownames(matriz)=levels(Fator1) colnames(matriz)=levels(Fator2) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(matriz) message(black("\n\nAverages followed by the same lowercase letter in the column \nand uppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")")) } } if(a$`Pr(>F)`[4]>alpha.f){ names(graficos)=c("residplot","graph1","graph2") graficos}else{colints=list(residplot,grafico)} }
#' Analysis: DBC experiment in double factorial design with an additional treatment #' @description Analysis of an experiment conducted in a randomized block design in a double factorial scheme using analysis of variance of fixed effects. #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @param f1 Numeric or complex vector with factor 1 levels #' @param f2 Numeric or complex vector with factor 2 levels #' @param block Numeric or complex vector with repetitions #' @param response Numerical vector containing the response of the experiment. #' @param responseAd Numerical vector with additional treatment responses #' @param norm Error normality test (\emph{default} is Shapiro-Wilk) #' @param homog Homogeneity test of variances (\emph{default} is Bartlett) #' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD and Duncan) #' @param quali Defines whether the factor is quantitative or qualitative (\emph{qualitative}) #' @param names.fat Name of factors #' @param alpha.f Level of significance of the F test (\emph{default} is 0.05) #' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05) #' @param grau Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with two elements. #' @param grau12 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f1 x f2 and qualitative factor 2 and quantitative factor 1. #' @param grau21 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f2 and qualitative factor 1 and quantitative factor 2. 
#' @param transf Applies data transformation (default is 1; for log consider 0; `angular` for angular transformation)
#' @param constant Add a constant to the data before transformation (enter value)
#' @param geom Graph type (columns or segments; segments for simple effects only)
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param ylab Variable response name (Accepts the \emph{expression}() function)
#' @param xlab Treatments name (Accepts the \emph{expression}() function)
#' @param xlab.factor Provide a vector with two observations referring to the x-axis name of factors 1 and 2, respectively, when there is an isolated effect of the factors. This argument uses `parse`.
#' @param legend Legend title name
#' @param ad.label Additional treatment label
#' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat")
#' @param angle x-axis scale text rotation
#' @param textsize Font size
#' @param labelsize Label size
#' @param dec Number of decimal places displayed in the means
#' @param width.column Column width when geom = "bar"
#' @param width.bar Error bar width
#' @param family Font family
#' @param addmean Plot the average value on the graph (\emph{default} is TRUE)
#' @param errorbar Plot the standard deviation bar on the graph (in the case of segment and column graphs) - \emph{default} is TRUE
#' @param CV Plotting the coefficient of variation and p-value of Anova (\emph{default} is TRUE)
#' @param sup Number of units above the standard deviation or average bar on the graph
#' @param color Column chart color (\emph{default} is "rainbow")
#' @param posi Legend position
#' @param ylim y-axis scale
#' @param point Defines whether the plot shows all points ("all"), the mean ("mean"), the mean with standard deviation (\emph{default} - "mean_sd") or the mean with standard error ("mean_se") when quali = FALSE. For quali = TRUE, `mean_sd` and `mean_se` change which information is displayed in the error bar.
#' @param angle.label Label angle
#' @note The order of the chart follows the alphabetical pattern. Please use `scale_x_discrete` from package ggplot2, `limits` argument to reorder the x-axis. The bars of the column and segment graphs are standard deviation.
#' @note The function does not perform multiple regression in the case of two quantitative factors.
#' @note The assumptions of the analysis of variance disregard the additional treatment.
#' @note In the final output when transformation (transf argument) is different from 1, the columns resp and respo in the mean test are returned, indicating transformed and non-transformed mean, respectively.
#' @return The table of analysis of variance, the test of normality of errors (Shapiro-Wilk, Lilliefors, Anderson-Darling, Cramer-von Mises, Pearson and Shapiro-Francia), the test of homogeneity of variances (Bartlett or Levene), the Durbin-Watson test of independence of errors, the test of multiple comparisons (Tukey, LSD, Scott-Knott or Duncan) or the fit of polynomial regression models up to the third degree, in the case of quantitative treatments. The column chart for qualitative treatments is also returned.
#' @keywords DBC
#' @keywords Factorial
#' @keywords Additional
#' @seealso \link{FAT2DBC}
#' @seealso \link{dunnett}
#' @references
#'
#' Principles and Procedures of Statistics: A Biometrical Approach. Steel, Torrie and Dickey. Third Edition, 1997.
#'
#' Multiple Comparisons: Theory and Methods. Department of Statistics, The Ohio State University, USA, 1996. Jason C. Hsu. Chapman Hall/CRC.
#'
#' Practical Nonparametric Statistics. W.J. Conover, 1999.
#'
#' Ramalho M.A.P., Ferreira D.F., Oliveira A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott R.J., Knott M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#'
#' Mendiburu, F., and de Mendiburu, M. F. (2019). Package ‘agricolae’. R Package, Version, 1-2.
#' #' @export #' @examples #' library(AgroR) #' data(cloro) #' respAd=c(268, 322, 275, 350, 320) #' with(cloro, FAT2DBC.ad(f1, f2, bloco, resp, respAd, ylab="Number of nodules", legend = "Stages")) FAT2DBC.ad=function(f1, f2, block, response, responseAd, norm="sw", homog="bt", alpha.f=0.05, alpha.t=0.05, quali=c(TRUE,TRUE), names.fat=c("F1","F2"), mcomp="tukey", grau=c(NA,NA), grau12=NA, # F1/F2 grau21=NA, # F2/F1 transf=1, constant=0, geom="bar", theme=theme_classic(), ylab="Response", xlab="", xlab.factor=c("F1","F2"), legend="Legend", ad.label="Additional", color="rainbow", fill="lightblue", textsize=12, labelsize=4, addmean=TRUE, errorbar=TRUE, CV=TRUE, dec=3, width.column=0.9, width.bar=0.3, angle=0, posi="right", family="sans", point="mean_sd", sup=NA, ylim=NA, angle.label=0){ if(angle.label==0){hjust=0.5}else{hjust=0} requireNamespace("crayon") requireNamespace("ggplot2") requireNamespace("nortest") if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}} # if(transf==1){resp=response+constant}else{resp=((response+constant)^transf-1)/transf} if(transf==0){resp=log(response+constant)} if(transf==0.5){resp=sqrt(response+constant)} if(transf==-0.5){resp=1/sqrt(response+constant)} if(transf==-1){resp=1/(response+constant)} if(transf=="angular"){resp=asin(sqrt((response+constant)/100))} if(transf==1){respAd=responseAd+constant}else{if(transf!="angular"){respAd=((responseAd+constant)^transf-1)/transf}} # if(transf==1){respAd=responseAd+constant}else{respAd=((responseAd+constant)^transf-1)/transf} if(transf==0){respAd=log(responseAd+constant)} if(transf==0.5){respAd=sqrt(responseAd+constant)} if(transf==-0.5){respAd=1/sqrt(responseAd+constant)} if(transf==-1){respAd=1/(responseAd+constant)} if(transf=="angular"){respAd=asin(sqrt((responseAd+constant)/100))} ordempadronizado=data.frame(f1,f2,block,resp,response) resp1=resp organiz=data.frame(f1,f2,block,response,resp) organiz=organiz[order(organiz$block),] 
organiz=organiz[order(organiz$f2),] organiz=organiz[order(organiz$f1),] f1=organiz$f1 f2=organiz$f2 block=organiz$block response=organiz$response resp=organiz$resp fator1=f1 fator2=f2 fator1a=fator1 fator2a=fator2 block=as.factor(block) if(is.na(sup==TRUE)){sup=0.1*mean(response)} Fator1=factor(fator1, levels = unique(fator1)) Fator2=factor(fator2, levels = unique(fator2)) nv1 <- length(summary(Fator1)) nv2 <- length(summary(Fator2)) lf1 <- levels(Fator1) lf2 <- levels(Fator2) fatores <- cbind(fator1, fator2) J = length(respAd) n.trat2 <- nv1 * nv2 anavaF2 <- summary(aov(resp ~ Fator1 * Fator2 + block)) anava=anavaF2[[1]][c(1:4),] col1 <- numeric(0) for (i in 1:n.trat2) { col1 <- c(col1, rep(i, J)) } col1 <- c(col1, rep("ad", J)) col2 <- c(block, rep(1:J)) col3 <- c(resp, respAd) tabF2ad <- data.frame(TRAT2 = col1, REP = col2, RESP2 = col3) TRAT2 <- factor(tabF2ad[, 1]) REP <- factor(tabF2ad$REP) anavaf1 <- aov(tabF2ad[, 3] ~ TRAT2+REP) anavaTr <- summary(anavaf1)[[1]] anava1=rbind(anava,anavaTr[c(1,3),]) anava1[3,]=anavaTr[2,] anava1$Df[5]=1 anava1$`Sum Sq`[5]=anava1$`Sum Sq`[5]-sum(anava1$`Sum Sq`[c(1,2,4)]) anava1$`Mean Sq`[5]=anava1$`Sum Sq`[5]/anava1$Df[5] anava1$`F value`[1:5]=anava1$`Mean Sq`[1:5]/anava1$`Mean Sq`[6] for(i in 1:nrow(anava1)-1){ anava1$`Pr(>F)`[i]=1-pf(anava1$`F value`[i],anava1$Df[i],anava1$Df[6]) } rownames(anava1)[5]="Ad x Factorial" anava=anava1 b=aov(resp ~ as.factor(f1) * as.factor(f2)+as.factor(block),data = ordempadronizado) an=anova(b) respad=b$residuals/sqrt(an$`Mean Sq`[5]) out=respad[respad>3 | respad<(-3)] out=names(out) out=if(length(out)==0)("No discrepant point")else{out} if(norm=="sw"){norm1 = shapiro.test(b$res)} if(norm=="li"){norm1=lillie.test(b$residuals)} if(norm=="ad"){norm1=ad.test(b$residuals)} if(norm=="cvm"){norm1=cvm.test(b$residuals)} if(norm=="pearson"){norm1=pearson.test(b$residuals)} if(norm=="sf"){norm1=sf.test(b$residuals)} trat=as.factor(paste(Fator1,Fator2)) c=aov(resp~trat) if(homog=="bt"){ homog1 = 
bartlett.test(b$res ~ trat) statistic=homog1$statistic phomog=homog1$p.value method=paste("Bartlett test","(",names(statistic),")",sep="") } if(homog=="levene"){ homog1 = levenehomog(c$res~trat) statistic=homog1$`F value`[1] phomog=homog1$`Pr(>F)`[1] method="Levene's Test (center = median)(F)" names(homog1)=c("Df", "statistic","p.value")} indep = dwtest(b) Ids=ifelse(respad>3 | respad<(-3), "darkblue","black") residplot=ggplot(data=data.frame(respad,Ids), aes(y=respad,x=1:length(respad)))+ geom_point(shape=21,color="gray",fill="gray",size=3)+ labs(x="",y="Standardized residuals")+ geom_text(x=1:length(respad),label=1:length(respad), color=Ids,size=labelsize)+ scale_x_continuous(breaks=1:length(respad))+ theme_classic()+theme(axis.text.y = element_text(size=textsize), axis.text.x = element_blank())+ geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Normality of errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""), Statistic=norm1$statistic, "p-value"=norm1$p.value) rownames(normal)="" print(normal) cat("\n") message(if(norm1$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors do not follow a normal distribution"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Homogeneity of Variances"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) homoge=data.frame(Method=method, Statistic=statistic, "p-value"=phomog) rownames(homoge)="" print(homoge) cat("\n") message(if(homog1$p.value[1]>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Independence from errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) indepe=data.frame(Method=paste(indep$method,"(", names(indep$statistic),")",sep=""), Statistic=indep$statistic, "p-value"=indep$p.value) rownames(indepe)="" print(indepe) cat("\n") message(if(indep$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors are not independent"})
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Additional Information")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(paste("\nCV (%) = ",round(sqrt(anava$`Mean Sq`[6])/mean(c(resp,respAd),na.rm=TRUE)*100,2)))
cat(paste("\nMean Factorial = ",round(mean(response,na.rm=TRUE),4)))
cat(paste("\nMedian Factorial = ",round(median(response,na.rm=TRUE),4)))
cat(paste("\nMean Additional = ",round(mean(responseAd,na.rm=TRUE),4)))
cat(paste("\nMedian Additional = ",round(median(responseAd,na.rm=TRUE),4)))
cat("\nPossible outliers = ", out)
cat("\n")
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Analysis of Variance")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
anava1=as.matrix(data.frame(anava))
colnames(anava1)=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)")
rownames(anava1)=c(names.fat[1],names.fat[2],"Block",
                   paste(names.fat[1],"x",names.fat[2]),"Ad x Factorial","Residuals")
print(anava1,na.print = "")
cat("\n")
if(anava$`Pr(>F)`[5]<alpha.f){
  cat("The additional treatment does differ from the factorial by the F test\n")}else{
  cat("The additional treatment does not differ from the factorial by the F test\n")}
if(transf==1 && norm1$p.value<0.05 | transf==1 && indep$p.value<0.05 | transf==1 && homog1$p.value<0.05){
  message("\nYour analysis is not valid, suggests using a non-parametric test and try to transform the data\n")}else{}
if(transf != 1 && norm1$p.value<0.05 | transf!=1 && indep$p.value<0.05 | transf!=1 && homog1$p.value<0.05){
  message("\nYour analysis is not valid, suggests using the function FATDIC.art\n")}else{}
message(if(transf !=1){blue("NOTE: resp = transformed means; respO = averages without transforming\n")})
if (anava$`Pr(>F)`[4] > alpha.f) {
cat(green(bold("-----------------------------------------------------------------\n")))
cat(green(bold("No significant interaction")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
fatores <- data.frame(Fator1 = factor(fator1), Fator2 = factor(fator2))
fatoresa <- data.frame(Fator1 = fator1a, Fator2 = fator2a)
graficos=list(1,2,3)
for (i in 1:2) {
  if (anava$`Pr(>F)`[i] <= alpha.f) {
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    cat(bold(names.fat[i]))
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    if(quali[i]==TRUE){
      if(mcomp=="tukey"){
        letra <- TUKEY(resp, fatores[,i], anava$Df[6], anava$`Mean Sq`[6], alpha=alpha.t)
        letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
        if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
      if(mcomp=="lsd"){
        letra <- LSD(resp, fatores[,i], anava$Df[6],anava$`Mean Sq`[6],alpha=alpha.t)
        letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
        if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
      if (mcomp == "sk"){
        nrep=table(fatores[i])[1]
        medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE)
        sk=scottknott(means = medias, df1 = anava$Df[6], nrep = nrep,
                      QME = anava$`Mean Sq`[6], alpha = alpha.t)
        letra1=data.frame(resp=medias,groups=sk)
        if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
      if(mcomp=="duncan"){
        letra <- duncan(resp, fatores[,i],anava$Df[6],anava$`Mean Sq`[6], alpha=alpha.t)
        letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
        if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
      teste=if(mcomp=="tukey"){"Tukey HSD"}else{
        if(mcomp=="sk"){"Scott-Knott"}else{
          if(mcomp=="lsd"){"LSD-Fischer"}else{
            if(mcomp=="duncan"){"Duncan"}}}}
      cat(green(italic(paste("Multiple Comparison Test:",teste,"\n"))))
print(letra1) ordem=unique(as.vector(unlist(fatores[i]))) #===================================================== if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[ordem]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[ordem]} dadosm=data.frame(letra1[ordem,], media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[ordem], desvio=desvio) dadosm$trats=factor(rownames(dadosm),levels = ordem) dadosm$limite=dadosm$media+dadosm$desvio lim.y=dadosm$limite[which.max(abs(dadosm$limite))] if(is.na(ylim[1])==TRUE && lim.y<0){ylim=c(1.5*lim.y,0)} if(is.na(ylim[1])==TRUE && lim.y>0){ylim=c(0,1.5*lim.y)} if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio trats=dadosm$trats letra=dadosm$letra if(geom=="bar"){grafico=ggplot(dadosm, aes(x=trats, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=trats),color=1,width = width.column)} else{grafico=grafico+ geom_col(aes(fill=trats), fill=fill,color=1,width = width.column)} grafico=grafico+theme+ylab(ylab)+xlab(parse(text = xlab.factor[i]))+ylim(ylim) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+ sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=width.bar)} if(angle !=0){grafico=grafico+theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} grafico=grafico+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = 
element_text(size=textsize,color="black",family=family))} if(geom=="point"){grafico=ggplot(dadosm, aes(x=trats, y=media)) if(fill=="trat"){grafico=grafico+ geom_point(aes(color=trats),size=5)} else{grafico=grafico+ geom_point(aes(color=trats),fill="gray",pch=21,color="black",size=5)} grafico=grafico+theme+ylab(ylab)+xlab(parse(text = xlab.factor[i]))+ylim(ylim) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+ if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black", width=width.bar)} if(angle !=0){grafico=grafico+theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} grafico=grafico+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family))} grafico=grafico+ geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="") if(CV==TRUE){grafico=grafico+labs(caption=paste("p-value = ", if(anava$`Pr(>F)`[i]<0.0001){paste("<", 0.0001)} else{paste("=", round(anava$`Pr(>F)`[i],4))},"; CV = ", round(abs(sqrt(anava$`Mean Sq`[6])/mean(c(resp,respAd),na.rm=TRUE))*100,2),"%"))} if(color=="gray"){grafico=grafico+scale_fill_grey()} print(grafico) cat("\n\n") } if(quali[i]==FALSE){ dose=as.vector(unlist(fatoresa[i])) grafico=polynomial(dose, response, grau = grau[i], ylab=ylab, xlab=parse(text = xlab.factor[i]), posi=posi, theme=theme, textsize=textsize, point=point, family=family,SSq = anava$`Sum Sq`[6],DFres = anava$Df[6]) grafico=grafico[[1]]+ 
geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="")} graficos[[i+1]]=grafico}} graficos[[1]]=residplot if(anava$`Pr(>F)`[1]>=alpha.f && anava$`Pr(>F)`[2] <alpha.f){ cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green("Isolated factors 1 not significant")) cat(green(bold("\n-----------------------------------------------------------------\n"))) d1=data.frame(tapply(response,fator1,mean, na.rm=TRUE)) colnames(d1)="Mean" print(d1) } if(anava$`Pr(>F)`[1]<alpha.f && anava$`Pr(>F)`[2] >=alpha.f){ cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green("Isolated factors 2 not significant")) cat(green(bold("\n-----------------------------------------------------------------\n"))) d1=data.frame(tapply(response,fator2,mean, na.rm=TRUE)) colnames(d1)="Mean" print(d1)} if(anava$`Pr(>F)`[1]>=alpha.f && anava$`Pr(>F)`[2] >=alpha.f){ cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green("Isolated factors not significant")) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(tapply(response,list(fator1,fator2),mean, na.rm=TRUE))} } if (anava$`Pr(>F)`[4] <= alpha.f) { fatores <- data.frame(Fator1, Fator2) cat(green(bold("-----------------------------------------------------------------\n"))) cat(green(bold("Significant interaction: analyzing the interaction"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) des1<-aov(resp~Fator2/Fator1+block) cat("\n-----------------------------------------------------------------\n") cat("Analyzing ", names.fat[1], " inside of the level of ",names.fat[2]) cat("\n-----------------------------------------------------------------\n") cat("\n") l1<-vector('list',nv2) names(l1)<-names(summary(Fator2)) v<-numeric(0) for(j in 1:nv2) { for(i 
in 0:(nv1-2)) v<-cbind(v,i*nv2+j) l1[[j]]<-v v<-numeric(0) } des1.tab<-summary(des1,split=list('Fator2:Fator1'=l1))[[1]] nlinhas=nrow(des1.tab) des1.tab=des1.tab[-c(nlinhas),] des1.tab$`F value`=des1.tab$`Mean Sq`/anava$`Mean Sq`[6] des1.tab$`Pr(>F)`=1-pf(des1.tab$`F value`,des1.tab$Df,anava$Df[6]) rn<-numeric(0) for (j in 1:nv2) { rn <- c(rn, paste(paste(names.fat[1], ":", names.fat[2], sep = ""), lf2[j])) } rownames(des1.tab)=c(names.fat[2],"Block", paste(names.fat[1],"x",names.fat[2],"+",names.fat[1]), paste(" ",rn)) print(des1.tab) desdobramento1=des1.tab if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,anava$Df[6],anava$`Mean Sq`[6],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico[[i]]=tukey$groups[as.character(unique(trati)),2] ordem[[i]]=rownames(tukey$groups[as.character(unique(trati)),]) } letra=unlist(tukeygrafico) datag=data.frame(letra, ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra } if (mcomp == "duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anava$Df[6],anava$`Mean Sq`[6],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico[[i]]=duncan$groups[as.character(unique(trati)),2] ordem[[i]]=rownames(duncan$groups[as.character(unique(trati)),]) } letra=unlist(duncangrafico) datag=data.frame(letra, ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra } if (mcomp == "lsd"){ duncangrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anava$Df[6],anava$`Mean 
Sq`[6],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(lsd$groups)]} duncangrafico[[i]]=lsd$groups[as.character(unique(trati)),2] ordem[[i]]=rownames(lsd$groups[as.character(unique(trati)),]) } letra=unlist(duncangrafico) datag=data.frame(letra, ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra } if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anava$Df[6], nrep = nrep, QME = anava$`Mean Sq`[6], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) # sk=sk(respi,trati,anava$Df[6],anava$`Sum Sq`[6],alpha.t) if(transf !="1"){sk$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} } cat("\n-----------------------------------------------------------------\n") cat("Analyzing ", names.fat[2], " inside of the level of ",names.fat[1]) cat("\n-----------------------------------------------------------------\n") cat("\n") des1<-aov(resp~Fator1/Fator2+block) l1<-vector('list',nv1) names(l1)<-names(summary(Fator1)) v<-numeric(0) for(j in 1:nv1) { for(i in 0:(nv2-2)) v<-cbind(v,i*nv1+j) l1[[j]]<-v v<-numeric(0) } des1.tab<-summary(des1,split=list('Fator1:Fator2'=l1))[[1]] nlinhas=nrow(des1.tab) des1.tab=des1.tab[-c(nlinhas),] des1.tab$`F value`=des1.tab$`Mean Sq`/anava$`Mean Sq`[6] des1.tab$`Pr(>F)`=1-pf(des1.tab$`F value`,des1.tab$Df,anava$Df[6]) rn<-numeric(0) for (i in 1:nv1) { rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[1], sep = ""), lf1[i])) } 
rownames(des1.tab)=c(names.fat[1],"Block", paste(names.fat[1],"x",names.fat[2],"+",names.fat[2]), paste(" ",rn)) print(des1.tab) desdobramento2=des1.tab if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] respi=resp[Fator1 == lf1[i]] tukey=TUKEY(respi,trati,anava$Df[6],anava$`Mean Sq`[6],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico1[[i]]=tukey$groups[as.character(unique(trati)),2] } letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if (mcomp == "duncan"){ duncangrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] respi=resp[Fator1 == lf1[i]] duncan=duncan(respi,trati,anava$Df[6],anava$`Mean Sq`[6],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico1[[i]]=duncan$groups[as.character(unique(trati)),2] } letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if (mcomp == "lsd"){ lsdgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] respi=resp[Fator1 == lf1[i]] lsd=LSD(respi,trati,anava$Df[6],anava$`Mean Sq`[6],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico1[[i]]=lsd$groups[as.character(unique(trati)),2] } letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anava$Df[6], nrep = nrep, QME = anava$`Mean Sq`[6], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) 
letra1=toupper(letra1)} } if(quali[1]==FALSE && color=="gray"| quali[2]==FALSE && color=="gray"){ if(quali[2]==FALSE){ Fator2=fator2a#as.numeric(as.character(Fator2)) grafico=polynomial2(Fator2,response,Fator1, grau = grau21, ylab=ylab, xlab=xlab, theme=theme, posi=posi, point=point, textsize=textsize, family=family, ylim=ylim,SSq = anava$`Sum Sq`[6],DFres = anava$Df[6])+ geom_hline(aes(color=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="")} if(quali[2]==TRUE){ Fator1=fator1a grafico=polynomial2(Fator1, response, Fator2, grau = grau12, ylab=ylab, xlab=xlab, theme=theme, posi=posi, point=point, textsize=textsize, family=family, ylim=ylim,SSq = anava$`Sum Sq`[6],DFres = anava$Df[6])+ geom_hline(aes(color=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="")} } if(quali[1]==FALSE && color=="rainbow"| quali[2]==FALSE && color=="rainbow"){ if(quali[2]==FALSE){ Fator2=fator2a grafico=polynomial2_color(Fator2, response, Fator1, grau = grau21, ylab=ylab, xlab=xlab, theme=theme, posi=posi, point=point, textsize=textsize, family=family, ylim=ylim,SSq = anava$`Sum Sq`[6],DFres = anava$Df[6])+ geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="")} if(quali[2]==TRUE){ Fator1=fator1a grafico=polynomial2_color(Fator1, response, Fator2, grau = grau12, ylab=ylab, xlab=xlab, theme=theme, posi=posi, point=point, textsize=textsize, family=family, ylim=ylim,SSq = anava$`Sum Sq`[6],DFres = anava$Df[6])+ geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="")} } if(quali[1] & quali[2]==TRUE){ media=tapply(response,list(Fator1,Fator2), mean, na.rm=TRUE) # desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE) if(point=="mean_sd"){desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE)} 
if(point=="mean_se"){desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE)/ sqrt(tapply(response,list(Fator1,Fator2), length))} graph=data.frame(f1=rep(rownames(media),length(colnames(media))), f2=rep(colnames(media),e=length(rownames(media))), media=as.vector(media), desvio=as.vector(desvio)) limite=graph$media+graph$desvio lim.y=limite[which.max(abs(limite))] if(is.na(ylim[1])==TRUE && lim.y<0){ylim=c(1.5*lim.y,0)} if(is.na(ylim[1])==TRUE && lim.y>0){ylim=c(0,1.5*lim.y)} rownames(graph)=paste(graph$f1,graph$f2) graph=graph[paste(rep(unique(Fator1), e=length(colnames(media))), rep(unique(Fator2),length(rownames(media)))),] graph$letra=letra graph$letra1=letra1 graph$f1=factor(graph$f1,levels = unique(Fator1)) graph$f2=factor(graph$f2,levels = unique(Fator2)) if(addmean==TRUE){graph$numero=paste(format(graph$media,digits = dec), graph$letra,graph$letra1, sep="")} if(addmean==FALSE){graph$numero=paste(graph$letra,graph$letra1, sep="")} f1=graph$f1 f2=graph$f2 media=graph$media desvio=graph$desvio numero=graph$numero colint=ggplot(graph, aes(x=f1, y=media, fill=f2))+ geom_col(position = "dodge",color="black",width = width.column)+ ylab(ylab)+xlab(xlab)+ylim(ylim)+ theme if(errorbar==TRUE){colint=colint+ geom_errorbar(data=graph, aes(ymin=media-desvio, ymax=media+desvio), width=width.bar,color="black", position = position_dodge(width=width.column))} if(errorbar==TRUE){colint=colint+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=numero), position = position_dodge(width=width.column), family = family,angle=angle.label,hjust=hjust,size=labelsize)} if(errorbar==FALSE){colint=colint+ geom_text(aes(y=media+sup,label=numero), position = position_dodge(width=width.column), family = family,angle=angle.label, hjust=hjust,size=labelsize)} colint=colint+theme(text=element_text(size=textsize,family = family), axis.text = element_text(size=textsize,color="black",family = family), axis.title = element_text(size=textsize,color="black",family = family), 
legend.text = element_text(family = family,size = textsize), legend.title = element_text(family = family,size = textsize), legend.position = posi)+labs(fill=legend)+ geom_hline(aes(color=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="") if(CV==TRUE){colint=colint+labs(caption=paste("p-value ", if(anava$`Pr(>F)`[4]<0.0001){paste("<", 0.0001)} else{paste("=", round(anava$`Pr(>F)`[4],4))},"; CV = ", round(abs(sqrt(anava$`Mean Sq`[6])/mean(c(resp,respAd),na.rm=TRUE))*100,2),"%"))} if(angle !=0){colint=colint+ theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} if(color=="gray"){colint=colint+scale_fill_grey()} print(colint) grafico=colint letras=paste(graph$letra, graph$letra1, sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1))))) rownames(matriz)=levels(Fator1) colnames(matriz)=levels(Fator2) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(matriz) message(black("\n\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")")) } } if(anava$`Pr(>F)`[4]>alpha.f){ names(graficos)=c("residplot","graph1","graph2") graficos}else{colints=list(residplot,grafico)} }
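# A hedged standalone sketch (not part of the AgroR API; the helper name
# `transf_sketch` is illustrative only): the `transf` argument of FAT2DIC
# below follows a Box-Cox-style convention. transf = 1 leaves the response
# unchanged, 0 applies log(), 0.5/-0.5/-1 map to sqrt(x), 1/sqrt(x) and 1/x,
# "angular" applies asin(sqrt(x/100)), and any other power p applies
# ((response + constant)^p - 1)/p.
transf_sketch <- function(response, transf = 1, constant = 0) {
  x <- response + constant
  if (identical(transf, "angular")) return(asin(sqrt(x/100)))
  if (transf == 1) return(x)
  if (transf == 0) return(log(x))
  if (transf == 0.5) return(sqrt(x))
  if (transf == -0.5) return(1/sqrt(x))
  if (transf == -1) return(1/x)
  (x^transf - 1)/transf
}
# e.g. transf_sketch(c(1, 4, 9), transf = 0.5) returns c(1, 2, 3)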
#' Analysis: DIC experiments in double factorial
#' @description Analysis of an experiment conducted in a completely randomized design in a double factorial scheme, using fixed-effects analysis of variance.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param f1 Numeric or character vector with factor 1 levels
#' @param f2 Numeric or character vector with factor 2 levels
#' @param response Numeric vector containing the response of the experiment.
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param homog Homogeneity test of variances (\emph{default} is Bartlett)
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott or Duncan)
#' @param quali Defines whether each factor is quantitative or qualitative (\emph{default} is qualitative)
#' @param names.fat Names of the factors
#' @param alpha.f Significance level of the F test (\emph{default} is 0.05)
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param grau Polynomial degree in case of a quantitative factor (\emph{default} is 1). Provide a vector with two elements.
#' @param grau12 Polynomial degree in case of a quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f1 x f2 with qualitative factor 2 and quantitative factor 1.
#' @param grau21 Polynomial degree in case of a quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f2 with qualitative factor 1 and quantitative factor 2.
#' @param transf Applies a data transformation (default is 1, i.e. no transformation; use 0 for log; "angular" for the angular transformation)
#' @param constant Constant added to the response before transformation (enter value)
#' @param geom Graph type ("bar" for columns or "point" for points; simple effects only)
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param ylab Response variable name (accepts the \emph{expression}() function)
#' @param xlab Treatments name (accepts the \emph{expression}() function)
#' @param xlab.factor Provide a vector with two entries giving the x-axis names of factors 1 and 2, respectively, when there is an isolated effect of the factors. This argument uses `parse`.
#' @param legend Legend title name
#' @param fill Defines the chart color (to generate different colors for different treatments, set fill = "trat")
#' @param angle x-axis scale text rotation
#' @param textsize Font size
#' @param labelsize Label size
#' @param dec Number of decimal places (\emph{default} is 3)
#' @param width.column Column width when geom = "bar"
#' @param width.bar Error bar width
#' @param family Font family
#' @param addmean Plot the mean value on the graph (\emph{default} is TRUE)
#' @param errorbar Plot the standard deviation bar on the graph (for segment and column graphs) - \emph{default} is TRUE
#' @param CV Plot the coefficient of variation and the p-value of the ANOVA (\emph{default} is TRUE)
#' @param sup Number of units above the standard deviation bar or mean bar on the graph
#' @param color Column chart color (\emph{default} is "rainbow")
#' @param posi Legend position
#' @param ylim y-axis scale
#' @param point Defines whether the plot shows all points ("all"), the mean ("mean"), the mean with standard deviation (\emph{default}, "mean_sd") or the mean with standard error ("mean_se") when quali = FALSE. For quali = TRUE, "mean_sd" and "mean_se" change which information is displayed in the error bar.
#' @param angle.label Label angle
#' @import ggplot2
#' @importFrom crayon green
#' @importFrom crayon bold
#' @importFrom crayon italic
#' @importFrom crayon red
#' @importFrom crayon blue
#' @import stats
#' @note The order of the chart follows the alphabetical pattern. Please use the `limits` argument of `scale_x_discrete` from package ggplot2 to reorder the x-axis. The bars of the column and segment graphs are the standard deviation.
#' @note The function does not perform multiple regression in the case of two quantitative factors.
#' @note In the final output, when the transformation (transf argument) is different from 1, the columns resp and respO are returned in the mean test, indicating the transformed and non-transformed means, respectively.
#' @return The analysis of variance table, the test of normality of errors (Shapiro-Wilk, Lilliefors, Anderson-Darling, Cramer-von Mises, Pearson or Shapiro-Francia), the test of homogeneity of variances (Bartlett or Levene), the Durbin-Watson test of error independence, and the multiple comparison test (Tukey, LSD, Scott-Knott or Duncan) or, in the case of quantitative treatments, the fit of polynomial regression models up to degree 3. The column chart for qualitative treatments is also returned.
#' @keywords DIC
#' @keywords Factorial
#' @seealso \link{FAT2DIC.ad}
#' @references
#'
#' Steel, R.G.D., Torrie, J.H., Dickey, D.A. 1997. Principles and Procedures of Statistics: A Biometrical Approach. 3rd Edition.
#'
#' Hsu, J.C. 1996. Multiple Comparisons: Theory and Methods. Department of Statistics, The Ohio State University, USA. Chapman & Hall/CRC.
#'
#' Conover, W.J. 1999. Practical Nonparametric Statistics.
#'
#' Ramalho M.A.P., Ferreira D.F., Oliveira A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott, A.J., Knott, M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#'
#' Mendiburu, F. 2019. Package 'agricolae'. R package, version 1-2.
#'
#' @export
#' @examples
#'
#' #====================================
#' # Example cloro
#' #====================================
#' library(AgroR)
#' data(cloro)
#' with(cloro, FAT2DIC(f1, f2, resp, ylab="Number of nodules", legend = "Stages"))
#'
#' #====================================
#' # Example corn
#' #====================================
#' library(AgroR)
#' data(corn)
#' with(corn, FAT2DIC(A, B, Resp, quali=c(TRUE, TRUE), ylab="Height (cm)"))
#' with(corn, FAT2DIC(A, B, Resp, mcomp="sk", quali=c(TRUE, TRUE), ylab="Height (cm)"))
FAT2DIC=function(f1,
                 f2,
                 response,
                 norm="sw",
                 homog="bt",
                 alpha.f=0.05,
                 alpha.t=0.05,
                 quali=c(TRUE,TRUE),
                 names.fat=c("F1", "F2"),
                 mcomp="tukey",
                 grau=c(NA,NA),
                 grau12=NA, # F1/F2
                 grau21=NA, # F2/F1
                 transf=1,
                 constant=0,
                 geom="bar",
                 theme=theme_classic(),
                 ylab="Response",
                 xlab="",
                 xlab.factor=c("F1","F2"),
                 legend="Legend",
                 color="rainbow",
                 fill="lightblue",
                 textsize=12,
                 labelsize=4,
                 addmean=TRUE,
                 errorbar=TRUE,
                 CV=TRUE,
                 dec=3,
                 width.column=0.9,
                 width.bar=0.3,
                 angle=0,
                 posi="right",
                 family="sans",
                 point="mean_sd",
                 sup=NA,
                 ylim=NA,
                 angle.label=0){
  if(angle.label==0){hjust=0.5}else{hjust=0}
  requireNamespace("crayon")
  requireNamespace("ggplot2")
  requireNamespace("nortest")
  # ================================
  # Data transformation
  # ================================
  if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}}
  if(transf==0){resp=log(response+constant)}
  if(transf==0.5){resp=sqrt(response+constant)}
  if(transf==-0.5){resp=1/sqrt(response+constant)}
  if(transf==-1){resp=1/(response+constant)}
  if(transf=="angular"){resp=asin(sqrt((response+constant)/100))}
  if(is.na(sup)){sup=0.1*mean(response)}
  ordempadronizado=data.frame(f1,f2,resp,response)
  resp1=resp
  organiz=data.frame(f1,f2,resp,response)
  organiz=organiz[order(organiz$f2),]
organiz=organiz[order(organiz$f1),] f1=organiz$f1 f2=organiz$f2 response=organiz$response resp=organiz$resp fator1=f1 fator2=f2 fator1a=fator1 fator2a=fator2 Fator1=factor(fator1, levels = unique(fator1)) Fator2=factor(fator2, levels = unique(fator2)) nv1 <- length(summary(Fator1)) nv2 <- length(summary(Fator2)) lf1 <- levels(Fator1) lf2 <- levels(Fator2) # fac.names = c("F1", "F2") fatores <- data.frame(Fator1, Fator2) graph=data.frame(Fator1,Fator2,resp) a=anova(aov(resp~Fator1*Fator2)) b=aov(resp~Fator1*Fator2) ab=anova(aov(response~Fator1*Fator2)) anava=a colnames(anava)=c("GL","SQ","QM","Fcal","p-value") bres=aov(resp~as.factor(f1)*as.factor(f2), data = ordempadronizado) respad=bres$residuals/sqrt(a$`Mean Sq`[4]) out=respad[respad>3 | respad<(-3)] out=names(out) out=if(length(out)==0)("No discrepant point")else{out} if(norm=="sw"){norm1 = shapiro.test(b$res)} if(norm=="li"){norm1=lillie.test(b$residuals)} if(norm=="ad"){norm1=ad.test(b$residuals)} if(norm=="cvm"){norm1=cvm.test(b$residuals)} if(norm=="pearson"){norm1=pearson.test(b$residuals)} if(norm=="sf"){norm1=sf.test(b$residuals)} trat=as.factor(paste(Fator1,Fator2)) c=aov(resp~trat) if(homog=="bt"){ homog1 = bartlett.test(b$res ~ trat) statistic=homog1$statistic phomog=homog1$p.value method=paste("Bartlett test","(",names(statistic),")",sep="") } if(homog=="levene"){ homog1 = levenehomog(c$res~trat)[1,] statistic=homog1$`F value`[1] phomog=homog1$`Pr(>F)`[1] method="Levene's Test (center = median)(F)" names(homog1)=c("Df", "statistic","p.value")} indep = dwtest(b) Ids=ifelse(respad>3 | respad<(-3), "darkblue","black") residplot=ggplot(data=data.frame(respad,Ids), aes(y=respad,x=1:length(respad)))+ geom_point(shape=21,color="gray",fill="gray",size=3)+ labs(x="",y="Standardized residuals")+ geom_text(x=1:length(respad),label=1:length(respad), color=Ids,size=labelsize)+ scale_x_continuous(breaks=1:length(respad))+ theme_classic()+theme(axis.text.y = element_text(size=textsize), axis.text.x = 
element_blank())+ geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Normality of errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""), Statistic=norm1$statistic, "p-value"=norm1$p.value) rownames(normal)="" print(normal) cat("\n") message(if(norm1$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors do not follow a normal distribution"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Homogeneity of Variances"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) homoge=data.frame(Method=method, Statistic=statistic, "p-value"=phomog) rownames(homoge)="" print(homoge) cat("\n") message(if(homog1$p.value[1]>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, the variances are not homogeneous"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Independence from errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) indepe=data.frame(Method=paste(indep$method,"(", names(indep$statistic),")",sep=""), Statistic=indep$statistic, "p-value"=indep$p.value) rownames(indepe)="" print(indepe) cat("\n") message(if(indep$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors are not independent"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Additional Information"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(paste("\nCV (%) = ",round(sqrt(a$`Mean Sq`[4])/mean(resp,na.rm=TRUE)*100,2))) cat(paste("\nMean = ",round(mean(response,na.rm=TRUE),4))) cat(paste("\nMedian = ",round(median(response,na.rm=TRUE),4))) cat("\nPossible outliers = ", out) cat("\n") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Analysis of Variance"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) anava1=as.matrix(data.frame(anava)) colnames(anava1)=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)" ) rownames(anava1)=c(names.fat[1],names.fat[2], paste(names.fat[1],"x",names.fat[2]),"Residuals") print(anava1,na.print = "") cat("\n") if(transf==1 && norm1$p.value<0.05 | transf==1 && indep$p.value<0.05 | transf==1 && homog1$p.value<0.05){ message("\nYour analysis is not valid, suggests using a non-parametric test and try to transform the data\n")}else{} if(transf != 1 && norm1$p.value<0.05 | transf!=1 && indep$p.value<0.05 | 
transf!=1 && homog1$p.value<0.05){ message("\nYour analysis is not valid\n")}else{} message(if(transf !=1){blue("NOTE: resp = transformed means; respO = averages without transforming\n")}) if (a$`Pr(>F)`[3] > alpha.f) { cat(green(bold("-----------------------------------------------------------------\n"))) cat(green(bold("No significant interaction"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) fatores <- data.frame(Fator1 = factor(fator1), Fator2 = factor(fator2)) fatoresa <- data.frame(Fator1 = fator1a, Fator2 = fator2a) graficos=list(1,2,3) for (i in 1:2) {if (a$`Pr(>F)`[i] <= alpha.f) {cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(bold(names.fat[i])) cat(green(bold("\n-----------------------------------------------------------------\n"))) if(quali[i]==TRUE){ if(mcomp=="tukey"){ letra <- TUKEY(b, colnames(fatores[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ letra <- LSD(b, colnames(fatores[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fatores[i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = a$Df[4], nrep = nrep, QME = a$`Mean Sq`[4], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i], mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ letra <- duncan(b, colnames(fatores[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ 
if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(italic(paste("Multiple Comparison Test:",teste,"\n")))) print(letra1) ordem=unique(as.vector(unlist(fatores[i]))) if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[ordem]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[ordem]} dadosm=data.frame(letra1[ordem,], media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[ordem], desvio=desvio) dadosm$trats=factor(rownames(dadosm),levels = ordem) dadosm$limite=dadosm$media+dadosm$desvio lim.y=dadosm$limite[which.max(abs(dadosm$limite))] if(is.na(ylim[1])==TRUE && lim.y<0){ylim=c(1.5*lim.y,0)} if(is.na(ylim[1])==TRUE && lim.y>0){ylim=c(0,1.5*lim.y)} if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio trats=dadosm$trats letra=dadosm$letra if(geom=="bar"){grafico=ggplot(dadosm, aes(x=trats, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=trats),color=1,width = width.column)} else{grafico=grafico+ geom_col(aes(fill=trats), fill=fill,color=1,width = width.column)} grafico=grafico+theme+ylab(ylab)+xlab(parse(text = xlab.factor[i]))+ylim(ylim) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+ sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=width.bar)} if(angle !=0){grafico=grafico+theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} grafico=grafico+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = 
element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family), legend.position = "none")} if(geom=="point"){grafico=ggplot(dadosm, aes(x=trats, y=media)) if(fill=="trat"){grafico=grafico+ geom_point(aes(color=trats),size=5)} else{grafico=grafico+ geom_point(aes(color=trats),fill="gray",pch=21,color="black",size=5)} grafico=grafico+theme+ylab(ylab)+xlab(parse(text = xlab.factor[i]))+ylim(ylim) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+ if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black", width=width.bar)} if(angle !=0){grafico=grafico+theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} grafico=grafico+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family), legend.position = "none")} if(CV==TRUE){grafico=grafico+labs(caption=paste("p-value = ", if(a$`Pr(>F)`[i]<0.0001){paste("<", 0.0001)} else{paste("=", round(a$`Pr(>F)`[i],4))},"; CV = ", round(abs(sqrt(a$`Mean Sq`[4])/mean(resp))*100,2),"%"))} if(color=="gray"){grafico=grafico+scale_fill_grey()} cat("\n\n") } # Regression if(quali[i]==FALSE){ dose=as.vector(unlist(fatoresa[i])) grafico=polynomial(dose, response, grau = grau[i], ylab=ylab, xlab=parse(text = xlab.factor[i]), posi=posi, theme=theme, textsize=textsize, point=point, family=family, SSq=ab$`Sum Sq`[4], DFres = ab$Df[4]) grafico=grafico[[1]]} graficos[[i+1]]=grafico}} graficos[[1]]=residplot if(a$`Pr(>F)`[1]>=alpha.f && a$`Pr(>F)`[2] <alpha.f){ 
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green("Isolated factor 1 not significant"))
cat(green(bold("\n-----------------------------------------------------------------\n")))
d1=data.frame(tapply(response,fator1,mean, na.rm=TRUE))
colnames(d1)="Mean"
print(d1)
}
if(a$`Pr(>F)`[1]<alpha.f && a$`Pr(>F)`[2] >=alpha.f){
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green("Isolated factor 2 not significant"))
cat(green(bold("\n-----------------------------------------------------------------\n")))
d1=data.frame(tapply(response,fator2,mean, na.rm=TRUE))
colnames(d1)="Mean"
print(d1)}
if(a$`Pr(>F)`[1]>=alpha.f && a$`Pr(>F)`[2] >=alpha.f){
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green("Isolated factors not significant"))
cat(green(bold("\n-----------------------------------------------------------------\n")))
print(tapply(response,list(fator1,fator2),mean, na.rm=TRUE))}
}
if (a$`Pr(>F)`[3] <= alpha.f) {
fatores <- data.frame(Fator1, Fator2)
cat(green(bold("-----------------------------------------------------------------\n")))
cat(green(bold("Significant interaction: analyzing the interaction")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat("\n-----------------------------------------------------------------\n")
cat("Analyzing ", names.fat[1], " inside of each level of ",names.fat[2])
cat("\n-----------------------------------------------------------------\n")
cat("\n")
des1<-aov(resp~Fator2/Fator1)
l1<-vector('list',nv2)
names(l1)<-names(summary(Fator2))
v<-numeric(0)
for(j in 1:nv2) {
for(i in 0:(nv1-2)) v<-cbind(v,i*nv2+j)
l1[[j]]<-v
v<-numeric(0) }
rn<-numeric(0)
for (j in 1:nv2) { rn <- c(rn, paste(paste(names.fat[1], ":", names.fat[2], sep = ""), lf2[j])) }
des1.tab<-summary(des1,split=list('Fator2:Fator1'=l1))[[1]]
rownames(des1.tab)=c(names.fat[2],
paste(names.fat[1],"x",names.fat[2],"+",names.fat[1]), paste(" ",rn),"Residuals") print(des1.tab) desdobramento1=des1.tab if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico[[i]]=tukey$groups[as.character(unique(trati)),2] ordem[[i]]=rownames(tukey$groups[as.character(unique(trati)),]) } letra=unlist(tukeygrafico) datag=data.frame(letra, ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra } if (mcomp == "duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico[[i]]=duncan$groups[as.character(unique(trati)),2] ordem[[i]]=rownames(duncan$groups[as.character(unique(trati)),]) } letra=unlist(duncangrafico) datag=data.frame(letra, ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra } if (mcomp == "lsd"){ duncangrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(lsd$groups)]} duncangrafico[[i]]=lsd$groups[as.character(unique(trati)),2] ordem[[i]]=rownames(lsd$groups[as.character(unique(trati)),]) } letra=unlist(duncangrafico) datag=data.frame(letra, ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra } if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 
1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = a$Df[4], nrep = nrep, QME = a$`Mean Sq`[4], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} } cat("\n-----------------------------------------------------------------\n") cat("Analyzing ", names.fat[2], " inside of the level of ",names.fat[1]) cat("\n-----------------------------------------------------------------\n") cat("\n") des1<-aov(resp~Fator1/Fator2) l1<-vector('list',nv1) names(l1)<-names(summary(Fator1)) v<-numeric(0) for(j in 1:nv1) { for(i in 0:(nv2-2)) v<-cbind(v,i*nv1+j) l1[[j]]<-v v<-numeric(0) } rn<-numeric(0) for (i in 1:nv1) { rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[1], sep = ""), lf1[i])) } des1.tab<-summary(des1,split=list('Fator1:Fator2'=l1))[[1]] rownames(des1.tab)=c(names.fat[1], paste(names.fat[1],"x",names.fat[2],"+",names.fat[2]), paste(" ",rn),"Residuals") print(des1.tab) desdobramento2=des1.tab if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] respi=resp[Fator1 == lf1[i]] tukey=TUKEY(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico1[[i]]=tukey$groups[as.character(unique(trati)),2] } letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if (mcomp == "duncan"){ duncangrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] 
respi=resp[Fator1 == lf1[i]] duncan=duncan(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico1[[i]]=duncan$groups[as.character(unique(trati)),2] } letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if (mcomp == "lsd"){ lsdgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] respi=resp[Fator1 == lf1[i]] lsd=LSD(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico1[[i]]=lsd$groups[as.character(unique(trati)),2] } letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = a$Df[4], nrep = nrep, QME = a$`Mean Sq`[4], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) # sk=sk(respi,trati,a$Df[4], a$`Sum Sq`[4],alpha.t) if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati, mean, na.rm=TRUE)[rownames(sk)]} skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) letra1=toupper(letra1)} } if(quali[1]==FALSE && color=="gray"| quali[2]==FALSE && color=="gray"){ if(quali[2]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(tukey$groups) }} if (mcomp == "duncan"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] 
duncan=duncan(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t)
if(transf !="1"){duncan$groups$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F1 within level",lf2[i],"of F2")
cat("\n----------------------\n")
print(duncan$groups)}}
if (mcomp == "lsd"){
for (i in 1:nv2) {
trati=fatores[, 1][Fator2 == lf2[i]]
respi=resp[Fator2 == lf2[i]]
lsd=LSD(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t)
if(transf !="1"){lsd$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(lsd$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F1 within level",lf2[i],"of F2")
cat("\n----------------------\n")
print(lsd$groups)} }
if (mcomp == "sk"){
for (i in 1:nv2) {
trati=fatores[, 1][Fator2 == lf2[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator2 == lf2[i]]
nrep=table(trati)[1]
medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
sk=scottknott(means = medias, df1 = a$Df[4], nrep = nrep, QME = a$`Mean Sq`[4], alpha = alpha.t)
sk=data.frame(respi=medias,groups=sk)
if(transf !="1"){sk$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(sk)]}
cat("\n----------------------\n")
cat("Multiple comparison of F1 within level",lf2[i],"of F2")
cat("\n----------------------\n")
print(sk) }} }
if(quali[2]==FALSE){
Fator2a=fator2a
grafico=polynomial2(Fator2a, response, Fator1, grau = grau21, ylab=ylab, xlab=xlab, theme=theme, posi=posi, point=point, textsize=textsize, family=family, ylim=ylim, SSq=ab$`Sum Sq`[4], DFres = ab$Df[4])
if(quali[1]==FALSE & quali[2]==FALSE){
graf=list(grafico,NA)} }
if(quali[1]==FALSE){
if (mcomp == "tukey"){
for (i in 1:nv1) {
trati=fatores[, 2][Fator1 == lf1[i]]
respi=resp[Fator1 == lf1[i]]
tukey=TUKEY(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t)
if(transf !="1"){tukey$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(tukey$groups) }} if (mcomp == "duncan"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] respi=resp[Fator1 == lf1[i]] duncan=duncan(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] respi=resp[Fator1 == lf1[i]] lsd=LSD(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico1[[i]]=lsd$groups[as.character(unique(trati)),2] cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = a$Df[4], nrep = nrep, QME = a$`Mean Sq`[4], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati, mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(sk) }} } if(quali[1]==FALSE){ Fator1a=fator1a grafico=polynomial2(Fator1a, response, Fator2, grau = grau12, ylab=ylab, xlab=xlab, theme=theme, posi=posi, point=point, textsize=textsize, family=family, ylim=ylim, SSq=ab$`Sum Sq`[4], DFres = ab$Df[4]) if(quali[1]==FALSE & quali[2]==FALSE){ graf[[2]]=grafico grafico=graf} } } 
if(quali[1]==FALSE && color=="rainbow"| quali[2]==FALSE && color=="rainbow"){
if(quali[2]==FALSE){
if (mcomp == "tukey"){
for (i in 1:nv2) {
trati=fatores[, 1][Fator2 == lf2[i]]
respi=resp[Fator2 == lf2[i]]
tukey=TUKEY(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t)
if(transf !="1"){tukey$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(tukey$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F1 within level",lf2[i],"of F2")
cat("\n----------------------\n")
print(tukey$groups) }}
if (mcomp == "duncan"){
for (i in 1:nv2) {
trati=fatores[, 1][Fator2 == lf2[i]]
respi=resp[Fator2 == lf2[i]]
duncan=duncan(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t)
if(transf !="1"){duncan$groups$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F1 within level",lf2[i],"of F2")
cat("\n----------------------\n")
print(duncan$groups)}}
if (mcomp == "lsd"){
for (i in 1:nv2) {
trati=fatores[, 1][Fator2 == lf2[i]]
respi=resp[Fator2 == lf2[i]]
lsd=LSD(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t)
if(transf !="1"){lsd$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(lsd$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F1 within level",lf2[i],"of F2")
cat("\n----------------------\n")
print(lsd$groups)} }
if (mcomp == "sk"){
for (i in 1:nv2) {
trati=fatores[, 1][Fator2 == lf2[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator2 == lf2[i]]
nrep=table(trati)[1]
medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
sk=scottknott(means = medias, df1 = a$Df[4], nrep = nrep, QME = a$`Mean Sq`[4], alpha = alpha.t)
sk=data.frame(respi=medias,groups=sk)
if(transf !="1"){sk$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(sk)]}
cat("\n----------------------\n")
cat("Multiple comparison of F1 within level",lf2[i],"of F2")
cat("\n----------------------\n")
print(sk) }} } if(quali[2]==FALSE){ Fator2=fator2a grafico=polynomial2_color(Fator2, response, Fator1, grau = grau21, ylab=ylab, xlab=xlab, theme=theme, posi=posi, point=point, textsize=textsize, family=family, ylim=ylim, SSq=ab$`Sum Sq`[4], DFres = ab$Df[4]) if(quali[1]==FALSE & quali[2]==FALSE){ graf=list(grafico,NA)} } if(quali[1]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] respi=resp[Fator1 == lf1[i]] tukey=TUKEY(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(tukey$groups) }} if (mcomp == "duncan"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] respi=resp[Fator1 == lf1[i]] duncan=duncan(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] respi=resp[Fator1 == lf1[i]] lsd=LSD(respi,trati,a$Df[4],a$`Mean Sq`[4],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = a$Df[4], nrep = nrep, QME = a$`Mean Sq`[4], alpha = alpha.t) 
sk=data.frame(respi=medias,groups=sk) if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati, mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(sk) }} } if(quali[1]==FALSE){ Fator1a=fator1a#as.numeric(as.character(Fator1)) grafico=polynomial2_color(Fator1a, response, Fator2, grau = grau12, ylab=ylab, xlab=xlab, theme=theme, posi=posi, point=point, textsize=textsize, family=family, ylim=ylim, SSq=ab$`Sum Sq`[4], DFres = ab$Df[4]) if(quali[1]==FALSE & quali[2]==FALSE){ graf[[2]]=grafico grafico=graf} } } if(quali[1] & quali[2]==TRUE){ media=tapply(response,list(Fator1,Fator2), mean, na.rm=TRUE) if(point=="mean_sd"){desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE)/ sqrt(tapply(response,list(Fator1,Fator2), length))} graph=data.frame(f1=rep(rownames(media),length(colnames(media))), f2=rep(colnames(media),e=length(rownames(media))), media=as.vector(media), desvio=as.vector(desvio)) limite=graph$media+graph$desvio lim.y=limite[which.max(abs(limite))] if(is.na(ylim[1])==TRUE && lim.y<0){ylim=c(1.5*lim.y,0)} if(is.na(ylim[1])==TRUE && lim.y>0){ylim=c(0,1.5*lim.y)} rownames(graph)=paste(graph$f1,graph$f2) graph=graph[paste(rep(unique(Fator1), e=length(colnames(media))), rep(unique(Fator2),length(rownames(media)))),] graph$letra=letra graph$letra1=letra1 graph$f1=factor(graph$f1,levels = unique(Fator1)) graph$f2=factor(graph$f2,levels = unique(Fator2)) if(addmean==TRUE){graph$numero=paste(format(graph$media,digits = dec), graph$letra,graph$letra1, sep="")} if(addmean==FALSE){graph$numero=paste(graph$letra,graph$letra1, sep="")} f1=graph$f1 f2=graph$f2 media=graph$media desvio=graph$desvio numero=graph$numero colint=ggplot(graph, aes(x=f1, y=media, fill=f2))+ geom_col(position = "dodge",color="black",width = width.column)+ ylab(ylab)+xlab(xlab)+ylim(ylim)+ 
theme if(errorbar==TRUE){colint=colint+ geom_errorbar(data=graph, aes(ymin=media-desvio, ymax=media+desvio), width=width.bar,color="black", position = position_dodge(width = width.column))} if(errorbar==TRUE){colint=colint+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=numero), position = position_dodge(width=width.column), family = family,angle=angle.label,hjust=hjust,size=labelsize)} if(errorbar==FALSE){colint=colint+ geom_text(aes(y=media+sup,label=numero), position = position_dodge(width=width.column), family = family,angle=angle.label, hjust=hjust,size=labelsize)} colint=colint+theme(text=element_text(size=textsize,family = family), axis.text = element_text(size=textsize,color="black",family = family), axis.title = element_text(size=textsize,color="black",family = family), legend.text = element_text(family = family), legend.title = element_text(family = family), legend.position = posi)+labs(fill=legend) if(CV==TRUE){colint=colint+labs(caption=paste("p-value ", if(a$`Pr(>F)`[3]<0.0001){paste("<", 0.0001)} else{paste("=", round(a$`Pr(>F)`[3],4))},"; CV = ", round(abs(sqrt(a$`Mean Sq`[4])/mean(resp))*100,2),"%"))} if(angle !=0){colint=colint+ theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} if(color=="gray"){colint=colint+scale_fill_grey()} print(colint) grafico=colint letras=paste(graph$letra, graph$letra1, sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1))))) rownames(matriz)=levels(Fator1) colnames(matriz)=levels(Fator2) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(matriz) message(black("\n\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")")) } } if(a$`Pr(>F)`[3]>alpha.f){ 
names(graficos)=c("residplot","graph1","graph2") graficos}else{ colints=list(residplot,grafico)} }
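# ---------------------------------------------------------------------------
# Usage sketch (illustrative comment, not part of the exported package code):
# a minimal call to FAT2DIC, assuming the 'cloro' example data set shipped
# with AgroR, in which 'f1' and 'f2' hold the factor levels and 'resp' the
# response. With transf = 1 (the default) no transformation is applied, and
# mcomp = "tukey" selects the Tukey HSD post-hoc test.
#
#   library(AgroR)
#   data(cloro)
#   with(cloro, FAT2DIC(f1, f2, resp, ylab = "Number of nodules",
#                       legend = "Stages", mcomp = "tukey"))
# ---------------------------------------------------------------------------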
#' Analysis: DIC experiment in double factorial design with an additional treatment #' @description Analysis of an experiment conducted in a completely randomized design in a double factorial scheme using analysis of variance of fixed effects. #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @param f1 Numeric or complex vector with factor 1 levels #' @param f2 Numeric or complex vector with factor 2 levels #' @param repe Numeric or complex vector with repetitions #' @param response Numerical vector containing the response of the experiment. #' @param responseAd Numerical vector with additional treatment responses #' @param norm Error normality test (\emph{default} is Shapiro-Wilk) #' @param homog Homogeneity test of variances (\emph{default} is Bartlett) #' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD and Duncan) #' @param quali Defines whether the factor is quantitative or qualitative (\emph{qualitative}) #' @param names.fat Name of factors #' @param alpha.f Level of significance of the F test (\emph{default} is 0.05) #' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05) #' @param grau Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with two elements. #' @param grau12 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f1 x f2 and qualitative factor 2 and quantitative factor 1. #' @param grau21 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f2 and qualitative factor 1 and quantitative factor 2. 
#' @param transf Applies data transformation (default is 1; for log consider 0; `angular` for angular transformation)
#' @param constant Add a constant for transformation (enter value)
#' @param geom Graph type (columns or segments; for simple effects only)
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param ylab Variable response name (Accepts the \emph{expression}() function)
#' @param xlab Treatments name (Accepts the \emph{expression}() function)
#' @param xlab.factor Provide a vector with two observations referring to the x-axis name of factors 1 and 2, respectively, when there is an isolated effect of the factors. This argument uses `parse`.
#' @param legend Legend title name
#' @param ad.label Additional treatment label
#' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat")
#' @param angle x-axis scale text rotation
#' @param textsize Font size
#' @param labelsize Label size
#' @param dec Number of digits displayed for the means
#' @param width.column Column width when geom = "bar"
#' @param width.bar Error bar width
#' @param family Font family
#' @param addmean Plot the average value on the graph (\emph{default} is TRUE)
#' @param errorbar Plot the standard deviation bar on the graph (in the case of segment and column graphs) - \emph{default} is TRUE
#' @param CV Plotting the coefficient of variation and p-value of Anova (\emph{default} is TRUE)
#' @param sup Number of units above the standard deviation or average bar on the graph
#' @param color Column chart color (\emph{default} is "rainbow")
#' @param posi Legend position
#' @param ylim y-axis scale
#' @param point This function defines whether the point must have all points ("all"), mean ("mean"), standard deviation (\emph{default} - "mean_sd") or mean with standard error ("mean_se") if quali = FALSE. For quali = TRUE, `mean_sd` and `mean_se` change which information will be displayed in the error bar.
#' @param angle.label label angle #' @import ggplot2 #' @importFrom crayon green #' @importFrom crayon bold #' @importFrom crayon italic #' @importFrom crayon red #' @importFrom crayon blue #' @import stats #' @note The order of the chart follows the alphabetical pattern. Please use `scale_x_discrete` from package ggplot2, `limits` argument to reorder x-axis. The bars of the column and segment graphs are standard deviation. #' @note The function does not perform multiple regression in the case of two quantitative factors. #' @note The assumptions of variance analysis disregard additional treatment #' @note In the final output when transformation (transf argument) is different from 1, the columns resp and respo in the mean test are returned, indicating transformed and non-transformed mean, respectively. #' @return The table of analysis of variance, the test of normality of errors (Shapiro-Wilk, Lilliefors, Anderson-Darling, Cramer-von Mises, Pearson and Shapiro-Francia), the test of homogeneity of variances (Bartlett or Levene), the test of independence of Durbin-Watson errors, the test of multiple comparisons (Tukey, LSD, Scott-Knott or Duncan) or adjustment of regression models up to grade 3 polynomial, in the case of quantitative treatments. The column chart for qualitative treatments is also returned. #' @keywords DIC #' @keywords Factorial #' @keywords Aditional #' @seealso \link{FAT2DIC} #' @seealso \link{dunnett} #' @references #' #' Principles and procedures of statistics a biometrical approach Steel & Torry & Dickey. Third Edition 1997 #' #' Multiple comparisons theory and methods. Departament of statistics the Ohio State University. USA, 1996. Jason C. Hsu. Chapman Hall/CRC. #' #' Practical Nonparametrics Statistics. W.J. Conover, 1999 #' #' Ramalho M.A.P., Ferreira D.F., Oliveira A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA. #' #' Scott R.J., Knott M. 1974. 
A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#'
#' Mendiburu, F., & de Mendiburu, M. F. (2019). Package ‘agricolae’. R Package, Version, 1-2.
#'
#' @export
#' @examples
#' library(AgroR)
#' data(cloro)
#' respAd=c(268, 322, 275, 350, 320)
#' with(cloro, FAT2DIC.ad(f1, f2, bloco, resp, respAd, ylab="Number of nodules", legend = "Stages"))
#'
FAT2DIC.ad=function(f1, f2, repe, response, responseAd,
norm="sw", homog="bt",
alpha.f=0.05, alpha.t=0.05,
quali=c(TRUE,TRUE), names.fat=c("F1","F2"), mcomp="tukey",
grau=c(NA,NA),
grau12=NA, # F1/F2
grau21=NA, # F2/F1
transf=1, constant=0,
geom="bar", theme=theme_classic(),
ylab="Response", xlab="", xlab.factor=c("F1","F2"),
legend="Legend", ad.label="Additional",
color="rainbow", fill="lightblue",
textsize=12, labelsize=4,
addmean=TRUE, errorbar=TRUE, CV=TRUE, dec=3,
width.column=0.9, width.bar=0.3,
angle=0, posi="right", family="sans",
point="mean_sd", sup=NA, ylim=NA, angle.label=0){
if(angle.label==0){hjust=0.5}else{hjust=0}
requireNamespace("crayon")
requireNamespace("ggplot2")
requireNamespace("nortest")
# ================================
# Data transformation
# ================================
if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}}
# if(transf==1){resp=response+constant}else{resp=((response+constant)^transf-1)/transf}
if(transf==0){resp=log(response+constant)}
if(transf==0.5){resp=sqrt(response+constant)}
if(transf==-0.5){resp=1/sqrt(response+constant)}
if(transf==-1){resp=1/(response+constant)}
if(transf=="angular"){resp=asin(sqrt((response+constant)/100))}
if(transf==1){respAd=responseAd+constant}else{if(transf!="angular"){respAd=((responseAd+constant)^transf-1)/transf}}
# if(transf==1){respAd=responseAd+constant}else{respAd=((responseAd+constant)^transf-1)/transf}
if(transf==0){respAd=log(responseAd+constant)}
if(transf==0.5){respAd=sqrt(responseAd+constant)}
if(transf==-0.5){respAd=1/sqrt(responseAd+constant)} if(transf==-1){respAd=1/(responseAd+constant)} if(transf=="angular"){respAd=asin(sqrt((responseAd+constant)/100))} ordempadronizado=data.frame(f1,f2,resp,response) resp1=resp organiz=data.frame(f1,f2,resp,response) organiz=organiz[order(organiz$f2),] organiz=organiz[order(organiz$f1),] f1=organiz$f1 f2=organiz$f2 response=organiz$response resp=organiz$resp fator1=f1 fator2=f2 fator1a=fator1 fator2a=fator2 repe=as.factor(repe) if(is.na(sup==TRUE)){sup=0.1*mean(response)} Fator1=factor(fator1, levels = unique(fator1)) Fator2=factor(fator2, levels = unique(fator2)) nv1 <- length(summary(Fator1)) nv2 <- length(summary(Fator2)) lf1 <- levels(Fator1) lf2 <- levels(Fator2) fatores <- cbind(fator1, fator2) J = length(respAd) n.trat2 <- nv1 * nv2 anavaF2 <- summary(aov(resp ~ Fator1 * Fator2)) anava=anavaF2[[1]][c(1:3),] col1 <- numeric(0) for (i in 1:n.trat2) { col1 <- c(col1, rep(i, J)) } col1 <- c(col1, rep("ad", J)) col2 <- c(repe, rep(1:J)) col3 <- c(resp, respAd) tabF2ad <- data.frame(TRAT2 = col1, REP = col2, RESP2 = col3) TRAT2 <- factor(tabF2ad[, 1]) anavaf1 <- aov(tabF2ad[, 3] ~ TRAT2) anavaTr <- summary(anavaf1)[[1]] anava1=rbind(anava,anavaTr) anava1$Df[4]=1 anava1$`Sum Sq`[4]=anava1$`Sum Sq`[4]-sum(anava1$`Sum Sq`[c(1:3)]) anava1$`Mean Sq`[4]=anava1$`Sum Sq`[4]/anava1$Df[4] anava1$`F value`[1:4]=anava1$`Mean Sq`[1:4]/anava1$`Mean Sq`[5] for(i in 1:nrow(anava1)-1){ anava1$`Pr(>F)`[i]=1-pf(anava1$`F value`[i],anava1$Df[i],anava1$Df[5]) } rownames(anava1)[4]="Ad x Factorial" anava=anava1 b=aov(resp ~ as.factor(f1) * as.factor(f2),data = ordempadronizado) an=anova(b) respad=b$residuals/sqrt(an$`Mean Sq`[4]) out=respad[respad>3 | respad<(-3)] out=names(out) out=if(length(out)==0)("No discrepant point")else{out} if(norm=="sw"){norm1 = shapiro.test(b$res)} if(norm=="li"){norm1=lillie.test(b$residuals)} if(norm=="ad"){norm1=ad.test(b$residuals)} if(norm=="cvm"){norm1=cvm.test(b$residuals)} 
if(norm=="pearson"){norm1=pearson.test(b$residuals)} if(norm=="sf"){norm1=sf.test(b$residuals)} trat=as.factor(paste(Fator1,Fator2)) c=aov(resp~trat) if(homog=="bt"){ homog1 = bartlett.test(b$res ~ trat) statistic=homog1$statistic phomog=homog1$p.value method=paste("Bartlett test","(",names(statistic),")",sep="") } if(homog=="levene"){ homog1 = levenehomog(c$res~trat) statistic=homog1$`F value`[1] phomog=homog1$`Pr(>F)`[1] method="Levene's Test (center = median)(F)" names(homog1)=c("Df", "statistic","p.value")} indep = dwtest(b) Ids=ifelse(respad>3 | respad<(-3), "darkblue","black") residplot=ggplot(data=data.frame(respad,Ids), aes(y=respad,x=1:length(respad)))+ geom_point(shape=21,color="gray",fill="gray",size=3)+ labs(x="",y="Standardized residuals")+ geom_text(x=1:length(respad),label=1:length(respad), color=Ids,size=labelsize)+ scale_x_continuous(breaks=1:length(respad))+ theme_classic()+theme(axis.text.y = element_text(size=textsize), axis.text.x = element_blank())+ geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1) # print(residplot) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Normality of errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""), Statistic=norm1$statistic, "p-value"=norm1$p.value) rownames(normal)="" print(normal) cat("\n") message(if(norm1$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors do not follow a normal distribution"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Homogeneity of Variances"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) homoge=data.frame(Method=method, Statistic=statistic, "p-value"=phomog) rownames(homoge)="" print(homoge) cat("\n") message(if(homog1$p.value[1]>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Independence from errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) indepe=data.frame(Method=paste(indep$method,"(", names(indep$statistic),")",sep=""), Statistic=indep$statistic, "p-value"=indep$p.value) rownames(indepe)="" print(indepe) cat("\n") message(if(indep$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors are not independent"})
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Additional Information")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(paste("\nCV (%) = ",round(sqrt(anava$`Mean Sq`[5])/mean(c(resp,respAd),na.rm=TRUE)*100,2)))
cat(paste("\nMean Factorial = ",round(mean(response,na.rm=TRUE),4)))
cat(paste("\nMedian Factorial = ",round(median(response,na.rm=TRUE),4)))
cat(paste("\nMean Additional = ",round(mean(responseAd,na.rm=TRUE),4)))
cat(paste("\nMedian Additional = ",round(median(responseAd,na.rm=TRUE),4)))
cat("\nPossible outliers = ", out)
cat("\n")
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Analysis of Variance")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
anava1=as.matrix(data.frame(anava))
colnames(anava1)=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)")
rownames(anava1)=c(names.fat[1],names.fat[2],
paste(names.fat[1],"x",names.fat[2]),"Ad x Factorial","Residuals")
print(anava1,na.print = "")
cat("\n")
message(if(anava$`Pr(>F)`[4]<alpha.f){
"The additional treatment differs from the factorial by the F test"}else{
"The additional treatment does not differ from the factorial by the F test"})
if(transf==1 && norm1$p.value<0.05 | transf==1 && indep$p.value<0.05 | transf==1 && homog1$p.value<0.05){
message("\nYour analysis is not valid; consider a non-parametric test or transforming the data\n")}else{}
if(transf != 1 && norm1$p.value<0.05 | transf!=1 && indep$p.value<0.05 | transf!=1 && homog1$p.value<0.05){
message("\nYour analysis is not valid; consider using the function FATDIC.art\n")}else{}
message(if(transf !=1){blue("NOTE: resp = transformed means; respO = means without transformation\n")})
if (anava$`Pr(>F)`[3] > alpha.f) {
cat(green(bold("-----------------------------------------------------------------\n")))
cat(green(bold("No significant interaction")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
fatores <- data.frame(Fator1 = factor(fator1), Fator2 = factor(fator2))
fatoresa <- data.frame(Fator1 = fator1a, Fator2 = fator2a)
graficos=list(1,2,3)
for (i in 1:2) {
if (anava$`Pr(>F)`[i] <= alpha.f) {
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(bold(names.fat[i]))
cat(green(bold("\n-----------------------------------------------------------------\n")))
if(quali[i]==TRUE){
## Tukey
if(mcomp=="tukey"){
letra <- TUKEY(resp, fatores[,i], anava$Df[5],anava$`Mean Sq`[5], alpha=alpha.t)
letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
if(mcomp=="lsd"){
letra <- LSD(resp, fatores[,i], anava$Df[5],anava$`Mean Sq`[5],alpha=alpha.t)
letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
if (mcomp == "sk"){
nrep=table(fatores[i])[1]
medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE)
sk=scottknott(means = medias, df1 = anava$Df[5], nrep = nrep, QME = anava$`Mean Sq`[5], alpha = alpha.t)
letra1=data.frame(resp=medias,groups=sk)
if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
if(mcomp=="duncan"){
letra <- duncan(resp, fatores[,i],anava$Df[5],anava$`Mean Sq`[5], alpha=alpha.t)
letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
teste=if(mcomp=="tukey"){"Tukey HSD"}else{
if(mcomp=="sk"){"Scott-Knott"}else{
if(mcomp=="lsd"){"LSD-Fisher"}else{
if(mcomp=="duncan"){"Duncan"}}}}
cat(green(italic(paste("Multiple Comparison Test:",teste,"\n"))))
print(letra1)
ordem=unique(as.vector(unlist(fatores[i])))
#===================================================== if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[ordem]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[ordem]} dadosm=data.frame(letra1[ordem,], media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[ordem], desvio=desvio) dadosm$trats=factor(rownames(dadosm),levels = ordem) dadosm$limite=dadosm$media+dadosm$desvio lim.y=dadosm$limite[which.max(abs(dadosm$limite))] if(is.na(ylim[1])==TRUE && lim.y<0){ylim=c(1.5*lim.y,0)} if(is.na(ylim[1])==TRUE && lim.y>0){ylim=c(0,1.5*lim.y)} if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio trats=dadosm$trats letra=dadosm$letra if(geom=="bar"){grafico=ggplot(dadosm, aes(x=trats, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=trats),color=1,width = width.column)} else{grafico=grafico+ geom_col(aes(fill=trats), fill=fill,color=1,width = width.column)} grafico=grafico+theme+ylab(ylab)+xlab(parse(text = xlab.factor[i]))+ylim(ylim) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+ sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=width.bar)} if(angle !=0){grafico=grafico+theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} grafico=grafico+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family))} if(geom=="point"){grafico=ggplot(dadosm, 
aes(x=trats, y=media)) if(fill=="trat"){grafico=grafico+ geom_point(aes(color=trats),size=5)} else{grafico=grafico+ geom_point(aes(color=trats),fill="gray",pch=21,color="black",size=5)} grafico=grafico+theme+ylab(ylab)+xlab(parse(text = xlab.factor[i]))+ylim(ylim) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+ if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black", width=width.bar)} if(angle !=0){grafico=grafico+theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} grafico=grafico+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family))} grafico=grafico+ geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="") if(CV==TRUE){grafico=grafico+labs(caption=paste("p-value = ", if(anava$`Pr(>F)`[i]<0.0001){paste("<", 0.0001)} else{paste("=", round(anava$`Pr(>F)`[i],4))},"; CV = ", round(abs(sqrt(anava$`Mean Sq`[5])/mean(c(resp,respAd),na.rm=TRUE))*100,2),"%"))} if(color=="gray"){grafico=grafico+scale_fill_grey()} print(grafico) cat("\n\n") } # Regression if(quali[i]==FALSE){ # dose=as.numeric(as.character(as.vector(unlist(fatores[i])))) dose=as.vector(unlist(fatoresa[i])) grafico=polynomial(dose, response, grau = grau[i], ylab=ylab, xlab=parse(text = xlab.factor[i]), posi=posi, theme=theme, textsize=textsize, point=point, family=family,SSq = anava$`Sum Sq`[5],DFres = anava$Df[5]) grafico=grafico[[1]]+ geom_hline(aes(color=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ 
scale_color_manual(values = "black")+labs(color="")}
# Ns
#if (a$`Pr(>F)`[i] > alpha.f) {cat("\nAccording to the F test, the means do not differ\n")}
graficos[[i+1]]=grafico}}
graficos[[1]]=residplot
if(anava$`Pr(>F)`[1]>=alpha.f && anava$`Pr(>F)`[2] <alpha.f){
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green("Isolated factor 1 not significant"))
cat(green(bold("\n-----------------------------------------------------------------\n")))
d1=data.frame(tapply(response,fator1,mean, na.rm=TRUE))
colnames(d1)="Mean"
print(d1)
}
if(anava$`Pr(>F)`[1]<alpha.f && anava$`Pr(>F)`[2] >=alpha.f){
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green("Isolated factor 2 not significant"))
cat(green(bold("\n-----------------------------------------------------------------\n")))
d1=data.frame(tapply(response,fator2,mean, na.rm=TRUE))
colnames(d1)="Mean"
print(d1)}
if(anava$`Pr(>F)`[1]>=alpha.f && anava$`Pr(>F)`[2] >=alpha.f){
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green("Isolated factors not significant"))
cat(green(bold("\n-----------------------------------------------------------------\n")))
print(tapply(response,list(fator1,fator2),mean, na.rm=TRUE))}
}
if (anava$`Pr(>F)`[3] <= alpha.f) {
fatores <- data.frame(Fator1, Fator2)
cat(green(bold("-----------------------------------------------------------------\n")))
cat(green(bold("Significant interaction: analyzing the interaction")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
des1<-aov(resp~Fator2/Fator1)
l1<-vector('list',nv2)
names(l1)<-names(summary(Fator2))
v<-numeric(0)
for(j in 1:nv2) {
for(i in 0:(nv1-2)) v<-cbind(v,i*nv2+j)
l1[[j]]<-v
v<-numeric(0)
}
des1.tab<-summary(des1,split=list('Fator2:Fator1'=l1))[[1]]
nlinhas=nrow(des1.tab)
des1.tab=des1.tab[-c(nlinhas),]
des1.tab$`F value`=des1.tab$`Mean Sq`/anava$`Mean Sq`[5]
des1.tab$`Pr(>F)`=1-pf(des1.tab$`F value`,des1.tab$Df,anava$Df[5]) cat("\n-----------------------------------------------------------------\n") cat("Analyzing ", names.fat[1], " inside of the level of ",names.fat[2]) cat("\n-----------------------------------------------------------------\n") cat("\n") rn<-numeric(0) for (j in 1:nv2) { rn <- c(rn, paste(paste(names.fat[1], ":", names.fat[2], sep = ""), lf2[j])) } rownames(des1.tab)=c(names.fat[2], paste(names.fat[1],"x",names.fat[2],"+",names.fat[1]), paste(" ",rn)) print(des1.tab) desdobramento1=des1.tab if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,anava$Df[5],anava$`Mean Sq`[5],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico[[i]]=tukey$groups[as.character(unique(trati)),2] ordem[[i]]=rownames(tukey$groups[as.character(unique(trati)),]) } letra=unlist(tukeygrafico) datag=data.frame(letra, ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra } if (mcomp == "duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anava$Df[5],anava$`Mean Sq`[5],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico[[i]]=duncan$groups[as.character(unique(trati)),2] ordem[[i]]=rownames(duncan$groups[as.character(unique(trati)),]) } letra=unlist(duncangrafico) datag=data.frame(letra, ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra } if (mcomp == "lsd"){ duncangrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anava$Df[5],anava$`Mean 
Sq`[5],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(lsd$groups)]} duncangrafico[[i]]=lsd$groups[as.character(unique(trati)),2] ordem[[i]]=rownames(lsd$groups[as.character(unique(trati)),]) } letra=unlist(duncangrafico) datag=data.frame(letra, ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra } if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anava$Df[5], nrep = nrep, QME = anava$`Mean Sq`[5], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} } cat("\n-----------------------------------------------------------------\n") cat("Analyzing ", names.fat[2], " inside of the level of ",names.fat[1]) cat("\n-----------------------------------------------------------------\n") cat("\n") des1<-aov(resp~Fator1/Fator2) l1<-vector('list',nv1) names(l1)<-names(summary(Fator1)) v<-numeric(0) for(j in 1:nv1) { for(i in 0:(nv2-2)) v<-cbind(v,i*nv1+j) l1[[j]]<-v v<-numeric(0) } des1.tab<-summary(des1,split=list('Fator1:Fator2'=l1))[[1]] nlinhas=nrow(des1.tab) des1.tab=des1.tab[-c(nlinhas),] des1.tab$`F value`=des1.tab$`Mean Sq`/anava$`Mean Sq`[5] des1.tab$`Pr(>F)`=1-pf(des1.tab$`F value`,des1.tab$Df,anava$Df[5]) rn<-numeric(0) for (i in 1:nv1) { rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[1], sep = ""), lf1[i])) } rownames(des1.tab)=c(names.fat[1], 
paste(names.fat[1],"x",names.fat[2],"+",names.fat[2]), paste(" ",rn)) print(des1.tab) desdobramento2=des1.tab if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] respi=resp[Fator1 == lf1[i]] tukey=TUKEY(respi,trati,anava$Df[5],anava$`Mean Sq`[5],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico1[[i]]=tukey$groups[as.character(unique(trati)),2] } letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if (mcomp == "duncan"){ duncangrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] respi=resp[Fator1 == lf1[i]] duncan=duncan(respi,trati,anava$Df[5],anava$`Mean Sq`[5],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico1[[i]]=duncan$groups[as.character(unique(trati)),2] } letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if (mcomp == "lsd"){ lsdgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] respi=resp[Fator1 == lf1[i]] lsd=LSD(respi,trati,anava$Df[5],anava$`Mean Sq`[5],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico1[[i]]=lsd$groups[as.character(unique(trati)),2] } letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anava$Df[5], nrep = nrep, QME = anava$`Mean Sq`[5], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) letra1=toupper(letra1)}} if(quali[1]==FALSE && 
color=="gray"| quali[2]==FALSE && color=="gray"){ if(quali[2]==FALSE){ Fator2=fator2a grafico=polynomial2(Fator2,response,Fator1, grau = grau21, ylab=ylab, xlab=xlab, theme=theme, posi=posi, point=point, textsize=textsize, family=family, ylim=ylim,SSq = anava$`Sum Sq`[5],DFres = anava$Df[5])+ geom_hline(aes(color=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="")} if(quali[2]==TRUE){ Fator1=fator1a grafico=polynomial2(Fator1, response, Fator2, grau = grau12, ylab=ylab, xlab=xlab, theme=theme, posi=posi, point=point, textsize=textsize, family=family, ylim=ylim,SSq = anava$`Sum Sq`[5],DFres = anava$Df[5])+ geom_hline(aes(color=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="")} } if(quali[1]==FALSE && color=="rainbow"| quali[2]==FALSE && color=="rainbow"){ if(quali[2]==FALSE){ Fator2=fator2a grafico=polynomial2_color(Fator2, response, Fator1, grau = grau21, ylab=ylab, xlab=xlab, theme=theme, posi=posi, point=point, textsize=textsize, family=family, ylim=ylim,SSq = anava$`Sum Sq`[5],DFres = anava$Df[5])+ geom_hline(aes(color=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="")} if(quali[2]==TRUE){ Fator1=fator1a grafico=polynomial2_color(Fator1, response, Fator2, grau = grau12, ylab=ylab, xlab=xlab, theme=theme, posi=posi, point=point, textsize=textsize, family=family, ylim=ylim,SSq = anava$`Sum Sq`[5],DFres = anava$Df[5])+ geom_hline(aes(color=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="")} } if(quali[1] & quali[2]==TRUE){ media=tapply(response,list(Fator1,Fator2), mean, na.rm=TRUE) # desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE) if(point=="mean_sd"){desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE)/ 
sqrt(tapply(response,list(Fator1,Fator2), length))} graph=data.frame(f1=rep(rownames(media),length(colnames(media))), f2=rep(colnames(media),e=length(rownames(media))), media=as.vector(media), desvio=as.vector(desvio)) limite=graph$media+graph$desvio lim.y=limite[which.max(abs(limite))] if(is.na(ylim[1])==TRUE && lim.y<0){ylim=c(1.5*lim.y,0)} if(is.na(ylim[1])==TRUE && lim.y>0){ylim=c(0,1.5*lim.y)} rownames(graph)=paste(graph$f1,graph$f2) graph=graph[paste(rep(unique(Fator1), e=length(colnames(media))), rep(unique(Fator2),length(rownames(media)))),] graph$letra=letra graph$letra1=letra1 graph$f1=factor(graph$f1,levels = unique(Fator1)) graph$f2=factor(graph$f2,levels = unique(Fator2)) if(addmean==TRUE){graph$numero=paste(format(graph$media,digits = dec), graph$letra,graph$letra1, sep="")} if(addmean==FALSE){graph$numero=paste(graph$letra,graph$letra1, sep="")} f1=graph$f1 f2=graph$f2 media=graph$media desvio=graph$desvio numero=graph$numero colint=ggplot(graph, aes(x=f1, y=media, fill=f2))+ geom_col(position = "dodge",color="black",width = width.column)+ ylab(ylab)+xlab(xlab)+ylim(ylim)+ theme if(errorbar==TRUE){colint=colint+ geom_errorbar(data=graph, aes(ymin=media-desvio, ymax=media+desvio), width=width.bar,color="black", position = position_dodge(width=width.column))} if(errorbar==TRUE){colint=colint+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=numero), position = position_dodge(width=width.column), family = family,angle=angle.label,hjust=hjust,size=labelsize)} if(errorbar==FALSE){colint=colint+ geom_text(aes(y=media+sup,label=numero), position = position_dodge(width=width.column), family = family,angle=angle.label, hjust=hjust,size=labelsize)} colint=colint+theme(text=element_text(size=textsize,family = family), axis.text = element_text(size=textsize,color="black",family = family), axis.title = element_text(size=textsize,color="black",family = family), legend.text = element_text(family = family,size=textsize), legend.title = 
element_text(family = family,size=textsize), legend.position = posi)+labs(fill=legend)+ geom_hline(aes(color=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="") if(CV==TRUE){colint=colint+labs(caption=paste("p-value ", if(anava$`Pr(>F)`[3]<0.0001){paste("<", 0.0001)} else{paste("=", round(anava$`Pr(>F)`[3],4))},"; CV = ", round(abs(sqrt(anava$`Mean Sq`[5])/mean(c(resp,respAd),na.rm=TRUE))*100,2),"%"))} if(angle !=0){colint=colint+ theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} if(color=="gray"){colint=colint+scale_fill_grey()} print(colint) grafico=colint letras=paste(graph$letra, graph$letra1, sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1))))) rownames(matriz)=levels(Fator1) colnames(matriz)=levels(Fator2) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(matriz) message(black("\n\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")")) } } if(anava$`Pr(>F)`[3]>alpha.f){ names(graficos)=c("residplot","graph1","graph2") graficos}else{colints=list("residplot"=residplot,"grafico"=grafico)} }
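# The 1-df "Ad x Factorial" row assembled above is obtained by fitting every
# treatment (factorial cells plus the additional one) as a single factor and
# subtracting the factorial main-effect and interaction sums of squares. A
# minimal sketch of that decomposition on simulated data (all data and object
# names below are hypothetical, not part of the package):

```r
# Sketch of the "Ad x Factorial" decomposition: fit the factorial, then fit
# all treatments as one factor, and take the difference of sums of squares.
set.seed(1)
f1     <- rep(rep(c("A", "B"), each = 2), each = 3)   # factor 1, 2 levels
f2     <- rep(rep(c("x", "y"), times = 2), each = 3)  # factor 2, 2 levels
resp   <- rnorm(12, mean = 10)                        # factorial response
respAd <- rnorm(3, mean = 12)                         # additional treatment

an.fat <- anova(aov(resp ~ factor(f1) * factor(f2))) # factorial ANOVA
trat   <- factor(c(paste(f1, f2), rep("ad", 3)))     # all 5 treatments
an.all <- anova(aov(c(resp, respAd) ~ trat))         # one-way, all treatments

# 1-df contrast: SS(all treatments) minus SS(f1) + SS(f2) + SS(f1:f2)
ss.ad <- an.all["trat", "Sum Sq"] - sum(an.fat[1:3, "Sum Sq"])
f.ad  <- ss.ad / an.all["Residuals", "Mean Sq"]
p.ad  <- 1 - pf(f.ad, 1, an.all["Residuals", "Df"])
```

Because the factorial cells are nested within the full treatment factor, `ss.ad` is guaranteed non-negative; it measures the additional treatment vs. factorial-mean contrast tested in the "Ad x Factorial" row.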
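# When the interaction is significant, the code above slices factor 1 within
# each level of factor 2 by fitting a nested model and partitioning the
# nested term with the `split` argument of summary.aov(). A minimal sketch
# with simulated, balanced data (all names are illustrative, not the
# package's objects); the index scheme mirrors the i*nv2+j loop used above:

```r
# Slice F1 within each level of F2 via resp ~ F2/F1 and summary(..., split=)
set.seed(2)
F1   <- factor(rep(rep(c("a1", "a2", "a3"), each = 2), times = 4)) # 3 levels
F2   <- factor(rep(c("b1", "b2"), each = 12))                      # 2 levels
resp <- rnorm(24, mean = 5)

nv1 <- nlevels(F1); nv2 <- nlevels(F2)
des1 <- aov(resp ~ F2 / F1)          # F1 nested in F2
# group the F2:F1 coefficients by F2 level (indices j, j+nv2, ...)
l1 <- vector("list", nv2)
names(l1) <- levels(F2)
for (j in 1:nv2) l1[[j]] <- seq(j, by = nv2, length.out = nv1 - 1)
tab <- summary(des1, split = list("F2:F1" = l1))[[1]]
```

The table `tab` keeps the pooled F2:F1 row and adds one (nv1-1)-df sub-row per F2 level; the function then recomputes the F values of these rows against the residual mean square of the full factorial model.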
#' Analysis: DBC experiments in triple factorial
#'
#' @description Analysis of an experiment conducted in a randomized block design in a triple factorial scheme using analysis of variance of fixed effects.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param f1 Numeric or character vector with factor 1 levels
#' @param f2 Numeric or character vector with factor 2 levels
#' @param f3 Numeric or character vector with factor 3 levels
#' @param block Numeric or character vector with blocks
#' @param response Numeric vector containing the response of the experiment.
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott or Duncan)
#' @param quali Defines whether each factor is quantitative (FALSE) or qualitative (TRUE, the \emph{default})
#' @param names.fat Allows labeling factors 1, 2 and 3.
#' @param grau Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with three elements.
#' @param grau12 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f1 x f2 with qualitative factor 2 and quantitative factor 1.
#' @param grau21 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f2 with qualitative factor 1 and quantitative factor 2.
#' @param grau13 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 3, in the case of interaction f1 x f3 with qualitative factor 3 and quantitative factor 1.
#' @param grau31 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f3 with qualitative factor 1 and quantitative factor 3.
#' @param grau23 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 3, in the case of interaction f2 x f3 with qualitative factor 3 and quantitative factor 2.
#' @param grau32 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f2 x f3 with qualitative factor 2 and quantitative factor 3.
#' @param grau123 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f2 x f3 and quantitative factor 1.
#' @param grau213 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f1 x f2 x f3 and quantitative factor 2.
#' @param grau312 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 3, in the case of interaction f1 x f2 x f3 and quantitative factor 3.
#' @param xlab Treatment name (accepts the \emph{expression}() function)
#' @param ylab Response variable name (accepts the \emph{expression}() function)
#' @param xlab.factor Provide a vector with three elements referring to the x-axis names of factors 1, 2 and 3, respectively, when there is an isolated effect of the factors. This argument uses `parse`.
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param transf Applies data transformation (\emph{default} is 1; for log consider 0; `angular` for angular transformation)
#' @param constant Add a constant for transformation (enter value)
#' @param sup Number of units above the standard deviation or average bar on the graph
#' @param geom Graph type (columns or segments)
#' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat")
#' @param angulo x-axis scale text rotation
#' @param textsize Font size
#' @param labelsize Label size
#' @param dec Number of decimal places
#' @param family Font family
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param addmean Plot the average value on the graph (\emph{default} is TRUE)
#' @param errorbar Plot the standard deviation bar on the graph (in the case of segment and column graphs; \emph{default} is TRUE)
#' @param point This argument defines whether the plot must show all points ("all"), the mean ("mean"), the mean with standard deviation (\emph{default}, "mean_sd") or the mean with standard error ("mean_se") if quali = FALSE. For quali = TRUE, `mean_sd` and `mean_se` change which information is displayed in the error bar.
#' @param angle.label Label angle
#' @note The order of the chart follows the alphabetical pattern. Please use the `limits` argument of `scale_x_discrete` from package ggplot2 to reorder the x-axis. The bars of the column and segment graphs are standard deviation.
#' @return The analysis of variance table, the Shapiro-Wilk error normality test, the Bartlett homogeneity test of variances, the Durbin-Watson error independence test, the multiple comparison test (Tukey, LSD, Scott-Knott or Duncan) or the adjustment of regression models up to degree 3 polynomials, in the case of quantitative treatments. The column chart for qualitative treatments is also returned. For a significant triple interaction only, no graph is returned.
#' @note The function does not perform multiple regression in the case of two or more quantitative factors. The bars of the column and segment graphs are standard deviation.
#' @note In the final output, when the transformation (transf argument) is different from 1, the columns resp and respo in the mean test are returned, indicating the transformed and non-transformed mean, respectively.
#' @references
#'
#' Steel, R. G. D., Torrie, J. H., and Dickey, D. A. (1997). Principles and Procedures of Statistics: A Biometrical Approach. Third Edition.
#'
#' Hsu, J. C. (1996). Multiple Comparisons: Theory and Methods. Department of Statistics, The Ohio State University, USA. Chapman & Hall/CRC.
#'
#' Conover, W. J. (1999). Practical Nonparametric Statistics.
#'
#' Ramalho, M. A. P., Ferreira, D. F., and Oliveira, A. C. (2000). Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott, A. J., and Knott, M. (1974). A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#'
#' Ferreira, E. B., Cavalcanti, P. P., and Nogueira, D. A. (2014). ExpDes: an R package for ANOVA and experimental designs. Applied Mathematics, 5(19), 2952.
#'
#' Mendiburu, F. (2019). agricolae: Statistical Procedures for Agricultural Research. R package.
#'
#' @keywords DBC
#' @keywords Factorial
#' @export
#' @examples
#' library(AgroR)
#' data(enxofre)
#' with(enxofre, FAT3DBC(f1, f2, f3, bloco, resp))
FAT3DBC=function(f1, f2, f3, block, response,
norm="sw", alpha.f=0.05, alpha.t=0.05,
quali=c(TRUE,TRUE,TRUE), mcomp='tukey',
transf=1, constant=0,
names.fat=c("F1","F2","F3"),
ylab="Response", xlab="",
xlab.factor=c("F1","F2","F3"),
sup=NA,
grau=c(NA,NA,NA), # isolated effects and triple interaction
grau12=NA, # F1/F2
grau13=NA, # F1/F3
grau23=NA, # F2/F3
grau21=NA, # F2/F1
grau31=NA, # F3/F1
grau32=NA, # F3/F2
grau123=NA, grau213=NA, grau312=NA,
fill="lightblue", theme=theme_classic(),
angulo=0, errorbar=TRUE, addmean=TRUE,
family="sans", dec=3, geom="bar",
textsize=12, labelsize=4,
point="mean_sd", angle.label=0) {
if(is.na(sup)){sup=0.2*mean(response)}
if(angle.label==0){hjust=0.5}else{hjust=0}
requireNamespace("crayon")
requireNamespace("ggplot2")
requireNamespace("nortest")
if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}}
# if(transf==1){resp=response+constant}else{resp=((response+constant)^transf-1)/transf}
if(transf==0){resp=log(response+constant)}
if(transf==0.5){resp=sqrt(response+constant)}
if(transf==-0.5){resp=1/sqrt(response+constant)}
if(transf==-1){resp=1/(response+constant)}
if(transf=="angular"){resp=asin(sqrt((response+constant)/100))}
ordempadronizado=data.frame(f1,f2,f3,block,response,resp)
resp1=resp
organiz=data.frame(f1,f2,f3,block,response,resp)
organiz=organiz[order(organiz$block),]
organiz=organiz[order(organiz$f3),]
organiz=organiz[order(organiz$f2),]
organiz=organiz[order(organiz$f1),]
f1=organiz$f1
f2=organiz$f2
f3=organiz$f3
block=organiz$block
response=organiz$response
resp=organiz$resp
fator1=f1
fator2=f2
fator3=f3
fator1a=fator1
fator2a=fator2
fator3a=fator3
bloco=block
names.fat=names.fat
fatores<-data.frame(fator1,fator2,fator3)
Fator1<-factor(fator1,levels=unique(fator1));
Fator2<-factor(fator2,levels=unique(fator2));
Fator3<-factor(fator3,levels=unique(fator3)) nv1<-length(summary(Fator1)); nv2<-length(summary(Fator2)); nv3<-length(summary(Fator3)) J<-(length(resp))/(nv1*nv2*nv3) lf1<-levels(Fator1); lf2<-levels(Fator2); lf3<-levels(Fator3) bloco=as.factor(bloco) anava<-aov(resp~Fator1*Fator2*Fator3+bloco) anavaF3<-anova(anava) anovaF3=anavaF3 colnames(anovaF3)=c("GL","SQ","QM","Fcal","p-value") anavares<-aov(resp~as.factor(f1)* as.factor(f2)* as.factor(f3)+ as.factor(block),data = ordempadronizado) respad=anavares$residuals/sqrt(anavaF3$`Mean Sq`[9]) out=respad[respad>3 | respad<(-3)] out=names(out) out=if(length(out)==0)("No discrepant point")else{out} Ids=ifelse(respad>3 | respad<(-3), "darkblue","black") residplot=ggplot(data=data.frame(respad,Ids),aes(y=respad,x=1:length(respad)))+ geom_point(shape=21,color="gray",fill="gray",size=3)+ labs(x="",y="Standardized residuals")+ geom_text(x=1:length(respad),label=1:length(respad),color=Ids,size=labelsize)+ scale_x_continuous(breaks=1:length(respad))+ theme_classic()+theme(axis.text.y = element_text(size=textsize), axis.text.x = element_blank())+ geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1) norm1<-shapiro.test(anava$residuals) cat(green(bold("\n------------------------------------------"))) cat(green(bold("\nNormality of errors"))) cat(green(bold("\n------------------------------------------"))) print(norm1) message(if(norm1$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors do not follow a normal distribution"}) homog1=bartlett.test(anava$residuals~paste(Fator1,Fator2,Fator3)) cat(green(bold("\n------------------------------------------"))) cat(green(bold("\nHomogeneity of Variances"))) cat(green(bold("\n------------------------------------------"))) print(homog1) message(if(homog1$p.value[1]>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"}) indep=dwtest(anava) cat(green(bold("\n------------------------------------------"))) cat(green(bold("\nIndependence from errors"))) cat(green(bold("\n------------------------------------------"))) print(indep) message(if(indep$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors are not independent"}) cat("\n") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Additional Information"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(paste("\nCV (%) = ",round(sqrt(anavaF3$`Mean Sq`[9])/mean(resp,na.rm=TRUE)*100,2))) cat(paste("\nMean = ",round(mean(response,na.rm=TRUE),4))) cat(paste("\nMedian = ",round(median(response,na.rm=TRUE),4))) cat("\nPossible outliers = ", out) cat("\n") cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Analysis of Variance"))) cat(green(bold("\n------------------------------------------\n"))) anava1=as.matrix(data.frame(anovaF3)) colnames(anava1)=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)" ) rownames(anava1)=c(names.fat[1], names.fat[2], names.fat[3], "Block", paste(names.fat[1],"x",names.fat[2]), paste(names.fat[1],"x",names.fat[3]), paste(names.fat[2],"x",names.fat[3]), paste(names.fat[1],"x",names.fat[2],"x",names.fat[3]), "Residuals") print(anava1,na.print = "") cat("\n") if(transf==1 && norm1$p.value<0.05 | transf==1 && indep$p.value<0.05 | transf==1 &&homog1$p.value<0.05){ message("\n Your analysis is not valid, suggests using a non-parametric test and try to transform the data\n")}else{} if(transf != 1 && norm1$p.value<0.05 | transf!=1 && indep$p.value<0.05 | transf!=1 && homog1$p.value<0.05){ message("\n Your analysis is not valid, suggests using the function FATDIC.art\n")}else{} message(if(transf !=1){blue("\nNOTE: resp = transformed means; respO = averages without transforming\n")}) if(anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f && anavaF3[7,5]>alpha.f && anavaF3[8,5]>alpha.f) { graficos=list(1,2,3) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold('Non-significant interaction: analyzing the simple effects'))) cat(green(bold("\n------------------------------------------\n"))) fatores<-data.frame('fator 1'=fator1,'fator 
2' = fator2,'fator 3' = fator3) for(i in 1:3){ if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) cat(green(bold("\n------------------------------------------\n"))) if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[9,1],anavaF3[9,3],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fatores[,i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3[9,1], nrep = nrep, QME = anavaF3[9,3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- LSD(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(italic(paste("Multiple Comparison Test:",teste,"\n")))) print(letra1) cat(green(bold("\n------------------------------------------\n"))) if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1, media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)], 
desvio=desvio) dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i]))) dadosm$limite=dadosm$media+dadosm$desvio dadosm=dadosm[as.character(unique(unlist(fatores[i]))),] if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio Tratamentos=dadosm$Tratamentos letra=dadosm$letra if(geom=="bar"){grafico=ggplot(dadosm, aes(x=Tratamentos, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=Tratamentos), color=1)}else{grafico=grafico+ geom_col(aes(fill=Tratamentos), fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+ if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3)} grafico=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[i]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), legend.position = "none") print(grafico)} if(geom=="point"){grafico=ggplot(dadosm, aes(x=Tratamentos, y=media)) if(fill=="trat"){grafico=grafico+ geom_point(aes(color=Tratamentos))} else{grafico=grafico+ geom_point(aes(color=Tratamentos),color=fill,size=4)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, 
aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3)} grafico=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[i]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), legend.position = "none") print(grafico)} } if(anavaF3[i,5]>alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) cat(green(bold("\n------------------------------------------\n"))) mean.table<-mean_stat(response,fatores[,i],mean) colnames(mean.table)<-c('Niveis','Medias') print(mean.table) grafico=NA} if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) cat(green(bold("\n------------------------------------------\n"))) dose=as.numeric(as.vector(unlist(fatores[,i]))) grafico=polynomial(dose,resp,grau = grau[i], DFres= anavaF3[9,1],SSq = anavaF3[9,2],ylab=ylab,xlab=parse(text = xlab.factor[i]),point = point)[[1]] cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command")) cat(green(bold("\n------------------------------------------")))} graficos[[1]]=residplot graficos[[i+1]]=grafico } } if(anavaF3[8,5]>alpha.f && anavaF3[5,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Interaction",paste(names.fat[1],'*',names.fat[2],sep='')," significant: unfolding the interaction"))) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", names.fat[1], ' within the combination of levels ', names.fat[2]) cat(green(bold("\n------------------------------------------\n"))) des<-aov(resp~Fator2/Fator1+Fator3+Fator2+Fator2:Fator3+Fator1:Fator2:Fator3+bloco) l<-vector('list',nv2) names(l)<-names(summary(Fator2)) v<-numeric(0) 
for(j in 1:nv2) { for(i in 0:(nv1-2)) v<-cbind(v,i*nv2+j) l[[j]]<-v v<-numeric(0)} des1<-summary(des,split=list('Fator2:Fator1'=l))[[1]] des1a=des1[-c(1,2,3,4,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv2) { rn <- c(rn, paste(paste(names.fat[1], ":", names.fat[2], sep = ""), lf2[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico[[i]]=tukey$groups[levels(trati),2] ordem[[i]]=rownames(tukey$groups[levels(trati),]) } letra=unlist(tukeygrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico[[i]]=duncan$groups[levels(trati),2] ordem[[i]]=rownames(duncan$groups[levels(trati),]) } letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf 
!="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico[[i]]=lsd$groups[levels(trati),2] ordem[[i]]=rownames(lsd$groups[levels(trati),]) } letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) # sk=sk(respi,trati,anavaF3$Df[9],anavaF3$`Sum Sq`[9],alpha.t) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra}} cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", names.fat[2], " inside of the level of ",names.fat[1]) cat(green(bold("\n------------------------------------------\n"))) des<-aov(resp~Fator1/Fator2+Fator3+Fator1+Fator1:Fator3+Fator1:Fator2:Fator3+bloco) l<-vector('list',nv1) names(l)<-names(summary(Fator1)) v<-numeric(0) for(j in 1:nv1) { for(i in 0:(nv2-2)) v<-cbind(v,i*nv1+j) l[[j]]<-v v<-numeric(0)} des1<-summary(des,split=list('Fator1:Fator2'=l))[[1]] des1a=des1[-c(1,2,3,4,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv1) { rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[1], sep = ""), lf1[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ 
tukeygrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico1[[i]]=tukey$groups[levels(trati),2] } letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if (mcomp == "duncan"){ duncangrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico1[[i]]=duncan$groups[levels(trati),2] } letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if (mcomp == "lsd"){ lsdgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico1[[i]]=lsd$groups[levels(trati),2] } letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) letra1=toupper(letra1)}} if(quali[1]==TRUE & quali[2]==TRUE){ f1=rep(levels(Fator1),e=length(levels(Fator2))) f2=rep(unique(as.character(Fator2)),length(levels(Fator1))) 
f1=factor(f1,levels = unique(f1)) f2=factor(f2,levels = unique(f2)) media=tapply(resp,paste(Fator1,Fator2), mean, na.rm=TRUE)[unique(paste(f1,f2))] # desvio=tapply(resp,paste(Fator1,Fator2), sd, na.rm=TRUE)[unique(paste(f1,f2))] if(point=="mean_sd"){desvio=tapply(response,paste(Fator1,Fator2), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,paste(Fator1,Fator2), sd, na.rm=TRUE)/ sqrt(tapply(response,paste(Fator1,Fator2), length))} desvio=desvio[unique(paste(f1,f2))] graph=data.frame(f1=f1, f2=f2, media, desvio, letra,letra1, numero=format(media,digits = dec)) numero=paste(graph$numero,graph$letra,graph$letra1,sep="") graph$numero=numero colint=ggplot(graph, aes(x=f2, y=media, fill=f1))+ ylab(ylab)+xlab(xlab)+ theme+ labs(fill=names.fat[1])+ geom_col(position = "dodge",color="black")+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=0.3,position = position_dodge(width=0.9))+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)+ theme(text=element_text(size=textsize,family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family)) colint1=colint print(colint) letras=paste(graph$letra,graph$letra1,sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1))))) rownames(matriz)=levels(Fator1) colnames(matriz)=levels(Fator2) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n------------------------------------------\n"))) print(matriz) cat("\n\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")") } if(quali[1]==FALSE | quali[2]==FALSE){ if(quali[1]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] 
trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(sk)}}} if(quali[1]==FALSE){ Fator1a=fator1a colint1=polynomial2(Fator1a, response, Fator3, grau 
= grau12, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[9,1],SSq = anavaF3[9,2])} if(quali[2]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(sk)}} } 
if(quali[2]==FALSE){ Fator2a=fator2a colint1=polynomial2(Fator2a, response, Fator1, grau = grau21, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[9,1],SSq = anavaF3[9,2])} cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial2\" command\n"))} if(anavaF3[6,5]>alpha.f && anavaF3[7,5]>alpha.f) { i<-3 { if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(green(italic('Analyzing the simple effects of the factor ',names.fat[i]))) cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[9,1],anavaF3[9,3],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fatores[,i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3[9,1], nrep = nrep, QME = anavaF3[9,3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- LSD(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} print(letra1) cat(green(bold("\n------------------------------------------\n"))) if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ 
                    sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]}
      dadosm=data.frame(letra1,
                        media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)],
                        desvio=desvio)
      dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i])))
      dadosm$limite=dadosm$media+dadosm$desvio
      dadosm=dadosm[as.character(unique(unlist(fatores[i]))),]
      if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)}
      if(addmean==FALSE){dadosm$letra=dadosm$groups}
      media=dadosm$media
      desvio=dadosm$desvio
      Tratamentos=dadosm$Tratamentos
      letra=dadosm$letra
      grafico=ggplot(dadosm, aes(x=Tratamentos, y=media))
      if(fill=="trat"){grafico=grafico+
        geom_col(aes(fill=Tratamentos),color=1)}else{grafico=grafico+
        geom_col(aes(fill=Tratamentos),fill=fill,color=1)}
      if(errorbar==TRUE){grafico=grafico+
        geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},
                      label=letra),family=family,angle=angle.label,
                  hjust=hjust,size=labelsize)}
      if(errorbar==FALSE){grafico=grafico+
        geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label,
                  hjust=hjust,size=labelsize)}
      if(errorbar==TRUE){grafico=grafico+
        geom_errorbar(data=dadosm,
                      aes(ymin=media-desvio,
                          ymax=media+desvio,color=1),
                      color="black",width=0.3)}
      grafico1=grafico+theme+
        ylab(ylab)+
        xlab(parse(text = xlab.factor[3]))+
        theme(text = element_text(size=textsize,color="black", family = family),
              axis.text = element_text(size=textsize,color="black", family = family),
              axis.title = element_text(size=textsize,color="black", family = family),
              legend.position = "none")
      print(grafico1)}
    }
    if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){
      cat(green(bold("\n------------------------------------------\n")))
      cat('Analyzing the simple effects of the factor ',names.fat[3])
      cat(green(bold("\n------------------------------------------\n")))
      cat(names.fat[i])
      dose=as.numeric(as.vector(unlist(fatores[,i])))
      grafico1=polynomial(dose,resp,grau=grau[i],
                          DFres= anavaF3[9,1],SSq = anavaF3[9,2],
                          ylab=ylab,xlab=parse(text = xlab.factor[3]),point = point)[[1]]
      cat(green("\nTo 
edit graphical parameters, I suggest analyzing using the \"polynomial\" command"))} } } if(anavaF3[8,5]>alpha.f && anavaF3[6,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Interaction",paste(names.fat[1],'*',names.fat[3],sep='')," significant: unfolding the interaction"))) cat(green(bold("\n------------------------------------------"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", names.fat[1], ' within the combination of levels ', names.fat[3]) cat(green(bold("\n------------------------------------------\n"))) des<-aov(resp~Fator3/Fator1+Fator2+Fator3+Fator2:Fator3+Fator1:Fator2:Fator3+bloco) l<-vector('list',nv3) names(l)<-names(summary(Fator3)) v<-numeric(0) for(j in 1:nv3) { for(i in 0:(nv1-2)) v<-cbind(v,i*nv3+j) l[[j]]<-v v<-numeric(0) } des1<-summary(des,split=list('Fator3:Fator1'=l))[[1]] des1a=des1[-c(1,2,3,4,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv3) { rn <- c(rn, paste(paste(names.fat[1], ":", names.fat[3], sep = ""), lf3[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[1]==TRUE & quali[3]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) tukeygrafico[[i]]=tukey$groups[levels(trati),2] ordem[[i]]=rownames(tukey$groups[levels(trati),]) } letra=unlist(tukeygrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) 
duncangrafico[[i]]=duncan$groups[levels(trati),2] ordem[[i]]=rownames(duncan$groups[levels(trati),]) } letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) lsdgrafico[[i]]=lsd$groups[levels(trati),2] ordem[[i]]=rownames(lsd$groups[levels(trati),]) } letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra}} cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", names.fat[3], " inside of the level of ",names.fat[1]) cat(green(bold("\n------------------------------------------\n"))) des<-aov(resp~Fator1/Fator3+Fator1+Fator2+Fator2:Fator1+Fator1:Fator2:Fator3+bloco) l<-vector('list',nv1) names(l)<-names(summary(Fator1)) v<-numeric(0) for(j in 1:nv1) { for(i in 0:(nv3-2)) v<-cbind(v,i*nv1+j) l[[j]]<-v v<-numeric(0) } des1<-summary(des,split=list('Fator1:Fator3'=l))[[1]] 
  des1a=des1[-c(1,2,3,4,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),]
  #============================
  rn<-numeric(0)
  for (j in 1:nv1) {
    rn <- c(rn, paste(paste(names.fat[3], ":", names.fat[1], sep = ""), lf1[j]))
  }
  rownames(des1a)=rn
  #============================
  print(des1a)
  if(quali[1]==TRUE & quali[3]==TRUE){
    if (mcomp == "tukey"){
      tukeygrafico1=c()
      for (i in 1:nv1) {
        trati=fatores[, 3][Fator1 == lf1[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator1 == lf1[i]]
        tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
        tukeygrafico1[[i]]=tukey$groups[levels(trati),2]
      }
      letra1=unlist(tukeygrafico1)
      letra1=toupper(letra1)}
    if (mcomp == "duncan"){
      duncangrafico1=c()
      for (i in 1:nv1) {
        trati=fatores[, 3][Fator1 == lf1[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator1 == lf1[i]]
        duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
        duncangrafico1[[i]]=duncan$groups[levels(trati),2]
      }
      letra1=unlist(duncangrafico1)
      letra1=toupper(letra1)}
    if (mcomp == "lsd"){
      lsdgrafico1=c()
      for (i in 1:nv1) {
        trati=fatores[, 3][Fator1 == lf1[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator1 == lf1[i]]
        lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
        lsdgrafico1[[i]]=lsd$groups[levels(trati),2]
      }
      letra1=unlist(lsdgrafico1)
      letra1=toupper(letra1)}
    if (mcomp == "sk"){
      skgrafico1=c()
      for (i in 1:nv1) {
        trati=fatores[, 3][Fator1 == lf1[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator1 == lf1[i]]
        nrep=table(trati)[1]
        medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
        sk=scottknott(means = medias,
                      df1 = anavaF3$Df[9],
                      nrep = nrep,
                      QME = anavaF3$`Mean Sq`[9],
                      alpha = alpha.t)
        sk=data.frame(respi=medias,groups=sk)
skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) letra1=toupper(letra1)}} if(quali[1]==TRUE & quali[3]==TRUE){ f1=rep(levels(Fator1),e=length(levels(Fator3))) f3=rep(unique(as.character(Fator3)),length(levels(Fator1))) f1=factor(f1,levels = unique(f1)) f3=factor(f3,levels = unique(f3)) media=tapply(response,paste(Fator1,Fator3), mean, na.rm=TRUE)[unique(paste(f1,f3))] # desvio=tapply(response,paste(Fator1,Fator3), sd, na.rm=TRUE)[unique(paste(f1,f3))] if(point=="mean_sd"){desvio=tapply(response,paste(Fator1,Fator3), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,paste(Fator1,Fator3), sd, na.rm=TRUE)/ sqrt(tapply(response,paste(Fator1,Fator3), length))} desvio=desvio[unique(paste(f1,f3))] graph=data.frame(f1=f1, f3=f3, media, desvio, letra, letra1, numero=format(media,digits = dec)) numero=paste(graph$numero,graph$letra,graph$letra1,sep="") graph$numero=numero colint=ggplot(graph, aes(x=f3, y=media, fill=f1))+ geom_col(position = "dodge",color="black")+ ylab(ylab)+ xlab(xlab)+ theme+ labs(fill=names.fat[1])+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=0.3, position = position_dodge(width=0.9))+ geom_text(aes(y=media+desvio+sup, label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)+ theme(text=element_text(size=textsize,family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family)) colint2=colint print(colint) letras=paste(graph$letra,graph$letra1,sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1))))) rownames(matriz)=levels(Fator1) colnames(matriz)=levels(Fator3) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n------------------------------------------\n"))) print(matriz) cat("\n\nAverages followed by the same lowercase letter in the column and \nuppercase 
in the row do not differ by the",mcomp,"(p<",alpha.t,")")
}
if(quali[1]==FALSE | quali[3]==FALSE){
if(quali[1]==FALSE){
if (mcomp == "tukey"){
for (i in 1:nv1) {
trati=fatores[, 3][Fator1 == lf1[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator1 == lf1[i]]
tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F3 within level",lf1[i],"of F1")
cat("\n----------------------\n")
print(tukey$groups)}}
if (mcomp == "duncan"){
for (i in 1:nv1) {
trati=fatores[, 3][Fator1 == lf1[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator1 == lf1[i]]
duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F3 within level",lf1[i],"of F1")
cat("\n----------------------\n")
print(duncan$groups)}}
if (mcomp == "lsd"){
for (i in 1:nv1) {
trati=fatores[, 3][Fator1 == lf1[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator1 == lf1[i]]
lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F3 within level",lf1[i],"of F1")
cat("\n----------------------\n")
print(lsd$groups)}}
if (mcomp == "sk"){
for (i in 1:nv1) {
trati=fatores[, 3][Fator1 == lf1[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator1 == lf1[i]]
nrep=table(trati)[1]
medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t)
sk=data.frame(respi=medias,groups=sk)
if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]}
cat("\n----------------------\n")
cat("Multiple comparison of F3 within level",lf1[i],"of F1")
cat("\n----------------------\n")
print(sk)}}}
if(quali[1]==FALSE){
Fator1a=fator1a
colint2=polynomial2(Fator1a, response, Fator3, grau = grau13, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[9,1],SSq = anavaF3[9,2])}
if(quali[3]==FALSE){
if (mcomp == "tukey"){
for (i in 1:nv3) {
trati=fatores[, 1][Fator3 == lf3[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator3 == lf3[i]]
tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F1 within level",lf3[i],"of F3")
cat("\n----------------------\n")
print(tukey$groups)}}
if (mcomp == "duncan"){
for (i in 1:nv3) {
trati=fatores[, 1][Fator3 == lf3[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator3 == lf3[i]]
duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F1 within level",lf3[i],"of F3")
cat("\n----------------------\n")
print(duncan$groups)}}
if (mcomp == "lsd"){
for (i in 1:nv3) {
trati=fatores[, 1][Fator3 == lf3[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator3 == lf3[i]]
lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F1 within level",lf3[i],"of F3")
cat("\n----------------------\n")
print(lsd$groups)}}
if (mcomp == "sk"){
for (i in 1:nv3) {
trati=fatores[, 1][Fator3 == lf3[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator3 == lf3[i]]
nrep=table(trati)[1]
medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
sk=scottknott(means = medias, df1 =
anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf3[i],"of F3") cat("\n----------------------\n") print(sk)}}} if(quali[3]==FALSE){ Fator3a=fator3a colint2=polynomial2(Fator3a, response, Fator1, grau = grau31, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[9,1],SSq = anavaF3[9,2])} cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial2\" command\n")) } if(anavaF3[5,5]>alpha.f && anavaF3[7,5]>alpha.f) { i<-2 { if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(green(italic('Analyzing the simple effects of the factor ',names.fat[i]))) cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[9,1],anavaF3[9,3],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fatores[,i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3[9,1], nrep = nrep, QME = anavaF3[9,3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- LSD(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf 
!=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} print(letra1) cat(green(bold("\n------------------------------------------"))) if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1, media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)], desvio=desvio) dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i]))) dadosm$limite=dadosm$media+dadosm$desvio dadosm=dadosm[as.character(unique(unlist(fatores[i]))),] if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio Tratamentos=dadosm$Tratamentos letra=dadosm$letra grafico=ggplot(dadosm, aes(x=Tratamentos, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=Tratamentos),color=1)} else{grafico=grafico+ geom_col(aes(fill=Tratamentos),fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},label=letra),family=family, angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm,aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3) grafico2=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[2]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), legend.position = "none") print(grafico2)} } if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){ 
cat(green(bold("\n------------------------------------------\n"))) cat('Analyzing the simple effects of the factor ',names.fat[2]) cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) grafico2=polynomial(resp, fatores[,i],grau=grau[i], DFres= anavaF3[9,1],SSq = anavaF3[9,2],ylab=ylab,xlab=parse(text = xlab.factor[2]),point = point)[[1]] cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command")) } cat('\n') } } } if(anavaF3[8,5]>alpha.f && anavaF3[7,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Interaction",paste(names.fat[2],'*',names.fat[3],sep='')," significant: unfolding the interaction"))) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", names.fat[2], ' within the combination of levels ', names.fat[3]) cat("\n-------------------------------------------------\n") des<-aov(resp~Fator3/Fator2+Fator1+Fator3+Fator1:Fator3+Fator1:Fator2:Fator3+bloco) l<-vector('list',nv3) names(l)<-names(summary(Fator3)) v<-numeric(0) for(j in 1:nv3) { for(i in 0:(nv2-2)) v<-cbind(v,i*nv3+j) l[[j]]<-v v<-numeric(0) } des1<-summary(des,split=list('Fator3:Fator2'=l))[[1]] des1a=des1[-c(1,2,3,4,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv3) { rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[3], sep = ""), lf3[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[2]==TRUE & quali[3]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) tukeygrafico[[i]]=tukey$groups[levels(trati),2] ordem[[i]]=rownames(tukey$groups[levels(trati),]) } letra=unlist(tukeygrafico) 
datag=data.frame(letra,ordem=unlist(ordem))
datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
datag=datag[order(datag$ordem),]
letra=datag$letra}
if (mcomp == "duncan"){
duncangrafico=c()
ordem=c()
for (i in 1:nv3) {
trati=fatores[, 2][Fator3 == lf3[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator3 == lf3[i]]
duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
duncangrafico[[i]]=duncan$groups[levels(trati),2]
ordem[[i]]=rownames(duncan$groups[levels(trati),])
}
letra=unlist(duncangrafico)
datag=data.frame(letra,ordem=unlist(ordem))
datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
datag=datag[order(datag$ordem),]
letra=datag$letra}
if (mcomp == "lsd"){
lsdgrafico=c()
ordem=c()
for (i in 1:nv3) {
trati=fatores[, 2][Fator3 == lf3[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator3 == lf3[i]]
lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
lsdgrafico[[i]]=lsd$groups[levels(trati),2]
ordem[[i]]=rownames(lsd$groups[levels(trati),])
}
letra=unlist(lsdgrafico)
datag=data.frame(letra,ordem=unlist(ordem))
datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
datag=datag[order(datag$ordem),]
letra=datag$letra}
if (mcomp == "sk"){
skgrafico=c()
ordem=c()
for (i in 1:nv3) {
trati=fatores[, 2][Fator3 == lf3[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator3 == lf3[i]]
nrep=table(trati)[1]
medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t)
sk=data.frame(respi=medias,groups=sk)
skgrafico[[i]]=sk[levels(trati),2]
ordem[[i]]=rownames(sk[levels(trati),])
}
letra=unlist(skgrafico)
datag=data.frame(letra,ordem=unlist(ordem))
datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
datag=datag[order(datag$ordem),]
letra=datag$letra}}
cat(green(bold("\n------------------------------------------\n")))
cat("Analyzing ",
names.fat[3], " inside of the level of ",names.fat[2]) cat(green(bold("\n------------------------------------------\n"))) cat("\n") des<-aov(resp~Fator2/Fator3+Fator1+Fator2+Fator1:Fator2+Fator1:Fator2:Fator3+bloco) l<-vector('list',nv2) names(l)<-names(summary(Fator2)) v<-numeric(0) for(j in 1:nv2) { for(i in 0:(nv3-2)) v<-cbind(v,i*nv2+j) l[[j]]<-v v<-numeric(0) } des1<-summary(des,split=list('Fator2:Fator3'=l))[[1]] des1a=des1[-c(1,2,3,4,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv2) { rn <- c(rn, paste(paste(names.fat[3], ":", names.fat[2], sep = ""), lf2[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[2]==TRUE & quali[3]==TRUE){ if (mcomp == "tukey"){ tukeygrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) tukeygrafico1[[i]]=tukey$groups[levels(trati),2] } letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if (mcomp == "duncan"){ duncangrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) duncangrafico1[[i]]=duncan$groups[levels(trati),2] } letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if (mcomp == "lsd"){ lsdgrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) lsdgrafico1[[i]]=lsd$groups[levels(trati),2] } letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = 
TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) letra1=toupper(letra1)}} if(quali[2]==TRUE & quali[3]==TRUE){ f2=rep(levels(Fator2),e=length(levels(Fator3))) f3=rep(unique(as.character(Fator3)),length(levels(Fator2))) f2=factor(f2,levels = unique(f2)) f3=factor(f3,levels = unique(f3)) media=tapply(response,paste(Fator2,Fator3), mean, na.rm=TRUE)[unique(paste(f2,f3))] if(point=="mean_sd"){desvio=tapply(response,paste(Fator2,Fator3), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,paste(Fator2,Fator3), sd, na.rm=TRUE)/ sqrt(tapply(response,paste(Fator2,Fator3), length))} desvio=desvio[unique(paste(f2,f3))] # desvio=tapply(response,paste(Fator2,Fator3), sd, na.rm=TRUE)[unique(paste(f2,f3))] graph=data.frame(f2=f2, f3=f3, media, desvio, letra,letra1, numero=format(media,digits = dec)) numero=paste(graph$numero,graph$letra,graph$letra1,sep="") graph$numero=numero colint=ggplot(graph, aes(x=f3, y=media, fill=f2))+ geom_col(position = "dodge",color="black")+ ylab(ylab)+xlab(xlab)+ theme+ labs(fill=names.fat[2])+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=0.3,position = position_dodge(width=0.9))+ geom_text(aes(y=media+desvio+sup, label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)+ theme(text=element_text(size=textsize,family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family)) colint3=colint print(colint) letras=paste(graph$letra,graph$letra1,sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras), ncol = length(levels(Fator2))))) rownames(matriz)=levels(Fator2) colnames(matriz)=levels(Fator3) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Final table"))) 
cat(green(bold("\n------------------------------------------\n"))) print(matriz) cat("\n\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")") } if(quali[2]==FALSE | quali[3]==FALSE){ if(quali[2]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf2[i],"of F2") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf2[i],"of F2") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf2[i],"of F2") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean 
Sq`[9], alpha = alpha.t)
sk=data.frame(respi=medias,groups=sk)
if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]}
cat("\n----------------------\n")
cat("Multiple comparison of F3 within level",lf2[i],"of F2")
cat("\n----------------------\n")
print(sk)}}}
if(quali[2]==FALSE){
Fator2a=fator2a
colint3=polynomial2(Fator2a, response, Fator3, grau = grau23, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[9,1],SSq = anavaF3[9,2])}
if(quali[3]==FALSE){
if (mcomp == "tukey"){
for (i in 1:nv3) {
trati=fatores[, 2][Fator3 == lf3[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator3 == lf3[i]]
tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F2 within level",lf3[i],"of F3")
cat("\n----------------------\n")
print(tukey$groups)}}
if (mcomp == "duncan"){
for (i in 1:nv3) {
trati=fatores[, 2][Fator3 == lf3[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator3 == lf3[i]]
duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F2 within level",lf3[i],"of F3")
cat("\n----------------------\n")
print(duncan$groups)}}
if (mcomp == "lsd"){
for (i in 1:nv3) {
trati=fatores[, 2][Fator3 == lf3[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator3 == lf3[i]]
lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]}
cat("\n----------------------\n")
cat("Multiple comparison of F2 within level",lf3[i],"of F3")
cat("\n----------------------\n")
print(lsd$groups)}}
if (mcomp == "sk"){
for (i in 1:nv3) {
trati=fatores[, 2][Fator3 == lf3[i]]
trati=factor(trati,levels = unique(trati))
respi=resp[Fator3 ==
lf3[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf3[i],"of F3") cat("\n----------------------\n") print(sk)}}} if(quali[3]==FALSE){ Fator3a=fator3a colint3=polynomial2(Fator3a, response, Fator2, grau = grau32, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[9,1],SSq = anavaF3[9,2])} cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial2\" command\n")) } #Checar o Fator1 if(anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f) { i<-1 { if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(green(italic('Analyzing the simple effects of the factor ',names.fat[i]))) cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[9,1],anavaF3[9,3],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fatores[,i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3[9,1], nrep = nrep, QME = anavaF3[9,3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- 
LSD(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} print(letra1) cat(green(bold("\n------------------------------------------"))) if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1, media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)], desvio=desvio) dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i]))) dadosm$limite=dadosm$media+dadosm$desvio dadosm=dadosm[as.character(unique(unlist(fatores[i]))),] if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio Tratamentos=dadosm$Tratamentos letra=dadosm$letra grafico=ggplot(dadosm,aes(x=Tratamentos, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=Tratamentos),color=1)} else{grafico=grafico+ geom_col(aes(fill=Tratamentos),fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3) grafico3=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[1]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), legend.position = "none") 
print(grafico3)} } if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat('\nAnalyzing the simple effects of the factor ',names.fat[1],'\n') cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) grafico3=polynomial(resp, fatores[,i],grau=grau[i], DFres= anavaF3[9,1],SSq = anavaF3[9,2],ylab=ylab,xlab=parse(text = xlab.factor[1]),point = point)[[1]] cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command")) } cat('\n') } } } if(anavaF3[8,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\nInteraction",paste(names.fat[1],'*',names.fat[2],'*',names.fat[3],sep='')," significant: unfolding the interaction\n"))) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", names.fat[1], ' within the combination of levels ', names.fat[2], 'and',names.fat[3]) cat(green(bold("\n------------------------------------------\n"))) m1=aov(resp~(Fator2*Fator3)/Fator1+bloco) anova(m1) pattern <- c(outer(levels(Fator2), levels(Fator3), function(x,y) paste("Fator2",x,":Fator3",y,":",sep=""))) des.tab <- sapply(pattern, simplify=FALSE, grep, x=names(coef(m1)[m1$assign==5])) des1.tab <- summary(m1, split = list("Fator2:Fator3:Fator1" = des.tab)) desd=des1.tab[[1]][-c(1,2,3,4,5),] desd=data.frame(desd[-length(rownames(desd)),]) # rownames(desd)=cbind(paste("Fator2:",rep(levels(Fator2),length(levels(Fator3))), # "Fator3:",rep(levels(Fator3),e=length(levels(Fator2))))) rownames(desd)=cbind(paste(names.fat[2],":",rep(levels(Fator2),length(levels(Fator3))), names.fat[3],":",rep(levels(Fator3),e=length(levels(Fator2))))) colnames(desd)=c("Df", "Sum Sq", "Mean Sq", "F value", "Pr(>F)") print(desd) ii<-0 for(i in 1:nv2) { for(j in 1:nv3) { ii<-ii+1 if(quali[1]==TRUE){ cat('\n\n',names.fat[1],' inside of each level of ',lf2[i],' 
of ',names.fat[2],' and ',lf3[j],' of ',names.fat[3],"\n") if(mcomp=='tukey'){tukey=TUKEY(resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], anavaF3[9,1], anavaF3[9,3], alpha.t) tukey=tukey$groups;colnames(tukey)=c("resp","letters") if(transf !=1){tukey$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],mean, na.rm=TRUE)[rownames(tukey)]} print(tukey)} if(mcomp=='duncan'){duncan=duncan(resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], anavaF3[9,1], anavaF3[9,3], alpha.t) duncan=duncan$groups;colnames(duncan)=c("resp","letters") if(transf !=1){duncan$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],mean, na.rm=TRUE)[rownames(duncan)]} print(duncan)} if(mcomp=='lsd'){lsd=LSD(resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], anavaF3[9,1], anavaF3[9,3], alpha.t) lsd=lsd$groups;colnames(lsd)=c("resp","letters") if(transf !=1){lsd$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],mean, na.rm=TRUE)[rownames(lsd)]} print(lsd)} if(mcomp=='sk'){ fat= fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]] fat1=factor(fat,unique(fat)) levels(fat1)=1:length(levels(fat1)) resp1=resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]] nrep=table(fat1)[1] medias=sort(tapply(resp1,fat1,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) sk=sk[as.character(unique(fat1)),] rownames(sk)=unique(fat) if(transf !=1){sk$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],mean, na.rm=TRUE)[rownames(sk)]} print(sk)} } if(quali[1]==FALSE){ cat('\n',names.fat[1],' within the combination of levels ',lf2[i],' of 
',names.fat[2],' and ',lf3[j],' of ',names.fat[3],"\n") polynomial(fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], grau=grau123, DFres= anavaF3[9,1],SSq = anavaF3[9,2],ylab=ylab,xlab=xlab,point = point)[[1]]} } } cat('\n\n') cat("\n------------------------------------------\n") cat("Analyzing ", names.fat[2], ' within the combination of levels ', names.fat[1], 'and',names.fat[3]) cat("\n------------------------------------------\n") m1=aov(resp~(Fator1*Fator3)/Fator2+bloco) anova(m1) pattern <- c(outer(levels(Fator1), levels(Fator3), function(x,y) paste("Fator1",x,":Fator3",y,":",sep=""))) des.tab <- sapply(pattern, simplify=FALSE, grep, x=names(coef(m1)[m1$assign==5])) des1.tab <- summary(m1, split = list("Fator1:Fator3:Fator2" = des.tab)) desd=des1.tab[[1]][-c(1,2,3,4,5),] desd=data.frame(desd[-length(rownames(desd)),]) # rownames(desd)=cbind(paste("Fator1:",rep(levels(Fator1),length(levels(Fator3))), # "Fator3:",rep(levels(Fator3),e=length(levels(Fator1))))) rownames(desd)=cbind(paste(names.fat[1],":",rep(levels(Fator1),length(levels(Fator3))), names.fat[3],":",rep(levels(Fator3),e=length(levels(Fator1))))) colnames(desd)=c("Df", "Sum Sq", "Mean Sq", "F value", "Pr(>F)") print(desd) ii<-0 for(k in 1:nv1) { for(j in 1:nv3) { ii<-ii+1 if(quali[2]==TRUE){ cat('\n\n',names.fat[2],' inside of each level of ',lf1[k],' of ',names.fat[1],' and ',lf3[j],' of ',names.fat[3],'\n') if(mcomp=='tukey'){tukey=TUKEY(resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], anavaF3[9,1], anavaF3[9,3], alpha.t) tukey=tukey$groups;colnames(tukey)=c("resp","letters") if(transf !=1){tukey$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(tukey)]} print(tukey)} if(mcomp=='duncan'){duncan=duncan(resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], 
anavaF3[9,1], anavaF3[9,3], alpha.t) duncan=duncan$groups;colnames(duncan)=c("resp","letters") if(transf !=1){duncan$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(duncan)]} print(duncan)} if(mcomp=='lsd'){lsd=LSD(resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], anavaF3[9,1], anavaF3[9,3], alpha.t) lsd=lsd$groups;colnames(lsd)=c("resp","letters") if(transf !=1){lsd$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(lsd)]} print(lsd)} if(mcomp=='sk'){ fat=fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]] fat1=factor(fat,unique(fat)) levels(fat1)=1:length(levels(fat1)) resp1=resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]] nrep=table(fat1)[1] medias=sort(tapply(resp1,fat1,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) sk=sk[as.character(unique(fat1)),] rownames(sk)=unique(fat) if(transf !=1){sk$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(sk)]} print(sk)} } if(quali[2]==FALSE){ cat('\n\n',names.fat[2],' within the combination of levels ',lf1[k],' of ',names.fat[1],' and ',lf3[j],' of ',names.fat[3],'\n') polynomial(fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], grau=grau213, DFres= anavaF3[9,1],SSq = anavaF3[9,2],ylab=ylab,xlab=xlab,point = point)[[1]]} } } cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", names.fat[3], ' within the combination of levels ', names.fat[1], 'and',names.fat[2]) cat(green(bold("\n------------------------------------------\n"))) m1=aov(resp~(Fator1*Fator2)/Fator3+bloco) anova(m1) pattern <- 
c(outer(levels(Fator1), levels(Fator2), function(x,y) paste("Fator1",x,":Fator2",y,":",sep=""))) des.tab <- sapply(pattern, simplify=FALSE, grep, x=names(coef(m1)[m1$assign==5])) des1.tab <- summary(m1, split = list("Fator1:Fator2:Fator3" = des.tab)) desd=des1.tab[[1]][-c(1,2,3,4,5),] desd=data.frame(desd[-length(rownames(desd)),]) # rownames(desd)=cbind(paste("Fator1:",rep(levels(Fator1),length(levels(Fator2))), # "Fator2:",rep(levels(Fator2),e=length(levels(Fator1))))) rownames(desd)=cbind(paste(names.fat[1],":",rep(levels(Fator1),length(levels(Fator2))), names.fat[2],":",rep(levels(Fator2),e=length(levels(Fator1))))) colnames(desd)=c("Df", "Sum Sq", "Mean Sq", "F value", "Pr(>F)") print(desd) ii<-0 for(k in 1:nv1) { for(i in 1:nv2) { ii<-ii+1 if(quali[3]==TRUE){ cat('\n\n',names.fat[3],' inside of each level of ',lf1[k],' of ',names.fat[1],' and ',lf2[i],' of ',names.fat[2],'\n') if(mcomp=='tukey'){tukey=TUKEY(resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], anavaF3[9,1], anavaF3[9,3], alpha.t) tukey=tukey$groups;colnames(tukey)=c("resp","letters") if(transf !=1){tukey$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], mean, na.rm=TRUE)[rownames(tukey)]} print(tukey)} if(mcomp=='duncan'){duncan=duncan(resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], anavaF3[9,1], anavaF3[9,3], alpha.t) duncan=duncan$groups;colnames(duncan)=c("resp","letters") if(transf !=1){duncan$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], mean, na.rm=TRUE)[rownames(duncan)]} print(duncan)} if(mcomp=='lsd'){lsd=LSD(resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], anavaF3[9,1], anavaF3[9,3], alpha.t) lsd=lsd$groups;colnames(lsd)=c("resp","letters") if(transf 
!=1){lsd$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], mean, na.rm=TRUE)[rownames(lsd)]} print(lsd)} if(mcomp=='sk'){ fat=fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]] fat1=factor(fat,unique(fat)) levels(fat1)=1:length(levels(fat1)) resp1=resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]] nrep=table(fat1)[1] medias=sort(tapply(resp1,fat1,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) colnames(sk)=c("resp","letters") sk=sk[as.character(unique(fat1)),] rownames(sk)=unique(fat) if(transf !=1){sk$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], mean, na.rm=TRUE)[rownames(sk)]} print(sk)} } if(quali[3]==FALSE){ cat('\n\n',names.fat[3],' inside of each level of ',lf1[k],' of ',names.fat[1],' and ',lf2[i],' of ',names.fat[2],'\n') polynomial(fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], grau=grau312, DFres= anavaF3[9,1],SSq = anavaF3[9,2],ylab=ylab,xlab=xlab,point = point)[[1]]} } } } if(anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f && anavaF3[7,5]>alpha.f && anavaF3[8,5]>alpha.f){ if(anavaF3[1,5]<=alpha.f | anavaF3[2,5]<=alpha.f | anavaF3[3,5]<=alpha.f){ print(residplot) graficos}else{graficos=NA}} if(anavaF3[8,5]>alpha.f && anavaF3[5,5]<=alpha.f){ graficos=list(residplot,colint1) if(anavaF3[6,5]>alpha.f && anavaF3[7,5]>alpha.f && anavaF3[3,5]<=alpha.f){ graficos=list(residplot,colint1,grafico1)} graficos} if(anavaF3[8,5]>alpha.f && anavaF3[6,5]<=alpha.f){ graficos=list(residplot,colint2) if(anavaF3[5,5]>alpha.f && anavaF3[7,5]>alpha.f && anavaF3[2,5]<=alpha.f){ graficos=list(residplot,colint2,grafico2)} graficos} if(anavaF3[8,5]>alpha.f && anavaF3[7,5]<=alpha.f){ graficos=list(residplot,colint3) if(anavaF3[5,5]>alpha.f && 
anavaF3[6,5]>alpha.f && anavaF3[1,5]<=alpha.f){ graficos=list(residplot,colint3,grafico3)}} if(anavaF3[8,5]>alpha.f && anavaF3[5,5]<=alpha.f && anavaF3[6,5]<=alpha.f){ graficos=list(residplot,colint1,colint2) graficos} if(anavaF3[8,5]>alpha.f && anavaF3[5,5]<=alpha.f && anavaF3[7,5]<=alpha.f){ graficos=list(residplot,colint1,colint3) graficos} if(anavaF3[8,5]>alpha.f && anavaF3[6,5]<=alpha.f && anavaF3[7,5]<=alpha.f){ graficos=list(residplot,colint2,colint3) graficos} if(anavaF3[8,5]<=alpha.f){graficos=list(residplot)} graficos=graficos }
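# The within-level unfoldings in the function above rely on `summary.aov`'s
# `split` argument, which partitions an interaction term's single-degree-of-freedom
# contrasts into named groups. A minimal standalone sketch of the idea follows
# (toy data and a hypothetical contrast grouping, not part of the package API);
# the `i*nv + j` index arithmetic in the function above builds the same lists
# programmatically.

```r
# Toy unfolding of factor A within each level of B via summary(aov, split = ...)
set.seed(2)
A <- factor(rep(c("a1", "a2", "a3"), times = 8))
B <- factor(rep(c("b1", "b2"), each = 12))
y <- rnorm(24) + as.numeric(A) * as.numeric(B)
m <- aov(y ~ B / A)  # A nested within B: the B:A term carries 4 df
# Group the 4 single-df contrasts of B:A by the level of B they belong to
# (the order follows the columns of the model matrix for that term)
idx <- list("A within b1" = c(1, 3), "A within b2" = c(2, 4))
summary(m, split = list("B:A" = idx))
```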
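# The "Factorial vs Aditional" line of the ANOVA produced by FAT3DBC.ad below is
# a one-degree-of-freedom contrast: all factorial combinations plus the control
# are fitted as a single treatment factor, and the factorial sums of squares are
# subtracted from the overall treatment sum of squares. A minimal sketch with
# toy data (all names hypothetical, not part of the package API):

```r
# Sketch: 1-df "factorial vs additional" sum of squares in a randomized block design
set.seed(1)
f1     <- rep(c("a", "b"), each = 8)
f2     <- rep(rep(c("x", "y"), each = 4), 2)
block  <- rep(1:4, 4)
resp   <- rnorm(16, mean = 10)
respAd <- rnorm(4, mean = 12)  # additional control, one plot per block

# All factorial combinations plus the control as one qualitative factor
trat <- factor(c(paste(f1, f2), rep("ad", 4)))
bl   <- factor(c(block, 1:4))
y    <- c(resp, respAd)
full <- anova(aov(y ~ bl + trat))

# Factorial-only ANOVA on the 2 x 2 part
fat <- anova(aov(resp ~ factor(block) + factor(f1) * factor(f2)))

# SS(factorial vs additional) = SS(treatments) - SS(F1) - SS(F2) - SS(F1:F2)
SQAd <- full["trat", "Sum Sq"] -
  sum(fat[c("factor(f1)", "factor(f2)", "factor(f1):factor(f2)"), "Sum Sq"])
FAd <- SQAd / full["Residuals", "Mean Sq"]  # tested against the joint residual
```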
#' Analysis: DBC experiments in triple factorial with additional control
#'
#' @description Analysis of an experiment conducted in a randomized block design in a triple factorial scheme with one additional control, using fixed-effects analysis of variance.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param f1 Numeric or complex vector with factor 1 levels
#' @param f2 Numeric or complex vector with factor 2 levels
#' @param f3 Numeric or complex vector with factor 3 levels
#' @param block Numeric or complex vector with blocks
#' @param response Numeric vector containing the response of the experiment.
#' @param responseAd Numeric vector containing the additional response
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott or Duncan)
#' @param quali Defines whether the factor is quantitative or qualitative (\emph{default} is qualitative)
#' @param names.fat Allows labeling factors 1, 2 and 3.
#' @param grau Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with three elements.
#' @param grau12 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with the n levels of factor 2, for the interaction f1 x f2 with qualitative factor 2 and quantitative factor 1.
#' @param grau21 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with the n levels of factor 1, for the interaction f1 x f2 with qualitative factor 1 and quantitative factor 2.
#' @param grau13 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with the n levels of factor 3, for the interaction f1 x f3 with qualitative factor 3 and quantitative factor 1.
#' @param grau31 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with the n levels of factor 1, for the interaction f1 x f3 with qualitative factor 1 and quantitative factor 3.
#' @param grau23 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with the n levels of factor 3, for the interaction f2 x f3 with qualitative factor 3 and quantitative factor 2.
#' @param grau32 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with the n levels of factor 2, for the interaction f2 x f3 with qualitative factor 2 and quantitative factor 3.
#' @param grau123 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with the n levels of factor 1, for the interaction f1 x f2 x f3 with quantitative factor 1.
#' @param grau213 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with the n levels of factor 2, for the interaction f1 x f2 x f3 with quantitative factor 2.
#' @param grau312 Polynomial degree in the case of a quantitative factor (\emph{default} is 1). Provide a vector with the n levels of factor 3, for the interaction f1 x f2 x f3 with quantitative factor 3.
#' @param xlab Treatments name (accepts the \emph{expression}() function)
#' @param ylab Variable response name (accepts the \emph{expression}() function)
#' @param xlab.factor Provide a vector with three observations referring to the x-axis names of factors 1, 2 and 3, respectively, when there is an isolated effect of the factors. This argument uses `parse`.
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param alpha.f Significance level of the F test (\emph{default} is 0.05)
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param transf Applies a data transformation (\emph{default} is 1, i.e. no transformation; use 0 for log; `angular` for the angular transformation)
#' @param constant Constant added to the data before transformation (enter a value)
#' @param sup Number of units above the standard deviation or mean bar on the graph
#' @param geom Graph type (columns or segments)
#' @param fill Defines the chart color (to generate different colors for different treatments, set fill = "trat")
#' @param angulo x-axis scale text rotation
#' @param textsize Font size
#' @param labelsize Label size
#' @param dec Number of decimal places
#' @param family Font family
#' @param ad.label Additional treatment label
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param addmean Plot the mean value on the graph (\emph{default} is TRUE)
#' @param errorbar Plot the standard deviation bar on the graph (for segment and column graphs) - \emph{default} is TRUE
#' @param point Defines whether the points show all observations ("all"), the mean ("mean"), the mean with standard deviation (\emph{default} - "mean_sd") or the mean with standard error ("mean_se") when quali = FALSE. For quali = TRUE, `mean_sd` and `mean_se` change which information is displayed in the error bar.
#' @param angle.label Label angle
#' @note The order of the chart follows the alphabetical pattern. Please use `scale_x_discrete` from the ggplot2 package (`limits` argument) to reorder the x-axis. The bars of the column and segment graphs are the standard deviation.
#' @return The analysis of variance table, the Shapiro-Wilk error normality test, the Bartlett test of homogeneity of variances, the Durbin-Watson error independence test, a multiple comparison test (Tukey, LSD, Scott-Knott or Duncan) or the fit of polynomial regression models up to degree 3 in the case of quantitative treatments. The column chart for qualitative treatments is also returned. For a significant triple interaction only, no graph is returned.
#' @note The function does not perform multiple regression in the case of two or more quantitative factors. The bars of the column and segment graphs are the standard deviation.
#' @note In the final output, when the transformation (transf argument) is different from 1, the columns resp and respo in the mean test are returned, indicating the transformed and non-transformed means, respectively.
#' @references
#'
#' Principles and Procedures of Statistics: A Biometrical Approach. Steel, Torrie and Dickey. Third Edition, 1997.
#'
#' Multiple Comparisons: Theory and Methods. Jason C. Hsu. Department of Statistics, The Ohio State University, USA, 1996. Chapman & Hall/CRC.
#'
#' Practical Nonparametric Statistics. W.J. Conover, 1999.
#'
#' Ramalho M.A.P., Ferreira D.F., Oliveira A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott A.J., Knott M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#'
#' Ferreira, E.B., Cavalcanti, P.P., and Nogueira, D.A. (2014). ExpDes: an R package for ANOVA and experimental designs. Applied Mathematics, 5(19), 2952.
#'
#' Mendiburu, F., and de Mendiburu, M.F. (2019). Package 'agricolae'. R package, version 1-2.
#'
#' @keywords DBC
#' @keywords Factorial
#' @export
#' @examples
#' library(AgroR)
#' data(enxofre)
#' respAd=c(2000,2400,2530,2100)
#' attach(enxofre)
#' with(enxofre, FAT3DBC.ad(f1, f2, f3, bloco, resp, respAd))
FAT3DBC.ad = function(f1,
                      f2,
                      f3,
                      block,
                      response,
                      responseAd,
                      norm = "sw",
                      alpha.f = 0.05,
                      alpha.t = 0.05,
                      quali = c(TRUE, TRUE, TRUE),
                      mcomp = 'tukey',
                      transf = 1,
                      constant = 0,
                      names.fat = c("F1", "F2", "F3"),
                      ylab = "Response",
                      xlab = "",
                      xlab.factor = c("F1", "F2", "F3"),
                      sup = NA,
                      grau = c(NA, NA, NA), # isolated effects and triple interaction
                      grau12 = NA,  # F1/F2
                      grau13 = NA,  # F1/F3
                      grau23 = NA,  # F2/F3
                      grau21 = NA,  # F2/F1
                      grau31 = NA,  # F3/F1
                      grau32 = NA,  # F3/F2
                      grau123 = NA,
                      grau213 = NA,
                      grau312 = NA,
                      fill = "lightblue",
                      theme = theme_classic(),
                      ad.label = "Additional",
                      angulo = 0,
                      errorbar = TRUE,
                      addmean = TRUE,
                      family = "sans",
                      dec = 3,
                      geom = "bar",
                      textsize = 12,
                      labelsize = 4,
                      point = "mean_sd",
                      angle.label = 0) {
  if(is.na(sup)==TRUE){sup=0.2*mean(response)}
  if(angle.label==0){hjust=0.5}else{hjust=0}
  requireNamespace("crayon")
  requireNamespace("ggplot2")
  requireNamespace("nortest")
  # Data transformation of the factorial responses
  if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}}
  if(transf==0){resp=log(response+constant)}
  if(transf==0.5){resp=sqrt(response+constant)}
  if(transf==-0.5){resp=1/sqrt(response+constant)}
  if(transf==-1){resp=1/(response+constant)}
  if(transf=="angular"){resp=asin(sqrt((response+constant)/100))}
  # Same transformation applied to the additional control
  if(transf==1){respAd=responseAd+constant}else{if(transf!="angular"){respAd=((responseAd+constant)^transf-1)/transf}}
  if(transf==0){respAd=log(responseAd+constant)}
  if(transf==0.5){respAd=sqrt(responseAd+constant)}
  if(transf==-0.5){respAd=1/sqrt(responseAd+constant)}
  if(transf==-1){respAd=1/(responseAd+constant)}
if(transf=="angular"){respAd=asin(sqrt((responseAd+constant)/100))} resp1=resp ordempadronizado=data.frame(f1,f2,f3,block,response,resp) organiz=data.frame(f1,f2,f3,block,response,resp) organiz=organiz[order(organiz$block),] organiz=organiz[order(organiz$f3),] organiz=organiz[order(organiz$f2),] organiz=organiz[order(organiz$f1),] f1=organiz$f1 f2=organiz$f2 f3=organiz$f3 block=organiz$block response=organiz$response resp=organiz$resp fator1=f1 fator2=f2 fator3=f3 fator1a=fator1 fator2a=fator2 fator3a=fator3 bloco=block fac.names=names.fat fatores<-data.frame(fator1,fator2,fator3) Fator1<-factor(fator1,levels=unique(fator1)); Fator2<-factor(fator2,levels=unique(fator2)); Fator3<-factor(fator3,levels=unique(fator3)) nv1<-length(summary(Fator1)); nv2<-length(summary(Fator2)); nv3<-length(summary(Fator3)) J<-(length(resp))/(nv1*nv2*nv3) lf1<-levels(Fator1); lf2<-levels(Fator2); lf3<-levels(Fator3) bloco=as.factor(bloco) # ================================= ## Anova # ================================= anava<-aov(resp~Fator1*Fator2*Fator3+bloco) anavaF3<-anova(anava) anovaF3=anavaF3 colnames(anovaF3)=c("GL","SQ","QM","Fcal","p-value") anavares<-aov(resp~as.factor(f1)* as.factor(f2)* as.factor(f3)+ as.factor(block),data = ordempadronizado) respad=anava$residuals/sqrt(anavaF3$`Mean Sq`[9]) out=respad[respad>3 | respad<(-3)] out=names(out) out=if(length(out)==0)("No discrepant point")else{out} Ids=ifelse(respad>3 | respad<(-3), "darkblue","black") residplot=ggplot(data=data.frame(respad,Ids),aes(y=respad,x=1:length(respad)))+ geom_point(shape=21,color="gray",fill="gray",size=3)+ labs(x="",y="Standardized residuals")+ geom_text(x=1:length(respad),label=1:length(respad),color=Ids,size=labelsize)+ scale_x_continuous(breaks=1:length(respad))+ theme_classic()+theme(axis.text.y = element_text(size=textsize), axis.text.x = element_blank())+ geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1) print(residplot) col1<-numeric(0) for(i in 1:c(nv1*nv2*nv3)) { 
col1<-c(col1, rep(i,J)) } col1<-c(col1,rep('ad',J)) col2<-c(bloco,rep(1:J)) col3<-c(resp,respAd) tabF3ad<-data.frame("TRAT"=col1, "BLOCO"=col2, "RESP2"=col3) TRAT<-factor(tabF3ad[,1]) BLOCO<-factor(tabF3ad[,2]) anava1<-aov(tabF3ad[,3]~ BLOCO + TRAT) anavaTr<-anova(anava1) SQAd=anavaTr[2,2]-sum(anavaF3[c(1,2,3,5,6,7,8),2]) DfAd=1 QMAd=SQAd FAd=QMAd/anavaTr[3,3] pAd=1-pf(FAd,DfAd,anavaTr[3,3]) DfE=anavaTr[3,1] SQE=anavaTr[3,2] QME=anavaTr[3,3] Fef=anavaF3$`Mean Sq`[1:8]/anavaTr[3,3] pef=1-pf(Fef,anavaF3$Df[1:8],DfE) anavaF3=rbind(anavaF3[1:8,], "Factorial vs Aditional"=c(DfAd,SQAd,QMAd,FAd,pAd), "Residuals"=c(DfE,SQE,QME,NA,NA)) anavaF3[1:8,4]=Fef anavaF3[1:8,5]=pef anavaF3[4,]=anavaTr[1,] anovaF3=anavaF3 # ================================= ## Saída inicial # ================================= #Teste de normalidade norm1<-shapiro.test(anava$residuals) cat(green(bold("\n------------------------------------------"))) cat(green(bold("\nNormality of errors"))) cat(green(bold("\n------------------------------------------"))) print(norm1) message(if(norm1$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors do not follow a normal distribution"}) # Teste de homogeneidade das variâncias homog1=bartlett.test(anava$residuals~paste(Fator1,Fator2,Fator3)) cat(green(bold("\n------------------------------------------"))) cat(green(bold("\nHomogeneity of Variances"))) cat(green(bold("\n------------------------------------------"))) print(homog1) message(if(homog1$p.value[1]>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, the variances are not homogeneous"}) # Independencia dos erros indep=dwtest(anava) cat(green(bold("\n------------------------------------------"))) cat(green(bold("\nIndependence from errors"))) cat(green(bold("\n------------------------------------------"))) print(indep) message(if(indep$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors are not independent"}) cat("\n") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Additional Information"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(paste("\nCV (%) = ",round(sqrt(anavaF3$`Mean Sq`[10])/mean(c(resp,respAd),na.rm=TRUE)*100,2))) cat(paste("\nMean Factorial = ",round(mean(response,na.rm=TRUE),4))) cat(paste("\nMedian Factorial = ",round(median(response,na.rm=TRUE),4))) cat(paste("\nMean Aditional = ",round(mean(responseAd,na.rm=TRUE),4))) cat(paste("\nMedian Aditional = ",round(median(responseAd,na.rm=TRUE),4))) cat("\nPossible outliers = ", out) cat("\n") cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Analysis of Variance"))) cat(green(bold("\n------------------------------------------\n"))) anava1=as.matrix(data.frame(anovaF3)) colnames(anava1)=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)" ) rownames(anava1)=c(names.fat[1], names.fat[2], names.fat[3], "Block", paste(names.fat[1],"x",names.fat[2]), paste(names.fat[1],"x",names.fat[3]), paste(names.fat[2],"x",names.fat[3]), paste(names.fat[1],"x",names.fat[2],"x",names.fat[3]), "Factorial vs Aditional", "Residuals") print(anava1,na.print = "") cat("\n") if(transf==1 && norm1$p.value<0.05 | transf==1 && indep$p.value<0.05 | transf==1 &&homog1$p.value<0.05){ message("\n Your analysis is not 
valid, suggests using a non-parametric test and try to transform the data\n")}else{} if(transf != 1 && norm1$p.value<0.05 | transf!=1 && indep$p.value<0.05 | transf!=1 && homog1$p.value<0.05){ message("\n Your analysis is not valid, suggests using the function FATDIC.art\n")}else{} message(if(transf !=1){blue("\nNOTE: resp = transformed means; respO = averages without transforming\n")}) if(anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f && anavaF3[7,5]>alpha.f && anavaF3[8,5]>alpha.f) { graficos=list(1,2,3) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold('Non-significant interaction: analyzing the simple effects'))) cat(green(bold("\n------------------------------------------\n"))) fatores<-data.frame('fator 1'=fator1,'fator 2' = fator2,'fator 3' = fator3) for(i in 1:3){ if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) cat(green(bold("\n------------------------------------------\n"))) if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i], anavaF3[10,1],anavaF3[10,3],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fatores[,i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3[10,1], nrep = nrep, QME = anavaF3[10,3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(resp,fatores[,i], anavaF3[10,1],anavaF3[10,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- LSD(resp,fatores[,i], 
anavaF3[10,1],anavaF3[10,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(italic(paste("Multiple Comparison Test:",teste,"\n")))) print(letra1) cat(green(bold("\n------------------------------------------\n"))) #===================================================== if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1, media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)], desvio=desvio) dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i]))) dadosm$limite=dadosm$media+dadosm$desvio dadosm=dadosm[as.character(unique(unlist(fatores[i]))),] if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio Tratamentos=dadosm$Tratamentos letra=dadosm$letra if(geom=="bar"){grafico=ggplot(dadosm, aes(x=Tratamentos, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=Tratamentos), color=1)}else{grafico=grafico+ geom_col(aes(fill=Tratamentos), fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+ if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3)} grafico=grafico+theme+ 
ylab(ylab)+ xlab(parse(text = xlab.factor[i]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family))} if(geom=="point"){grafico=ggplot(dadosm, aes(x=Tratamentos, y=media)) if(fill=="trat"){grafico=grafico+ geom_point(aes(color=Tratamentos))} else{grafico=grafico+ geom_point(aes(color=Tratamentos),color=fill,size=4)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3)} grafico=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[i]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family))+ geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="")} grafico=grafico+ geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="") print(grafico) } if(anavaF3[i,5]>alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) cat(green(bold("\n------------------------------------------\n"))) mean.table<-mean_stat(response,fatores[,i],mean) colnames(mean.table)<-c('Niveis','Medias') print(mean.table) grafico=NA} if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) 
cat(green(bold("\n------------------------------------------\n"))) dose=as.numeric(as.vector(unlist(fatores[,i]))) grafico=polynomial(dose,resp,grau = grau[i], DFres= anavaF3[10,1],SSq = anavaF3[10,2],ylab = ylab,xlab = xlab,point = point)[[1]] cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command")) cat(green(bold("\n------------------------------------------")))} graficos[[1]]=residplot graficos[[i+1]]=grafico } } if(anavaF3[8,5]>alpha.f && anavaF3[5,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Interaction",paste(fac.names[1],'*',fac.names[2],sep='')," significant: unfolding the interaction"))) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[1], ' within the combination of levels ', fac.names[2]) cat(green(bold("\n------------------------------------------\n"))) des<-aov(resp~Fator2/Fator1+Fator3+Fator2+Fator2:Fator3+Fator1:Fator2:Fator3+bloco) l<-vector('list',nv2) names(l)<-names(summary(Fator2)) v<-numeric(0) for(j in 1:nv2) { for(i in 0:(nv1-2)) v<-cbind(v,i*nv2+j) l[[j]]<-v v<-numeric(0)} des1<-summary(des,split=list('Fator2:Fator1'=l))[[1]] des1[nrow(des1),]=c(DfE,SQE,QME,NA,NA) des1$`F value`=c(des1$`Mean Sq`[1:nrow(des1)-1]/des1$`Mean Sq`[nrow(des1)],NA) des1$`Pr(>F)`=c(1-pf(des1$`F value`[1:nrow(des1)-1], des1$Df[1:nrow(des1)-1], des1$Df[nrow(des1)]),NA) des1a=des1[-c(1,2,3,4,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv2) { rn <- c(rn, paste(paste(names.fat[1], ":", names.fat[2], sep = ""), lf2[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] 
tukey=TUKEY(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico[[i]]=tukey$groups[levels(trati),2] ordem[[i]]=rownames(tukey$groups[levels(trati),]) } letra=unlist(tukeygrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico[[i]]=duncan$groups[levels(trati),2] ordem[[i]]=rownames(duncan$groups[levels(trati),]) } letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico[[i]]=lsd$groups[levels(trati),2] ordem[[i]]=rownames(lsd$groups[levels(trati),]) } letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = 
anavaF3$Df[10], nrep = nrep, QME = anavaF3$`Mean Sq`[10], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra}} cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[2], " inside of the level of ",fac.names[1]) cat(green(bold("\n------------------------------------------\n"))) des<-aov(resp~Fator1/Fator2+Fator3+Fator1+Fator1:Fator3+Fator1:Fator2:Fator3+bloco) l<-vector('list',nv1) names(l)<-names(summary(Fator1)) v<-numeric(0) for(j in 1:nv1) { for(i in 0:(nv2-2)) v<-cbind(v,i*nv1+j) l[[j]]<-v v<-numeric(0)} des1<-summary(des,split=list('Fator1:Fator2'=l))[[1]] des1[nrow(des1),]=c(DfE,SQE,QME,NA,NA) des1$`F value`=c(des1$`Mean Sq`[1:nrow(des1)-1]/des1$`Mean Sq`[nrow(des1)],NA) des1$`Pr(>F)`=c(1-pf(des1$`F value`[1:nrow(des1)-1], des1$Df[1:nrow(des1)-1], des1$Df[nrow(des1)]),NA) des1a=des1[-c(1,2,3,4,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv1) { rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[1], sep = ""), lf1[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] tukey=TUKEY(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico1[[i]]=tukey$groups[levels(trati),2] } letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if (mcomp == "duncan"){ duncangrafico1=c() for (i in 1:nv1) { 
trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] duncan=duncan(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico1[[i]]=duncan$groups[levels(trati),2] } letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if (mcomp == "lsd"){ lsdgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] lsd=LSD(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico1[[i]]=lsd$groups[levels(trati),2] } letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[10], nrep = nrep, QME = anavaF3$`Mean Sq`[10], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) letra1=toupper(letra1)}} if(quali[1]==TRUE & quali[2]==TRUE){ f1=rep(levels(Fator1),e=length(levels(Fator2))) f2=rep(unique(as.character(Fator2)),length(levels(Fator1))) f1=factor(f1,levels = unique(f1)) f2=factor(f2,levels = unique(f2)) media=tapply(resp,paste(Fator1,Fator2), mean, na.rm=TRUE)[unique(paste(f1,f2))] if(point=="mean_sd"){desvio=tapply(response,paste(Fator1,Fator2), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,paste(Fator1,Fator2), sd, na.rm=TRUE)/ sqrt(tapply(response,paste(Fator1,Fator2), length))} desvio=desvio[unique(paste(f1,f2))] graph=data.frame(f1=f1, f2=f2, media, desvio, letra,letra1, 
numero=format(media,digits = dec)) numero=paste(graph$numero,graph$letra,graph$letra1,sep="") graph$numero=numero colint=ggplot(graph, aes(x=f2, y=media, fill=f1))+ ylab(ylab)+xlab(xlab)+ theme+ labs(fill=fac.names[1])+ geom_col(position = "dodge",color="black")+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=0.3,position = position_dodge(width=0.9))+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)+ theme(text=element_text(size=textsize,family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family))+ geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=T)),lty=2)+ scale_color_manual(values = "black")+labs(color="") colint1=colint print(colint) letras=paste(graph$letra,graph$letra1,sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1))))) rownames(matriz)=levels(Fator1) colnames(matriz)=levels(Fator2) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n------------------------------------------\n"))) print(matriz) cat("\n\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")") } if(quali[1]==FALSE | quali[2]==FALSE){ if(quali[1]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] tukey=TUKEY(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 
1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] duncan=duncan(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] lsd=LSD(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[10], nrep = nrep, QME = anavaF3$`Mean Sq`[10], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(sk)}}} if(quali[1]==FALSE){ Fator1a=fator1a colint1=polynomial2(Fator1a, response, Fator2, grau = grau12, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[10,1],SSq = anavaF3[10,2])} if(quali[2]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) cat("\n----------------------\n") cat("Multiple comparison of F1 within 
level",lf2[i],"of F2") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[10], nrep = nrep, QME = anavaF3$`Mean Sq`[10], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(sk)}} } if(quali[2]==FALSE){ Fator2a=fator2a colint1=polynomial2(Fator2a, response, Fator1, grau = grau21, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[10,1],SSq = anavaF3[10,2])} cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial2\" command\n"))} #Checar o Fator3 if(anavaF3[6,5]>alpha.f && anavaF3[7,5]>alpha.f) { i<-3 { #Para os fatores QUALITATIVOS, teste 
de Tukey if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(green(italic('Analyzing the simple effects of the factor ',fac.names[i]))) cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[10,1],anavaF3[10,3],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fatores[,i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3[10,1], nrep = nrep, QME = anavaF3[10,3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(resp,fatores[,i],anavaF3[10,1],anavaF3[10,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- LSD(resp,fatores[,i],anavaF3[10,1],anavaF3[10,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} print(letra1) cat(green(bold("\n------------------------------------------\n"))) if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1, media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)], desvio=desvio) dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i]))) 
dadosm$limite=dadosm$media+dadosm$desvio dadosm=dadosm[as.character(unique(unlist(fatores[i]))),] if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio Tratamentos=dadosm$Tratamentos letra=dadosm$letra grafico=ggplot(dadosm, aes(x=Tratamentos, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=Tratamentos),color=1)} else{grafico=grafico+ geom_col(aes(fill=Tratamentos),fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3)} grafico1=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[3]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family))+ geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=T)),lty=2)+ scale_color_manual(values = "black")+labs(color="") print(grafico1)} } #Para os fatores QUANTITATIVOS, regressao if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat('Analyzing the simple effects of the factor ',fac.names[3]) cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) grafico1=polynomial(resp, fatores[,i],grau=grau[i], DFres= anavaF3[10,1],SSq = anavaF3[10,2],ylab = ylab,xlab = parse(text = xlab.factor[3]),point = point)[[1]] cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command"))} } } 
##################################################################################################### #Interacao Fator1*Fator3 + fator2 ##################################################################################################### if(anavaF3[8,5]>alpha.f && anavaF3[6,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Interaction",paste(fac.names[1],'*',fac.names[3],sep='')," significant: unfolding the interaction"))) cat(green(bold("\n------------------------------------------"))) #Desdobramento de FATOR 1 dentro do niveis de FATOR 3 cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[1], ' within the combination of levels ', fac.names[3]) cat(green(bold("\n------------------------------------------\n"))) des<-aov(resp~Fator3/Fator1+Fator2+Fator3+Fator2:Fator3+Fator1:Fator2:Fator3+bloco) l<-vector('list',nv3) names(l)<-names(summary(Fator3)) v<-numeric(0) for(j in 1:nv3) { for(i in 0:(nv1-2)) v<-cbind(v,i*nv3+j) l[[j]]<-v v<-numeric(0) } des1<-summary(des,split=list('Fator3:Fator1'=l))[[1]] des1[nrow(des1),]=c(DfE,SQE,QME,NA,NA) des1$`F value`=c(des1$`Mean Sq`[1:nrow(des1)-1]/des1$`Mean Sq`[nrow(des1)],NA) des1$`Pr(>F)`=c(1-pf(des1$`F value`[1:nrow(des1)-1], des1$Df[1:nrow(des1)-1], des1$Df[nrow(des1)]),NA) des1a=des1[-c(1,2,3,4,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv3) { rn <- c(rn, paste(paste(names.fat[1], ":", names.fat[3], sep = ""), lf3[j])) } rownames(des1a)=rn #============================ print(des1a) # Teste de Tukey if(quali[1]==TRUE & quali[3]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] tukey=TUKEY(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) tukeygrafico[[i]]=tukey$groups[levels(trati),2] 
ordem[[i]]=rownames(tukey$groups[levels(trati),]) } letra=unlist(tukeygrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] duncan=duncan(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) duncangrafico[[i]]=duncan$groups[levels(trati),2] ordem[[i]]=rownames(duncan$groups[levels(trati),]) } letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] lsd=LSD(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) lsdgrafico[[i]]=lsd$groups[levels(trati),2] ordem[[i]]=rownames(lsd$groups[levels(trati),]) } letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[10], nrep = nrep, QME = anavaF3$`Mean Sq`[10], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra}} # Desdobramento de F3 dentro de F1 
cat(green(bold("\n------------------------------------------\n")))
cat("Analyzing ", fac.names[3], " inside of the level of ",fac.names[1])
cat(green(bold("\n------------------------------------------\n")))
des<-aov(resp~Fator1/Fator3+Fator1+Fator2+Fator2:Fator1+Fator1:Fator2:Fator3+bloco)
l<-vector('list',nv1)
names(l)<-names(summary(Fator1))
v<-numeric(0)
for(j in 1:nv1) {
  for(i in 0:(nv3-2)) v<-cbind(v,i*nv1+j)
  l[[j]]<-v
  v<-numeric(0) }
des1<-summary(des,split=list('Fator1:Fator3'=l))[[1]]
des1[nrow(des1),]=c(DfE,SQE,QME,NA,NA)
des1$`F value`=c(des1$`Mean Sq`[1:nrow(des1)-1]/des1$`Mean Sq`[nrow(des1)],NA)
des1$`Pr(>F)`=c(1-pf(des1$`F value`[1:nrow(des1)-1],
                     des1$Df[1:nrow(des1)-1],
                     des1$Df[nrow(des1)]),NA)
des1a=des1[-c(1,2,3,4,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),]
#============================
rn<-numeric(0)
for (j in 1:nv1) { rn <- c(rn, paste(paste(names.fat[3], ":", names.fat[1], sep = ""), lf1[j])) }
rownames(des1a)=rn
#============================
print(des1a)
#-------------------------------------
# Multiple comparison test (F3 within each level of F1)
#-------------------------------------
if(quali[1]==TRUE & quali[3]==TRUE){
if (mcomp == "tukey"){
  tukeygrafico1=c()
  for (i in 1:nv1) {
    trati=fatores[, 3][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    tukey=TUKEY(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t)
    tukeygrafico1[[i]]=tukey$groups[levels(trati),2] }
  letra1=unlist(tukeygrafico1)
  letra1=toupper(letra1)}
if (mcomp == "duncan"){
  duncangrafico1=c()
  for (i in 1:nv1) {
    trati=fatores[, 3][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    duncan=duncan(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t)
    duncangrafico1[[i]]=duncan$groups[levels(trati),2] }
  letra1=unlist(duncangrafico1)
  letra1=toupper(letra1)}
if (mcomp == "lsd"){
  lsdgrafico=c()
  for (i in 1:nv1) {
    trati=fatores[, 3][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    lsd=LSD(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t)
    lsdgrafico[[i]]=lsd$groups[levels(trati),2] }
  letra1=unlist(lsdgrafico)
  letra1=toupper(letra1)}
if (mcomp == "sk"){
  skgrafico1=c()
  for (i in 1:nv1) {
    trati=fatores[, 3][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    nrep=table(trati)[1]
    medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
    sk=scottknott(means = medias, df1 = anavaF3$Df[10], nrep = nrep,
                  QME = anavaF3$`Mean Sq`[10], alpha = alpha.t)
    sk=data.frame(respi=medias,groups=sk)
    skgrafico1[[i]]=sk[levels(trati),2] }
  letra1=unlist(skgrafico1)
  letra1=toupper(letra1)}}
# -----------------------------
# Column chart
#------------------------------
if(quali[1]==TRUE & quali[3]==TRUE){
f1=rep(levels(Fator1),e=length(levels(Fator3)))
f3=rep(unique(as.character(Fator3)),length(levels(Fator1)))
f1=factor(f1,levels = unique(f1))
f3=factor(f3,levels = unique(f3))
media=tapply(response,paste(Fator1,Fator3), mean, na.rm=TRUE)[unique(paste(f1,f3))]
if(point=="mean_sd"){desvio=tapply(response,paste(Fator1,Fator3), sd, na.rm=TRUE)}
if(point=="mean_se"){desvio=tapply(response,paste(Fator1,Fator3), sd, na.rm=TRUE)/
  sqrt(tapply(response,paste(Fator1,Fator3), length))}
desvio=desvio[unique(paste(f1,f3))]
graph=data.frame(f1=f1,
                 f3=f3,
                 media,
                 desvio,
                 letra,
                 letra1,
                 numero=format(media,digits = dec))
numero=paste(graph$numero,graph$letra,graph$letra1,sep="")
graph$numero=numero
colint=ggplot(graph, aes(x=f3, y=media, fill=f1))+
  geom_col(position = "dodge",color="black")+
  ylab(ylab)+
  xlab(xlab)+
  theme+
  labs(fill=fac.names[1])+
  geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio),
                width=0.3, position = position_dodge(width=0.9))+
  geom_text(aes(y=media+desvio+sup, label=numero),
            position = position_dodge(width=0.9),angle=angle.label,
            hjust=hjust,size=labelsize)+
  theme(text=element_text(size=textsize,family=family),
        axis.text = element_text(size=textsize,color="black",family=family),
        axis.title = element_text(size=textsize,color="black",family=family))+
  geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=T)),lty=2)+
  scale_color_manual(values = "black")+labs(color="")
colint2=colint
print(colint)
letras=paste(graph$letra,graph$letra1,sep="")
matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1)))))
rownames(matriz)=levels(Fator1)
colnames(matriz)=levels(Fator3)
cat(green(bold("\n------------------------------------------\n")))
cat(green(bold("Final table")))
cat(green(bold("\n------------------------------------------\n")))
print(matriz)
cat("\n\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")")
}
if(quali[1]==FALSE | quali[3]==FALSE){
if(quali[1]==FALSE){
if (mcomp == "tukey"){
  for (i in 1:nv1) {
    trati=fatores[, 3][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    tukey=TUKEY(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t)
    if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F3 within level",lf1[i],"of F1")
    cat("\n----------------------\n")
    print(tukey$groups)}}
if (mcomp == "duncan"){
  for (i in 1:nv1) {
    trati=fatores[, 3][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    duncan=duncan(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t)
    if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F3 within level",lf1[i],"of F1")
    cat("\n----------------------\n")
    print(duncan$groups)}}
if (mcomp == "lsd"){
  for (i in 1:nv1) {
    trati=fatores[, 3][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    lsd=LSD(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t)
    if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F3 within level",lf1[i],"of F1")
    cat("\n----------------------\n")
    print(lsd$groups)}}
if (mcomp == "sk"){
  for (i in 1:nv1) {
    trati=fatores[, 3][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    nrep=table(trati)[1]
    medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
    sk=scottknott(means = medias, df1 = anavaF3$Df[10], nrep = nrep,
                  QME = anavaF3$`Mean Sq`[10], alpha = alpha.t)
    sk=data.frame(respi=medias,groups=sk)
    if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F3 within level",lf1[i],"of F1")
    cat("\n----------------------\n")
    print(sk)}}}
if(quali[1]==FALSE){
  Fator1a=fator1a
  colint2=polynomial2(Fator1a, response, Fator3, grau = grau13,
                      ylab=ylab, xlab=xlab, theme=theme,
                      DFres= anavaF3[10,1],SSq = anavaF3[10,2])}
if(quali[3]==FALSE){
if (mcomp == "tukey"){
  for (i in 1:nv3) {
    trati=fatores[, 1][Fator3 == lf3[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator3 == lf3[i]]
    tukey=TUKEY(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t)
    if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F1 within level",lf3[i],"of F3")
    cat("\n----------------------\n")
    print(tukey$groups)}}
if (mcomp == "duncan"){
  for (i in 1:nv3) {
    trati=fatores[, 1][Fator3 == lf3[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator3 == lf3[i]]
    duncan=duncan(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t)
if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf3[i],"of F3") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] lsd=LSD(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf3[i],"of F3") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[10], nrep = nrep, QME = anavaF3$`Mean Sq`[10], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf3[i],"of F3") cat("\n----------------------\n") print(sk)}}} if(quali[3]==FALSE){ Fator3a=fator3a colint2=polynomial2(Fator3a, response, Fator1, grau = grau31, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[10,1],SSq = anavaF3[10,2])} cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial2\" command\n")) } if(anavaF3[5,5]>alpha.f && anavaF3[7,5]>alpha.f) { i<-2 { #Para os fatores QUALITATIVOS, teste de Tukey if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(green(italic('Analyzing the simple effects of the factor ',fac.names[i]))) cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) 
if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[10,1],anavaF3[10,3],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fatores[,i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3[10,1], nrep = nrep, QME = anavaF3[10,3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(resp,fatores[,i],anavaF3[10,1],anavaF3[10,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- LSD(resp,fatores[,i],anavaF3[10,1],anavaF3[10,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} print(letra1) cat(green(bold("\n------------------------------------------"))) if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1, media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)], desvio=desvio) dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i]))) dadosm$limite=dadosm$media+dadosm$desvio dadosm=dadosm[as.character(unique(unlist(fatores[i]))),] if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio Tratamentos=dadosm$Tratamentos letra=dadosm$letra 
grafico=ggplot(dadosm, aes(x=Tratamentos, y=media))
if(fill=="trat"){grafico=grafico+
  geom_col(aes(fill=Tratamentos),color=1)} else{grafico=grafico+
  geom_col(aes(fill=Tratamentos),fill=fill,color=1)}
if(errorbar==TRUE){grafico=grafico+
  geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},label=letra),family=family,angle=angle.label,
            hjust=hjust,size=labelsize)}
if(errorbar==FALSE){grafico=grafico+
  geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label,
            hjust=hjust,size=labelsize)}
if(errorbar==TRUE){grafico=grafico+
  geom_errorbar(data=dadosm,aes(ymin=media-desvio, ymax=media+desvio,color=1),
                color="black",width=0.3)}
grafico2=grafico+theme+
  ylab(ylab)+
  xlab(parse(text = xlab.factor[2]))+
  theme(text = element_text(size=textsize,color="black", family = family),
        axis.text = element_text(size=textsize,color="black", family = family),
        axis.title = element_text(size=textsize,color="black", family = family))+
  geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+
  scale_color_manual(values = "black")+labs(color="")
print(grafico2)}
#For QUANTITATIVE factors, regression
if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){
  cat(green(bold("\n------------------------------------------\n")))
  cat('Analyzing the simple effects of the factor ',fac.names[2])
  cat(green(bold("\n------------------------------------------\n")))
  cat(fac.names[i])
  grafico2=polynomial(resp, fatores[,i],grau=grau[i],
                      DFres= anavaF3[10,1],SSq = anavaF3[10,2],ylab = ylab,xlab = parse(text = xlab.factor[2]),point = point)[[1]]
  cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command"))
}
cat('\n')
}
}
}
if(anavaF3[8,5]>alpha.f && anavaF3[7,5]<=alpha.f){
cat(green(bold("\n------------------------------------------\n")))
cat(green(bold("Interaction",paste(fac.names[2],'*',fac.names[3],sep='')," significant: unfolding the interaction")))
cat(green(bold("\n------------------------------------------\n")))
cat(green(bold("\n------------------------------------------\n")))
cat("Analyzing ", fac.names[2], ' within the combination of levels ', fac.names[3])
cat(green(bold("\n------------------------------------------\n")))
des<-aov(resp~Fator3/Fator2+Fator1+Fator3+Fator1:Fator3+Fator1:Fator2:Fator3+bloco)
l<-vector('list',nv3)
names(l)<-names(summary(Fator3))
v<-numeric(0)
for(j in 1:nv3) {
  for(i in 0:(nv2-2)) v<-cbind(v,i*nv3+j)
  l[[j]]<-v
  v<-numeric(0) }
des1<-summary(des,split=list('Fator3:Fator2'=l))[[1]]
des1[nrow(des1),]=c(DfE,SQE,QME,NA,NA)
des1$`F value`=c(des1$`Mean Sq`[1:nrow(des1)-1]/des1$`Mean Sq`[nrow(des1)],NA)
des1$`Pr(>F)`=c(1-pf(des1$`F value`[1:nrow(des1)-1],
                     des1$Df[1:nrow(des1)-1],
                     des1$Df[nrow(des1)]),NA)
des1a=des1[-c(1,2,3,4,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),]
#============================
rn<-numeric(0)
for (j in 1:nv3) { rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[3], sep = ""), lf3[j])) }
rownames(des1a)=rn
#============================
print(des1a)
# Multiple comparison test
if(quali[2]==TRUE & quali[3]==TRUE){
if (mcomp == "tukey"){
  tukeygrafico=c()
  ordem=c()
  for (i in 1:nv3) {
    trati=fatores[, 2][Fator3 == lf3[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator3 == lf3[i]]
    tukey=TUKEY(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t)
    tukeygrafico[[i]]=tukey$groups[levels(trati),2]
    ordem[[i]]=rownames(tukey$groups[levels(trati),]) }
  letra=unlist(tukeygrafico)
  datag=data.frame(letra,ordem=unlist(ordem))
  datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
  datag=datag[order(datag$ordem),]
  letra=datag$letra}
if (mcomp == "duncan"){
  duncangrafico=c()
  ordem=c()
  for (i in 1:nv3) {
    trati=fatores[, 2][Fator3 == lf3[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator3 == lf3[i]]
    duncan=duncan(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t)
    duncangrafico[[i]]=duncan$groups[levels(trati),2]
    ordem[[i]]=rownames(duncan$groups[levels(trati),]) }
letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] lsd=LSD(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) lsdgrafico[[i]]=lsd$groups[levels(trati),2] ordem[[i]]=rownames(lsd$groups[levels(trati),]) } letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[10], nrep = nrep, QME = anavaF3$`Mean Sq`[10], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra}} cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[3], " inside of the level of ",fac.names[2]) cat(green(bold("\n------------------------------------------\n"))) cat("\n") des<-aov(resp~Fator2/Fator3+Fator1+Fator2+Fator1:Fator2+Fator1:Fator2:Fator3+bloco) l<-vector('list',nv2) names(l)<-names(summary(Fator2)) v<-numeric(0) for(j in 1:nv2) { for(i in 0:(nv3-2)) v<-cbind(v,i*nv2+j) l[[j]]<-v v<-numeric(0) } des1<-summary(des,split=list('Fator2:Fator3'=l))[[1]] des1[nrow(des1),]=c(DfE,SQE,QME,NA,NA) des1$`F value`=c(des1$`Mean Sq`[1:nrow(des1)-1]/des1$`Mean Sq`[nrow(des1)],NA) 
des1$`Pr(>F)`=c(1-pf(des1$`F value`[1:nrow(des1)-1], des1$Df[1:nrow(des1)-1], des1$Df[nrow(des1)]),NA) des1a=des1[-c(1,2,3,4,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv2) { rn <- c(rn, paste(paste(names.fat[3], ":", names.fat[2], sep = ""), lf2[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[2]==TRUE & quali[3]==TRUE){ if (mcomp == "tukey"){ tukeygrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) tukeygrafico1[[i]]=tukey$groups[levels(trati),2] } letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if (mcomp == "duncan"){ duncangrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) duncangrafico1[[i]]=duncan$groups[levels(trati),2] } letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if (mcomp == "lsd"){ lsdgrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) lsdgrafico1[[i]]=lsd$groups[levels(trati),2] } letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[10], nrep = nrep, QME = anavaF3$`Mean Sq`[10], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) letra1=toupper(letra1)}} if(quali[2]==TRUE & quali[3]==TRUE){ 
f2=rep(levels(Fator2),e=length(levels(Fator3))) f3=rep(unique(as.character(Fator3)),length(levels(Fator2))) f2=factor(f2,levels = unique(f2)) f3=factor(f3,levels = unique(f3)) media=tapply(response,paste(Fator2,Fator3), mean, na.rm=TRUE)[unique(paste(f2,f3))] if(point=="mean_sd"){desvio=tapply(response,paste(Fator2,Fator3), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,paste(Fator2,Fator3), sd, na.rm=TRUE)/ sqrt(tapply(response,paste(Fator2,Fator3), length))} desvio=desvio[unique(paste(f2,f3))] graph=data.frame(f2=f2, f3=f3, media, desvio, letra,letra1, numero=format(media,digits = dec)) numero=paste(graph$numero,graph$letra,graph$letra1,sep="") graph$numero=numero colint=ggplot(graph, aes(x=f3, y=media, fill=f2))+ geom_col(position = "dodge",color="black")+ ylab(ylab)+xlab(xlab)+ theme+ labs(fill=fac.names[2])+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=0.3,position = position_dodge(width=0.9))+ geom_text(aes(y=media+desvio+sup, label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)+ theme(text=element_text(size=textsize,family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family))+ geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=T)),lty=2)+ scale_color_manual(values = "black")+labs(color="") colint3=colint print(colint) letras=paste(graph$letra,graph$letra1,sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras), ncol = length(levels(Fator2))))) rownames(matriz)=levels(Fator2) colnames(matriz)=levels(Fator3) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n------------------------------------------\n"))) print(matriz) cat("\n\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")") } 
if(quali[2]==FALSE | quali[3]==FALSE){ if(quali[2]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf2[i],"of F2") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf2[i],"of F2") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf2[i],"of F2") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[10], nrep = nrep, QME = anavaF3$`Mean Sq`[10], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within 
level",lf2[i],"of F2") cat("\n----------------------\n") print(sk)}}} if(quali[2]==FALSE){ Fator2a=fator2a colint3=polynomial2(Fator2a, response, Fator3, grau = grau23, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[10,1],SSq = anavaF3[10,2])} if(quali[3]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] tukey=TUKEY(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf3[i],"of F3") cat("\n----------------------\n") print(tukey)}} if (mcomp == "duncan"){ for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] duncan=duncan(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf3[i],"of F3") cat("\n----------------------\n") print(duncan)}} if (mcomp == "lsd"){ for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] lsd=LSD(respi,trati,anavaF3$Df[10],anavaF3$`Mean Sq`[10],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf3[i],"of F3") cat("\n----------------------\n") print(lsd)}} if (mcomp == "sk"){ for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[10], nrep = nrep, QME = anavaF3$`Mean Sq`[10], alpha = alpha.t) 
sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf3[i],"of F3") cat("\n----------------------\n") print(sk)}}} if(quali[3]==FALSE){ Fator3a=fator3a colint3=polynomial2(Fator3a, response, Fator2, grau = grau32, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[10,1],SSq = anavaF3[10,2])} cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial2\" command\n")) } if(anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f) { i<-1 { #Para os fatores QUALITATIVOS, teste de Tukey if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(green(italic('Analyzing the simple effects of the factor ',fac.names[i]))) cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[10,1],anavaF3[10,3],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fatores[,i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3[10,1], nrep = nrep, QME = anavaF3[10,3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(resp,fatores[,i],anavaF3[10,1],anavaF3[10,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- LSD(resp,fatores[,i],anavaF3[10,1],anavaF3[10,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf 
!=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} print(letra1) cat(green(bold("\n------------------------------------------"))) if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1, media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)], desvio=desvio) dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i]))) dadosm$limite=dadosm$media+dadosm$desvio dadosm=dadosm[as.character(unique(unlist(fatores[i]))),] if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio Tratamentos=dadosm$Tratamentos letra=dadosm$letra grafico=ggplot(dadosm,aes(x=Tratamentos, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=Tratamentos),color=1)} else{grafico=grafico+ geom_col(aes(fill=Tratamentos),fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3) grafico3=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[1]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family))+ geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = 
"black")+labs(color="") print(grafico3)} } if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat('\nAnalyzing the simple effects of the factor ',fac.names[1],'\n') cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) grafico3=polynomial(resp, fatores[,i],grau=grau[i], DFres= anavaF3[10,1],SSq = anavaF3[10,2],ylab = ylab,xlab = parse(text = xlab.factor[1]),point = point)[[1]] cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command")) } cat('\n') } } } if(anavaF3[8,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\nInteraction",paste(fac.names[1],'*',fac.names[2],'*',fac.names[3],sep='')," significant: unfolding the interaction\n"))) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[1], ' within the combination of levels ', fac.names[2], 'and',fac.names[3]) cat(green(bold("\n------------------------------------------\n"))) m1=aov(resp~(Fator2*Fator3)/Fator1+bloco) anova(m1) pattern <- c(outer(levels(Fator2), levels(Fator3), function(x,y) paste("Fator2",x,":Fator3",y,":",sep=""))) des.tab <- sapply(pattern, simplify=FALSE, grep, x=names(coef(m1)[m1$assign==5])) des1.tab <- summary(m1, split = list("Fator2:Fator3:Fator1" = des.tab)) des1.tab[[1]][nrow(des1.tab[[1]]),]=c(DfE,SQE,QME,NA,NA) des1.tab[[1]]$`F value`=c(des1.tab[[1]]$`Mean Sq`[1:nrow(des1.tab[[1]])-1]/ des1.tab[[1]]$`Mean Sq`[nrow(des1.tab[[1]])],NA) des1.tab[[1]]$`Pr(>F)`=c(1-pf(des1.tab[[1]]$`F value`[1:nrow(des1.tab[[1]])-1], des1.tab[[1]]$Df[1:nrow(des1.tab[[1]])-1], des1.tab[[1]]$Df[nrow(des1.tab[[1]])]),NA) desd=des1.tab[[1]][-c(1,2,3,4,5),] desd=data.frame(desd[-length(rownames(desd)),]) # rownames(desd)=cbind(paste("Fator2:",rep(levels(Fator2),length(levels(Fator3))), # 
"Fator3:",rep(levels(Fator3),e=length(levels(Fator2))))) rownames(desd)=cbind(paste(names.fat[2],":",rep(levels(Fator2),length(levels(Fator3))), names.fat[3],":",rep(levels(Fator3),e=length(levels(Fator2))))) colnames(desd)=c("Df", "Sum Sq", "Mean Sq", "F value", "Pr(>F)") print(desd) ii<-0 for(i in 1:nv2) { for(j in 1:nv3) { ii<-ii+1 if(quali[1]==TRUE){ cat('\n\n',fac.names[1],' inside of each level of ',lf2[i],' of ',fac.names[2],' and ',lf3[j],' of ',fac.names[3],"\n") if(mcomp=='tukey'){tukey=TUKEY(y = resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], trt = fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], DFerror = anavaF3[10,1], MSerror = anavaF3[10,3], alpha.t) tukey=tukey$groups;colnames(tukey)=c("resp","letters") if(transf !=1){tukey$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],mean, na.rm=TRUE)[rownames(tukey)]} print(tukey)} if(mcomp=='duncan'){duncan=duncan(resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], anavaF3[10,1], anavaF3[10,3], alpha.t) duncan=duncan$groups;colnames(duncan)=c("resp","letters") if(transf !=1){duncan$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],mean, na.rm=TRUE)[rownames(duncan)]} print(duncan)} if(mcomp=='lsd'){lsd=LSD(resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], anavaF3[10,1], anavaF3[10,3], alpha.t) lsd=lsd$groups;colnames(lsd)=c("resp","letters") if(transf !=1){lsd$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],mean, na.rm=TRUE)[rownames(lsd)]} print(lsd)} if(mcomp=='sk'){ fat= fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]] fat1=factor(fat,unique(fat)) levels(fat1)=1:length(levels(fat1)) resp1=resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]] nrep=table(fat1)[1] medias=sort(tapply(respi,fat1,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 
= anavaF3$Df[10], nrep = nrep, QME = anavaF3$`Mean Sq`[10], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) sk=sk[as.character(unique(fat1)),] rownames(sk)=unique(fat) if(transf !=1){sk$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],mean, na.rm=TRUE)[rownames(sk)]} print(sk)} } if(quali[1]==FALSE){ cat('\n',fac.names[1],' within the combination of levels ',lf2[i],' of ',fac.names[2],' and ',lf3[j],' of ',fac.names[3],"\n") polynomial(fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]],grau=grau123, DFres= anavaF3[10,1],SSq = anavaF3[10,2],ylab = ylab,xlab = xlab,point = point)[[1]]} } } cat('\n\n') cat("\n------------------------------------------\n") cat("Analyzing ", fac.names[2], ' within the combination of levels ', fac.names[1], 'and',fac.names[3]) cat("\n------------------------------------------\n") m1=aov(resp~(Fator1*Fator3)/Fator2+bloco) anova(m1) pattern <- c(outer(levels(Fator1), levels(Fator3), function(x,y) paste("Fator1",x,":Fator3",y,":",sep=""))) des.tab <- sapply(pattern, simplify=FALSE, grep, x=names(coef(m1)[m1$assign==5])) des1.tab <- summary(m1, split = list("Fator1:Fator3:Fator2" = des.tab)) des1.tab[[1]][nrow(des1.tab[[1]]),]=c(DfE,SQE,QME,NA,NA) des1.tab[[1]]$`F value`=c(des1.tab[[1]]$`Mean Sq`[1:nrow(des1.tab[[1]])-1]/ des1.tab[[1]]$`Mean Sq`[nrow(des1.tab[[1]])],NA) des1.tab[[1]]$`Pr(>F)`=c(1-pf(des1.tab[[1]]$`F value`[1:nrow(des1.tab[[1]])-1], des1.tab[[1]]$Df[1:nrow(des1.tab[[1]])-1], des1.tab[[1]]$Df[nrow(des1.tab[[1]])]),NA) desd=des1.tab[[1]][-c(1,2,3,4,5),] desd=data.frame(desd[-length(rownames(desd)),]) # rownames(desd)=cbind(paste("Fator1:",rep(levels(Fator1),length(levels(Fator3))), # "Fator3:",rep(levels(Fator3),e=length(levels(Fator1))))) rownames(desd)=cbind(paste(names.fat[1],":",rep(levels(Fator1),length(levels(Fator3))), names.fat[3],":",rep(levels(Fator3),e=length(levels(Fator1))))) colnames(desd)=c("Df", "Sum 
Sq", "Mean Sq", "F value", "Pr(>F)") print(desd) ii<-0 for(k in 1:nv1) { for(j in 1:nv3) { ii<-ii+1 if(quali[2]==TRUE){ cat('\n\n',fac.names[2],' inside of each level of ',lf1[k],' of ',fac.names[1],' and ',lf3[j],' of ',fac.names[3],'\n') if(mcomp=='tukey'){tukey=TUKEY(resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], anavaF3[10,1], anavaF3[10,3], alpha.t) tukey=tukey$groups;colnames(tukey)=c("resp","letters") if(transf !=1){tukey$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(tukey)]} print(tukey)} if(mcomp=='duncan'){duncan=duncan(resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], anavaF3[10,1], anavaF3[10,3], alpha.t) duncan=duncan$groups;colnames(duncan)=c("resp","letters") if(transf !=1){duncan$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(duncan)]} print(duncan)} if(mcomp=='lsd'){lsd=LSD(resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], anavaF3[10,1], anavaF3[10,3], alpha.t) lsd=lsd$groups;colnames(lsd)=c("resp","letters") if(transf !=1){lsd$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(lsd)]} print(lsd)} if(mcomp=='sk'){ fat=fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]] fat1=factor(fat,unique(fat)) levels(fat1)=1:length(levels(fat1)) resp1=resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]] nrep=table(fat1)[1] medias=sort(tapply(respi,fat1,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[10], nrep = nrep, QME = anavaF3$`Mean Sq`[10], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) sk=sk[as.character(unique(fat1)),] rownames(sk)=unique(fat) if(transf 
!=1){sk$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(sk)]} print(sk)} } if(quali[2]==FALSE){ cat('\n\n',fac.names[2],' within the combination of levels ',lf1[k],' of ',fac.names[1],' and ',lf3[j],' of ',fac.names[3],'\n') polynomial(fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]],grau=grau213, DFres= anavaF3[10,1],SSq = anavaF3[10,2],ylab = ylab,xlab = xlab,point = point)[[1]]} } } cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[3], ' within the combination of levels ', fac.names[1], 'and',fac.names[2]) cat(green(bold("\n------------------------------------------\n"))) m1=aov(resp~(Fator1*Fator2)/Fator3+bloco) anova(m1) pattern <- c(outer(levels(Fator1), levels(Fator2), function(x,y) paste("Fator1",x,":Fator2",y,":",sep=""))) des.tab <- sapply(pattern, simplify=FALSE, grep, x=names(coef(m1)[m1$assign==5])) des1.tab <- summary(m1, split = list("Fator1:Fator2:Fator3" = des.tab)) des1.tab[[1]][nrow(des1.tab[[1]]),]=c(DfE,SQE,QME,NA,NA) des1.tab[[1]]$`F value`=c(des1.tab[[1]]$`Mean Sq`[1:nrow(des1.tab[[1]])-1]/ des1.tab[[1]]$`Mean Sq`[nrow(des1.tab[[1]])],NA) des1.tab[[1]]$`Pr(>F)`=c(1-pf(des1.tab[[1]]$`F value`[1:nrow(des1.tab[[1]])-1], des1.tab[[1]]$Df[1:nrow(des1.tab[[1]])-1], des1.tab[[1]]$Df[nrow(des1.tab[[1]])]),NA) desd=des1.tab[[1]][-c(1,2,3,4,5),] desd=data.frame(desd[-length(rownames(desd)),]) # rownames(desd)=cbind(paste("Fator1:",rep(levels(Fator1),length(levels(Fator2))), # "Fator2:",rep(levels(Fator2),e=length(levels(Fator1))))) rownames(desd)=cbind(paste(names.fat[1],":",rep(levels(Fator1),length(levels(Fator2))), names.fat[2],":",rep(levels(Fator2),e=length(levels(Fator1))))) colnames(desd)=c("Df", "Sum Sq", "Mean Sq", "F value", "Pr(>F)") print(desd) ii<-0 for(k in 1:nv1) { for(i in 1:nv2) { ii<-ii+1 if(quali[3]==TRUE){ cat('\n\n',fac.names[3],' inside of each 
level of ',lf1[k],' of ',fac.names[1],' and ',lf2[i],' of ',fac.names[2],'\n') if(mcomp=='tukey'){tukey=TUKEY(resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], anavaF3[10,1], anavaF3[10,3], alpha.t) tukey=tukey$groups;colnames(tukey)=c("resp","letters") if(transf !=1){tukey$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], mean, na.rm=TRUE)[rownames(tukey)]} print(tukey)} if(mcomp=='duncan'){duncan=duncan(resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], anavaF3[10,1], anavaF3[10,3], alpha.t) duncan=duncan$groups;colnames(duncan)=c("resp","letters") if(transf !=1){duncan$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], mean, na.rm=TRUE)[rownames(duncan)]} print(duncan)} if(mcomp=='lsd'){lsd=LSD(resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], anavaF3[10,1], anavaF3[10,3], alpha.t) lsd=lsd$groups;colnames(lsd)=c("resp","letters") if(transf !=1){lsd$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], mean, na.rm=TRUE)[rownames(lsd)]} print(lsd)} if(mcomp=='sk'){ fat=fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]] fat1=factor(fat,unique(fat)) levels(fat1)=1:length(levels(fat1)) resp1=resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]] nrep=table(fat1)[1] medias=sort(tapply(resp1,fat1,mean),decreasing = TRUE) # fix: use resp1 (subset defined above), not the stale respi from an earlier branch sk=scottknott(means = medias, df1 = anavaF3$Df[10], nrep = nrep, QME = anavaF3$`Mean Sq`[10], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) colnames(sk)=c("resp","letters") sk=sk[as.character(unique(fat1)),] rownames(sk)=unique(fat) if(transf !=1){sk$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], mean, 
na.rm=TRUE)[rownames(sk)]} print(sk)} } if(quali[3]==FALSE){ cat('\n\n',fac.names[3],' inside of each level of ',lf1[k],' of ',fac.names[1],' and ',lf2[i],' of ',fac.names[2],'\n') polynomial(fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],grau=grau312, DFres= anavaF3[10,1],SSq = anavaF3[10,2],ylab = ylab,xlab = xlab,point = point)[[1]]} } } } if(anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f && anavaF3[7,5]>alpha.f && anavaF3[8,5]>alpha.f){ if(anavaF3[1,5]<=alpha.f | anavaF3[2,5]<=alpha.f | anavaF3[3,5]<=alpha.f){ graficos}else{graficos=NA}} if(anavaF3[8,5]>alpha.f && anavaF3[5,5]<=alpha.f){ graficos=list(residplot,colint1) if(anavaF3[6,5]>alpha.f && anavaF3[7,5]>alpha.f && anavaF3[3,5]<=alpha.f){ graficos=list(residplot,colint1,grafico1)} graficos} if(anavaF3[8,5]>alpha.f && anavaF3[6,5]<=alpha.f){ graficos=list(residplot,colint2) if(anavaF3[5,5]>alpha.f && anavaF3[7,5]>alpha.f && anavaF3[2,5]<=alpha.f){ graficos=list(residplot,colint2,grafico2)} graficos} if(anavaF3[8,5]>alpha.f && anavaF3[7,5]<=alpha.f){ graficos=list(residplot,colint3) if(anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f && anavaF3[1,5]<=alpha.f){ graficos=list(residplot,colint3,grafico3)}} if(anavaF3[8,5]>alpha.f && anavaF3[5,5]<=alpha.f && anavaF3[6,5]<=alpha.f){ graficos=list(residplot,colint1,colint2) graficos} if(anavaF3[8,5]>alpha.f && anavaF3[5,5]<=alpha.f && anavaF3[7,5]<=alpha.f){ graficos=list(residplot,colint1,colint3) graficos} if(anavaF3[8,5]>alpha.f && anavaF3[6,5]<=alpha.f && anavaF3[7,5]<=alpha.f){ graficos=list(residplot,colint2,colint3) graficos} if(anavaF3[8,5]<=alpha.f){graficos=list(residplot)} graficos=graficos }
#' Analysis: DIC experiments in triple factorial #' @description Analysis of an experiment conducted in a completely randomized design in a triple factorial scheme using analysis of variance of fixed effects. #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @param f1 Numeric or complex vector with factor 1 levels #' @param f2 Numeric or complex vector with factor 2 levels #' @param f3 Numeric or complex vector with factor 3 levels #' @param response Numerical vector containing the response of the experiment. #' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott and Duncan) #' @param quali Defines whether the factor is quantitative or qualitative (\emph{qualitative}) #' @param names.fat Allows labeling the factors 1, 2 and 3. #' @param grau Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with three elements. #' @param grau12 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f1 x f2 and qualitative factor 2 and quantitative factor 1. #' @param grau21 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f2 and qualitative factor 1 and quantitative factor 2. #' @param grau13 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 3, in the case of interaction f1 x f3 and qualitative factor 3 and quantitative factor 1. #' @param grau31 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f3 and qualitative factor 1 and quantitative factor 3. #' @param grau23 Polynomial degree in case of quantitative factor (\emph{default} is 1). 
Provide a vector with n levels of factor 3, in the case of interaction f2 x f3 and qualitative factor 3 and quantitative factor 2. #' @param grau32 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f2 x f3 and qualitative factor 2 and quantitative factor 3. #' @param grau123 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f2 x f3 and quantitative factor 1. #' @param grau213 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f1 x f2 x f3 and quantitative factor 2. #' @param grau312 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 3, in the case of interaction f1 x f2 x f3 and quantitative factor 3. #' @param xlab Treatments name (accepts the \emph{expression}() function) #' @param ylab Response variable name (accepts the \emph{expression}() function) #' @param xlab.factor Provide a vector with three observations referring to the x-axis name of factors 1, 2 and 3, respectively, when there is an isolated effect of the factors. This argument uses `parse`. 
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05) #' @param alpha.f Level of significance of the F test (\emph{default} is 0.05) #' @param norm Error normality test (\emph{default} is Shapiro-Wilk) #' @param transf Applies data transformation (\emph{default} is 1; for log consider 0; `angular` for angular transformation) #' @param constant Add a constant for transformation (enter value) #' @param sup Number of units above the standard deviation or average bar on the graph #' @param geom Graph type (columns or segments) #' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat") #' @param angulo Rotation of the x-axis scale text #' @param textsize Font size #' @param labelsize Label size #' @param dec Number of decimal places (\emph{default} is 3) #' @param family Font family #' @param theme ggplot2 theme (\emph{default} is theme_classic()) #' @param addmean Plot the average value on the graph (\emph{default} is TRUE) #' @param errorbar Plot the standard deviation bar on the graph (in the case of a segment and column graph) - \emph{default} is TRUE #' @param point Defines whether the plot shows all points ("all"), the mean ("mean"), the mean with standard deviation (\emph{default} - "mean_sd") or the mean with standard error ("mean_se") when quali = FALSE. For quali = TRUE, `mean_sd` and `mean_se` change which information is displayed in the error bar. #' @param angle.label Label angle #' @note The order of the chart follows the alphabetical pattern. Please use the `limits` argument of `scale_x_discrete` from package ggplot2 to reorder the x-axis. The bars of the column and segment graphs are the standard deviation. 
#' @return The analysis of variance table, the Shapiro-Wilk error normality test, the Bartlett test of homogeneity of variances, the Durbin-Watson error independence test, the multiple comparison test (Tukey, LSD, Scott-Knott or Duncan) or the adjustment of regression models up to degree 3 polynomials, in the case of quantitative treatments. The column chart for qualitative treatments is also returned. For a significant triple interaction only, no graph is returned. #' @note The function does not perform multiple regression in the case of two or more quantitative factors. The bars of the column and segment graphs are the standard deviation. #' @note In the final output, when the transformation (transf argument) is different from 1, the columns resp and respo in the mean test are returned, indicating the transformed and non-transformed means, respectively. #' @references #' #' Steel, R.G.D., Torrie, J.H., Dickey, D.A. Principles and Procedures of Statistics: A Biometrical Approach. Third Edition, 1997. #' #' Hsu, J.C. Multiple Comparisons: Theory and Methods. Department of Statistics, The Ohio State University, USA. Chapman & Hall/CRC, 1996. #' #' Conover, W.J. Practical Nonparametric Statistics, 1999. #' #' Ramalho, M.A.P., Ferreira, D.F., Oliveira, A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA. #' #' Scott, A.J., Knott, M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512. #' #' Ferreira, E.B., Cavalcanti, P.P., and Nogueira, D.A. (2014). ExpDes: an R package for ANOVA and experimental designs. Applied Mathematics, 5(19), 2952. #' #' Mendiburu, F., and de Mendiburu, M.F. (2019). Package 'agricolae'. R Package, Version 1-2. 
#' #' @keywords DIC #' @keywords Factorial #' @export #' @examples #' library(AgroR) #' data(enxofre) #' with(enxofre, FAT3DIC(f1, f2, f3, resp)) ###################################################################################### ## Analise de variancia para experimentos em DIC ###################################################################################### FAT3DIC=function(f1, f2, f3, response, norm="sw", alpha.t=0.05, alpha.f=0.05, quali=c(TRUE,TRUE,TRUE), mcomp='tukey', grau=c(NA,NA,NA), grau12=NA, # F1/F2 grau13=NA, # F1/F3 grau23=NA, # F2/F3 grau21=NA, # F2/F1 grau31=NA, # F3/F1 grau32=NA, # F3/F2 grau123=NA, grau213=NA, grau312=NA, transf=1, constant=0, names.fat=c("F1","F2","F3"), ylab="Response", xlab="", xlab.factor=c("F1","F2","F3"), sup=NA, fill="lightblue", theme=theme_classic(), angulo=0, family="sans", addmean=TRUE, errorbar=TRUE, dec=3, geom="bar", textsize=12, labelsize=4, point="mean_sd", angle.label=0) { if(is.na(sup==TRUE)){sup=0.2*mean(response)} if(angle.label==0){hjust=0.5}else{hjust=0} if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}} # if(transf==1){resp=response+constant}else{resp=((response+constant)^transf-1)/transf} if(transf==0){resp=log(response+constant)} if(transf==0.5){resp=sqrt(response+constant)} if(transf==-0.5){resp=1/sqrt(response+constant)} if(transf==-1){resp=1/(response+constant)} if(transf=="angular"){resp=asin(sqrt((response+constant)/100))} ordempadronizado=data.frame(f1,f2,f3,response,resp) resp1=resp organiz=data.frame(f1,f2,f3,response,resp) organiz=organiz[order(organiz$f3),] organiz=organiz[order(organiz$f2),] organiz=organiz[order(organiz$f1),] f1=organiz$f1 f2=organiz$f2 f3=organiz$f3 response=organiz$response resp=organiz$resp fator1=f1 fator2=f2 fator3=f3 fator1a=fator1 fator2a=fator2 fator3a=fator3 requireNamespace("crayon") requireNamespace("ggplot2") requireNamespace("nortest") fatores<-data.frame(fator1,fator2,fator3) 
Fator1<-factor(fator1,levels=unique(fator1)); Fator2<-factor(fator2,levels=unique(fator2)); Fator3<-factor(fator3,levels=unique(fator3)) nv1<-length(summary(Fator1)); nv2<-length(summary(Fator2)); nv3<-length(summary(Fator3)) J<-(length(resp))/(nv1*nv2*nv3) lf1<-levels(Fator1); lf2<-levels(Fator2); lf3<-levels(Fator3) anava<-aov(resp~Fator1*Fator2*Fator3) anavaF3<-anova(anava) anovaF3=anavaF3 colnames(anovaF3)=c("GL","SQ","QM","Fcal","p-value") anavares<-aov(resp~as.factor(f1)* as.factor(f2)* as.factor(f3),data = ordempadronizado) respad=anavares$residuals/sqrt(anavaF3$`Mean Sq`[8]) out=respad[respad>3 | respad<(-3)] out=names(out) out=if(length(out)==0)("No discrepant point")else{out} Ids=ifelse(respad>3 | respad<(-3), "darkblue","black") residplot=ggplot(data=data.frame(respad,Ids),aes(y=respad,x=1:length(respad)))+ geom_point(shape=21,color="gray",fill="gray",size=3)+ labs(x="",y="Standardized residuals")+ geom_text(x=1:length(respad),label=1:length(respad),color=Ids,size=labelsize)+ scale_x_continuous(breaks=1:length(respad))+ theme_classic()+theme(axis.text.y = element_text(size=textsize), axis.text.x = element_blank())+ geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1) # print(residplot) #Teste de normalidade norm1<-shapiro.test(anava$residuals) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Normality of errors"))) cat(green(bold("\n------------------------------------------\n"))) print(norm1) message(if(norm1$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors do not follow a normal distribution"}) homog1=bartlett.test(anava$residuals~paste(Fator1,Fator2,Fator3)) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Homogeneity of Variances"))) cat(green(bold("\n------------------------------------------\n"))) print(homog1) message(if(homog1$p.value[1]>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"}) indep=dwtest(anava) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Independence from errors"))) cat(green(bold("\n------------------------------------------\n"))) print(indep) message(if(indep$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors are not independent"})

cat("\n")
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Additional Information")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(paste("\nCV (%) = ",round(sqrt(anavaF3$`Mean Sq`[8])/mean(resp,na.rm=TRUE)*100,2)))
cat(paste("\nMean = ",round(mean(response,na.rm=TRUE),4)))
cat(paste("\nMedian = ",round(median(response,na.rm=TRUE),4)))
cat("\nPossible outliers = ", out)
cat("\n")
cat(green(bold("\n------------------------------------------\n")))
cat(green(italic("Analysis of Variance")))
cat(green(bold("\n------------------------------------------\n")))
anava1=as.matrix(data.frame(anovaF3))
colnames(anava1)=c("Df","Sum Sq","Mean Sq","F value","Pr(F)")
rownames(anava1)=c(names.fat[1],
                   names.fat[2],
                   names.fat[3],
                   paste(names.fat[1],"x",names.fat[2]),
                   paste(names.fat[1],"x",names.fat[3]),
                   paste(names.fat[2],"x",names.fat[3]),
                   paste(names.fat[1],"x",names.fat[2],"x",names.fat[3]),
                   "Residuals")
print(anava1,na.print = "")
cat("\n")
if(transf==1 && (norm1$p.value<0.05 | indep$p.value<0.05 | homog1$p.value<0.05)){
  message("\n Your analysis is not valid: consider a non-parametric test or try transforming the data\n")}
if(transf != 1 && (norm1$p.value<0.05 | indep$p.value<0.05 | homog1$p.value<0.05)){
  message("\n Your analysis is not valid: consider using the function FATDIC.art\n")}
message(if(transf !=1){blue("\nNOTE: resp = means of the transformed data; respo = means of the untransformed data\n")})
if(anavaF3[4,5]>alpha.f && anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f && anavaF3[7,5]>alpha.f) {
graficos=list(1,2,3)
cat(green(bold("\n------------------------------------------\n")))
cat(green(bold('Non-significant interaction: analyzing the simple effects')))
cat(green(bold("\n------------------------------------------\n")))
fatores<-data.frame('fator 1'=fator1,'fator 2' = 
fator2,'fator 3' = fator3)
for(i in 1:3){
if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) {
  cat(green(bold("\n------------------------------------------\n")))
  cat(names.fat[i])
  cat(green(bold("\n------------------------------------------\n")))
  if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[8,1],anavaF3[8,3],alpha.t)
    letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
    if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
  if(mcomp=="sk"){
    nrep=table(fatores[,i])[1]
    medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE)
    sk=scottknott(means = medias, df1 = anavaF3[8,1], nrep = nrep,
                  QME = anavaF3[8,3], alpha = alpha.t)
    letra1=data.frame(resp=medias,groups=sk)
    if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
  if(mcomp=="duncan"){
    ad=data.frame(Fator1,Fator2,Fator3)
    letra <- duncan(anava, colnames(ad[i]), alpha=alpha.t)
    letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
    if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
  if(mcomp=="lsd"){
    ad=data.frame(Fator1,Fator2,Fator3)
    letra <- LSD(anava, colnames(ad[i]), alpha=alpha.t)
    letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
    if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
  teste=if(mcomp=="tukey"){"Tukey HSD"}else{
    if(mcomp=="sk"){"Scott-Knott"}else{
      if(mcomp=="lsd"){"LSD-Fisher"}else{
        if(mcomp=="duncan"){"Duncan"}}}}
  cat(green(italic(paste("Multiple Comparison Test:",teste,"\n"))))
  print(letra1)
  cat(green(bold("\n------------------------------------------\n")))
  #=====================================================
  if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]}
  if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/
                                 sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]}
  dadosm=data.frame(letra1, media=tapply(response, 
c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)],
                  desvio=desvio)
  dadosm$trats=factor(rownames(dadosm), levels = unique(unlist(fatores[i])))
  dadosm$limite=dadosm$media+dadosm$desvio
  dadosm=dadosm[as.character(unique(unlist(fatores[i]))),]
  if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)}
  if(addmean==FALSE){dadosm$letra=dadosm$groups}
  media=dadosm$media
  desvio=dadosm$desvio
  trats=dadosm$trats
  letra=dadosm$letra
  if(geom=="bar"){grafico=ggplot(dadosm, aes(x=trats, y=media))
    if(fill=="trat"){grafico=grafico+
      geom_col(aes(fill=trats), color=1)}else{grafico=grafico+
        geom_col(aes(fill=trats), fill=fill,color=1)}
    if(errorbar==TRUE){grafico=grafico+
      geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},
                    label=letra),family=family,angle=angle.label,
                hjust=hjust,size=labelsize)}
    if(errorbar==FALSE){grafico=grafico+
      geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label,
                hjust=hjust,size=labelsize)}
    if(errorbar==TRUE){grafico=grafico+
      geom_errorbar(data=dadosm,
                    aes(ymin=media-desvio,
                        ymax=media+desvio,color=1),
                    color="black",width=0.3)}
    grafico=grafico+
      theme+
      ylab(ylab)+
      xlab(parse(text = xlab.factor[i]))+
      theme(text = element_text(size=textsize,color="black", family = family),
            axis.text = element_text(size=textsize,color="black", family = family),
            axis.title = element_text(size=textsize,color="black", family = family),
            legend.position = "none")
    print(grafico)}
  # ================================
  # point (segment) plot
  # ================================
  if(geom=="point"){grafico=ggplot(dadosm, aes(x=trats, y=media))
    if(fill=="trat"){grafico=grafico+
      geom_point(aes(color=trats))}else{grafico=grafico+
        geom_point(aes(color=trats),color=fill,size=4)}
    if(errorbar==TRUE){grafico=grafico+
      geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},
                    label=letra),family=family,angle=angle.label,
                hjust=hjust,size=labelsize)}
    if(errorbar==FALSE){grafico=grafico+
      geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, 
hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3)} grafico=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[i]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), legend.position = "none") print(grafico)} } if(anavaF3[i,5]>alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) cat(green(bold("\n------------------------------------------\n"))) mean.table<-mean_stat(response,fatores[,i],mean) colnames(mean.table)<-c('Levels','Mean') print(mean.table) grafico=NA} if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) cat(green(bold("\n------------------------------------------\n"))) dose=as.numeric(as.vector(unlist(fatores[,i]))) grafico=polynomial(dose,resp,grau = grau[i], DFres= anavaF3[8,1],SSq = anavaF3[8,2],ylab = ylab,xlab = parse(text = xlab.factor[i]),point = point)[[1]] cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command\n")) cat(green(bold("\n------------------------------------------")))} cat('\n') graficos[[1]]=residplot graficos[[i+1]]=grafico } } if(anavaF3[7,5]>alpha.f && anavaF3[4,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Interaction",paste(names.fat[1],'*',names.fat[2],sep='')," significant: unfolding the interaction"))) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", names.fat[1], ' inside of each level of ', names.fat[2]) cat(green(bold("\n------------------------------------------\n"))) 
des<-aov(resp~Fator2/Fator1+Fator3+Fator2+Fator2:Fator3+Fator1:Fator2:Fator3)
l<-vector('list',nv2)
names(l)<-names(summary(Fator2))
v<-numeric(0)
for(j in 1:nv2) {
  for(i in 0:(nv1-2)) v<-cbind(v,i*nv2+j)
  l[[j]]<-v
  v<-numeric(0)}
des1<-summary(des,split=list('Fator2:Fator1'=l))[[1]]
des1a=des1[-c(1,2,3,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),]
#============================
rn<-numeric(0)
for (j in 1:nv2) {
  rn <- c(rn, paste(paste(names.fat[1], ":", names.fat[2], sep = ""), lf2[j])) }
rownames(des1a)=rn
#============================
print(des1a)
if(quali[1]==TRUE & quali[2]==TRUE){
if (mcomp == "tukey"){
  tukeygrafico=c()
  ordem=c()
  for (i in 1:nv2) {
    trati=fatores[, 1][Fator2 == lf2[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator2 == lf2[i]]
    tukey=TUKEY(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){tukey$groups$respo=tapply(response[Fator2 == lf2[i]],
                                             trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
    tukeygrafico[[i]]=tukey$groups[levels(trati),2]
    ordem[[i]]=rownames(tukey$groups[levels(trati),]) }
  letra=unlist(tukeygrafico)
  datag=data.frame(letra,ordem=unlist(ordem))
  datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
  datag=datag[order(datag$ordem),]
  letra=datag$letra}
if (mcomp == "duncan"){
  duncangrafico=c()
  ordem=c()
  for (i in 1:nv2) {
    trati=fatores[, 1][Fator2 == lf2[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator2 == lf2[i]]
    duncan=duncan(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){duncan$groups$respo=tapply(response[Fator2 == lf2[i]],
                                              trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
    duncangrafico[[i]]=duncan$groups[levels(trati),2]
    ordem[[i]]=rownames(duncan$groups[levels(trati),]) }
  letra=unlist(duncangrafico)
  datag=data.frame(letra,ordem=unlist(ordem))
  datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
  datag=datag[order(datag$ordem),]
  letra=datag$letra}
if (mcomp == "lsd"){
  lsdgrafico=c()
  ordem=c()
  for (i in 1:nv2) {
    trati=fatores[, 
1][Fator2 == lf2[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator2 == lf2[i]]
    lsd=LSD(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){lsd$groups$respo=tapply(response[Fator2 == lf2[i]],trati,
                                           mean, na.rm=TRUE)[rownames(lsd$groups)]}
    lsdgrafico[[i]]=lsd$groups[levels(trati),2]
    ordem[[i]]=rownames(lsd$groups[levels(trati),]) }
  letra=unlist(lsdgrafico)
  datag=data.frame(letra,ordem=unlist(ordem))
  datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
  datag=datag[order(datag$ordem),]
  letra=datag$letra}
if (mcomp == "sk"){
  skgrafico=c()
  ordem=c()
  for (i in 1:nv2) {
    trati=fatores[, 1][Fator2 == lf2[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator2 == lf2[i]]
    nrep=table(trati)[1]
    medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
    sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep,
                  QME = anavaF3$`Mean Sq`[8], alpha = alpha.t)
    sk=data.frame(respi=medias,groups=sk)
    if(transf !=1){sk$respo=tapply(response[Fator2 == lf2[i]],
                                   trati,mean, na.rm=TRUE)[rownames(sk)]}
    skgrafico[[i]]=sk[levels(trati),2]
    ordem[[i]]=rownames(sk[levels(trati),]) }
  letra=unlist(skgrafico)
  datag=data.frame(letra,ordem=unlist(ordem))
  datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
  datag=datag[order(datag$ordem),]
  letra=datag$letra}}
# Unfolding F2 within F1
cat(green(bold("\n------------------------------------------\n")))
cat("Analyzing ", names.fat[2], " inside of each level of ",names.fat[1])
cat(green(bold("\n------------------------------------------\n")))
des<-aov(resp~Fator1/Fator2+Fator3+Fator1+Fator1:Fator3+Fator1:Fator2:Fator3)
l<-vector('list',nv1)
names(l)<-names(summary(Fator1))
v<-numeric(0)
for(j in 1:nv1) {
  for(i in 0:(nv2-2)) v<-cbind(v,i*nv1+j)
  l[[j]]<-v
  v<-numeric(0)}
des1<-summary(des,split=list('Fator1:Fator2'=l))[[1]]
des1a=des1[-c(1,2,3,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),]
#============================
rn<-numeric(0)
for (j in 1:nv1) {
  rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[1], sep = ""), lf1[j])) }
rownames(des1a)=rn
#============================
print(des1a)
if(quali[1]==TRUE & quali[2]==TRUE){
if (mcomp == "tukey"){
  tukeygrafico1=c()
  for (i in 1:nv1) {
    trati=fatores[, 2][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    tukey=TUKEY(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){tukey$groups$respo=tapply(response[Fator1 == lf1[i]],
                                             trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
    tukeygrafico1[[i]]=tukey$groups[levels(trati),2] }
  letra1=unlist(tukeygrafico1)
  letra1=toupper(letra1)}
if (mcomp == "duncan"){
  duncangrafico1=c()
  for (i in 1:nv1) {
    trati=fatores[, 2][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    duncan=duncan(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){duncan$groups$respo=tapply(response[Fator1 == lf1[i]],
                                              trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
    duncangrafico1[[i]]=duncan$groups[levels(trati),2] }
  letra1=unlist(duncangrafico1)
  letra1=toupper(letra1)}
if (mcomp == "lsd"){
  lsdgrafico1=c()
  for (i in 1:nv1) {
    trati=fatores[, 2][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    lsd=LSD(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){lsd$groups$respo=tapply(response[Fator1 == lf1[i]],
                                           trati,mean, na.rm=TRUE)[rownames(lsd$groups)]}
    lsdgrafico1[[i]]=lsd$groups[levels(trati),2] }
  letra1=unlist(lsdgrafico1)
  letra1=toupper(letra1)}
if (mcomp == "sk"){
  skgrafico1=c()
  for (i in 1:nv1) {
    trati=fatores[, 2][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    nrep=table(trati)[1]
    medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
    sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep,
                  QME = anavaF3$`Mean Sq`[8], alpha = alpha.t)
    sk=data.frame(respi=medias,groups=sk)
    if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati,
                                   mean, na.rm=TRUE)[rownames(sk)]}
    skgrafico1[[i]]=sk[levels(trati),2] }
  letra1=unlist(skgrafico1)
  letra1=toupper(letra1)}}
if(quali[1]==TRUE & quali[2]==TRUE){
  f1=rep(levels(Fator1),e=length(levels(Fator2)))
  f2=rep(unique(as.character(Fator2)),length(levels(Fator1)))
  media=tapply(response,paste(Fator1,Fator2), mean, na.rm=TRUE)[unique(paste(f1,f2))]
  if(point=="mean_sd"){desvio=tapply(response,paste(Fator1,Fator2), sd, na.rm=TRUE)}
  if(point=="mean_se"){desvio=tapply(response,paste(Fator1,Fator2), sd, na.rm=TRUE)/
    sqrt(tapply(response,paste(Fator1,Fator2), length))}
  desvio=desvio[unique(paste(f1,f2))]
  f1=factor(f1,levels = unique(f1))
  f2=factor(f2,levels = unique(f2))
  graph=data.frame(f1=f1,
                   f2=f2,
                   media, desvio,
                   letra, letra1,
                   numero=format(media,digits = dec))
  numero=paste(graph$numero,graph$letra,graph$letra1,sep="")
  graph$numero=numero
  colint=ggplot(graph,
                aes(x=f2, y=media, fill=f1))+
    ylab(ylab)+xlab(xlab)+ theme+
    geom_col(position = "dodge",color="black")+
    labs(fill=names.fat[1])+
    geom_errorbar(aes(ymin=media-desvio,
                      ymax=media+desvio),
                  width=0.3,position = position_dodge(width=0.9))+
    geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},
                  label=numero),
              position = position_dodge(width=0.9),angle=angle.label,
              hjust=hjust,size=labelsize)+
    theme(text=element_text(size=textsize,family=family),
          axis.text = element_text(size=textsize,color="black",family=family),
          axis.title = element_text(size=textsize,color="black",family=family))
  colint1=colint
  print(colint)
  letras=paste(graph$letra, graph$letra1, sep="")
  matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1)))))
  rownames(matriz)=levels(Fator1)
  colnames(matriz)=levels(Fator2)
  cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Final table")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
print(matriz)
message(black("\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")"))
}
if(quali[1]==FALSE | quali[2]==FALSE){
if(quali[1]==FALSE){
if (mcomp == "tukey"){
  for (i in 1:nv1) {
    trati=fatores[, 2][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    tukey=TUKEY(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){tukey$groups$respo=tapply(response[Fator1 == lf1[i]],
                                             trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F2 within level",lf1[i],"of F1")
    cat("\n----------------------\n")
    print(tukey$groups)}}
if (mcomp == "duncan"){
  for (i in 1:nv1) {
    trati=fatores[, 2][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    duncan=duncan(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){duncan$groups$respo=tapply(response[Fator1 == lf1[i]],
                                              trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F2 within level",lf1[i],"of F1")
    cat("\n----------------------\n")
    print(duncan$groups)}}
if (mcomp == "lsd"){
  for (i in 1:nv1) {
    trati=fatores[, 2][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    lsd=LSD(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){lsd$groups$respo=tapply(response[Fator1 == lf1[i]],
                                           trati,mean, na.rm=TRUE)[rownames(lsd$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F2 within level",lf1[i],"of F1")
    cat("\n----------------------\n")
    print(lsd$groups)}}
if (mcomp == "sk"){
  for (i in 1:nv1) {
    trati=fatores[, 2][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    nrep=table(trati)[1]
medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep, QME = anavaF3$`Mean Sq`[8], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati, mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(sk)}} } if(quali[1]==FALSE){ Fator1=fator1a colint1=polynomial2(Fator1, response, Fator2, grau = grau12, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[8,1],SSq = anavaF3[8,2])} if(quali[2]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){tukey$groups$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(tukey$groups) }} if (mcomp == "duncan"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){duncan$groups$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(duncan$groups) }} if (mcomp == "lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){lsd$groups$respo=tapply(response[Fator2 == lf2[i]],trati, mean, na.rm=TRUE)[rownames(lsd$groups)]} 
    cat("\n----------------------\n")
    cat("Multiple comparison of F1 within level",lf2[i],"of F2")
    cat("\n----------------------\n")
    print(lsd$groups)}}
if (mcomp == "sk"){
  for (i in 1:nv2) {
    trati=fatores[, 1][Fator2 == lf2[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator2 == lf2[i]]
    nrep=table(trati)[1]
    medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
    sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep,
                  QME = anavaF3$`Mean Sq`[8], alpha = alpha.t)
    sk=data.frame(respi=medias,groups=sk)
    if(transf !=1){sk$respo=tapply(response[Fator2 == lf2[i]],
                                   trati,mean, na.rm=TRUE)[rownames(sk)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F1 within level",lf2[i],"of F2")
    cat("\n----------------------\n")
    print(sk) }} }
if(quali[2]==FALSE){
  Fator2=fator2a
  colint1=polynomial2(Fator2, response, Fator1, grau = grau21,
                      ylab=ylab, xlab=xlab, theme=theme,
                      DFres= anavaF3[8,1],SSq = anavaF3[8,2])}
cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial2\" command\n"))
}
if(anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f) {
i<-3
{ if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) {
  cat(green(bold("\n------------------------------------------\n")))
  cat(green(italic('Analyzing the simple effects of the factor ',names.fat[3])))
  cat(green(bold("\n------------------------------------------\n")))
  cat(names.fat[i])
  if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[8,1],anavaF3[8,3],alpha.t)
    letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
    if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
  if(mcomp=="sk"){
    ad=data.frame(Fator1,Fator2,Fator3)
    nrep=table(fatores[,i])[1]
    medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE)
    sk=scottknott(means = medias, df1 = anavaF3[8,1], nrep = nrep,
                  QME = anavaF3[8,3], alpha = alpha.t)
    letra1=data.frame(resp=medias,groups=sk)
    if(transf 
!=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- LSD(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} print(letra1) cat(green(bold("\n-----------------------------------------------------------------"))) if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1, media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)], desvio=desvio) dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i]))) dadosm$limite=dadosm$media+dadosm$desvio dadosm=dadosm[as.character(unique(unlist(fatores[i]))),] if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio Tratamentos=dadosm$Tratamentos letra=dadosm$letra grafico=ggplot(dadosm, aes(x=Tratamentos,y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=Tratamentos),color=1)} else{grafico=grafico+ geom_col(aes(fill=Tratamentos),fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra), family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} 
  if(errorbar==TRUE){grafico=grafico+
    geom_errorbar(data=dadosm,
                  aes(ymin=media-desvio,
                      ymax=media+desvio,color=1),
                  color="black",width=0.3)
  grafico1=grafico+theme+
    ylab(ylab)+
    xlab(parse(text = xlab.factor[3]))+
    theme(text = element_text(size=textsize,color="black", family = family),
          axis.text = element_text(size=textsize,color="black", family = family),
          axis.title = element_text(size=textsize,color="black", family = family),
          legend.position = "none")
  print(grafico1)} }
if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){
  cat(green(bold("\n------------------------------------------\n")))
  cat('Analyzing the simple effects of the factor ',names.fat[3])
  cat(green(bold("\n------------------------------------------\n")))
  cat(names.fat[i])
  grafico1=polynomial(fatores[,i], resp,grau=grau[i],
                      DFres= anavaF3[8,1],SSq = anavaF3[8,2],ylab = ylab,
                      xlab = parse(text = xlab.factor[3]),point = point)[[1]]
  cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command"))} } } }
if(anavaF3[7,5]>alpha.f && anavaF3[5,5]<=alpha.f){
cat(green(bold("\n------------------------------------------\n")))
cat(green(bold("Interaction",paste(names.fat[1],'*',names.fat[3],sep='')," significant: unfolding the interaction")))
cat(green(bold("\n------------------------------------------\n")))
# Unfolding Factor 1 within the levels of Factor 3
cat(green(bold("\n------------------------------------------\n")))
cat("Analyzing ", names.fat[1], ' inside of each level of ', names.fat[3])
cat(green(bold("\n------------------------------------------\n")))
des<-aov(resp~Fator3/Fator1+Fator2+Fator3+Fator2:Fator3+Fator1:Fator2:Fator3)
l<-vector('list',nv3)
names(l)<-names(summary(Fator3))
v<-numeric(0)
for(j in 1:nv3) {
  for(i in 0:(nv1-2)) v<-cbind(v,i*nv3+j)
  l[[j]]<-v
  v<-numeric(0) }
des1<-summary(des,split=list('Fator3:Fator1'=l))[[1]]
des1a=des1[-c(1,2,3,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),]
#============================
rn<-numeric(0)
for (j in 1:nv3) { rn <- 
c(rn, paste(paste(names.fat[1], ":", names.fat[3], sep = ""), lf3[j])) }
rownames(des1a)=rn
#============================
print(des1a)
if(quali[1]==TRUE & quali[3]==TRUE){
if (mcomp == "tukey"){
  tukeygrafico=c()
  ordem=c()
  for (i in 1:nv3) {
    trati=fatores[, 1][Fator3 == lf3[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator3 == lf3[i]]
    tukey=TUKEY(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){tukey$groups$respo=tapply(response[Fator3 == lf3[i]],
                                             trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
    tukeygrafico[[i]]=tukey$groups[levels(trati),2]
    ordem[[i]]=rownames(tukey$groups[levels(trati),]) }
  letra=unlist(tukeygrafico)
  datag=data.frame(letra,ordem=unlist(ordem))
  datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
  datag=datag[order(datag$ordem),]
  letra=datag$letra}
if (mcomp == "duncan"){
  duncangrafico=c()
  ordem=c()
  for (i in 1:nv3) {
    trati=fatores[, 1][Fator3 == lf3[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator3 == lf3[i]]
    duncan=duncan(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){duncan$groups$respo=tapply(response[Fator3 == lf3[i]],
                                              trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
    duncangrafico[[i]]=duncan$groups[levels(trati),2]
    ordem[[i]]=rownames(duncan$groups[levels(trati),]) }
  letra=unlist(duncangrafico)
  datag=data.frame(letra,ordem=unlist(ordem))
  datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
  datag=datag[order(datag$ordem),]
  letra=datag$letra}
if (mcomp == "lsd"){
  lsdgrafico=c()
  ordem=c()
  for (i in 1:nv3) {
    trati=fatores[, 1][Fator3 == lf3[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator3 == lf3[i]]
    lsd=LSD(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){lsd$groups$respo=tapply(response[Fator3 == lf3[i]],trati,
                                           mean, na.rm=TRUE)[rownames(lsd$groups)]}
    lsdgrafico[[i]]=lsd$groups[levels(trati),2]
    ordem[[i]]=rownames(lsd$groups[levels(trati),]) }
  letra=unlist(lsdgrafico)
  datag=data.frame(letra,ordem=unlist(ordem))
  datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
  datag=datag[order(datag$ordem),]
  letra=datag$letra}
if (mcomp == "sk"){
  skgrafico=c()
  ordem=c()
  for (i in 1:nv3) {
    trati=fatores[, 1][Fator3 == lf3[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator3 == lf3[i]]
    nrep=table(trati)[1]
    medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
    sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep,
                  QME = anavaF3$`Mean Sq`[8], alpha = alpha.t)
    sk=data.frame(respi=medias,groups=sk)
    if(transf !=1){sk$respo=tapply(response[Fator3 == lf3[i]],trati,mean,
                                   na.rm=TRUE)[rownames(sk)]}
    skgrafico[[i]]=sk[levels(trati),2]
    ordem[[i]]=rownames(sk[levels(trati),]) }
  letra=unlist(skgrafico)
  datag=data.frame(letra,ordem=unlist(ordem))
  datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
  datag=datag[order(datag$ordem),]
  letra=datag$letra}}
cat(green(bold("\n------------------------------------------\n")))
cat("Analyzing ", names.fat[3], " inside of each level of ",names.fat[1])
cat(green(bold("\n------------------------------------------\n")))
des<-aov(resp~Fator1/Fator3+Fator1+Fator2+Fator2:Fator1+Fator1:Fator2:Fator3)
l<-vector('list',nv1)
names(l)<-names(summary(Fator1))
v<-numeric(0)
for(j in 1:nv1) {
  for(i in 0:(nv3-2)) v<-cbind(v,i*nv1+j)
  l[[j]]<-v
  v<-numeric(0) }
des1<-summary(des,split=list('Fator1:Fator3'=l))[[1]]
des1a=des1[-c(1,2,3,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),]
#============================
rn<-numeric(0)
for (j in 1:nv1) {
  rn <- c(rn, paste(paste(names.fat[3], ":", names.fat[1], sep = ""), lf1[j])) }
rownames(des1a)=rn
#============================
print(des1a)
if(quali[1]==TRUE & quali[3]==TRUE){
if (mcomp == "tukey"){
  tukeygrafico1=c()
  for (i in 1:nv1) {
    trati=fatores[, 3][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    tukey=TUKEY(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){tukey$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, 
na.rm=TRUE)[rownames(tukey$groups)]}
    tukeygrafico1[[i]]=tukey$groups[levels(trati),2] }
  letra1=unlist(tukeygrafico1)
  letra1=toupper(letra1)}
if (mcomp == "duncan"){
  duncangrafico1=c()
  for (i in 1:nv1) {
    trati=fatores[, 3][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    duncan=duncan(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){duncan$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean,
                                              na.rm=TRUE)[rownames(duncan$groups)]}
    duncangrafico1[[i]]=duncan$groups[levels(trati),2] }
  letra1=unlist(duncangrafico1)
  letra1=toupper(letra1)}
if (mcomp == "lsd"){
  lsdgrafico1=c()
  for (i in 1:nv1) {
    trati=fatores[, 3][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    lsd=LSD(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t)
    if(transf !=1){lsd$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean,
                                           na.rm=TRUE)[rownames(lsd$groups)]}
    lsdgrafico1[[i]]=lsd$groups[levels(trati),2] }
  letra1=unlist(lsdgrafico1)
  letra1=toupper(letra1)}
if (mcomp == "sk"){
  skgrafico1=c()
  for (i in 1:nv1) {
    trati=fatores[, 3][Fator1 == lf1[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator1 == lf1[i]]
    nrep=table(trati)[1]
    medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
    sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep,
                  QME = anavaF3$`Mean Sq`[8], alpha = alpha.t)
    sk=data.frame(respi=medias,groups=sk)
    if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati,mean,
                                   na.rm=TRUE)[rownames(sk)]}
    skgrafico1[[i]]=sk[levels(trati),2] }
  letra1=unlist(skgrafico1)
  letra1=toupper(letra1)}}
if(quali[1]==TRUE & quali[3]==TRUE){
  f1=rep(levels(Fator1),e=length(levels(Fator3)))
  f3=rep(unique(as.character(Fator3)),length(levels(Fator1)))
  media=tapply(response,paste(Fator1,Fator3), mean, na.rm=TRUE)[unique(paste(f1,f3))]
if(point=="mean_sd"){desvio=tapply(response,paste(Fator1,Fator3), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,paste(Fator1,Fator3), sd, na.rm=TRUE)/ sqrt(tapply(response,paste(Fator1,Fator3), length))} desvio=desvio[unique(paste(f1,f3))] f1=factor(f1,levels = unique(f1)) f3=factor(f3,levels = unique(f3)) graph=data.frame(f1=f1, f3=f3, media, desvio, letra,letra1, numero=format(media,digits = dec)) numero=paste(graph$numero,graph$letra,graph$letra1,sep="") graph$numero=numero colint=ggplot(graph, aes(x=f3, y=media, fill=f1))+ geom_col(position = "dodge",color="black")+ ylab(ylab)+xlab(xlab)+ theme+ labs(fill=names.fat[1])+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=0.3,position = position_dodge(width=0.9))+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)+ theme(text=element_text(size=textsize,family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family)) colint2=colint print(colint) letras=paste(graph$letra,graph$letra1,sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1))))) rownames(matriz)=levels(Fator1) colnames(matriz)=levels(Fator3) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n------------------------------------------\n"))) print(matriz) message(black("\n\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")")) } if(quali[1]==FALSE | quali[3]==FALSE){ if(quali[1]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv1) { trati=fatores[, 3][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] tukey=TUKEY(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf 
!=1){tukey$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf1[i],"of F1") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 1:nv1) { trati=fatores[, 3][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] duncan=duncan(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){duncan$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf1[i],"of F1") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv1) { trati=fatores[, 3][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] lsd=LSD(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){lsd$groups$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf1[i],"of F1") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv1) { trati=fatores[, 3][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep, QME = anavaF3$`Mean Sq`[8], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf1[i],"of F1") cat("\n----------------------\n") print(sk)}} } if(quali[1]==FALSE){ Fator1=fator1a#as.numeric(as.character(Fator1)) colint2=polynomial2(Fator1, response, Fator3, grau = grau13, ylab=ylab, xlab=xlab, 
theme=theme, DFres= anavaF3[8,1],SSq = anavaF3[8,2])} if(quali[3]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] tukey=TUKEY(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){tukey$groups$respo=tapply(response[Fator3 == lf3[i]], trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf3[i],"of F3") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] duncan=duncan(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){duncan$groups$respo=tapply(response[Fator3 == lf3[i]], trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf3[i],"of F3") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] lsd=LSD(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){lsd$groups$respo=tapply(response[Fator3 == lf3[i]],trati, mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf3[i],"of F3") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep, QME = anavaF3$`Mean Sq`[8], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !=1){sk$respo=tapply(response[Fator3 == lf3[i]],trati,mean, 
na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf3[i],"of F3") cat("\n----------------------\n") print(sk)}}} if(quali[3]==FALSE){ Fator3=fator3a#as.numeric(as.character(Fator3)) colint2=polynomial2(Fator3, response, Fator1, grau = grau31, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[8,1],SSq = anavaF3[8,2])} cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial2\" command\n")) } #Checar o Fator2 if(anavaF3[4,5]>alpha.f && anavaF3[6,5]>alpha.f) { i<-2 { if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(green(italic('Analyzing the simple effects of the factor ',names.fat[2]))) cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[8,1],anavaF3[8,3],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ ad=data.frame(Fator1,Fator2,Fator3) nrep=table(fatores[,i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3[8,1], nrep = nrep, QME = anavaF3[8,3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- LSD(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} print(letra1) 
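# Note: every mean-comparison call in these simple-effect blocks draws its
# degrees of freedom and error mean square from row 8 (the residual row) of
# the three-way ANOVA table, i.e. anavaF3[8,1] and anavaF3[8,3]. A minimal
# standalone sketch of that idea in base R (the names `y`, `f`, `m` below
# are hypothetical toy objects, not part of this function):
#   m   <- aov(y ~ f)
#   df  <- df.residual(m)          # residual df
#   mse <- deviance(m) / df        # residual mean square (RSS / df)
#   # df and mse would then feed TUKEY()/duncan()/LSD() exactly as above.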
cat(green(bold("\n-----------------------------------------------------------------"))) if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1, media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)], desvio=desvio) dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i]))) dadosm$limite=dadosm$media+dadosm$desvio dadosm=dadosm[as.character(unique(unlist(fatores[i]))),] if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio Tratamentos=dadosm$Tratamentos letra=dadosm$letra grafico=ggplot(dadosm, aes(x=Tratamentos, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=Tratamentos),color=1)} else{grafico=grafico+ geom_col(aes(fill=Tratamentos),fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm,aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3) grafico2=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[2]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), legend.position = "none") print(grafico2)} } if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat('Analyzing the simple effects of the factor 
',names.fat[2]) cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) grafico2=polynomial(fatores[,i], resp,grau=grau[i], DFres= anavaF3[8,1],SSq = anavaF3[8,2],ylab = ylab,xlab = parse(text = xlab.factor[2]),point = point)[[1]] cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command")) } cat('\n') } } } if(anavaF3[7,5]>alpha.f && anavaF3[6,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\nInteraction",paste(names.fat[2],'*',names.fat[3],sep='')," significant: unfolding the interaction\n"))) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", names.fat[2], ' inside of each level of ', names.fat[3]) cat(green(bold("\n------------------------------------------\n"))) des<-aov(resp~Fator3/Fator2+Fator1+Fator3+Fator1:Fator3+Fator1:Fator2:Fator3) l<-vector('list',nv3) names(l)<-names(summary(Fator3)) v<-numeric(0) for(j in 1:nv3) { for(i in 0:(nv2-2)) v<-cbind(v,i*nv3+j) l[[j]]<-v v<-numeric(0) } des1<-summary(des,split=list('Fator3:Fator2'=l))[[1]] des1a=des1[-c(1,2,3,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv3) { rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[3], sep = ""), lf3[j])) } rownames(des1a)=rn #============================ print(des1a) cat(green(bold("\n------------------------------------------\n"))) if(quali[2]==TRUE & quali[3]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] mod=aov(respi~trati) tukey=TUKEY(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){tukey$groups$respo=tapply(response[Fator3 == lf3[i]],trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico[[i]]=tukey$groups[levels(trati),2] 
ordem[[i]]=rownames(tukey$groups[levels(trati),]) } letra=unlist(tukeygrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] duncan=duncan(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){duncan$groups$respo=tapply(response[Fator3 == lf3[i]],trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico[[i]]=duncan$groups[levels(trati),2] ordem[[i]]=rownames(duncan$groups[levels(trati),]) } letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] lsd=LSD(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){lsd$groups$respo=tapply(response[Fator3 == lf3[i]],trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico[[i]]=lsd$groups[levels(trati),2] ordem[[i]]=rownames(lsd$groups[levels(trati),]) } letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep, QME = anavaF3$`Mean Sq`[8], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf 
!=1){sk$respo=tapply(response[Fator3 == lf3[i]],trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra}} cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", names.fat[3], " inside of the level of ",names.fat[2]) cat(green(bold("\n------------------------------------------\n"))) des<-aov(resp~Fator2/Fator3+Fator1+Fator2+Fator1:Fator2+Fator1:Fator2:Fator3) l<-vector('list',nv2) names(l)<-names(summary(Fator2)) v<-numeric(0) for(j in 1:nv2) { for(i in 0:(nv3-2)) v<-cbind(v,i*nv2+j) l[[j]]<-v v<-numeric(0) } des1<-summary(des,split=list('Fator2:Fator3'=l))[[1]] des1a=des1[-c(1,2,3,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv2) { rn <- c(rn, paste(paste(names.fat[3], ":", names.fat[2], sep = ""), lf2[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[2]==TRUE & quali[3]==TRUE){ if (mcomp == "tukey"){ tukeygrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] mod=aov(respi~trati) tukey=TUKEY(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){tukey$groups$respo=tapply(response[Fator2 == lf2[i]],trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico1[[i]]=tukey$groups[levels(trati),2] } letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if (mcomp == "duncan"){ duncangrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){duncan$groups$respo=tapply(response[Fator2 == lf2[i]],trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} 
duncangrafico1[[i]]=duncan$groups[levels(trati),2] } letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if (mcomp == "lsd"){ lsdgrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){lsd$groups$respo=tapply(response[Fator2 == lf2[i]],trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico1[[i]]=lsd$groups[levels(trati),2] } letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep, QME = anavaF3$`Mean Sq`[8], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !=1){sk$respo=tapply(response[Fator2 == lf2[i]],trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) letra1=toupper(letra1)}} if(quali[2]==TRUE & quali[3]==TRUE){ f2=rep(levels(Fator2),e=length(levels(Fator3))) f3=rep(unique(as.character(Fator3)),length(levels(Fator2))) media=tapply(response,paste(Fator2,Fator3), mean, na.rm=TRUE)[unique(paste(f2,f3))] # desvio=tapply(response,paste(Fator2,Fator3), sd, na.rm=TRUE)[unique(paste(f2,f3))] if(point=="mean_sd"){desvio=tapply(response,paste(Fator2,Fator3), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,paste(Fator2,Fator3), sd, na.rm=TRUE)/ sqrt(tapply(response,paste(Fator2,Fator3), length))} desvio=desvio[unique(paste(f2,f3))] f2=factor(f2,levels = unique(f2)) f3=factor(f3,levels = unique(f3)) graph=data.frame(f2=f2, f3=f3, media, desvio, letra,letra1, numero=format(media,digits = dec)) numero=paste(graph$numero,graph$letra,graph$letra1,sep="") graph$numero=numero colint=ggplot(graph, aes(x=f3, y=media, fill=f2))+ geom_col(position 
= "dodge",color="black")+ ylab(ylab)+xlab(xlab)+ theme+ labs(fill=names.fat[2])+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=0.3,position = position_dodge(width=0.9))+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)+ theme(text=element_text(size=textsize,family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family)) colint3=colint print(colint) letras=paste(graph$letra,graph$letra1,sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator2))))) rownames(matriz)=levels(Fator2) colnames(matriz)=levels(Fator3) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n------------------------------------------\n"))) print(matriz) message(black("\n\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")")) } if(quali[2]==FALSE | quali[3]==FALSE){ if(quali[2]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] mod=aov(respi~trati) tukey=TUKEY(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){tukey$groups$respo=tapply(response[Fator2 == lf2[i]],trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf2[i],"of F2") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){duncan$groups$respo=tapply(response[Fator2 == lf2[i]],trati,mean, 
na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf2[i],"of F2") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){lsd$groups$respo=tapply(response[Fator2 == lf2[i]],trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf2[i],"of F2") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep, QME = anavaF3$`Mean Sq`[8], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) # sk=sk(respi,trati,anavaF3$Df[8],anavaF3$`Sum Sq`[8],alpha.t) if(transf !=1){sk$respo=tapply(response[Fator2 == lf2[i]],trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F3 within level",lf2[i],"of F2") cat("\n----------------------\n") print(sk)}} } if(quali[2]==FALSE){ Fator2=fator2a#as.numeric(as.character(Fator2)) colint3=polynomial2(Fator2, response, Fator3, grau = grau23, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[8,1],SSq = anavaF3[8,2])} if(quali[3]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] mod=aov(respi~trati) tukey=TUKEY(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){tukey$groups$respo=tapply(response[Fator3 == lf3[i]],trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within 
level",lf3[i],"of F3") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] duncan=duncan(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){duncan$groups$respo=tapply(response[Fator3 == lf3[i]],trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf3[i],"of F3") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] lsd=LSD(respi,trati,anavaF3$Df[8],anavaF3$`Mean Sq`[8],alpha.t) if(transf !=1){lsd$groups$respo=tapply(response[Fator3 == lf3[i]],trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf3[i],"of F3") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep, QME = anavaF3$`Mean Sq`[8], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) # sk=sk(respi,trati,anavaF3$Df[8],anavaF3$`Sum Sq`[8],alpha.t) if(transf !=1){sk$respo=tapply(response[Fator3 == lf3[i]],trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf3[i],"of F3") cat("\n----------------------\n") print(sk)}} } if(quali[3]==FALSE){ Fator3=fator3a#as.numeric(as.character(Fator3)) colint3=polynomial2(Fator3, response, Fator2, grau = grau32, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[8,1],SSq = anavaF3[8,2])} cat(green("\nTo edit graphical parameters, I suggest analyzing using 
the \"polynomial2\" command")) } if(anavaF3[4,5]>alpha.f && anavaF3[5,5]>alpha.f) { i<-1 { if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(green(italic('Analyzing the simple effects of the factor ',names.fat[2]))) cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[8,1],anavaF3[8,3],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ ad=data.frame(Fator1,Fator2,Fator3) nrep=table(fatores[,i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3[8,1], nrep = nrep, QME = anavaF3[8,3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- LSD(anava, colnames(ad[i]), alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} print(letra1) cat(green(bold("\n------------------------------------------\n"))) if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1, media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)], desvio=desvio) 
dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i]))) dadosm$limite=dadosm$media+dadosm$desvio dadosm=dadosm[as.character(unique(unlist(fatores[i]))),] if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio Tratamentos=dadosm$Tratamentos letra=dadosm$letra grafico=ggplot(dadosm, aes(x=Tratamentos, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=Tratamentos),color=1)} else{grafico=grafico+ geom_col(aes(fill=Tratamentos),fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm,aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3) grafico3=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[1]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), legend.position = "none") print(grafico3)} } if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat('Analyzing the simple effects of the factor ',names.fat[2]) cat(green(bold("\n------------------------------------------\n"))) cat(names.fat[i]) grafico3=polynomial(fatores[,i], resp,grau=grau[i], DFres= anavaF3[8,1],SSq = anavaF3[8,2],ylab = ylab,xlab=parse(text = xlab.factor[1]),point = point)[[1]] print(grafico3) cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command")) } cat('\n') } } } if(anavaF3[7,5]<=alpha.f){ 
cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Interaction",paste(names.fat[1],'*',names.fat[2],'*',names.fat[3],sep='')," significant: unfolding the interaction"))) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", names.fat[1], ' inside of each level of ', names.fat[2], 'and',names.fat[3]) cat(green(bold("\n------------------------------------------\n"))) m1=aov(resp~(Fator2*Fator3)/Fator1) anova(m1) pattern <- c(outer(levels(Fator2), levels(Fator3), function(x,y) paste("Fator2",x,":Fator3",y,":",sep=""))) des.tab <- sapply(pattern, simplify=FALSE, grep, x=names(coef(m1)[m1$assign==4])) des1.tab <- summary(m1, split = list("Fator2:Fator3:Fator1" = des.tab)) desd=des1.tab[[1]][-c(1,2,3,4),] desd=data.frame(desd[-length(rownames(desd)),]) # rownames(desd)=cbind(paste("Fator2:",rep(levels(Fator2),length(levels(Fator3))), # "Fator3:",rep(levels(Fator3),e=length(levels(Fator2))))) # rownames(desd)=cbind(paste(names.fat[2],":",rep(levels(Fator2),length(levels(Fator3))), names.fat[3],":",rep(levels(Fator3),e=length(levels(Fator2))))) colnames(desd)=c("Df", "Sum Sq", "Mean Sq", "F value", "Pr(>F)") print(desd) ii<-0 for(i in 1:nv2) { for(j in 1:nv3) { ii<-ii+1 if(quali[1]==TRUE){ cat('\n',names.fat[1],' within the combination of levels ',lf2[i],' of ',names.fat[2],' and ',lf3[j],' of ',names.fat[3],"\n") if(mcomp=='tukey'){tukey=TUKEY(resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], anavaF3[8,1], anavaF3[8,3], alpha.t) tukey=tukey$groups;colnames(tukey)=c("resp","letters") if(transf !=1){tukey$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], mean, na.rm=TRUE)[rownames(tukey)]} print(tukey)} if(mcomp=='duncan'){duncan=duncan(resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], 
anavaF3[8,1], anavaF3[8,3], alpha.t) duncan=duncan$groups;colnames(duncan)=c("resp","letters") if(transf !=1){duncan$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], mean, na.rm=TRUE)[rownames(duncan)]} print(duncan)} if(mcomp=='lsd'){lsd=LSD(resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], anavaF3[8,1], anavaF3[8,3], alpha.t) lsd=lsd$groups;colnames(lsd)=c("resp","letters") if(transf !=1){lsd$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], mean, na.rm=TRUE)[rownames(lsd)]} print(lsd)} if(mcomp=='sk'){ fat= fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]] fat1=factor(fat,unique(fat)) levels(fat1)=1:length(levels(fat1)) respi=resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]] nrep=table(fat1)[1] medias=sort(tapply(respi,fat1,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep, QME = anavaF3$`Mean Sq`[8], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) colnames(sk)=c("resp","letters") sk=sk[as.character(unique(fat1)),] rownames(sk)=unique(fat) if(transf !=1){sk$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], mean, na.rm=TRUE)[rownames(sk)]} print(sk)} } if(quali[1]==FALSE){ cat('\n',names.fat[1],' within the combination of levels ',lf2[i],' of ',names.fat[2],' and ',lf3[j],' of ',names.fat[3],"\n") polynomial(fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]],grau=grau123, DFres= anavaF3[8,1],SSq = anavaF3[8,2],ylab=ylab,xlab=xlab,point = point)[[1]]} } } cat('\n\n') cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", names.fat[2], ' inside of each level of ', names.fat[1], 'and',names.fat[3]) cat(green(bold("\n------------------------------------------\n"))) m1=aov(resp~(Fator1*Fator3)/Fator2) anova(m1) pattern <- 
c(outer(levels(Fator1), levels(Fator3), function(x,y) paste("Fator1",x,":Fator3",y,":",sep=""))) des.tab <- sapply(pattern, simplify=FALSE, grep, x=names(coef(m1)[m1$assign==4])) des1.tab <- summary(m1, split = list("Fator1:Fator3:Fator2" = des.tab)) desd=des1.tab[[1]][-c(1,2,3,4),] desd=data.frame(desd[-length(rownames(desd)),]) # rownames(desd)=cbind(paste("Fator1:",rep(levels(Fator1),length(levels(Fator3))), # "Fator3:",rep(levels(Fator3),e=length(levels(Fator1))))) rownames(desd)=cbind(paste(names.fat[1],":",rep(levels(Fator1),length(levels(Fator3))), names.fat[3],":",rep(levels(Fator3),e=length(levels(Fator1))))) colnames(desd)=c("Df", "Sum Sq", "Mean Sq", "F value", "Pr(>F)") print(desd) ii<-0 for(k in 1:nv1) { for(j in 1:nv3) { ii<-ii+1 if(quali[2]==TRUE){ cat('\n\n',names.fat[2],' within the combination of levels ',lf1[k],' of ',names.fat[1],' and ',lf3[j],' of ',names.fat[3],'\n') if(mcomp=='tukey'){tukey=TUKEY(resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], anavaF3[8,1], anavaF3[8,3], alpha.t) tukey=tukey$groups;colnames(tukey)=c("resp","letters") if(transf !=1){tukey$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(tukey)]} print(tukey)} if(mcomp=='duncan'){duncan=duncan(resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], anavaF3[8,1], anavaF3[8,3], alpha.t) duncan=duncan$groups;colnames(duncan)=c("resp","letters") if(transf !=1){duncan$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(duncan)]} print(duncan)} if(mcomp=='lsd'){lsd=LSD(resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], anavaF3[8,1], anavaF3[8,3], alpha.t) lsd=lsd$groups;colnames(lsd)=c("resp","letters") if(transf 
!=1){lsd$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(lsd)]} print(lsd)} if(mcomp=='sk'){ fat=fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]] fat1=factor(fat,unique(fat)) levels(fat1)=1:length(levels(fat1)) respi=resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]] nrep=table(fat1)[1] medias=sort(tapply(respi,fat1,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep, QME = anavaF3$`Mean Sq`[8], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) colnames(sk)=c("resp","letters") sk=sk[as.character(unique(fat1)),] rownames(sk)=unique(fat) if(transf !=1){sk$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(sk)]} print(sk)} } if(quali[2]==FALSE){ cat('\n\n',names.fat[2],' within the combination of levels ',lf1[k],' of ',names.fat[1],' and ',lf3[j],' of ',names.fat[3],'\n') polynomial(fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], DFres= anavaF3[8,1],SSq = anavaF3[8,2],grau=grau213, ylab = ylab,xlab = xlab,point = point)[[1]]} } } #=================================================================== #Desdobramento de FATOR 3 dentro do niveis de FATOR 1 e FATOR 2 #=================================================================== cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", names.fat[3], ' inside of each level of ', names.fat[1], 'and',names.fat[2]) cat(green(bold("\n------------------------------------------\n"))) m1=aov(resp~(Fator1*Fator2)/Fator3) anova(m1) pattern <- c(outer(levels(Fator1), levels(Fator2), function(x,y) paste("Fator1",x,":Fator2",y,":",sep=""))) des.tab <- sapply(pattern, simplify=FALSE, grep, x=names(coef(m1)[m1$assign==4])) des1.tab <- summary(m1, split = list("Fator1:Fator2:Fator3" = des.tab)) desd=des1.tab[[1]][-c(1,2,3,4),] 
desd=data.frame(desd[-length(rownames(desd)),]) # rownames(desd)=cbind(paste("Fator1:",rep(levels(Fator1),length(levels(Fator2))), # "Fator2:",rep(levels(Fator2),e=length(levels(Fator1))))) rownames(desd)=cbind(paste(names.fat[1],":",rep(levels(Fator1),length(levels(Fator2))), names.fat[2],":",rep(levels(Fator2),e=length(levels(Fator1))))) colnames(desd)=c("Df", "Sum Sq", "Mean Sq", "F value", "Pr(>F)") print(desd) ii<-0 for(k in 1:nv1) { for(i in 1:nv2) { ii<-ii+1 if(quali[3]==TRUE){ cat('\n\n',names.fat[3],' within the combination of levels ',lf1[k],' of ',names.fat[1],' and ',lf2[i],' of ',names.fat[2],'\n') if(mcomp=='tukey'){tukey=TUKEY(resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], anavaF3[8,1], anavaF3[8,3], alpha.t) tukey=tukey$groups;colnames(tukey)=c("resp","letters") if(transf !=1){tukey$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], mean, na.rm=TRUE)[rownames(tukey)]} print(tukey)} if(mcomp=='duncan'){duncan=duncan(resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], anavaF3[8,1], anavaF3[8,3], alpha.t) duncan=duncan$groups;colnames(duncan)=c("resp","letters") if(transf !=1){duncan$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], mean, na.rm=TRUE)[rownames(duncan)]} print(duncan)} if(mcomp=='lsd'){lsd=LSD(resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], anavaF3[8,1], anavaF3[8,3], alpha.t) lsd=lsd$groups;colnames(lsd)=c("resp","letters") if(transf !=1){lsd$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], mean, na.rm=TRUE)[rownames(lsd)]} print(lsd)} if(mcomp=='sk'){ fat=fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]] fat1=factor(fat,unique(fat)) 
levels(fat1)=1:length(levels(fat1)) respi=resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]] nrep=table(fat1)[1] medias=sort(tapply(respi,fat1,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[8], nrep = nrep, QME = anavaF3$`Mean Sq`[8], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) # sk=sk(resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], # fat1, # anavaF3$Df[8], # anavaF3$`Sum Sq`[8], # alpha.t) colnames(sk)=c("resp","letters") sk=sk[as.character(unique(fat1)),] rownames(sk)=unique(fat) if(transf !=1){sk$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], mean, na.rm=TRUE)[rownames(sk)]} print(sk)} } if(quali[3]==FALSE){ cat('\n\n',names.fat[3],' inside of each level of ',lf1[k],' of ',names.fat[1],' and ',lf2[i],' of ',names.fat[2],'\n') polynomial(fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]], resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],grau=grau312, DFres= anavaF3[8,1],SSq = anavaF3[8,2],ylab = ylab,xlab = xlab,point = point)[[1]]} } } } if(anavaF3[4,5]>alpha.f && anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f && anavaF3[7,5]>alpha.f){ if(anavaF3[1,5]<=alpha.f | anavaF3[2,5]<=alpha.f | anavaF3[3,5]<=alpha.f){ graficos}else{graficos=NA}} if(anavaF3[7,5]>alpha.f && anavaF3[4,5]<=alpha.f){ graficos=list(residplot,colint1) if(anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f && anavaF3[3,5]<=alpha.f){ graficos=list(residplot,colint1,grafico1)} graficos} if(anavaF3[7,5]>alpha.f && anavaF3[5,5]<=alpha.f){ graficos=list(residplot,colint2) if(anavaF3[4,5]>alpha.f && anavaF3[6,5]>alpha.f && anavaF3[2,5]<=alpha.f){ graficos=list(residplot,colint2,grafico2)} graficos} if(anavaF3[7,5]>alpha.f && anavaF3[6,5]<=alpha.f){ graficos=list(residplot,colint3) if(anavaF3[4,5]>alpha.f && anavaF3[5,5]>alpha.f && anavaF3[1,5]<=alpha.f){ graficos=list(residplot,colint3,grafico3)}} if(anavaF3[7,5]>alpha.f && anavaF3[4,5]<=alpha.f && anavaF3[5,5]<=alpha.f){ 
graficos=list(residplot,colint1,colint2) graficos} if(anavaF3[7,5]>alpha.f && anavaF3[4,5]<=alpha.f && anavaF3[6,5]<=alpha.f){ graficos=list(residplot,colint1,colint3) graficos} if(anavaF3[7,5]>alpha.f && anavaF3[5,5]<=alpha.f && anavaF3[6,5]<=alpha.f){ graficos=list(residplot,colint2,colint3) graficos} if(anavaF3[7,5]<=alpha.f){graficos=list(residplot)} graficos=graficos }
# End of AgroR/R/FAT3DIC_function.R
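The function defined below, `FAT3DIC.ad`, tests the additional control against the factorial treatments as a single-degree-of-freedom contrast: the contrast sum of squares is the one-way treatment SS (all factorial cells plus the control) minus the sum of the seven factorial sums of squares, tested against the pooled residual mean square. A minimal, self-contained sketch of that arithmetic with simulated data (all names and numbers here are illustrative, not from the package):

```r
# Sketch of the "Factorial vs Additional" contrast used by FAT3DIC.ad
# (simulated 2 x 2 x 3 factorial, 2 replicates, plus 4 control plots)
set.seed(1)
f1 <- gl(2, 12); f2 <- gl(2, 6, 24); f3 <- gl(3, 2, 24)
resp   <- rnorm(24, mean = 10)   # factorial responses
respAd <- rnorm(4,  mean = 12)   # additional (control) responses

# One-way ANOVA treating every factorial cell plus the control as a treatment
trat <- factor(c(as.character(interaction(f1, f2, f3)), rep("ad", 4)))
an1  <- anova(aov(c(resp, respAd) ~ trat))

# Triple-factorial ANOVA on the factorial plots only
anF <- anova(aov(resp ~ f1 * f2 * f3))

# Contrast SS = treatment SS minus the seven factorial SS (1 df),
# F-tested against the pooled residual mean square
SQAd <- an1[1, 2] - sum(anF[1:7, 2])
FAd  <- SQAd / an1[2, 3]
pAd  <- 1 - pf(FAd, 1, an1[2, 1])
```

This mirrors the `SQAd`/`FAd`/`pAd` computation inside the function body; the factorial F tests are likewise recomputed against the pooled residual from the one-way fit.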
#' Analysis: DIC experiments in triple factorial with additional
#'
#' @description Analysis of an experiment conducted in a completely randomized
#' design in a triple factorial scheme with one additional control, using
#' analysis of variance of fixed effects.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param f1 Numeric or complex vector with factor 1 levels
#' @param f2 Numeric or complex vector with factor 2 levels
#' @param f3 Numeric or complex vector with factor 3 levels
#' @param repe Numeric or complex vector with repetitions
#' @param response Numeric vector containing the response of the experiment.
#' @param responseAd Numeric vector containing the additional response
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott and Duncan)
#' @param quali Defines whether the factor is quantitative or qualitative (\emph{qualitative})
#' @param names.fat Allows labeling the factors 1, 2 and 3.
#' @param grau Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with three elements.
#' @param grau12 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f1 x f2 and qualitative factor 2 and quantitative factor 1.
#' @param grau21 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f2 and qualitative factor 1 and quantitative factor 2.
#' @param grau13 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 3, in the case of interaction f1 x f3 and qualitative factor 3 and quantitative factor 1.
#' @param grau31 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f3 and qualitative factor 1 and quantitative factor 3.
#' @param grau23 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 3, in the case of interaction f2 x f3 and qualitative factor 3 and quantitative factor 2.
#' @param grau32 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f2 x f3 and qualitative factor 2 and quantitative factor 3.
#' @param grau123 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f2 x f3 and quantitative factor 1.
#' @param grau213 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f1 x f2 x f3 and quantitative factor 2.
#' @param grau312 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 3, in the case of interaction f1 x f2 x f3 and quantitative factor 3.
#' @param xlab Treatments name (Accepts the \emph{expression}() function)
#' @param ylab Variable response name (Accepts the \emph{expression}() function)
#' @param xlab.factor Provide a vector with three observations referring to the x-axis name of factors 1, 2 and 3, respectively, when there is an isolated effect of the factors. This argument uses `parse`.
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param transf Applies data transformation (\emph{default} is 1; for log consider 0; `angular` for angular transformation)
#' @param constant Add a constant for transformation (enter value)
#' @param sup Number of units above the standard deviation or average bar on the graph
#' @param geom Graph type (columns or segments)
#' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat")
#' @param angulo x-axis scale text rotation
#' @param textsize Font size
#' @param labelsize Label size
#' @param dec Number of digits displayed for the means
#' @param family Font family
#' @param ad.label Label for the additional treatment
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param addmean Plot the average value on the graph (\emph{default} is TRUE)
#' @param errorbar Plot the standard deviation bar on the graph (in the case of a segment and column graph) - \emph{default} is TRUE
#' @param point Defines whether to show all points ("all"), the mean ("mean"), the mean with standard deviation (\emph{default} - "mean_sd") or the mean with standard error ("mean_se") when quali = FALSE. For quali = TRUE, `mean_sd` and `mean_se` change which information is displayed in the error bar.
#' @param angle.label Label angle
#' @note The order of the chart follows the alphabetical pattern. Please use the `limits` argument of `scale_x_discrete` from package ggplot2 to reorder the x-axis. The bars of the column and segment graphs are standard deviation.
#' @return The analysis of variance table, the Shapiro-Wilk error normality test, the Bartlett homogeneity test of variances, the Durbin-Watson error independence test, multiple comparison test (Tukey, LSD, Scott-Knott or Duncan) or adjustment of regression models up to third-degree polynomial, in the case of quantitative treatments. The column chart for qualitative treatments is also returned. For significant triple interaction only, no graph is returned.
#' @note The function does not perform multiple regression in the case of two or more quantitative factors. The bars of the column and segment graphs are standard deviation.
#' @note In the final output, when the transformation (transf argument) is different from 1, the columns resp and respo in the mean test are returned, indicating the transformed and non-transformed mean, respectively.
#' @references
#'
#' Steel, R. G. D., Torrie, J. H., Dickey, D. A. Principles and Procedures of Statistics: A Biometrical Approach. Third Edition, 1997.
#'
#' Hsu, J. C. Multiple Comparisons: Theory and Methods. Department of Statistics, The Ohio State University, USA, 1996. Chapman & Hall/CRC.
#'
#' Conover, W. J. Practical Nonparametric Statistics. 1999.
#'
#' Ramalho, M. A. P., Ferreira, D. F., Oliveira, A. C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott, A. J., Knott, M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#'
#' Ferreira, E. B., Cavalcanti, P. P., Nogueira, D. A. (2014). ExpDes: an R package for ANOVA and experimental designs. Applied Mathematics, 5(19), 2952.
#'
#' Mendiburu, F., de Mendiburu, M. F. (2019). Package 'agricolae'. R Package, Version 1-2.
#'
#' @keywords DIC
#' @keywords Factorial
#' @export
#' @examples
#' library(AgroR)
#' data(enxofre)
#' respAd=c(2000,2400,2530,2100)
#' with(enxofre, FAT3DIC.ad(f1, f2, f3, bloco, resp, respAd))
FAT3DIC.ad = function(f1,
                      f2,
                      f3,
                      repe,
                      response,
                      responseAd,
                      norm = "sw",
                      alpha.f = 0.05,
                      alpha.t = 0.05,
                      quali = c(TRUE, TRUE, TRUE),
                      mcomp = 'tukey',
                      transf = 1,
                      constant = 0,
                      names.fat = c("F1", "F2", "F3"),
                      ylab = "Response",
                      xlab = "",
                      xlab.factor = c("F1", "F2", "F3"),
                      sup = NA,
                      grau = c(NA, NA, NA), # isolated effects and triple interaction
                      grau12 = NA, # F1/F2
                      grau13 = NA, # F1/F3
                      grau23 = NA, # F2/F3
                      grau21 = NA, # F2/F1
                      grau31 = NA, # F3/F1
                      grau32 = NA, # F3/F2
                      grau123 = NA,
                      grau213 = NA,
                      grau312 = NA,
                      fill = "lightblue",
                      theme = theme_classic(),
                      ad.label = "Additional",
                      angulo = 0,
                      errorbar = TRUE,
                      addmean = TRUE,
                      family = "sans",
                      dec = 3,
                      geom = "bar",
                      textsize = 12,
                      labelsize = 4,
                      point = "mean_sd",
                      angle.label = 0) {
  if(is.na(sup)){sup=0.2*mean(response)}
  if(angle.label==0){hjust=0.5}else{hjust=0}
  requireNamespace("crayon")
  requireNamespace("ggplot2")
  requireNamespace("nortest")
  # apply the requested transformation to the factorial responses
  if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}}
  if(transf==0){resp=log(response+constant)}
  if(transf==0.5){resp=sqrt(response+constant)}
  if(transf==-0.5){resp=1/sqrt(response+constant)}
  if(transf==-1){resp=1/(response+constant)}
  if(transf=="angular"){resp=asin(sqrt((response+constant)/100))}
  # apply the same transformation to the additional treatment
  if(transf==1){respAd=responseAd+constant}else{if(transf!="angular"){respAd=((responseAd+constant)^transf-1)/transf}}
  if(transf==0){respAd=log(responseAd+constant)}
  if(transf==0.5){respAd=sqrt(responseAd+constant)}
  if(transf==-0.5){respAd=1/sqrt(responseAd+constant)}
  if(transf==-1){respAd=1/(responseAd+constant)}
  if(transf=="angular"){respAd=asin(sqrt((responseAd+constant)/100))}
ordempadronizado=data.frame(f1,f2,f3,response,resp)
resp1=resp
organiz=data.frame(f1,f2,f3,response,resp)
organiz=organiz[order(organiz$f3),]
organiz=organiz[order(organiz$f2),]
organiz=organiz[order(organiz$f1),]
f1=organiz$f1
f2=organiz$f2
f3=organiz$f3
response=organiz$response
resp=organiz$resp
fator1=f1
fator2=f2
fator3=f3
fator1a=fator1
fator2a=fator2
fator3a=fator3
fac.names=names.fat
fatores<-data.frame(fator1,fator2,fator3)
Fator1<-factor(fator1,levels=unique(fator1))
Fator2<-factor(fator2,levels=unique(fator2))
Fator3<-factor(fator3,levels=unique(fator3))
nv1<-length(summary(Fator1))
nv2<-length(summary(Fator2))
nv3<-length(summary(Fator3))
J<-(length(resp))/(nv1*nv2*nv3)
lf1<-levels(Fator1)
lf2<-levels(Fator2)
lf3<-levels(Fator3)
repe=as.factor(repe)
anava<-aov(resp~Fator1*Fator2*Fator3)
anavaF3<-anova(anava)
anovaF3=anavaF3
colnames(anovaF3)=c("GL","SQ","QM","Fcal","p-value")
anavares<-aov(resp~as.factor(f1)*as.factor(f2)*as.factor(f3), data = ordempadronizado)
respad=anavares$residuals/sqrt(anavaF3$`Mean Sq`[8])
out=respad[respad>3 | respad<(-3)]
out=names(out)
out=if(length(out)==0)("No discrepant point")else{out}
Ids=ifelse(respad>3 | respad<(-3), "darkblue","black")
residplot=ggplot(data=data.frame(respad,Ids),aes(y=respad,x=1:length(respad)))+
  geom_point(shape=21,color="gray",fill="gray",size=3)+
  labs(x="",y="Standardized residuals")+
  geom_text(x=1:length(respad),label=1:length(respad),color=Ids,size=labelsize)+
  scale_x_continuous(breaks=1:length(respad))+
  theme_classic()+
  theme(axis.text.y = element_text(size=textsize),
        axis.text.x = element_blank())+
  geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1)
print(residplot)
col1<-numeric(0)
for(i in 1:c(nv1*nv2*nv3)) {col1<-c(col1, rep(i,J))}
col1<-c(col1,rep('ad',J))
col2<-c(repe,rep(1:J))
col3<-c(resp,respAd)
tabF3ad<-data.frame("TRAT"=col1, "BLOCO"=col2, "RESP2"=col3)
TRAT<-factor(tabF3ad[,1])
anava1<-aov(tabF3ad[,3]~TRAT)
anavaTr<-anova(anava1)
# single-degree-of-freedom contrast: factorial vs additional
SQAd=anavaTr[1,2]-sum(anavaF3[c(1,2,3,4,5,6,7),2])
DfAd=1
QMAd=SQAd
FAd=QMAd/anavaTr[2,3]
pAd=1-pf(FAd,DfAd,anavaTr[2,1])
DfE=anavaTr[2,1]
SQE=anavaTr[2,2]
QME=anavaTr[2,3]
# factorial F tests recomputed against the pooled residual mean square
Fef=anavaF3$`Mean Sq`[1:7]/anavaTr[2,3]
pef=1-pf(Fef,anavaF3$Df[1:7],DfE)
anavaF3=rbind(anavaF3[1:7,],
              "Factorial vs Additional"=c(DfAd,SQAd,QMAd,FAd,pAd),
              "Residuals"=c(DfE,SQE,QME,NA,NA))
anavaF3[1:7,4]=Fef
anavaF3[1:7,5]=pef
anovaF3=anavaF3
norm1<-shapiro.test(anava$residuals)
cat(green(bold("\n------------------------------------------")))
cat(green(bold("\nNormality of errors")))
cat(green(bold("\n------------------------------------------")))
print(norm1)
message(if(norm1$p.value>0.05){
  black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")}
  else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors do not follow a normal distribution"})
homog1=bartlett.test(anava$residuals~paste(Fator1,Fator2,Fator3))
cat(green(bold("\n------------------------------------------")))
cat(green(bold("\nHomogeneity of Variances")))
cat(green(bold("\n------------------------------------------")))
print(homog1)
message(if(homog1$p.value[1]>0.05){
  black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")}
  else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"})
indep=dwtest(anava)
cat(green(bold("\n------------------------------------------")))
cat(green(bold("\nIndependence of errors")))
cat(green(bold("\n------------------------------------------")))
print(indep)
message(if(indep$p.value>0.05){
  black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")}
  else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors are not independent"})
cat("\n")
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Additional Information")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(paste("\nCV (%) = ",round(sqrt(anavaF3$`Mean Sq`[9])/mean(c(resp,respAd),na.rm=TRUE)*100,2)))
cat(paste("\nMean Factorial = ",round(mean(response,na.rm=TRUE),4)))
cat(paste("\nMedian Factorial = ",round(median(response,na.rm=TRUE),4)))
cat(paste("\nMean Additional = ",round(mean(responseAd,na.rm=TRUE),4)))
cat(paste("\nMedian Additional = ",round(median(responseAd,na.rm=TRUE),4)))
cat("\nPossible outliers = ", out)
cat("\n")
cat(green(bold("\n------------------------------------------\n")))
cat(green(bold("Analysis of Variance")))
cat(green(bold("\n------------------------------------------\n")))
anava1=as.matrix(data.frame(anovaF3))
colnames(anava1)=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)")
rownames(anava1)=c(names.fat[1],
                   names.fat[2],
                   names.fat[3],
                   paste(names.fat[1],"x",names.fat[2]),
                   paste(names.fat[1],"x",names.fat[3]),
                   paste(names.fat[2],"x",names.fat[3]),
                   paste(names.fat[1],"x",names.fat[2],"x",names.fat[3]),
                   "Factorial vs Additional",
                   "Residuals")
print(anava1,na.print = "")
cat("\n")
if(transf==1 && norm1$p.value<0.05 | transf==1 && indep$p.value<0.05 | transf==1 && homog1$p.value<0.05){
  message("\n Your analysis is not valid; consider using a non-parametric test or transforming the data\n")}else{}
if(transf != 1 && norm1$p.value<0.05 | transf!=1 && indep$p.value<0.05 | transf!=1 && homog1$p.value<0.05){
  message("\n Your analysis is not valid; consider using the function FATDIC.art\n")}else{}
message(if(transf !=1){blue("\nNOTE: resp = transformed means; respo = means without transformation\n")})
if(anavaF3[4,5]>alpha.f &&
anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f && anavaF3[7,5]>alpha.f) { graficos=list(1,2,3) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold('Non-significant interaction: analyzing the simple effects'))) cat(green(bold("\n------------------------------------------\n"))) fatores<-data.frame('fator 1'=fator1,'fator 2' = fator2,'fator 3' = fator3) for(i in 1:3){ if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) cat(green(bold("\n------------------------------------------\n"))) if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i], anavaF3[9,1],anavaF3[9,3],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fatores[,i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3[9,1], nrep = nrep, QME = anavaF3[9,3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(resp,fatores[,i], anavaF3[9,1],anavaF3[9,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- LSD(resp,fatores[,i], anavaF3[9,1],anavaF3[9,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(italic(paste("Multiple Comparison Test:",teste,"\n")))) print(letra1) 
cat(green(bold("\n------------------------------------------\n"))) if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1, media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)], desvio=desvio) dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i]))) dadosm$limite=dadosm$media+dadosm$desvio dadosm=dadosm[as.character(unique(unlist(fatores[i]))),] if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio Tratamentos=dadosm$Tratamentos letra=dadosm$letra if(geom=="bar"){grafico=ggplot(dadosm, aes(x=Tratamentos, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=Tratamentos), color=1)}else{grafico=grafico+ geom_col(aes(fill=Tratamentos), fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+ if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3)} grafico=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[i]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family))} if(geom=="point"){grafico=ggplot(dadosm, aes(x=Tratamentos, y=media)) if(fill=="trat"){grafico=grafico+ geom_point(aes(color=Tratamentos))} else{grafico=grafico+ geom_point(aes(color=Tratamentos),color=fill,size=4)} 
if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3)} grafico=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[i]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family))} grafico=grafico+ geom_hline(aes(color=ad.label,group=ad.label, yintercept=mean(responseAd,na.rm=TRUE)),lty=2,show.legend = TRUE)+ scale_color_manual(values = "black")+labs(color="") print(grafico) } if(anavaF3[i,5]>alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) cat(green(bold("\n------------------------------------------\n"))) mean.table<-mean_stat(response,fatores[,i],mean) colnames(mean.table)<-c('Niveis','Medias') print(mean.table) grafico=NA} if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) cat(green(bold("\n------------------------------------------\n"))) dose=as.numeric(as.vector(unlist(fatores[,i]))) grafico=polynomial(dose,resp,grau = grau[i],ylab = ylab,xlab = xlab, DFres= anavaF3[9,1],SSq = anavaF3[9,2],point = point)[[1]] cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command")) cat(green(bold("\n------------------------------------------")))} graficos[[1]]=residplot graficos[[i+1]]=grafico } } if(anavaF3[7,5]>alpha.f && anavaF3[4,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) 
cat(green(bold("Interaction",paste(fac.names[1],'*',fac.names[2],sep='')," significant: unfolding the interaction"))) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[1], ' within the combination of levels ', fac.names[2]) cat(green(bold("\n------------------------------------------\n"))) des<-aov(resp~Fator2/Fator1+Fator3+Fator2+Fator2:Fator3+Fator1:Fator2:Fator3) l<-vector('list',nv2) names(l)<-names(summary(Fator2)) v<-numeric(0) for(j in 1:nv2) { for(i in 0:(nv1-2)) v<-cbind(v,i*nv2+j) l[[j]]<-v v<-numeric(0)} des1<-summary(des,split=list('Fator2:Fator1'=l))[[1]] des1[nrow(des1),]=c(DfE,SQE,QME,NA,NA) des1$`F value`=c(des1$`Mean Sq`[1:nrow(des1)-1]/des1$`Mean Sq`[nrow(des1)],NA) des1$`Pr(>F)`=c(1-pf(des1$`F value`[1:nrow(des1)-1], des1$Df[1:nrow(des1)-1], des1$Df[nrow(des1)]),NA) des1a=des1[-c(1,2,3,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv2) { rn <- c(rn, paste(paste(names.fat[1], ":", names.fat[2], sep = ""), lf2[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico[[i]]=tukey$groups[levels(trati),2] ordem[[i]]=rownames(tukey$groups[levels(trati),]) } letra=unlist(tukeygrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = 
unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico[[i]]=duncan$groups[levels(trati),2] ordem[[i]]=rownames(duncan$groups[levels(trati),]) } letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico[[i]]=lsd$groups[levels(trati),2] ordem[[i]]=rownames(lsd$groups[levels(trati),]) } letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra}} cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[2], " inside of the level of ",fac.names[1]) 
cat(green(bold("\n------------------------------------------\n"))) des<-aov(resp~Fator1/Fator2+Fator3+Fator1+Fator1:Fator3+Fator1:Fator2:Fator3) l<-vector('list',nv1) names(l)<-names(summary(Fator1)) v<-numeric(0) for(j in 1:nv1) { for(i in 0:(nv2-2)) v<-cbind(v,i*nv1+j) l[[j]]<-v v<-numeric(0)} des1<-summary(des,split=list('Fator1:Fator2'=l))[[1]] des1[nrow(des1),]=c(DfE,SQE,QME,NA,NA) des1$`F value`=c(des1$`Mean Sq`[1:nrow(des1)-1]/des1$`Mean Sq`[nrow(des1)],NA) des1$`Pr(>F)`=c(1-pf(des1$`F value`[1:nrow(des1)-1], des1$Df[1:nrow(des1)-1], des1$Df[nrow(des1)]),NA) des1a=des1[-c(1,2,3,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv1) { rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[1], sep = ""), lf1[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico1[[i]]=tukey$groups[levels(trati),2] } letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if (mcomp == "duncan"){ duncangrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico1[[i]]=duncan$groups[levels(trati),2] } letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if (mcomp == "lsd"){ lsdgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean 
Sq`[9],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico1[[i]]=lsd$groups[levels(trati),2] } letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) letra1=toupper(letra1)}} if(quali[1]==TRUE & quali[2]==TRUE){ f1=rep(levels(Fator1),e=length(levels(Fator2))) f2=rep(unique(as.character(Fator2)),length(levels(Fator1))) f1=factor(f1,levels = unique(f1)) f2=factor(f2,levels = unique(f2)) media=tapply(resp,paste(Fator1,Fator2), mean, na.rm=TRUE)[unique(paste(f1,f2))] # desvio=tapply(resp,paste(Fator1,Fator2), sd, na.rm=TRUE)[unique(paste(f1,f2))] if(point=="mean_sd"){desvio=tapply(response,paste(Fator1,Fator2), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,paste(Fator1,Fator2), sd, na.rm=TRUE)/ sqrt(tapply(response,paste(Fator1,Fator2), length))} desvio=desvio[unique(paste(f1,f2))] graph=data.frame(f1=f1, f2=f2, media, desvio, letra,letra1, numero=format(media,digits = dec)) numero=paste(graph$numero,graph$letra,graph$letra1,sep="") graph$numero=numero colint=ggplot(graph, aes(x=f2, y=media, fill=f1))+ ylab(ylab)+xlab(xlab)+ theme+ labs(fill=fac.names[1])+ geom_col(position = "dodge",color="black")+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=0.3,position = position_dodge(width=0.9))+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)+ 
theme(text=element_text(size=textsize,family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family))+ geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=T)),lty=2)+ scale_color_manual(values = "black")+labs(color="") colint1=colint print(colint) letras=paste(graph$letra,graph$letra1,sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1))))) rownames(matriz)=levels(Fator1) colnames(matriz)=levels(Fator2) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n------------------------------------------\n"))) print(matriz) cat("\n\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")") } if(quali[1]==FALSE | quali[2]==FALSE){ if(quali[1]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] 
trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(sk)}}} if(quali[1]==FALSE){ Fator1a=fator1a colint1=polynomial2(Fator1a, response, Fator3, grau = grau12, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[9,1],SSq = anavaF3[9,2])} if(quali[2]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") 
print(duncan$groups)}} if (mcomp == "lsd"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(lsd$groups)}} if (mcomp == "sk"){ for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(sk)}} } if(quali[2]==FALSE){ Fator2a=fator2a colint1=polynomial2(Fator2a, response, Fator1, grau = grau21, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[9,1],SSq = anavaF3[9,2])} cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial2\" command\n"))} #Checar o Fator3 if(anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f) { i<-3 { if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(green(italic('Analyzing the simple effects of the factor ',fac.names[i]))) cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[9,1],anavaF3[9,3],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fatores[,i])[1] 
medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3[9,1], nrep = nrep, QME = anavaF3[9,3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(resp,fatores[,i],anavaF3[9,1],anavaF3[9,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- LSD(resp,fatores[,i],anavaF3[9,1],anavaF3[9,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} print(letra1) cat(green(bold("\n------------------------------------------\n"))) if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1, media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)], desvio=desvio) dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i]))) dadosm$limite=dadosm$media+dadosm$desvio dadosm=dadosm[as.character(unique(unlist(fatores[i]))),] if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio Tratamentos=dadosm$Tratamentos letra=dadosm$letra grafico=ggplot(dadosm, aes(x=Tratamentos, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=Tratamentos),color=1)} else{grafico=grafico+ geom_col(aes(fill=Tratamentos),fill=fill,color=1)} if(errorbar==TRUE){grafico=grafico+ 
geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3)} grafico1=grafico+theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[3]))+ theme(text = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family))+ geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+ scale_color_manual(values = "black")+labs(color="") print(grafico1)} } if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat('Analyzing the simple effects of the factor ',fac.names[3]) cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) grafico1=polynomial(resp, fatores[,i],grau=grau[i],ylab = ylab,xlab = parse(text = xlab.factor[3]), DFres= anavaF3[9,1],SSq = anavaF3[9,2],point = point)[[1]] print(grafico1) cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command"))} } } if(anavaF3[7,5]>alpha.f && anavaF3[5,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Interaction",paste(fac.names[1],'*',fac.names[3],sep='')," significant: unfolding the interaction"))) cat(green(bold("\n------------------------------------------"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[1], ' within the combination of levels ', fac.names[3]) cat(green(bold("\n------------------------------------------\n"))) 
des<-aov(resp~Fator3/Fator1+Fator2+Fator3+Fator2:Fator3+Fator1:Fator2:Fator3) l<-vector('list',nv3) names(l)<-names(summary(Fator3)) v<-numeric(0) for(j in 1:nv3) { for(i in 0:(nv1-2)) v<-cbind(v,i*nv3+j) l[[j]]<-v v<-numeric(0) } des1<-summary(des,split=list('Fator3:Fator1'=l))[[1]] des1[nrow(des1),]=c(DfE,SQE,QME,NA,NA) des1$`F value`=c(des1$`Mean Sq`[1:nrow(des1)-1]/des1$`Mean Sq`[nrow(des1)],NA) des1$`Pr(>F)`=c(1-pf(des1$`F value`[1:nrow(des1)-1], des1$Df[1:nrow(des1)-1], des1$Df[nrow(des1)]),NA) des1a=des1[-c(1,2,3,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv3) { rn <- c(rn, paste(paste(names.fat[1], ":", names.fat[3], sep = ""), lf3[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[1]==TRUE & quali[3]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) tukeygrafico[[i]]=tukey$groups[levels(trati),2] ordem[[i]]=rownames(tukey$groups[levels(trati),]) } letra=unlist(tukeygrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) duncangrafico[[i]]=duncan$groups[levels(trati),2] ordem[[i]]=rownames(duncan$groups[levels(trati),]) } letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] 
trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) lsdgrafico[[i]]=lsd$groups[levels(trati),2] ordem[[i]]=rownames(lsd$groups[levels(trati),]) } letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator3 == lf3[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra}} cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[3], " inside of the level of ",fac.names[1]) cat(green(bold("\n------------------------------------------\n"))) des<-aov(resp~Fator1/Fator3+Fator1+Fator2+Fator2:Fator1+Fator1:Fator2:Fator3) l<-vector('list',nv1) names(l)<-names(summary(Fator1)) v<-numeric(0) for(j in 1:nv1) { for(i in 0:(nv3-2)) v<-cbind(v,i*nv1+j) l[[j]]<-v v<-numeric(0) } des1<-summary(des,split=list('Fator1:Fator3'=l))[[1]] des1[nrow(des1),]=c(DfE,SQE,QME,NA,NA) des1$`F value`=c(des1$`Mean Sq`[1:nrow(des1)-1]/des1$`Mean Sq`[nrow(des1)],NA) des1$`Pr(>F)`=c(1-pf(des1$`F value`[1:nrow(des1)-1], des1$Df[1:nrow(des1)-1], des1$Df[nrow(des1)]),NA) des1a=des1[-c(1,2,3,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv1) { rn <- c(rn, paste(paste(names.fat[3], ":", names.fat[1], sep = ""), lf1[j])) 
}
rownames(des1a)=rn
#============================
print(des1a)
if(quali[1]==TRUE & quali[3]==TRUE){
  if (mcomp == "tukey"){
    tukeygrafico1=c()
    for (i in 1:nv1) {
      trati=fatores[, 3][Fator1 == lf1[i]]
      trati=factor(trati,levels = unique(trati))
      respi=resp[Fator1 == lf1[i]]
      tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
      tukeygrafico1[[i]]=tukey$groups[levels(trati),2]
    }
    letra1=unlist(tukeygrafico1)
    letra1=toupper(letra1)}
  # duncan and lsd follow the same pattern as tukey and sk: loop over the
  # nv1 levels of Fator1 and return the uppercase letters in letra1
  if (mcomp == "duncan"){
    duncangrafico1=c()
    for (i in 1:nv1) {
      trati=fatores[, 3][Fator1 == lf1[i]]
      trati=factor(trati,levels = unique(trati))
      respi=resp[Fator1 == lf1[i]]
      duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
      duncangrafico1[[i]]=duncan$groups[levels(trati),2]
    }
    letra1=unlist(duncangrafico1)
    letra1=toupper(letra1)}
  if (mcomp == "lsd"){
    lsdgrafico1=c()
    for (i in 1:nv1) {
      trati=fatores[, 3][Fator1 == lf1[i]]
      trati=factor(trati,levels = unique(trati))
      respi=resp[Fator1 == lf1[i]]
      lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
      lsdgrafico1[[i]]=lsd$groups[levels(trati),2]
    }
    letra1=unlist(lsdgrafico1)
    letra1=toupper(letra1)}
  if (mcomp == "sk"){
    skgrafico1=c()
    for (i in 1:nv1) {
      trati=fatores[, 3][Fator1 == lf1[i]]
      trati=factor(trati,levels = unique(trati))
      respi=resp[Fator1 == lf1[i]]
      nrep=table(trati)[1]
      medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
      sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep,
                    QME = anavaF3$`Mean Sq`[9], alpha = alpha.t)
      sk=data.frame(respi=medias,groups=sk)
      skgrafico1[[i]]=sk[levels(trati),2]
    }
    letra1=unlist(skgrafico1)
    letra1=toupper(letra1)}}
if(quali[1]==TRUE & quali[3]==TRUE){
  f1=rep(levels(Fator1),e=length(levels(Fator3)))
  f3=rep(unique(as.character(Fator3)),length(levels(Fator1)))
f1=factor(f1,levels = unique(f1)) f3=factor(f3,levels = unique(f3)) media=tapply(response,paste(Fator1,Fator3), mean, na.rm=TRUE)[unique(paste(f1,f3))] # desvio=tapply(response,paste(Fator1,Fator3), sd, na.rm=TRUE)[unique(paste(f1,f3))] if(point=="mean_sd"){desvio=tapply(response,paste(Fator1,Fator3), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,paste(Fator1,Fator3), sd, na.rm=TRUE)/ sqrt(tapply(response,paste(Fator1,Fator3), length))} desvio=desvio[unique(paste(f1,f3))] graph=data.frame(f1=f1, f3=f3, media, desvio, letra, letra1, numero=format(media,digits = dec)) numero=paste(graph$numero,graph$letra,graph$letra1,sep="") graph$numero=numero colint=ggplot(graph, aes(x=f3, y=media, fill=f1))+ geom_col(position = "dodge",color="black")+ ylab(ylab)+ xlab(xlab)+ theme+ labs(fill=fac.names[1])+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=0.3, position = position_dodge(width=0.9))+ geom_text(aes(y=media+desvio+sup, label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)+ theme(text=element_text(size=textsize,family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family))+ geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=T)),lty=2)+ scale_color_manual(values = "black")+labs(color="") colint2=colint print(colint) letras=paste(graph$letra,graph$letra1,sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1))))) rownames(matriz)=levels(Fator1) colnames(matriz)=levels(Fator3) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n------------------------------------------\n"))) print(matriz) cat("\n\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")") } if(quali[1]==FALSE | 
quali[3]==FALSE){
  if(quali[1]==FALSE){
    if (mcomp == "tukey"){
      for (i in 1:nv1) {
        trati=fatores[, 3][Fator1 == lf1[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator1 == lf1[i]]
        tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
        if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
        cat("\n----------------------\n")
        cat("Multiple comparison of F3 within level",lf1[i],"of F1")
        cat("\n----------------------\n")
        print(tukey$groups)}}
    # the loops below iterate over the levels of Fator1 (nv1), not Fator3
    if (mcomp == "duncan"){
      for (i in 1:nv1) {
        trati=fatores[, 3][Fator1 == lf1[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator1 == lf1[i]]
        duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
        if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
        cat("\n----------------------\n")
        cat("Multiple comparison of F3 within level",lf1[i],"of F1")
        cat("\n----------------------\n")
        print(duncan$groups)}}
    if (mcomp == "lsd"){
      for (i in 1:nv1) {
        trati=fatores[, 3][Fator1 == lf1[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator1 == lf1[i]]
        lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
        if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]}
        cat("\n----------------------\n")
        cat("Multiple comparison of F3 within level",lf1[i],"of F1")
        cat("\n----------------------\n")
        print(lsd$groups)}}
    if (mcomp == "sk"){
      for (i in 1:nv1) {
        trati=fatores[, 3][Fator1 == lf1[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator1 == lf1[i]]
        nrep=table(trati)[1]
        medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
        sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep,
                      QME = anavaF3$`Mean Sq`[9], alpha = alpha.t)
        sk=data.frame(respi=medias,groups=sk)
        if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]}
        cat("\n----------------------\n")
        cat("Multiple comparison of F3 within level",lf1[i],"of F1")
        cat("\n----------------------\n")
        print(sk)}}}
  if(quali[1]==FALSE){
    Fator1a=fator1a
    colint2=polynomial2(Fator1a, response, Fator3, grau = grau13,
                        ylab=ylab, xlab=xlab, theme=theme,
                        DFres= anavaF3[9,1],SSq = anavaF3[9,2])}
  if(quali[3]==FALSE){
    if (mcomp == "tukey"){
      for (i in 1:nv3) {
        trati=fatores[, 1][Fator3 == lf3[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator3 == lf3[i]]
        # use the error mean square (Mean Sq), as in every other branch
        tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
        if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
        cat("\n----------------------\n")
        cat("Multiple comparison of F1 within level",lf3[i],"of F3")
        cat("\n----------------------\n")
        print(tukey$groups)}}
    if (mcomp == "duncan"){
      for (i in 1:nv3) {
        trati=fatores[, 1][Fator3 == lf3[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator3 == lf3[i]]
        duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
        if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
        cat("\n----------------------\n")
        cat("Multiple comparison of F1 within level",lf3[i],"of F3")
        cat("\n----------------------\n")
        print(duncan$groups)}}
    if (mcomp == "lsd"){
      for (i in 1:nv3) {
        trati=fatores[, 1][Fator3 == lf3[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator3 == lf3[i]]
        lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
        if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]}
        cat("\n----------------------\n")
        cat("Multiple comparison of F1 within level",lf3[i],"of F3")
        cat("\n----------------------\n")
        print(lsd$groups)}}
    if (mcomp == "sk"){
      for (i in 1:nv3) {
        trati=fatores[, 1][Fator3 == lf3[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator3 == lf3[i]]
        nrep=table(trati)[1]
        medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
        sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep,
                      QME = anavaF3$`Mean Sq`[9], alpha = alpha.t)
sk=data.frame(respi=medias,groups=sk) if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf3[i],"of F3") cat("\n----------------------\n") print(sk)}}} if(quali[3]==FALSE){ Fator3a=fator3a colint2=polynomial2(Fator3a, response, Fator1, grau = grau31, ylab=ylab, xlab=xlab, theme=theme, DFres= anavaF3[9,1],SSq = anavaF3[9,2])} cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial2\" command\n")) } if(anavaF3[4,5]>alpha.f && anavaF3[6,5]>alpha.f) { i<-2 { if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(green(italic('Analyzing the simple effects of the factor ',fac.names[i]))) cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[9,1],anavaF3[9,3],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fatores[,i])[1] medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3[9,1], nrep = nrep, QME = anavaF3[9,3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- duncan(resp,fatores[,i],anavaF3[9,1],anavaF3[9,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ ad=data.frame(Fator1,Fator2,Fator3) letra <- LSD(resp,fatores[,i],anavaF3[9,1],anavaF3[9,3], alpha=alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, 
na.rm=TRUE)[rownames(letra1)]}}
print(letra1)
cat(green(bold("\n------------------------------------------\n")))
if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]}
if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/
                               sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]}
dadosm=data.frame(letra1,
                  media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)],
                  desvio=desvio)
dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i])))
dadosm$limite=dadosm$media+dadosm$desvio
dadosm=dadosm[as.character(unique(unlist(fatores[i]))),]
if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)}
if(addmean==FALSE){dadosm$letra=dadosm$groups}
media=dadosm$media
desvio=dadosm$desvio
Tratamentos=dadosm$Tratamentos
letra=dadosm$letra
grafico=ggplot(dadosm, aes(x=Tratamentos, y=media))
if(fill=="trat"){grafico=grafico+
  geom_col(aes(fill=Tratamentos),color=1)} else{grafico=grafico+
    geom_col(aes(fill=Tratamentos),fill=fill,color=1)}
if(errorbar==TRUE){grafico=grafico+
  geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},label=letra),family=family,angle=angle.label,
            hjust=hjust,size=labelsize)}
if(errorbar==FALSE){grafico=grafico+
  geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label,
            hjust=hjust,size=labelsize)}
# close the errorbar block before building grafico2 so the plot is produced
# whether or not error bars are requested, as in the Fator3 section
if(errorbar==TRUE){grafico=grafico+
  geom_errorbar(data=dadosm,aes(ymin=media-desvio, ymax=media+desvio,color=1),
                color="black",width=0.3)}
grafico2=grafico+theme+
  ylab(ylab)+
  xlab(parse(text = xlab.factor[2]))+
  theme(text = element_text(size=textsize,color="black", family = family),
        axis.text = element_text(size=textsize,color="black", family = family),
        axis.title = element_text(size=textsize,color="black", family = family))+
  geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=T)),lty=2)+
  scale_color_manual(values = "black")+labs(color="")
print(grafico2)}
if(quali[i]==FALSE &&
anavaF3[i,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat('Analyzing the simple effects of the factor ',fac.names[2]) cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) grafico2=polynomial(resp, fatores[,i],grau=grau[i],ylab = ylab,xlab = parse(text = xlab.factor[2]), DFres= anavaF3[9,1],SSq = anavaF3[9,2],point = point)[[1]] print(grafico2) cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command")) } cat('\n') } } } if(anavaF3[7,5]>alpha.f && anavaF3[6,5]<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Interaction",paste(fac.names[2],'*',fac.names[3],sep='')," significant: unfolding the interaction"))) cat(green(bold("\n------------------------------------------\n"))) #Desdobramento de FATOR 2 dentro do niveis de FATOR 3 cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[2], ' within the combination of levels ', fac.names[3]) cat("\n-------------------------------------------------\n") des<-aov(resp~Fator3/Fator2+Fator1+Fator3+Fator1:Fator3+Fator1:Fator2:Fator3) l<-vector('list',nv3) names(l)<-names(summary(Fator3)) v<-numeric(0) for(j in 1:nv3) { for(i in 0:(nv2-2)) v<-cbind(v,i*nv3+j) l[[j]]<-v v<-numeric(0) } des1<-summary(des,split=list('Fator3:Fator2'=l))[[1]] des1[nrow(des1),]=c(DfE,SQE,QME,NA,NA) des1$`F value`=c(des1$`Mean Sq`[1:nrow(des1)-1]/des1$`Mean Sq`[nrow(des1)],NA) des1$`Pr(>F)`=c(1-pf(des1$`F value`[1:nrow(des1)-1], des1$Df[1:nrow(des1)-1], des1$Df[nrow(des1)]),NA) des1a=des1[-c(1,2,3,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv3) { rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[3], sep = ""), lf3[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[2]==TRUE & quali[3]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv3) { 
        trati=fatores[, 2][Fator3 == lf3[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator3 == lf3[i]]
        tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
        tukeygrafico[[i]]=tukey$groups[levels(trati),2]
        ordem[[i]]=rownames(tukey$groups[levels(trati),])
      }
      letra=unlist(tukeygrafico)
      datag=data.frame(letra,ordem=unlist(ordem))
      datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
      datag=datag[order(datag$ordem),]
      letra=datag$letra}
    if (mcomp == "duncan"){
      duncangrafico=c()
      ordem=c()
      for (i in 1:nv3) {
        trati=fatores[, 2][Fator3 == lf3[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator3 == lf3[i]]
        duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
        duncangrafico[[i]]=duncan$groups[levels(trati),2]
        ordem[[i]]=rownames(duncan$groups[levels(trati),])
      }
      letra=unlist(duncangrafico)
      datag=data.frame(letra,ordem=unlist(ordem))
      datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
      datag=datag[order(datag$ordem),]
      letra=datag$letra}
    if (mcomp == "lsd"){
      lsdgrafico=c()
      ordem=c()
      for (i in 1:nv3) {
        trati=fatores[, 2][Fator3 == lf3[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator3 == lf3[i]]
        lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
        lsdgrafico[[i]]=lsd$groups[levels(trati),2]
        ordem[[i]]=rownames(lsd$groups[levels(trati),])
      }
      letra=unlist(lsdgrafico)
      datag=data.frame(letra,ordem=unlist(ordem))
      datag$ordem=factor(datag$ordem,levels = unique(datag$ordem))
      datag=datag[order(datag$ordem),]
      letra=datag$letra}
    if (mcomp == "sk"){
      skgrafico=c()
      ordem=c()
      for (i in 1:nv3) {
        trati=fatores[, 2][Fator3 == lf3[i]]
        trati=factor(trati,levels = unique(trati))
        respi=resp[Fator3 == lf3[i]]
        nrep=table(trati)[1]
        medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
        sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep,
                      QME = anavaF3$`Mean Sq`[9], alpha = alpha.t)
        sk=data.frame(respi=medias,groups=sk)
        skgrafico[[i]]=sk[levels(trati),2]
ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra}} cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[3], " inside of the level of ",fac.names[2]) cat(green(bold("\n------------------------------------------\n"))) cat("\n") des<-aov(resp~Fator2/Fator3+Fator1+Fator2+Fator1:Fator2+Fator1:Fator2:Fator3) l<-vector('list',nv2) names(l)<-names(summary(Fator2)) v<-numeric(0) for(j in 1:nv2) { for(i in 0:(nv3-2)) v<-cbind(v,i*nv2+j) l[[j]]<-v v<-numeric(0) } des1<-summary(des,split=list('Fator2:Fator3'=l))[[1]] des1[nrow(des1),]=c(DfE,SQE,QME,NA,NA) des1$`F value`=c(des1$`Mean Sq`[1:nrow(des1)-1]/des1$`Mean Sq`[nrow(des1)],NA) des1$`Pr(>F)`=c(1-pf(des1$`F value`[1:nrow(des1)-1], des1$Df[1:nrow(des1)-1], des1$Df[nrow(des1)]),NA) des1a=des1[-c(1,2,3,length(des1[,1]),length(des1[,1])-1,length(des1[,1])-2),] #============================ rn<-numeric(0) for (j in 1:nv2) { rn <- c(rn, paste(paste(names.fat[3], ":", names.fat[2], sep = ""), lf2[j])) } rownames(des1a)=rn #============================ print(des1a) if(quali[2]==TRUE & quali[3]==TRUE){ if (mcomp == "tukey"){ tukeygrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) tukeygrafico1[[i]]=tukey$groups[levels(trati),2] } letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if (mcomp == "duncan"){ duncangrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) duncangrafico1[[i]]=duncan$groups[levels(trati),2] } letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if (mcomp == "lsd"){ lsdgrafico1=c() for (i 
in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t) lsdgrafico1[[i]]=lsd$groups[levels(trati),2] } letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = anavaF3$Df[9], nrep = nrep, QME = anavaF3$`Mean Sq`[9], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) letra1=toupper(letra1)}} if(quali[2]==TRUE & quali[3]==TRUE){ f2=rep(levels(Fator2),e=length(levels(Fator3))) f3=rep(unique(as.character(Fator3)),length(levels(Fator2))) f2=factor(f2,levels = unique(f2)) f3=factor(f3,levels = unique(f3)) media=tapply(response,paste(Fator2,Fator3), mean, na.rm=TRUE)[unique(paste(f2,f3))] # desvio=tapply(response,paste(Fator2,Fator3), sd, na.rm=TRUE)[unique(paste(f2,f3))] if(point=="mean_sd"){desvio=tapply(response,paste(Fator2,Fator3), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,paste(Fator2,Fator3), sd, na.rm=TRUE)/ sqrt(tapply(response,paste(Fator2,Fator3), length))} desvio=desvio[unique(paste(f2,f3))] graph=data.frame(f2=f2, f3=f3, media, desvio, letra,letra1, numero=format(media,digits = dec)) numero=paste(graph$numero,graph$letra,graph$letra1,sep="") graph$numero=numero colint=ggplot(graph, aes(x=f3, y=media, fill=f2))+ geom_col(position = "dodge",color="black")+ ylab(ylab)+xlab(xlab)+ theme+ labs(fill=fac.names[2])+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=0.3,position = position_dodge(width=0.9))+ geom_text(aes(y=media+desvio+sup, label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)+ 
  theme(text=element_text(size=textsize,family=family),
        axis.text = element_text(size=textsize,color="black",family=family),
        axis.title = element_text(size=textsize,color="black",family=family))+
  geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=T)),lty=2)+
  scale_color_manual(values = "black")+labs(color="")
colint3=colint
print(colint)
letras=paste(graph$letra,graph$letra1,sep="")
matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),
                           ncol = length(levels(Fator2)))))
rownames(matriz)=levels(Fator2)
colnames(matriz)=levels(Fator3)
cat(green(bold("\n------------------------------------------\n")))
cat(green(bold("Final table")))
cat(green(bold("\n------------------------------------------\n")))
print(matriz)
cat("\n\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")")
}
if(quali[2]==FALSE | quali[3]==FALSE){
if(quali[2]==FALSE){
if (mcomp == "tukey"){
  for (i in 1:nv2) {
    trati=fatores[, 3][Fator2 == lf2[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator2 == lf2[i]]
    tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
    if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F3 within level",lf2[i],"of F2")
    cat("\n----------------------\n")
    print(tukey$groups)}}
if (mcomp == "duncan"){
  for (i in 1:nv2) {
    trati=fatores[, 3][Fator2 == lf2[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator2 == lf2[i]]
    duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
    if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F3 within level",lf2[i],"of F2")
    cat("\n----------------------\n")
    print(duncan$groups)}}
if (mcomp == "lsd"){
  for (i in 1:nv2) {
    trati=fatores[, 3][Fator2 == lf2[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator2 == lf2[i]]
    lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
    if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F3 within level",lf2[i],"of F2")
    cat("\n----------------------\n")
    print(lsd$groups)}}
if (mcomp == "sk"){
  for (i in 1:nv2) {
    trati=fatores[, 3][Fator2 == lf2[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator2 == lf2[i]]
    nrep=table(trati)[1]
    medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
    sk=scottknott(means = medias,
                  df1 = anavaF3$Df[9],
                  nrep = nrep,
                  QME = anavaF3$`Mean Sq`[9],
                  alpha = alpha.t)
    sk=data.frame(respi=medias,groups=sk)
    if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F3 within level",lf2[i],"of F2")
    cat("\n----------------------\n")
    print(sk)}}}
if(quali[2]==FALSE){
  Fator2a=fator2a
  colint3=polynomial2(Fator2a, response, Fator3, grau = grau23,
                      ylab=ylab, xlab=xlab, theme=theme,
                      DFres= anavaF3[9,1],SSq = anavaF3[9,2])}
if(quali[3]==FALSE){
if (mcomp == "tukey"){
  for (i in 1:nv3) {
    trati=fatores[, 2][Fator3 == lf3[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator3 == lf3[i]]
    tukey=TUKEY(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
    if(transf !="1"){tukey$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(tukey$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F2 within level",lf3[i],"of F3")
    cat("\n----------------------\n")
    print(tukey)}}
if (mcomp == "duncan"){
  for (i in 1:nv3) {
    trati=fatores[, 2][Fator3 == lf3[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator3 == lf3[i]]
    duncan=duncan(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
    if(transf !="1"){duncan$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(duncan$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F2 within level",lf3[i],"of F3")
    cat("\n----------------------\n")
    print(duncan)}}
if (mcomp == "lsd"){
  for (i in 1:nv3) {
    trati=fatores[, 2][Fator3 == lf3[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator3 == lf3[i]]
    lsd=LSD(respi,trati,anavaF3$Df[9],anavaF3$`Mean Sq`[9],alpha.t)
    if(transf !="1"){lsd$groups$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(lsd$groups)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F2 within level",lf3[i],"of F3")
    cat("\n----------------------\n")
    print(lsd)}}
if (mcomp == "sk"){
  for (i in 1:nv3) {
    trati=fatores[, 2][Fator3 == lf3[i]]
    trati=factor(trati,levels = unique(trati))
    respi=resp[Fator3 == lf3[i]]
    nrep=table(trati)[1]
    medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
    sk=scottknott(means = medias,
                  df1 = anavaF3$Df[9],
                  nrep = nrep,
                  QME = anavaF3$`Mean Sq`[9],
                  alpha = alpha.t)
    sk=data.frame(respi=medias,groups=sk)
    if(transf !="1"){sk$respo=tapply(respi,trati,mean, na.rm=TRUE)[rownames(sk)]}
    cat("\n----------------------\n")
    cat("Multiple comparison of F2 within level",lf3[i],"of F3")
    cat("\n----------------------\n")
    print(sk)}}}
if(quali[3]==FALSE){
  Fator3a=fator3a
  colint3=polynomial2(Fator3a, response, Fator2, grau = grau32,
                      ylab=ylab, xlab=xlab, theme=theme,
                      DFres= anavaF3[9,1],SSq = anavaF3[9,2])}
cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial2\" command\n"))
}
if(anavaF3[4,5]>alpha.f && anavaF3[5,5]>alpha.f) {
i<-1
{
if(quali[i]==TRUE && anavaF3[i,5]<=alpha.f) {
  cat(green(bold("\n------------------------------------------\n")))
  cat(green(italic('Analyzing the simple effects of the factor ',fac.names[i])))
  cat(green(bold("\n------------------------------------------\n")))
  cat(fac.names[i])
  if(mcomp=='tukey'){letra=TUKEY(resp,fatores[,i],anavaF3[9,1],anavaF3[9,3],alpha.t)
  letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
  if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
  if(mcomp=="sk"){
  nrep=table(fatores[,i])[1]
  medias=sort(tapply(resp,fatores[i],mean, na.rm=TRUE),decreasing = TRUE)
  sk=scottknott(means = medias,
                df1 = anavaF3[9,1],
                nrep = nrep,
                QME = anavaF3[9,3],
                alpha = alpha.t)
  letra1=data.frame(resp=medias,groups=sk)
  if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
  if(mcomp=="duncan"){
  ad=data.frame(Fator1,Fator2,Fator3)
  letra <- duncan(resp,fatores[,i],anavaF3[9,1],anavaF3[9,3], alpha=alpha.t)
  letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
  if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
  if(mcomp=="lsd"){
  ad=data.frame(Fator1,Fator2,Fator3)
  letra <- LSD(resp,fatores[,i],anavaF3[9,1],anavaF3[9,3], alpha=alpha.t)
  letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
  if(transf !=1){letra1$respo=tapply(response,fatores[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
  print(letra1)
  cat(green(bold("\n------------------------------------------")))
  if(point=="mean_sd"){desvio=tapply(response, c(fatores[i]), sd, na.rm=TRUE)[rownames(letra1)]}
  if(point=="mean_se"){desvio=(tapply(response, c(fatores[i]), sd, na.rm=TRUE)/
    sqrt(tapply(response, c(fatores[i]), length)))[rownames(letra1)]}
  dadosm=data.frame(letra1,
                    media=tapply(response, c(fatores[i]), mean, na.rm=TRUE)[rownames(letra1)],
                    desvio=desvio)
  dadosm$Tratamentos=factor(rownames(dadosm),levels = unique(unlist(fatores[i])))
  dadosm$limite=dadosm$media+dadosm$desvio
  dadosm=dadosm[as.character(unique(unlist(fatores[i]))),]
  if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)}
  if(addmean==FALSE){dadosm$letra=dadosm$groups}
  media=dadosm$media
  desvio=dadosm$desvio
  Tratamentos=dadosm$Tratamentos
  letra=dadosm$letra
  grafico=ggplot(dadosm,aes(x=Tratamentos, y=media))
  if(fill=="trat"){grafico=grafico+
    geom_col(aes(fill=Tratamentos),color=1)}else{grafico=grafico+
    geom_col(aes(fill=Tratamentos),fill=fill,color=1)}
  if(errorbar==TRUE){grafico=grafico+
    geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},
                  label=letra),family=family,angle=angle.label,
              hjust=hjust,size=labelsize)}
  if(errorbar==FALSE){grafico=grafico+
    geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label,
              hjust=hjust,size=labelsize)}
  if(errorbar==TRUE){grafico=grafico+
    geom_errorbar(data=dadosm,
                  aes(ymin=media-desvio,
                      ymax=media+desvio,color=1),
                  color="black",width=0.3)
  grafico3=grafico+theme+
    ylab(ylab)+
    xlab(parse(text = xlab.factor[1]))+
    theme(text = element_text(size=textsize,color="black", family = family),
          axis.text = element_text(size=textsize,color="black", family = family),
          axis.title = element_text(size=textsize,color="black", family = family))+
    geom_hline(aes(color=ad.label,group=ad.label,yintercept=mean(responseAd,na.rm=TRUE)),lty=2)+
    scale_color_manual(values = "black")+labs(color="")
  print(grafico3)}
}
if(quali[i]==FALSE && anavaF3[i,5]<=alpha.f){
  cat(green(bold("\n------------------------------------------\n")))
  cat('\nAnalyzing the simple effects of the factor ',fac.names[1],'\n')
  cat(green(bold("\n------------------------------------------\n")))
  cat(fac.names[i])
  grafico3=polynomial(resp, fatores[,i],ylab = ylab,xlab = parse(text = xlab.factor[1]),grau=grau[i],
                      DFres= anavaF3[9,1],SSq = anavaF3[9,2],point = point)[[1]]
  print(grafico3)
  cat(green("\nTo edit graphical parameters, I suggest analyzing using the \"polynomial\" command"))
}
cat('\n')
}
}
}
if(anavaF3[7,5]<=alpha.f){
cat(green(bold("\n------------------------------------------\n")))
cat(green(bold("\nInteraction",paste(fac.names[1],'*',fac.names[2],'*',fac.names[3],sep='')," significant: unfolding the interaction\n")))
cat(green(bold("\n------------------------------------------\n")))
cat(green(bold("\n------------------------------------------\n")))
cat("Analyzing ", fac.names[1], ' within the combination of levels ', fac.names[2], 'and',fac.names[3])
cat(green(bold("\n------------------------------------------\n")))
m1=aov(resp~(Fator2*Fator3)/Fator1)
anova(m1)
pattern <- c(outer(levels(Fator2), levels(Fator3),
                   function(x,y) paste("Fator2",x,":Fator3",y,":",sep="")))
des.tab <- sapply(pattern, simplify=FALSE, grep, x=names(coef(m1)[m1$assign==4]))
des1.tab <- summary(m1, split = list("Fator2:Fator3:Fator1" = des.tab))
des1.tab[[1]][nrow(des1.tab[[1]]),]=c(DfE,SQE,QME,NA,NA)
des1.tab[[1]]$`F value`=c(des1.tab[[1]]$`Mean Sq`[1:nrow(des1.tab[[1]])-1]/
                            des1.tab[[1]]$`Mean Sq`[nrow(des1.tab[[1]])],NA)
des1.tab[[1]]$`Pr(>F)`=c(1-pf(des1.tab[[1]]$`F value`[1:nrow(des1.tab[[1]])-1],
                              des1.tab[[1]]$Df[1:nrow(des1.tab[[1]])-1],
                              des1.tab[[1]]$Df[nrow(des1.tab[[1]])]),NA)
desd=des1.tab[[1]][-c(1,2,3,4),]
desd=data.frame(desd[-length(rownames(desd)),])
# rownames(desd)=cbind(paste("Fator2:",rep(levels(Fator2),length(levels(Fator3))),
#                            "Fator3:",rep(levels(Fator3),e=length(levels(Fator2)))))
rownames(desd)=cbind(paste(names.fat[2],":",rep(levels(Fator2),length(levels(Fator3))),
                           names.fat[3],":",rep(levels(Fator3),e=length(levels(Fator2)))))
colnames(desd)=c("Df", "Sum Sq", "Mean Sq", "F value", "Pr(>F)")
print(desd)
ii<-0
for(i in 1:nv2) {
for(j in 1:nv3) {
ii<-ii+1
if(quali[1]==TRUE){
cat('\n\n',fac.names[1],' within level ',lf2[i],' of ',fac.names[2],' and ',lf3[j],' of ',fac.names[3],"\n")
if(mcomp=='tukey'){tukey=TUKEY(y = resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]],
                               trt = fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],
                               DFerror = anavaF3[9,1],
                               MSerror = anavaF3[9,3],
                               alpha.t)
tukey=tukey$groups;colnames(tukey)=c("resp","letters")
if(transf !=1){tukey$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]],
                                  fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],mean, na.rm=TRUE)[rownames(tukey)]}
print(tukey)}
if(mcomp=='duncan'){duncan=duncan(resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]],
                                  fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],
                                  anavaF3[9,1],
                                  anavaF3[9,3], alpha.t)
duncan=duncan$groups;colnames(duncan)=c("resp","letters")
if(transf !=1){duncan$respo=tapply(response[fatores[,2]==lf2[i] &
                                    fatores[,3]==lf3[j]],
                                  fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],mean, na.rm=TRUE)[rownames(duncan)]}
print(duncan)}
if(mcomp=='lsd'){lsd=LSD(resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]],
                         fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],
                         anavaF3[9,1],
                         anavaF3[9,3], alpha.t)
lsd=lsd$groups;colnames(lsd)=c("resp","letters")
if(transf !=1){lsd$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]],
                                fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],mean, na.rm=TRUE)[rownames(lsd)]}
print(lsd)}
if(mcomp=='sk'){
fat= fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]]
fat1=factor(fat,unique(fat))
levels(fat1)=1:length(levels(fat1))
resp1=resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]]
nrep=table(fat1)[1]
medias=sort(tapply(resp1,fat1,mean),decreasing = TRUE)
sk=scottknott(means = medias,
              df1 = anavaF3$Df[9],
              nrep = nrep,
              QME = anavaF3$`Mean Sq`[9],
              alpha = alpha.t)
sk=data.frame(respi=medias,groups=sk)
sk=sk[as.character(unique(fat1)),]
rownames(sk)=unique(fat)
if(transf !=1){sk$respo=tapply(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]],
                               fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],mean, na.rm=TRUE)[rownames(sk)]}
print(sk)}
}
if(quali[1]==FALSE){
cat('\n\n',fac.names[1],' within level ',lf2[i],' of ',fac.names[2],' and ',lf3[j],' of ',fac.names[3],"\n")
polynomial(fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]],
           resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]],grau=grau123,
           DFres= anavaF3[9,1],SSq = anavaF3[9,2],ylab = ylab,xlab = xlab,point = point)[[1]]}
}
}
cat('\n\n')
cat("\n------------------------------------------\n")
cat("Analyzing ", fac.names[2], ' within the combination of levels ', fac.names[1], 'and',fac.names[3])
cat("\n------------------------------------------\n")
m1=aov(resp~(Fator1*Fator3)/Fator2)
anova(m1)
pattern <- c(outer(levels(Fator1), levels(Fator3),
                   function(x,y) paste("Fator1",x,":Fator3",y,":",sep="")))
des.tab <- sapply(pattern, simplify=FALSE, grep, x=names(coef(m1)[m1$assign==4]))
des1.tab <- summary(m1, split =
                      list("Fator1:Fator3:Fator2" = des.tab))
des1.tab[[1]][nrow(des1.tab[[1]]),]=c(DfE,SQE,QME,NA,NA)
des1.tab[[1]]$`F value`=c(des1.tab[[1]]$`Mean Sq`[1:nrow(des1.tab[[1]])-1]/
                            des1.tab[[1]]$`Mean Sq`[nrow(des1.tab[[1]])],NA)
des1.tab[[1]]$`Pr(>F)`=c(1-pf(des1.tab[[1]]$`F value`[1:nrow(des1.tab[[1]])-1],
                              des1.tab[[1]]$Df[1:nrow(des1.tab[[1]])-1],
                              des1.tab[[1]]$Df[nrow(des1.tab[[1]])]),NA)
desd=des1.tab[[1]][-c(1,2,3,4),]
desd=data.frame(desd[-length(rownames(desd)),])
# rownames(desd)=cbind(paste("Fator1:",rep(levels(Fator1),length(levels(Fator3))),
#                            "Fator3:",rep(levels(Fator3),e=length(levels(Fator1)))))
rownames(desd)=cbind(paste(names.fat[1],":",rep(levels(Fator1),length(levels(Fator3))),
                           names.fat[3],":",rep(levels(Fator3),e=length(levels(Fator1)))))
colnames(desd)=c("Df", "Sum Sq", "Mean Sq", "F value", "Pr(>F)")
print(desd)
ii<-0
for(k in 1:nv1) {
for(j in 1:nv3) {
ii<-ii+1
if(quali[2]==TRUE){
cat('\n\n',fac.names[2],' within level ',lf1[k],' of ',fac.names[1],' and ',lf3[j],' of ',fac.names[3],'\n')
if(mcomp=='tukey'){tukey=TUKEY(resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]],
                               fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],
                               anavaF3[9,1],
                               anavaF3[9,3], alpha.t)
tukey=tukey$groups;colnames(tukey)=c("resp","letters")
if(transf !=1){tukey$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]],
                                  fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(tukey)]}
print(tukey)}
if(mcomp=='duncan'){duncan=duncan(resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]],
                                  fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],
                                  anavaF3[9,1],
                                  anavaF3[9,3], alpha.t)
duncan=duncan$groups;colnames(duncan)=c("resp","letters")
if(transf !=1){duncan$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]],
                                   fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(duncan)]}
print(duncan)}
if(mcomp=='lsd'){lsd=LSD(resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]],
                         fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],
                         anavaF3[9,1],
                         anavaF3[9,3], alpha.t)
lsd=lsd$groups;colnames(lsd)=c("resp","letters")
if(transf !=1){lsd$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]],
                                fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(lsd)]}
print(lsd)}
if(mcomp=='sk'){
fat=fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]]
fat1=factor(fat,unique(fat))
levels(fat1)=1:length(levels(fat1))
resp1=resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]]
nrep=table(fat1)[1]
medias=sort(tapply(resp1,fat1,mean),decreasing = TRUE)
sk=scottknott(means = medias,
              df1 = anavaF3$Df[9],
              nrep = nrep,
              QME = anavaF3$`Mean Sq`[9],
              alpha = alpha.t)
sk=data.frame(respi=medias,groups=sk)
sk=sk[as.character(unique(fat1)),]
rownames(sk)=unique(fat)
if(transf !=1){sk$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]],
                               fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],mean, na.rm=TRUE)[rownames(sk)]}
print(sk)}
}
if(quali[2]==FALSE){
cat('\n\n',fac.names[2],' within the combination of levels ',lf1[k],' of ',fac.names[1],' and ',lf3[j],' of ',fac.names[3],'\n')
polynomial(fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]],
           resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]],grau=grau213,
           DFres= anavaF3[9,1],SSq = anavaF3[9,2],ylab = ylab,xlab = xlab,point = point)[[1]]}
}
}
cat(green(bold("\n------------------------------------------\n")))
cat("Analyzing ", fac.names[3], ' within the combination of levels ', fac.names[1], 'and',fac.names[2])
cat(green(bold("\n------------------------------------------\n")))
m1=aov(resp~(Fator1*Fator2)/Fator3)
anova(m1)
pattern <- c(outer(levels(Fator1), levels(Fator2),
                   function(x,y) paste("Fator1",x,":Fator2",y,":",sep="")))
des.tab <- sapply(pattern, simplify=FALSE, grep, x=names(coef(m1)[m1$assign==4]))
des1.tab <- summary(m1, split = list("Fator1:Fator2:Fator3" = des.tab))
des1.tab[[1]][nrow(des1.tab[[1]]),]=c(DfE,SQE,QME,NA,NA)
des1.tab[[1]]$`F value`=c(des1.tab[[1]]$`Mean Sq`[1:nrow(des1.tab[[1]])-1]/
                            des1.tab[[1]]$`Mean Sq`[nrow(des1.tab[[1]])],NA)
des1.tab[[1]]$`Pr(>F)`=c(1-pf(des1.tab[[1]]$`F value`[1:nrow(des1.tab[[1]])-1],
                              des1.tab[[1]]$Df[1:nrow(des1.tab[[1]])-1],
                              des1.tab[[1]]$Df[nrow(des1.tab[[1]])]),NA)
desd=des1.tab[[1]][-c(1,2,3,4),]
desd=data.frame(desd[-length(rownames(desd)),])
# rownames(desd)=cbind(paste("Fator1:",rep(levels(Fator1),length(levels(Fator2))),
#                            "Fator2:",rep(levels(Fator2),e=length(levels(Fator1)))))
rownames(desd)=cbind(paste(names.fat[1],":",rep(levels(Fator1),length(levels(Fator2))),
                           names.fat[2],":",rep(levels(Fator2),e=length(levels(Fator1)))))
colnames(desd)=c("Df", "Sum Sq", "Mean Sq", "F value", "Pr(>F)")
print(desd)
ii<-0
for(k in 1:nv1) {
for(i in 1:nv2) {
ii<-ii+1
# if(1-pf(QM/QME,glf,glE)[ii]<=alpha.f){
if(quali[3]==TRUE){
cat('\n\n',fac.names[3],' within level ',lf1[k],' of ',fac.names[1],' and ',lf2[i],' of ',fac.names[2],'\n')
if(mcomp=='tukey'){tukey=TUKEY(resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                               fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                               anavaF3[9,1],
                               anavaF3[9,3], alpha.t)
tukey=tukey$groups;colnames(tukey)=c("resp","letters")
if(transf !=1){tukey$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                                  fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                                  mean, na.rm=TRUE)[rownames(tukey)]}
print(tukey)}
if(mcomp=='duncan'){duncan=duncan(resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                                  fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                                  anavaF3[9,1],
                                  anavaF3[9,3], alpha.t)
duncan=duncan$groups;colnames(duncan)=c("resp","letters")
if(transf !=1){duncan$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                                   fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                                   mean, na.rm=TRUE)[rownames(duncan)]}
print(duncan)}
if(mcomp=='lsd'){lsd=LSD(resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                         fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                         anavaF3[9,1],
                         anavaF3[9,3], alpha.t)
lsd=lsd$groups;colnames(lsd)=c("resp","letters")
if(transf !=1){lsd$respo=tapply(response[fatores[,1]==lf1[k] &
                                  fatores[,2]==lf2[i]],
                                fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                                mean, na.rm=TRUE)[rownames(lsd)]}
print(lsd)}
if(mcomp=='sk'){
fat=fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]]
fat1=factor(fat,unique(fat))
levels(fat1)=1:length(levels(fat1))
resp1=resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]]
nrep=table(fat1)[1]
medias=sort(tapply(resp1,fat1,mean),decreasing = TRUE)
sk=scottknott(means = medias,
              df1 = anavaF3$Df[9],
              nrep = nrep,
              QME = anavaF3$`Mean Sq`[9],
              alpha = alpha.t)
sk=data.frame(respi=medias,groups=sk)
colnames(sk)=c("resp","letters")
sk=sk[as.character(unique(fat1)),]
rownames(sk)=unique(fat)
if(transf !=1){sk$respo=tapply(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                               fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                               mean, na.rm=TRUE)[rownames(sk)]}
print(sk)}
}
if(quali[3]==FALSE){
cat('\n\n',fac.names[3],' within level ',lf1[k],' of ',fac.names[1],' and ',lf2[i],' of ',fac.names[2],'\n')
polynomial(fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
           resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],grau=grau312,
           DFres= anavaF3[9,1],SSq = anavaF3[9,2],ylab = ylab,xlab = xlab,point = point)[[1]]}
}
}
}
if(anavaF3[4,5]>alpha.f && anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f && anavaF3[7,5]>alpha.f){
if(anavaF3[1,5]<=alpha.f | anavaF3[2,5]<=alpha.f | anavaF3[3,5]<=alpha.f){
# print(residplot)
graficos}else{graficos=NA}}
if(anavaF3[7,5]>alpha.f && anavaF3[4,5]<=alpha.f){
graficos=list(residplot,colint1)
if(anavaF3[5,5]>alpha.f && anavaF3[6,5]>alpha.f && anavaF3[3,5]<=alpha.f){
graficos=list(residplot,colint1,grafico1)}
graficos}
if(anavaF3[7,5]>alpha.f && anavaF3[5,5]<=alpha.f){
graficos=list(residplot,colint2)
if(anavaF3[4,5]>alpha.f && anavaF3[6,5]>alpha.f && anavaF3[2,5]<=alpha.f){
graficos=list(residplot,colint2,grafico2)}
graficos}
if(anavaF3[7,5]>alpha.f && anavaF3[6,5]<=alpha.f){
graficos=list(residplot,colint3)
if(anavaF3[4,5]>alpha.f && anavaF3[5,5]>alpha.f && anavaF3[1,5]<=alpha.f){
graficos=list(residplot,colint3,grafico3)}}
if(anavaF3[7,5]>alpha.f && anavaF3[4,5]<=alpha.f && anavaF3[5,5]<=alpha.f){
graficos=list(residplot,colint1,colint2)
graficos}
if(anavaF3[7,5]>alpha.f && anavaF3[4,5]<=alpha.f && anavaF3[6,5]<=alpha.f){
graficos=list(residplot,colint1,colint3)
graficos}
if(anavaF3[7,5]>alpha.f && anavaF3[5,5]<=alpha.f && anavaF3[6,5]<=alpha.f){
graficos=list(residplot,colint2,colint3)
graficos}
if(anavaF3[7,5]<=alpha.f){graficos=list(residplot)}
graficos=graficos
}
# Source file: /scratch/gouwar.j/cran-all/cranData/AgroR/R/FAT3DICad_function.R
#' Analysis: DBC experiments in split-plot
#' @description Analysis of an experiment conducted in a randomized block design in a split-plot scheme using fixed effects analysis of variance.
#' @author Gabriel Danilo Shimizu
#' @param f1 Numeric or complex vector with plot levels
#' @param f2 Numeric or complex vector with subplot levels
#' @param block Numeric or complex vector with blocks
#' @param response Numeric vector with responses
#' @param transf Applies data transformation (default is 1; for log consider 0)
#' @param constant Add a constant for transformation (enter value)
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott or Duncan)
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param quali Defines whether each factor is qualitative (TRUE, \emph{default}) or quantitative (FALSE)
#' @param grau Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with two elements.
#' @param names.fat Name of factors
#' @param grau12 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 2, in the case of interaction f1 x f2 and qualitative factor 2 and quantitative factor 1.
#' @param grau21 Polynomial degree in case of quantitative factor (\emph{default} is 1). Provide a vector with n levels of factor 1, in the case of interaction f1 x f2 and qualitative factor 1 and quantitative factor 2.
#' @param geom Graph type (columns or segments; for simple effects only)
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param ylab Variable response name (accepts the \emph{expression}() function)
#' @param xlab Treatments name (accepts the \emph{expression}() function)
#' @param xlab.factor Provide a vector with two observations referring to the x-axis name of factors 1 and 2, respectively, when there is an isolated effect of the factors. This argument uses `parse`.
#' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat")
#' @param angle x-axis scale text rotation
#' @param family Font family (\emph{default} is sans)
#' @param color When the columns are different colors (set the fill argument to "trat")
#' @param legend Legend title name
#' @param errorbar Plot the standard deviation bar on the graph (in the case of a segment and column graph) - \emph{default} is TRUE
#' @param addmean Plot the average value on the graph (\emph{default} is TRUE)
#' @param textsize Font size (\emph{default} is 12)
#' @param labelsize Label font size (\emph{default} is 4)
#' @param dec Number of decimal places (\emph{default} is 3)
#' @param ylim y-axis limit
#' @param posi Legend position
#' @param point Defines whether the plot shows all points ("all"), the mean ("mean"), the mean with standard deviation (\emph{default} - "mean_sd") or the mean with standard error ("mean_se") when quali = FALSE. For quali = TRUE, `mean_sd` and `mean_se` change which information is displayed in the error bar.
#' @param angle.label Label angle
#' @note The order of the chart follows the alphabetical pattern. Please use `scale_x_discrete` from package ggplot2, `limits` argument, to reorder the x-axis. The bars of the column and segment graphs are the standard deviation.
#' @note In the final output, when the transformation (transf argument) is different from 1, the columns resp and respo are returned in the mean test, indicating the transformed and non-transformed mean, respectively.
#' @import ggplot2
#' @importFrom crayon green
#' @importFrom crayon bold
#' @importFrom crayon italic
#' @importFrom crayon red
#' @importFrom crayon blue
#' @import stats
#' @keywords DBC
#' @keywords split-plot
#' @references
#'
#' Steel, R.G.D., Torrie, J.H., Dickey, D.A. 1997. Principles and Procedures of Statistics: A Biometrical Approach. 3rd Edition.
#'
#' Hsu, J.C. 1996. Multiple Comparisons: Theory and Methods. Department of Statistics, the Ohio State University. Chapman and Hall/CRC.
#'
#' Conover, W.J. 1999. Practical Nonparametric Statistics.
#'
#' Ramalho, M.A.P., Ferreira, D.F., Oliveira, A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott, A.J., Knott, M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#' @export
#' @return The table of analysis of variance, the test of normality of errors (Shapiro-Wilk, Lilliefors, Anderson-Darling, Cramer-von Mises, Pearson and Shapiro-Francia), the test of homogeneity of variances (Bartlett), the test of multiple comparisons (Tukey, LSD, Scott-Knott or Duncan) or adjustment of regression models up to grade 3 polynomial, in the case of quantitative treatments. The column chart for qualitative treatments is also returned. The function also returns a standardized residual plot.
#' @examples
#'
#' #==============================
#' # Example tomate
#' #==============================
#' library(AgroR)
#' data(tomate)
#' with(tomate, PSUBDBC(parc, subp, bloco, resp, ylab="Dry mass (g)"))
#'
#' #==============================
#' # Example orchard
#' #==============================
#' library(AgroR)
#' data(orchard)
#' with(orchard, PSUBDBC(A, B, Bloco, Resp, ylab="CBM"))
PSUBDBC=function(f1,
                 f2,
                 block,
                 response,
                 norm="sw",
                 alpha.f=0.05,
                 alpha.t=0.05,
                 quali=c(TRUE,TRUE),
                 names.fat=c("F1","F2"),
                 mcomp = "tukey",
                 grau=c(NA,NA),
                 grau12=NA, # F1/F2
                 grau21=NA, # F2/F1
                 transf=1,
                 constant=0,
                 geom="bar",
                 theme=theme_classic(),
                 ylab="Response",
                 xlab="",
                 xlab.factor=c("F1","F2"),
                 color="rainbow",
                 textsize=12,
                 labelsize=4,
                 dec=3,
                 legend="Legend",
                 errorbar=TRUE,
                 addmean=TRUE,
                 ylim=NA,
                 point="mean_se",
                 fill="lightblue",
                 angle=0,
                 family="sans",
                 posi="right",
                 angle.label=0){
  if(angle.label==0){hjust=0.5}else{hjust=0}
  requireNamespace("crayon")
  requireNamespace("ggplot2")
  requireNamespace("nortest")
  if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}}
  # if(transf==1){resp=response+constant}else{resp=((response+constant)^transf-1)/transf}
  if(transf==0){resp=log(response+constant)}
  if(transf==0.5){resp=sqrt(response+constant)}
  if(transf==-0.5){resp=1/sqrt(response+constant)}
  if(transf==-1){resp=1/(response+constant)}
  if(transf=="angular"){resp=asin(sqrt((response+constant)/100))}
  ordempadronizado=data.frame(f1,f2,block,resp,response)
  resp1=resp
  organiz=data.frame(f1,f2,block,response,resp)
  organiz=organiz[order(organiz$block),]
  organiz=organiz[order(organiz$f2),]
  organiz=organiz[order(organiz$f1),]
  f1=organiz$f1
  f2=organiz$f2
  block=organiz$block
  response=organiz$response
  resp=organiz$resp
  fator1=f1
  fator2=f2
  fator1a=fator1
  fator2a=fator2
  fac = c("F1", "F2")
  cont <- c(1, 4)
  Fator1 <- factor(fator1, levels = unique(fator1))
  Fator2 <- factor(fator2, levels = unique(fator2))
  bloco <- factor(block)
  lf1 <-
    levels(Fator1)
  lf2 <- levels(Fator2)
  nv1 <- length(summary(Fator1))
  nv2 <- length(summary(Fator2))
  num=function(x){as.numeric(x)}
  sup=0.1*mean(response)
  # ================================
  # Data transformation
  # ================================
  graph=data.frame(Fator1,Fator2,resp)
  # -----------------------------
  # Analysis of variance
  # -----------------------------
  mod=aov(resp~Fator1*Fator2+Fator1:bloco+bloco)
  anova=summary(mod)[[1]]
  anova=anova[c(1,3,5,2,4,6),]
  anova$`F value`[1]=anova$`Mean Sq`[1]/anova$`Mean Sq`[3]
  anova$`F value`[2]=anova$`Mean Sq`[2]/anova$`Mean Sq`[3]
  anova$`F value`[3]=NA
  anova$`Pr(>F)`[3]=NA
  anova$`Pr(>F)`[1]=1-pf(anova[1,4],anova[1,1],anova[3,1])
  anova$`Pr(>F)`[2]=1-pf(anova[2,4],anova[2,1],anova[3,1])
  anova1=anova
  anova=data.frame(anova)
  colnames(anova)=colnames(anova1)
  rownames(anova)=c("F1","Block","Error A", "F2", "F1 x F2", "Error B")
  tab=anova
  # -----------------------------
  # Assumptions
  # -----------------------------
  modp=lme4::lmer(resp~Fator1*Fator2+(1|bloco/Fator1)+bloco,data = ordempadronizado)
  resids=residuals(modp,scaled=TRUE)
  Ids=ifelse(resids>3 | resids<(-3), "darkblue","black")
  residplot=ggplot(data=data.frame(resids,Ids),
                   aes(y=resids,x=1:length(resids)))+
    geom_point(shape=21,color="gray",fill="gray",size=3)+
    labs(x="",y="Standardized residuals")+
    geom_text(x=1:length(resids),label=1:length(resids),
              color=Ids,size=labelsize)+
    scale_x_continuous(breaks=1:length(resids))+
    theme_classic()+theme(axis.text.y = element_text(size=textsize),
                          axis.text.x = element_blank())+
    geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1)
  # Normality of errors
  if(norm=="sw"){norm1 = shapiro.test(resid(modp))}
  if(norm=="li"){norm1=lillie.test(resid(modp))}
  if(norm=="ad"){norm1=ad.test(resid(modp))}
  if(norm=="cvm"){norm1=cvm.test(resid(modp))}
  if(norm=="pearson"){norm1=pearson.test(resid(modp))}
  if(norm=="sf"){norm1=sf.test(resid(modp))}
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(green(bold("Normality of errors")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""),
                    Statistic=norm1$statistic,
                    "p-value"=norm1$p.value)
  rownames(normal)=""
  print(normal)
  cat("\n")
  message(if(norm1$p.value>0.05){
    black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")}
    else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors do not follow a normal distribution"})
  homog1=bartlett.test(resid(modp)~Fator1)
  homog2=bartlett.test(resid(modp)~Fator2)
  homog3=bartlett.test(resid(modp)~paste(Fator1,Fator2))
  cat(green(bold("\n\n-----------------------------------------------------------------\n")))
  cat(green(bold("Homogeneity of Variances")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(green(bold("Plot\n")))
  statistic1=homog1$statistic
  phomog1=homog1$p.value
  method1=paste("Bartlett test","(",names(statistic1),")",sep="")
  homoge1=data.frame(Method=method1,
                     Statistic=statistic1,
                     "p-value"=phomog1)
  rownames(homoge1)=""
  print(homoge1)
  cat("\n")
  message(if(homog1$p.value[1]>0.05){
    black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")}
    else {"As the calculated p-value is less than the 5% significance level, H0 is rejected.
Therefore, the variances are not homogeneous"}) cat("\n----------------------------------------------------\n") cat(green(bold("Split-plot\n"))) statistic2=homog2$statistic phomog2=homog2$p.value method2=paste("Bartlett test","(",names(statistic2),")",sep="") homoge2=data.frame(Method=method2, Statistic=statistic2, "p-value"=phomog2) rownames(homoge2)="" print(homoge2) cat("\n") message(if(homog2$p.value[1]>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"}) cat("\n----------------------------------------------------\n") cat(green(bold("Interaction\n"))) statistic3=homog3$statistic phomog3=homog3$p.value method3=paste("Bartlett test","(",names(statistic3),")",sep="") homoge3=data.frame(Method=method3, Statistic=statistic3, "p-value"=phomog3) rownames(homoge3)="" print(homoge3) cat("\n") message(if(homog3$p.value[1]>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, the variances are not homogeneous"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Additional Information"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(paste("\nCV1 (%) = ",round(sqrt(tab$`Mean Sq`[3])/mean(resp,na.rm=TRUE)*100,2))) cat(paste("\nCV2 (%) = ",round(sqrt(tab$`Mean Sq`[6])/mean(resp,na.rm=TRUE)*100,2))) cat(paste("\nMean = ",round(mean(response,na.rm=TRUE),4))) cat(paste("\nMedian = ",round(median(response,na.rm=TRUE),4))) #cat("\nPossible outliers = ", out) cat("\n") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Analysis of Variance"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) anova$`Pr(>F)`=ifelse(anova$`Pr(>F)`>0.001,round(anova$`Pr(>F)`,3),"p<0.001") rownames(anova)=c(names.fat[1],"Block","Error A",names.fat[2], paste(names.fat[1],":",names.fat[2]),"Error B") print(as.matrix(anova),na.print="",quote = FALSE) if(transf==1 && norm1$p.value<0.05 | transf==1 &&homog1$p.value<0.05){ message("\n Your analysis is not valid, suggests using a try to transform the data\n")}else{} message(if(transf !=1){blue("\nNOTE: resp = transformed means; respO = averages without transforming\n")}) fat <- data.frame(`fator 1`=factor(fator1), `fator 2`=factor(fator2)) fata <- data.frame(`fator 1`=fator1a, `fator 2`=fator2a) #------------------------------------ # Fatores isolados #------------------------------------ if (as.numeric(tab[5, 5]) > alpha.f) {cat(green(bold("-----------------------------------------------------------------\n"))) cat("No significant interaction") cat(green(bold("\n-----------------------------------------------------------------\n"))) graficos=list(1,2,3) for (i in 1:2) {if (num(tab[cont[i], 5]) <= alpha.f) {cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(bold(fac[i])) 
cat(green(bold("\n-----------------------------------------------------------------\n"))) if(quali[i]==TRUE){ ## Tukey if(mcomp=="tukey"){ letra <- TUKEY(resp, fat[, i],num(tab[3*i,1]), num(tab[3*i,2])/num(tab[3*i,1]), alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fat[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="duncan"){ letra <- duncan(resp, fat[, i],num(tab[3*i,1]), num(tab[3*i,2])/num(tab[3*i,1]), alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fat[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="lsd"){ letra <- LSD(resp, fat[, i],num(tab[3*i,1]), num(tab[3*i,2])/num(tab[3*i,1]), alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fat[,i],mean, na.rm=TRUE)[rownames(letra1)]}} if(mcomp=="sk"){ nrep=table(fat[, i])[1] medias=sort(tapply(resp,fat[, i],mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = num(tab[3*i,1]), nrep = nrep, QME = num(tab[3*i,2])/num(tab[3*i,1]), alpha = alpha.t) letra1=data.frame(resp=medias,groups=sk) # letra1 <- sk(resp, fat[, i],num(tab[3*i,1]), # num(tab[3*i,2]), alpha.t) colnames(letra1)=c("resp","groups") if(transf !=1){letra1$respo=tapply(response,fat[,i],mean, na.rm=TRUE)[rownames(letra1)]}} teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(italic(paste("Multiple Comparison Test:",teste,"\n")))) print(letra1) ordem=unique(as.vector(unlist(fat[i]))) ordem=unique(as.vector(unlist(fat[i]))) if(point=="mean_sd"){desvio=tapply(response, c(fat[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fat[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fat[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1[rownames(letra1),], media=tapply(response, c(fat[i]), mean, 
na.rm=TRUE)[rownames(letra1)], desvio=desvio) # dadosm=data.frame(letra1, # desvio=tapply(response, c(fat[i]), sd, na.rm=TRUE)[rownames(letra1)]) # dadosm$media=tapply(response, c(fat[i]), mean, na.rm=TRUE)[rownames(letra1)] dadosm$trats=factor(rownames(dadosm),levels = ordem) dadosm$limite=dadosm$media+dadosm$desvio lim.y=dadosm$limite[which.max(abs(dadosm$limite))] if(is.na(ylim[1])==TRUE && lim.y<0){ylim=c(1.5*lim.y,0)} if(is.na(ylim[1])==TRUE && lim.y>0){ylim=c(0,1.5*lim.y)} if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio trats=dadosm$trats letra=dadosm$letra if(geom=="bar"){grafico=ggplot(dadosm, aes(x=trats, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=trats),color=1)} else{grafico=grafico+ geom_col(aes(fill=trats), fill=fill, color=1)} grafico=grafico+ theme+ylim(ylim)+ ylab(ylab)+ xlab(parse(text = xlab.factor[i])) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+ if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio, color=1), color="black", width=0.3)} if(angle !=0){grafico=grafico+ theme(axis.text.x=element_text(hjust = 1.01, angle = angle))} grafico=grafico+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family), legend.position = "none")} # ================================ # grafico de segmentos # ================================ if(geom=="point"){grafico=ggplot(dadosm, aes(x=trats, y=media)) if(fill=="trat"){grafico=grafico+ 
geom_point(aes(color=trats),size=4)} else{grafico=grafico+ geom_point(aes(color=trats),color=fill,size=4)} grafico=grafico+ theme+ ylab(ylab)+ xlab(parse(text = xlab.factor[i])) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+ if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra), family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3)} if(angle !=0){grafico=grafico+ theme(axis.text.x=element_text(hjust = 1.01, angle = angle))} grafico=grafico+ylim(ylim)+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family), legend.position = "none")} grafico=grafico if(color=="gray"){grafico=grafico+scale_fill_grey()} # print(grafico) cat("\n\n") } # # Regression if(quali[i]==FALSE){ # dose=as.numeric(as.character(as.vector(unlist(fat[i])))) dose=as.vector(unlist(fata[i])) grafico=polynomial(dose, resp, grau = grau[i], ylab = ylab, xlab = parse(text = xlab.factor[i]), posi = posi, point = point, theme = theme, textsize = textsize, family=family,DFres = num(tab[3*i,1]), SSq=num(tab[3*i,2])) grafico=grafico[[1]]} graficos[[i+1]]=grafico } graficos[[1]]=residplot } if(as.numeric(tab[1,5])>=alpha.f && as.numeric(tab[4,5])<alpha.f){ cat(green(bold("-----------------------------------------------------------------\n"))) cat("Isolated factors 1 not significant") cat(green(bold("\n-----------------------------------------------------------------\n"))) d1=data.frame(tapply(response,fator1,mean, na.rm=TRUE)) colnames(d1)="Mean" print(d1)} if(as.numeric(tab[4,5])>=alpha.f && as.numeric(tab[1,5])<alpha.f){ 
cat(green(bold("-----------------------------------------------------------------\n"))) cat("Isolated factors 2 not significant") cat(green(bold("\n-----------------------------------------------------------------\n"))) d1=data.frame(tapply(response,fator2,mean, na.rm=TRUE)) colnames(d1)="Mean" print(d1) } if(as.numeric(tab[1,5])>=alpha.f && as.numeric(tab[4,5])>=alpha.f){ cat(green(bold("-----------------------------------------------------------------\n"))) cat("Isolated factors not significant") cat(green(bold("\n-----------------------------------------------------------------\n"))) print(tapply(response,list(fator1,fator2),mean, na.rm=TRUE))} } #------------------------------------- # Desdobramento de F1 dentro de F2 #------------------------------------- if (as.numeric(tab[5, 5]) <= alpha.f) { cat(green(bold("-----------------------------------------------------------------\n"))) cat("Significant interaction: analyzing the interaction") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Analyzing ", names.fat[1], " inside of each level of ", names.fat[2]))) cat(green(bold("\n-----------------------------------------------------------------\n"))) l2 <- names(summary(Fator2)) sq <- numeric(0) for (k in 1:nv2) { soma <- numeric(0) for (j in 1:nv1) { sub <- resp[Fator1 == levels(Fator1)[j] & Fator2 == levels(Fator2)[k]] q.som <- length(sub) soma <- c(soma, sum(sub))} sq <- c(sq, sum(soma^2)/q.som - sum(soma)^2/(q.som * length(soma)))} gl.sattert <- (num(tab[3,3])+(nv2-1)*num(tab[6,3]))^2/((num(tab[3,3])^2/num(tab[3,1]))+(((nv2-1)*num(tab[6,3]))^2/num(tab[6,1]))) gl.f1f2 <- c(rep(nv1 - 1, nv2), gl.sattert) sq <- c(sq, NA) qm.f1f2 <- sq[1:nv2]/gl.f1f2[1:nv2] qm.ecomb <- (num(tab[3,3])+(nv2-1)*num(tab[6,3]))/nv2 qm.f1f2 <- c(qm.f1f2, qm.ecomb) fc.f1f2 <- c(qm.f1f2[1:nv2]/qm.f1f2[nv2 + 1], NA) p.f1f2 <- c(1 - pf(fc.f1f2, gl.f1f2, gl.sattert)) tab.f1f2 <- data.frame(GL = gl.f1f2, SQ = sq, QM = qm.f1f2, Fc = 
fc.f1f2, "p-value" = p.f1f2) nome.f1f2 <- numeric(0) for (j in 1:nv2) {nome.f1f2 <- c(nome.f1f2, paste(fac[1], " : ", fac[2], " ", l2[j], " ", sep = ""))} nome.f1f2 <- c(nome.f1f2, "Combined error") rownames(tab.f1f2) <- nome.f1f2 tab.f1f2 <- round(tab.f1f2, 6) tab.f1f2[nv2 + 1, 2] <- tab.f1f2[nv2 + 1, 3] * tab.f1f2[nv2 + 1, 1] tab.f1f2[nv2 + 1, 5] <- tab.f1f2[nv2 + 1, 4] <- "" rn<-numeric(0) for (j in 1:nv2) { rn <- c(rn, paste(paste(names.fat[1], ":", names.fat[2], sep = ""), lf2[j])) } rownames(tab.f1f2)=c(rn,"Combined error") print(tab.f1f2) desdobramento1=tab.f1f2 #------------------------------------- # Teste de Tukey #------------------------------------- if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv2) { tukey=TUKEY(resp[fat[,2]==l2[i]], fat[,1][fat[,2]==l2[i]], num(tab.f1f2[nv2+1,1]), num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]), alpha.t) colnames(tukey$groups)=c("resp","groups") tukeygrafico[[i]]=tukey$groups[as.character(unique(fat[,1][fat[,2]==l2[i]])),2] ordem[[i]]=rownames(tukey$groups[as.character(unique(fat[,1][fat[,2]==l2[i]])),]) } letra=unlist(tukeygrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra} if (mcomp == "duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv2) { duncan=duncan(resp[fat[,2]==l2[i]], fat[,1][fat[,2]==l2[i]], num(tab.f1f2[nv2+1,1]), num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]), alpha.t) colnames(duncan$groups)=c("resp","groups") duncangrafico[[i]]=duncan$groups[as.character(unique(fat[,1][fat[,2]==l2[i]])),2] ordem[[i]]=rownames(duncan$groups[as.character(unique(fat[,1][fat[,2]==l2[i]])),]) } letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra} if (mcomp == "lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv2) { lsd=lsd(resp[fat[,2]==l2[i]], fat[,1][fat[,2]==l2[i]], num(tab.f1f2[nv2+1,1]), 
num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]), alpha.t) colnames(lsd$groups)=c("resp","groups") lsdgrafico[[i]]=lsd$groups[as.character(unique(fat[,1][fat[,2]==l2[i]])),2] ordem[[i]]=rownames(lsd$groups[as.character(unique(fat[,1][fat[,2]==l2[i]])),]) } letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra} if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 1:nv2) { respi=resp[fat[,2]==l2[i]] trati=fat[,1][fat[,2]==l2[i]] # trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) # respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = num(tab.f1f2[nv2 +1, 1]), nrep = nrep, QME = num(tab.f1f2[nv2 + 1, 3]), alpha = alpha.t) sk=data.frame(resp=medias,groups=sk) # # sk=sk(respi,trati, # num(tab.f1f2[nv2+1,1]), # num(tab.f1f2[nv2+1,2]),alpha.t) if(transf !="1"){sk$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} } #------------------------------------- # Desdobramento de F2 dentro de F1 #------------------------------------- cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Analyzing ", names.fat[2], " inside of the level of ",names.fat[1]))) cat(green(bold("\n-----------------------------------------------------------------\n"))) l1 <- names(summary(Fator1)) sq <- numeric(0) for (k in 1:nv1) { soma <- numeric(0) for (j in 1:nv2) { parc <- resp[Fator1 == levels(Fator1)[k] & Fator2 == levels(Fator2)[j]] q.som <- length(parc) soma <- c(soma, sum(parc))} sq <- c(sq, sum(soma^2)/q.som-sum(soma)^2/(q.som*length(soma)))} gl.f2f1 <- 
c(rep(nv2 - 1, nv1), tab[6, 1]) sq <- c(sq, as.numeric(tab[6, 2])) qm.f2f1 <- sq/gl.f2f1 fc.f2f1 <- c(qm.f2f1[1:nv1]/num(tab[6, 3]), NA) p.f2f1 <- c(1 - pf(fc.f2f1, gl.f2f1, num(tab[6,1]))) tab.f2f1 <- data.frame(GL=gl.f2f1, SQ=sq, QM=qm.f2f1, Fc=fc.f2f1, "p-value"=p.f2f1) nome.f2f1 <- numeric(0) for (j in 1:nv1) {nome.f2f1 <- c(nome.f2f1, paste(fac[2], " : ",fac[1], " ", l1[j], " ", sep = ""))} nome.f2f1 <- c(nome.f2f1, "Error b") rownames(tab.f2f1) <- nome.f2f1 tab.f2f1 <- round(tab.f2f1, 6) tab.f2f1[nv1 + 1, 5] <- tab.f2f1[nv1 + 1, 4] <- "" rn<-numeric(0) for (i in 1:nv1) { rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[1], sep = ""), lf1[i])) } rownames(tab.f2f1)=c(rn,"Error b") print(tab.f2f1) desdobramento2=tab.f2f1 #------------------------------------- # Teste de Tukey #------------------------------------- if(quali[1]==TRUE && quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico1=c() for (i in 1:nv1) { tukey=TUKEY(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(tukey$groups)=c("resp","groups") tukeygrafico1[[i]]=tukey$groups[as.character(unique(fat[,2][fat[, 1] == l1[i]])),2] if(transf !="1"){tukey$groups$respo=tapply(response[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(tukey$groups)]} } letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if (mcomp == "duncan"){ duncangrafico1=c() for (i in 1:nv1) { duncan=duncan(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(duncan$groups)=c("resp","groups") duncangrafico1[[i]]=duncan$groups[levels(fat[,2][fat[, 1] == l1[i]]),2] if(transf !="1"){duncan$groups$respo=tapply(response[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(duncan$groups)]} } letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if (mcomp == "lsd"){ lsdgrafico1=c() for (i in 1:nv1) { 
lsd=LSD(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(lsd$groups)=c("resp","groups") duncangrafico1[[i]]=lsd$groups[levels(fat[,2][fat[, 1] == l1[i]]),2] if(transf !="1"){lsd$groups$respo=tapply(response[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(lsd$groups)]} } letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv1) { respi=resp[fat[, 1] == l1[i]] trati=fat[,2][fat[, 1] == l1[i]] trati=factor(trati,unique(trati)) nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = num(tab.f2f1[nv1 +1, 1]), nrep = nrep, QME = num(tab.f2f1[nv1 + 1, 3]), alpha = alpha.t) sk=data.frame(resp=medias,groups=sk) # sk=sk(respi,trati, # num(tab.f2f1[nv1 +1, 1]), # num(tab.f2f1[nv1 + 1, 2]),alpha.t) if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati, mean, na.rm=TRUE)[rownames(sk)]} skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) letra1=toupper(letra1)} } # ----------------------------- # Gráfico de colunas #------------------------------ if(quali[1]==TRUE & quali[2]==TRUE){ media=tapply(response,list(Fator1,Fator2), mean, na.rm=TRUE) # desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE) if(point=="mean_sd"){desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE)/ sqrt(tapply(response,list(Fator1,Fator2), length))} graph=data.frame(f1=rep(rownames(media),length(colnames(media))), f2=rep(colnames(media),e=length(rownames(media))), media=as.vector(media), desvio=as.vector(desvio)) limite=graph$media+graph$desvio lim.y=limite[which.max(abs(limite))] if(is.na(ylim[1])==TRUE && lim.y<0){ylim=c(1.5*lim.y,0)} if(is.na(ylim[1])==TRUE && lim.y>0){ylim=c(0,1.5*lim.y)} rownames(graph)=paste(graph$f1,graph$f2) 
graph=graph[paste(rep(unique(Fator1),e=length(colnames(media))), rep(unique(Fator2),length(rownames(media)))),] graph$letra=letra graph$letra1=letra1 graph$f1=factor(graph$f1,levels = unique(Fator1)) graph$f2=factor(graph$f2,levels = unique(Fator2)) if(addmean==TRUE){graph$numero=paste(format(graph$media,digits = dec), graph$letra,graph$letra1, sep="")} if(addmean==FALSE){graph$numero=paste(graph$letra,graph$letra1)} f1=graph$f1 f2=graph$f2 media=graph$media desvio=graph$desvio numero=graph$numero colint=ggplot(graph, aes(x=f1, y=media, fill=f2))+ geom_col(position = "dodge",color="black")+ ylab(ylab)+xlab(xlab)+ylim(ylim)+ theme if(errorbar==TRUE){colint=colint+ geom_errorbar(data=graph, aes(ymin=media-desvio, ymax=media+desvio), width=0.3, position = position_dodge(width=0.9))} if(errorbar==TRUE){colint=colint+ geom_text(aes(y=media+sup+ if(sup<0){-desvio}else{desvio}, label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){colint=colint+ geom_text(aes(y=media+sup,label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)} colint=colint+theme(text=element_text(size=textsize), legend.position = posi, axis.text = element_text(size=textsize, color="black"), axis.title = element_text(size=textsize, color="black"))+ labs(fill=legend) if(angle !=0){colint=colint+theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} if(color=="gray"){colint=colint+scale_fill_grey()} print(colint) grafico=colint letras=paste(graph$letra,graph$letra1,sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1))))) rownames(matriz)=levels(Fator1) colnames(matriz)=levels(Fator2) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(matriz) message(black("\n\nAverages 
followed by the same lowercase letter in the column and uppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")")) } if(quali[1]==FALSE && color=="gray"| quali[2]==FALSE && color=="gray"){ if(quali[2]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv2) { tukey=TUKEY(resp[fat[,2]==l2[i]], fat[,1][fat[,2]==l2[i]], num(tab.f1f2[nv2+1,1]), num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]), alpha.t) colnames(tukey$groups)=c("resp","groups") cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 1:nv2) { duncan=duncan(resp[fat[,2]==l2[i]], fat[,1][fat[,2]==l2[i]], num(tab.f1f2[nv2+1,1]), num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]), alpha.t) colnames(duncan$groups)=c("resp","groups") cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(duncan$groups) }} if (mcomp == "lsd"){ for (i in 1:nv2) { lsd=lsd(resp[fat[,2]==l2[i]], fat[,1][fat[,2]==l2[i]], num(tab.f1f2[nv2+1,1]), num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]), alpha.t) colnames(lsd$groups)=c("resp","groups") cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(lsd$groups) }} if (mcomp == "sk"){ for (i in 1:nv2) { respi=resp[fat[,2]==l2[i]] trati=fat[,1][fat[,2]==l2[i]] trati=factor(trati,levels = unique(trati)) nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = num(tab.f1f2[nv2 +1, 1]), nrep = nrep, QME = num(tab.f1f2[nv2 + 1, 3]), alpha = alpha.t) sk=data.frame(resp=medias,groups=sk) # sk=sk(respi,trati, # num(tab.f1f2[nv2+1,1]), # num(tab.f1f2[nv2+1,2]),alpha.t) if(transf !="1"){sk$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") 
cat("\n----------------------\n") print(sk)}} } if(quali[2]==FALSE){ fator2a=fator2a#as.numeric(as.character(fator2)) grafico=polynomial2(fator2a, resp, fator1, grau = grau21, ylab=ylab, xlab=xlab, theme=theme, point=point, posi= posi, ylim=ylim, textsize=textsize, family=family, DFres = num(tab.f2f1[nv1+1,1]), SSq = num(tab.f2f1[nv1+1,2])) if(quali[1]==FALSE & quali[2]==FALSE){ graf=list(grafico,NA)} } if(quali[1]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv1) { tukey=TUKEY(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(tukey$groups)=c("resp","groups") if(transf !="1"){tukey$groups$respo=tapply(response[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(tukey$groups) }} if (mcomp == "duncan"){ for (i in 1:nv1) { duncan=duncan(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(duncan$groups)=c("resp","groups") if(transf !="1"){duncan$groups$respo=tapply(response[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(duncan$groups) }} if (mcomp == "lsd"){ for (i in 1:nv1) { lsd=LSD(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(lsd$groups)=c("resp","groups") if(transf !="1"){lsd$groups$respo=tapply(response[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") 
print(lsd$groups) }} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv1) { respi=resp[fat[, 1] == l1[i]] trati=fat[,2][fat[, 1] == l1[i]] trati=factor(trati,unique(trati)) nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = num(tab.f2f1[nv1 +1, 1]), nrep = nrep, QME = num(tab.f2f1[nv1 + 1, 3]), alpha = alpha.t) sk=data.frame(resp=medias,groups=sk) # # sk=sk(respi,trati, # num(tab.f2f1[nv1 +1, 1]), # num(tab.f2f1[nv1 + 1, 2]),alpha.t) if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati, mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(sk) }} } if(quali[1]==FALSE){ fator1a=fator1a#as.numeric(as.character(fator1)) grafico=polynomial2(fator1a, resp, fator2, grau = grau12, ylab=ylab, xlab=xlab, theme=theme, point=point, posi = posi, ylim=ylim, textsize=textsize, family=family, DFres = num(tab.f1f2[nv2 +1, 1]), SSq = num(tab.f1f2[nv2 + 1, 2])) if(quali[1]==FALSE & quali[2]==FALSE){ graf[[2]]=grafico grafico=graf} } } if(quali[1]==FALSE && color=="rainbow"| quali[2]==FALSE && color=="rainbow"){ if(quali[2]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv2) { tukey=TUKEY(resp[fat[,2]==l2[i]], fat[,1][fat[,2]==l2[i]], num(tab.f1f2[nv2+1,1]), num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]), alpha.t) colnames(tukey$groups)=c("resp","groups") cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(tukey$groups)}} if (mcomp == "duncan"){ for (i in 1:nv2) { duncan=duncan(resp[fat[,2]==l2[i]], fat[,1][fat[,2]==l2[i]], num(tab.f1f2[nv2+1,1]), num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]), alpha.t) colnames(duncan$groups)=c("resp","groups") cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(duncan$groups) }} if (mcomp == "lsd"){ for (i 
in 1:nv2) { lsd=lsd(resp[fat[,2]==l2[i]], fat[,1][fat[,2]==l2[i]], num(tab.f1f2[nv2+1,1]), num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]), alpha.t) colnames(lsd$groups)=c("resp","groups") cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(lsd$groups) }} if (mcomp == "sk"){ for (i in 1:nv2) { respi=resp[fat[,2]==l2[i]] trati=fat[,1][fat[,2]==l2[i]] # trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) # respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = num(tab.f1f2[nv2 +1, 1]), nrep = nrep, QME = num(tab.f1f2[nv2 + 1, 3]), alpha = alpha.t) sk=data.frame(resp=medias,groups=sk) # sk=sk(respi,trati, # num(tab.f1f2[nv2+1,1]), # num(tab.f1f2[nv2+1,2]),alpha.t) if(transf !="1"){sk$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(sk)}} } if(quali[2]==FALSE){ fator2a=fator2a#as.numeric(as.character(fator2)) grafico=polynomial2_color(fator2a, resp, fator1, grau = grau21, ylab=ylab, xlab=xlab, theme=theme, point=point, posi=posi, ylim=ylim, textsize=textsize, family=family, DFres = num(tab.f2f1[nv1+1,1]), SSq = num(tab.f2f1[nv1+1,2])) if(quali[1]==FALSE & quali[2]==FALSE){ graf=list(grafico,NA)} } if(quali[1]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv1) { tukey=TUKEY(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(tukey$groups)=c("resp","groups") if(transf !="1"){tukey$groups$respo=tapply(response[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") 
print(tukey$groups) }} if (mcomp == "duncan"){ for (i in 1:nv1) { duncan=duncan(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(duncan$groups)=c("resp","groups") if(transf !="1"){duncan$groups$respo=tapply(response[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(duncan$groups) }} if (mcomp == "lsd"){ for (i in 1:nv1) { lsd=LSD(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(lsd$groups)=c("resp","groups") if(transf !="1"){lsd$groups$respo=tapply(response[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(lsd$groups) }} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv1) { respi=resp[fat[, 1] == l1[i]] trati=fat[,2][fat[, 1] == l1[i]] trati=factor(trati,unique(trati)) nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = num(tab.f2f1[nv1 +1, 1]), nrep = nrep, QME = num(tab.f2f1[nv1 + 1, 3]), alpha = alpha.t) sk=data.frame(resp=medias,groups=sk) # # sk=sk(respi,trati, # num(tab.f2f1[nv1 +1, 1]), # num(tab.f2f1[nv1 + 1, 2]),alpha.t) if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati, mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(sk) }} } if(quali[1]==FALSE){ fator1a=fator1a#as.numeric(as.character(fator1)) grafico=polynomial2_color(fator1a,resp,fator2, grau = grau12, ylab=ylab,xlab=xlab, theme=theme,point=point, posi = posi,ylim=ylim, 
textsize=textsize, family=family,
                                 DFres = num(tab.f1f2[nv2 +1, 1]),
                                 SSq = num(tab.f1f2[nv2 + 1, 2]))
  if(quali[1]==FALSE & quali[2]==FALSE){
    graf[[2]]=grafico
    grafico=graf}
  }
  }
  }
  if(as.numeric(tab[5, 5])>alpha.f){
    names(graficos)=c("residplot","graph1","graph2")
    graficos}else{colints=list(residplot,grafico)}
  }
#' Analysis: DIC experiments in split-plot
#' @description Analysis of an experiment conducted in a completely randomized design in a split-plot scheme using fixed effects analysis of variance.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param f1 Numeric or complex vector with plot levels
#' @param f2 Numeric or complex vector with subplot levels
#' @param block Numeric or complex vector with blocks
#' @param response Numeric vector with responses
#' @param transf Applies data transformation (default is 1; for log consider 0)
#' @param constant Add a constant for transformation (enter value)
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott and Duncan)
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param quali Defines whether the factor is quantitative or qualitative (\emph{default} is qualitative)
#' @param names.fat Names of the factors
#' @param grau Polynomial degree in case of a quantitative factor (\emph{default} is 1). Provide a vector with three elements.
#' @param grau12 Polynomial degree in case of a quantitative factor (\emph{default} is 1). Provide a vector with the n levels of factor 2, in the case of an f1 x f2 interaction with qualitative factor 2 and quantitative factor 1.
#' @param grau21 Polynomial degree in case of a quantitative factor (\emph{default} is 1). Provide a vector with the n levels of factor 1, in the case of an f1 x f2 interaction with qualitative factor 1 and quantitative factor 2.
#' @param geom Graph type: "bar" (columns) or "point" (segments; simple effects only)
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param ylab Variable response name (accepts the \emph{expression}() function)
#' @param xlab Treatments name (accepts the \emph{expression}() function)
#' @param xlab.factor Provide a vector with two observations referring to the x-axis name of factors 1 and 2, respectively, when there is an isolated effect of the factors. This argument uses `parse`.
#' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat")
#' @param angle x-axis scale text rotation
#' @param family Font family (\emph{default} is sans)
#' @param color Column color scheme when the columns have different colors (set the fill argument to "trat"); use "gray" for grayscale
#' @param legend Legend title name
#' @param errorbar Plot the standard deviation bar on the graph (for segment and column graphs); \emph{default} is TRUE
#' @param addmean Plot the average value on the graph (\emph{default} is TRUE)
#' @param textsize Font size (\emph{default} is 12)
#' @param labelsize Label size (\emph{default} is 4)
#' @param dec Number of decimal places (\emph{default} is 3)
#' @param ylim y-axis limit
#' @param posi Legend position
#' @param point Defines what is plotted when quali = FALSE: all points ("all"), the mean ("mean"), mean with standard deviation (\emph{default}, "mean_sd") or mean with standard error ("mean_se"). For quali = TRUE, `mean_sd` and `mean_se` define which information is displayed in the error bar.
#' @param angle.label Label angle
#' @note The chart order is alphabetical by default. To reorder the x-axis, use the `limits` argument of `scale_x_discrete` from package ggplot2. The bars of the column and segment graphs are the standard deviation.
#' @note In the final output, when the transformation (transf argument) is different from 1, the columns resp and respo are returned in the mean test, indicating the transformed and non-transformed mean, respectively.
#' @import ggplot2
#' @importFrom crayon green
#' @importFrom crayon bold
#' @importFrom crayon italic
#' @importFrom crayon red
#' @importFrom crayon blue
#' @import stats
#' @keywords DIC
#' @export
#' @references
#'
#' Steel, R.G.D., Torrie, J.H., Dickey, D.A. 1997. Principles and Procedures of Statistics: A Biometrical Approach. 3rd Edition. McGraw-Hill.
#'
#' Hsu, J.C. 1996. Multiple Comparisons: Theory and Methods. Department of Statistics, The Ohio State University, USA. Chapman Hall/CRC.
#'
#' Conover, W.J. 1999. Practical Nonparametric Statistics. 3rd Edition. John Wiley & Sons.
#'
#' Ramalho, M.A.P., Ferreira, D.F., Oliveira, A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott, A.J., Knott, M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#' @return The table of analysis of variance, the test of normality of errors (Shapiro-Wilk, Lilliefors, Anderson-Darling, Cramer-von Mises, Pearson and Shapiro-Francia), the test of homogeneity of variances (Bartlett), the test of multiple comparisons (Tukey, LSD, Scott-Knott or Duncan) or the fitting of regression models up to a third-degree polynomial, in the case of quantitative treatments. The column chart for qualitative treatments is also returned. The function also returns a standardized residual plot.
#' @examples
#'
#' #===================================
#' # Example tomate
#' #===================================
#' # Obs. Consider that the "tomato" experiment is a completely randomized design.
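#' # A hedged sketch (illustrative, not from the package authors): if the
#' # subplot factor were quantitative, Scott-Knott grouping and a
#' # second-degree polynomial fit could be requested with, e.g.:
#' # with(tomate, PSUBDIC(parc, subp, bloco, resp, mcomp = "sk",
#' #                      quali = c(TRUE, FALSE), grau = c(NA, 2)))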
#' library(AgroR)
#' data(tomate)
#' with(tomate, PSUBDIC(parc, subp, bloco, resp, ylab="Dry mass (g)"))
PSUBDIC=function(f1, f2, block, response,
                 norm="sw", alpha.f=0.05, alpha.t=0.05,
                 quali=c(TRUE,TRUE), names.fat=c("F1","F2"),
                 mcomp = "tukey",
                 grau=c(NA,NA),
                 grau12=NA, # F1/F2
                 grau21=NA, # F2/F1
                 transf=1, constant=0,
                 geom="bar", theme=theme_classic(),
                 ylab="Response", xlab="",
                 xlab.factor=c("F1","F2"),
                 fill="lightblue", angle=0, family="sans",
                 color="rainbow", legend="Legend",
                 errorbar=TRUE, addmean=TRUE,
                 textsize=12, labelsize=4, dec=3,
                 ylim=NA, posi="right", point="mean_se",
                 angle.label=0){
  if(angle.label==0){hjust=0.5}else{hjust=0}
  requireNamespace("crayon")
  requireNamespace("ggplot2")
  requireNamespace("nortest")
  if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}}
  if(transf==0){resp=log(response+constant)}
  if(transf==0.5){resp=sqrt(response+constant)}
  if(transf==-0.5){resp=1/sqrt(response+constant)}
  if(transf==-1){resp=1/(response+constant)}
  if(transf=="angular"){resp=asin(sqrt((response+constant)/100))}
  ordempadronizado=data.frame(f1,f2,block,resp,response)
  resp1=resp
  organiz=data.frame(f1,f2,block,response,resp)
  organiz=organiz[order(organiz$block),]
  organiz=organiz[order(organiz$f2),]
  organiz=organiz[order(organiz$f1),]
  f1=organiz$f1
  f2=organiz$f2
  block=organiz$block
  response=organiz$response
  resp=organiz$resp
  fator1=f1
  fator2=f2
  fator1a=fator1
  fator2a=fator2
  bloco=block
  fac = c("F1", "F2")
  cont <- c(1, 3)
  Fator1 <- factor(fator1, levels = unique(fator1))
  Fator2 <- factor(fator2, levels = unique(fator2))
  bloco <- factor(block)
  nv1 <- length(summary(Fator1))
  nv2 <- length(summary(Fator2))
  lf1 <- levels(Fator1)
  lf2 <- levels(Fator2)
  num=function(x){as.numeric(x)}
  sup=0.1*mean(response)
  graph=data.frame(Fator1,Fator2,resp)
  # -----------------------------
  # Analysis of variance
  # -----------------------------
  mod=aov(resp~Fator1*Fator2+Fator1:bloco)
  anova=summary(mod)[[1]]
  anova=anova[c(1,4,2,3,5),]
  anova$`F value`[1]=anova$`Mean Sq`[1]/anova$`Mean Sq`[2]
  anova$`F value`[2]=NA
  anova$`Pr(>F)`[2]=NA
  anova$`Pr(>F)`[1]=1-pf(anova[1,4],anova[1,1],anova[2,1])
  anova1=anova
  anova=data.frame(anova)
  colnames(anova)=colnames(anova1)
  rownames(anova)=c("F1","Error A", "F2", "F1 x F2", "Error B")
  tab=anova
  glres=tab$Df[c(2,5)]
  qmres=tab$`Mean Sq`[c(2,5)]
  # -----------------------------
  # Assumptions
  # -----------------------------
  modp=lme4::lmer(resp~Fator1*Fator2+(1|bloco/Fator1),data = ordempadronizado)
  resids=residuals(modp,scaled=TRUE)
  Ids=ifelse(resids>3 | resids<(-3), "darkblue","black")
  residplot=ggplot(data=data.frame(resids,Ids),
                   aes(y=resids,x=1:length(resids)))+
    geom_point(shape=21,color="gray",fill="gray",size=3)+
    labs(x="",y="Standardized residuals")+
    geom_text(x=1:length(resids),label=1:length(resids),
              color=Ids,size=labelsize)+
    scale_x_continuous(breaks=1:length(resids))+
    theme_classic()+theme(axis.text.y = element_text(size=textsize),
                          axis.text.x = element_blank())+
    geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1)
  # Normality of errors
  if(norm=="sw"){norm1 = shapiro.test(resid(modp))}
  if(norm=="li"){norm1=lillie.test(resid(modp))}
  if(norm=="ad"){norm1=ad.test(resid(modp))}
  if(norm=="cvm"){norm1=cvm.test(resid(modp))}
  if(norm=="pearson"){norm1=pearson.test(resid(modp))}
  if(norm=="sf"){norm1=sf.test(resid(modp))}
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(green(bold("Normality of errors")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""),
                    Statistic=norm1$statistic,
                    "p-value"=norm1$p.value)
  rownames(normal)=""
  print(normal)
  cat("\n")
  message(if(norm1$p.value>0.05){
    black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected.
Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors do not follow a normal distribution"}) homog1=bartlett.test(resid(modp)~Fator1) homog2=bartlett.test(resid(modp)~Fator2) homog3=bartlett.test(resid(modp)~paste(Fator1, Fator2)) cat(green(bold("\n\n-----------------------------------------------------------------\n"))) cat(green(bold("Homogeneity of Variances"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Plot\n"))) statistic1=homog1$statistic phomog1=homog1$p.value method1=paste("Bartlett test","(",names(statistic1),")",sep="") homoge1=data.frame(Method=method1, Statistic=statistic1, "p-value"=phomog1) rownames(homoge1)="" print(homoge1) cat("\n") message(if(homog1$p.value[1]>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"}) cat("\n----------------------------------------------------\n") cat(green(bold("Split-plot\n"))) statistic2=homog2$statistic phomog2=homog2$p.value method2=paste("Bartlett test","(",names(statistic2),")",sep="") homoge2=data.frame(Method=method2, Statistic=statistic2, "p-value"=phomog2) rownames(homoge2)="" print(homoge2) cat("\n") message(if(homog2$p.value[1]>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, the variances are not homogeneous"})
  cat("\n----------------------------------------------------\n")
  cat(green(bold("Interaction\n")))
  statistic3=homog3$statistic
  phomog3=homog3$p.value
  method3=paste("Bartlett test","(",names(statistic3),")",sep="")
  homoge3=data.frame(Method=method3,
                     Statistic=statistic3,
                     "p-value"=phomog3)
  rownames(homoge3)=""
  print(homoge3)
  cat("\n")
  message(if(homog3$p.value[1]>0.05){
    black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")}
    else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"})
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(green(bold("Additional Information")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(paste("\nCV1 (%) = ",round(sqrt(tab$`Mean Sq`[2])/mean(resp,na.rm=TRUE)*100,2)))
  cat(paste("\nCV2 (%) = ",round(sqrt(tab$`Mean Sq`[5])/mean(resp,na.rm=TRUE)*100,2)))
  cat(paste("\nMean = ",round(mean(response,na.rm=TRUE),4)))
  cat(paste("\nMedian = ",round(median(response,na.rm=TRUE),4)))
  cat("\n")
  cat(green(bold("\n\n-----------------------------------------------------------------\n")))
  cat(green(bold("Analysis of Variance")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  anova$`Pr(>F)`=ifelse(anova$`Pr(>F)`>0.001, round(anova$`Pr(>F)`,3),"p<0.001")
  print(as.matrix(anova),na.print="",quote = FALSE)
  if(transf==1 && norm1$p.value<0.05 || transf==1 && homog1$p.value<0.05){
    message("\n \nYour analysis is not valid; consider transforming the data\n")}
  message(if(transf !=1){blue("\nNOTE: resp = transformed means; respO = untransformed means\n")})
  ##########################################################################
  fat <-
data.frame(`fator 1`=fator1, `fator 2`=fator2)
  fata <- data.frame(`fator 1`=fator1a, `fator 2`=fator2a)
  #------------------------------------
  # Isolated factors
  #------------------------------------
  if (as.numeric(tab[4, 5]) > alpha.f) {
    cat(green(bold("-----------------------------------------------------------------\n")))
    cat("No significant interaction")
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    graficos=list(1,2,3)
    for (i in 1:2) {
      if (num(tab[cont[i], 5]) <= alpha.f) {
        cat(green(bold("\n-----------------------------------------------------------------\n")))
        cat(bold(fac[i]))
        cat(green(bold("\n-----------------------------------------------------------------\n")))
        if(quali[i]==TRUE){
          ## Tukey
          if(mcomp=="tukey"){
            letra <- TUKEY(resp, fat[, i],num(glres[i]), num(qmres[i]), alpha.t)
            letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
            if(transf !=1){letra1$respo=tapply(response,fat[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
          ## Duncan
          if(mcomp=="duncan"){
            letra <- duncan(resp, fat[, i],num(glres[i]), num(qmres[i]), alpha.t)
            letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
            if(transf !=1){letra1$respo=tapply(response,fat[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
          ## LSD
          if(mcomp=="lsd"){
            letra <- LSD(resp, fat[, i],num(glres[i]), num(qmres[i]), alpha.t)
            letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
            if(transf !=1){letra1$respo=tapply(response,fat[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
          ## Scott-Knott
          if(mcomp=="sk"){
            nrep=table(fat[, i])[1]
            medias=sort(tapply(resp,fat[, i],mean),decreasing = TRUE)
            sk=scottknott(means = medias, df1 = num(glres[i]), nrep = nrep,
                          QME = num(qmres[i]), alpha = alpha.t)
            letra1=data.frame(resp=medias,groups=sk)
            if(transf !=1){letra1$respo=tapply(response,fat[,i],mean, na.rm=TRUE)[rownames(letra1)]}}
          teste=if(mcomp=="tukey"){"Tukey HSD"}else{
            if(mcomp=="sk"){"Scott-Knott"}else{
              if(mcomp=="lsd"){"LSD-Fisher"}else{
                if(mcomp=="duncan"){"Duncan"}}}}
          cat(green(italic(paste("Multiple
Comparison Test:",teste,"\n")))) print(letra1) ordem=unique(as.vector(unlist(fat[i]))) if(point=="mean_sd"){desvio=tapply(response, c(fat[i]), sd, na.rm=TRUE)[rownames(letra1)]} if(point=="mean_se"){desvio=(tapply(response, c(fat[i]), sd, na.rm=TRUE)/ sqrt(tapply(response, c(fat[i]), length)))[rownames(letra1)]} dadosm=data.frame(letra1[rownames(letra1),], media=tapply(response, c(fat[i]), mean, na.rm=TRUE)[rownames(letra1)], desvio=desvio) # dadosm=data.frame(letra1, # desvio=tapply(response, c(fat[i]), sd, na.rm=TRUE)[rownames(letra1)]) # dadosm$media=tapply(response, c(fat[i]), mean, na.rm=TRUE)[rownames(letra1)] dadosm$trats=factor(rownames(dadosm),levels = ordem) dadosm$limite=dadosm$media+dadosm$desvio lim.y=dadosm$limite[which.max(abs(dadosm$limite))] if(is.na(ylim[1])==TRUE && lim.y<0){ylim=c(1.5*lim.y,0)} if(is.na(ylim[1])==TRUE && lim.y>0){ylim=c(0,1.5*lim.y)} if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} media=dadosm$media desvio=dadosm$desvio trats=dadosm$trats letra=dadosm$letra if(geom=="bar"){grafico=ggplot(dadosm, aes(x=trats, y=media)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=trats),color=1)} else{grafico=grafico+ geom_col(aes(fill=trats),fill=fill,color=1)} grafico=grafico+ theme+ylim(ylim)+ ylab(ylab)+ xlab(parse(text = xlab.factor[i])) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black", width=0.3)} if(angle !=0){grafico=grafico+ theme(axis.text.x=element_text(hjust = 1.01, angle = angle))} grafico=grafico+ theme(text = 
element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family), legend.position = "none")} # ================================ # grafico de segmentos # ================================ if(geom=="point"){grafico=ggplot(dadosm, aes(x=trats, y=media)) if(fill=="trat"){grafico=grafico+ geom_point(aes(color=trats),size=4)} else{grafico=grafico+ geom_point(aes(color=trats), color=fill,size=4)} grafico=grafico+ theme+ylim(ylim)+ ylab(ylab)+ xlab(parse(text = xlab.factor[i])) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup, label=letra),family=family,angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black",width=0.3)} if(angle !=0){grafico=grafico+ theme(axis.text.x=element_text(hjust = 1.01, angle = angle))} grafico=grafico+ylim(ylim)+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family), legend.position = "none")} grafico=grafico if(color=="gray"){grafico=grafico+scale_fill_grey()} # print(grafico) cat("\n\n") } # # Regression if(quali[i]==FALSE){ # dose=as.numeric(as.character(as.vector(unlist(fat[i])))) dose=as.vector(unlist(fata[i])) grafico=polynomial(dose, resp, grau = grau[i], ylab=ylab, xlab=parse(text = xlab.factor[i]), point=point, theme=theme, posi=posi, textsize=textsize, family=family, DFres = num(tab[2*i,1]), SSq=num(tab[2*i,2])) grafico=grafico[[1]]} graficos[[i+1]]=grafico } } graficos[[1]]=residplot if(as.numeric(tab[1,5])>=alpha.f && as.numeric(tab[3,5])<alpha.f){ 
  cat(green(bold("-----------------------------------------------------------------\n")))
    cat("Isolated factor 1 not significant")
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    d1=data.frame(tapply(response,fator1,mean, na.rm=TRUE))
    colnames(d1)="Mean"
    print(d1)}
  if(as.numeric(tab[3,5])>=alpha.f && as.numeric(tab[1,5])<alpha.f){
    cat(green(bold("-----------------------------------------------------------------\n")))
    cat("Isolated factor 2 not significant")
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    d1=data.frame(tapply(response,fator2,mean, na.rm=TRUE))
    colnames(d1)="Mean"
    print(d1)}
  if(as.numeric(tab[1,5])>=alpha.f && as.numeric(tab[3,5])>=alpha.f){
    cat(green(bold("-----------------------------------------------------------------\n")))
    cat("Isolated factors not significant")
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    print(tapply(response,list(fator1,fator2),mean, na.rm=TRUE))}
  }
  #-------------------------------------
  # Unfolding of F1 within F2
  #-------------------------------------
  if (as.numeric(tab[4, 5]) <= alpha.f) {
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    cat("Significant interaction: analyzing the interaction")
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    cat(green(bold("Analyzing ", fac[1], " within each level of ", fac[2])))
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    l2 <- names(summary(Fator2))
    sq <- numeric(0)
    for (k in 1:nv2) {
      soma <- numeric(0)
      for (j in 1:nv1) {
        sub <- resp[Fator1 == levels(Fator1)[j] & Fator2 == levels(Fator2)[k]]
        q.som <- length(sub)
        soma <- c(soma, sum(sub))}
      sq <- c(sq, sum(soma^2)/q.som - sum(soma)^2/(q.som * length(soma)))}
    gl.sattert <-
(num(tab[2,3])+(nv2-1)*num(tab[5,3]))^2/((num(tab[2,3])^2/num(tab[2,1]))+(((nv2-1)*num(tab[5,3]))^2/num(tab[5,1]))) gl.f1f2 <- c(rep(nv1 - 1, nv2), gl.sattert) sq <- c(sq, NA) qm.f1f2 <- sq[1:nv2]/gl.f1f2[1:nv2] qm.ecomb <- (num(tab[2,3])+(nv2-1)*num(tab[5,3]))/nv2 qm.f1f2 <- c(qm.f1f2, qm.ecomb) fc.f1f2 <- c(qm.f1f2[1:nv2]/qm.f1f2[nv2 + 1], NA) p.f1f2 <- c(1 - pf(fc.f1f2, gl.f1f2, gl.sattert)) tab.f1f2 <- data.frame(GL = gl.f1f2, SQ = sq, QM = qm.f1f2, Fc = fc.f1f2, "p-value" = p.f1f2) nome.f1f2 <- numeric(0) for (j in 1:nv2) {nome.f1f2 <- c(nome.f1f2, paste(fac[1], " : ", fac[2], " ", l2[j], " ", sep = ""))} nome.f1f2 <- c(nome.f1f2, "Combined error") rownames(tab.f1f2) <- nome.f1f2 tab.f1f2 <- round(tab.f1f2, 6) tab.f1f2[nv2 + 1, 2] <- tab.f1f2[nv2 + 1, 3] * tab.f1f2[nv2 + 1, 1] tab.f1f2[nv2 + 1, 5] <- tab.f1f2[nv2 + 1, 4] <- "" rn<-numeric(0) for (j in 1:nv2) { rn <- c(rn, paste(paste(names.fat[1], ":", names.fat[2], sep = ""), lf2[j])) } rownames(tab.f1f2)=c(rn,"Combined error") print(tab.f1f2) desdobramento1=tab.f1f2 #------------------------------------- # Teste de Tukey #------------------------------------- if(quali[1]==TRUE & quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv2) { tukey=TUKEY(resp[fat[,2]==l2[i]], fat[,1][fat[,2]==l2[i]], num(tab.f1f2[nv2+1,1]), num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]),alpha.t) colnames(tukey$groups)=c("resp","groups") if(transf !="1"){tukey$groups$respo=tapply(response[Fator2 == l2[i]],fat[,1][fat[,2]==l2[i]],mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico[[i]]=tukey$groups[as.character(unique(fat[,1][fat[,2]==l2[i]])),2] ordem[[i]]=rownames(tukey$groups[as.character(unique(fat[,1][fat[,2]==l2[i]])),]) } letra=unlist(tukeygrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag=datag[order(factor(datag$ordem,levels=unique(Fator1))),] letra=datag$letra} if (mcomp == "duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv2) { duncan=duncan(resp[fat[,2]==l2[i]], 
fat[,1][fat[,2]==l2[i]], num(tab.f1f2[nv2+1,1]),num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]),alpha.t) colnames(duncan$groups)=c("resp","groups") if(transf !="1"){duncan$groups$respo=tapply(response[Fator2 == l2[i]],fat[,1][fat[,2]==l2[i]],mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico[[i]]=duncan$groups[as.character(unique(fat[,1][fat[,2]==l2[i]])),2] ordem[[i]]=rownames(duncan$groups[as.character(unique(fat[,1][fat[,2]==l2[i]])),]) } letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv2) { lsd=LSD(resp[fat[,2]==l2[i]],fat[,1][fat[,2]==l2[i]],num(tab.f1f2[nv2+1,1]),num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]),alpha.t) colnames(lsd$groups)=c("resp","groups") if(transf !="1"){lsd$groups$respo=tapply(response[Fator2 == l2[i]],fat[,1][fat[,2]==l2[i]],mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico[[i]]=lsd$groups[as.character(unique(fat[,1][fat[,2]==l2[i]])),2] ordem[[i]]=rownames(lsd$groups[as.character(unique(fat[,1][fat[,2]==l2[i]])),]) } letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} if (mcomp == "sk"){ skgrafico=c() ordem=c() for (i in 1:nv2) { respi=resp[fat[,2]==l2[i]] trati=fat[,1][fat[,2]==l2[i]] # trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) # respi=resp[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = num(tab.f1f2[nv2 +1, 1]), nrep = nrep, QME = num(tab.f1f2[nv2 + 1, 3]), alpha = alpha.t) sk=data.frame(resp=medias,groups=sk) # sk=sk(respi,trati, # num(tab.f1f2[nv2+1,1]), # num(tab.f1f2[nv2+1,2]),alpha.t) if(transf !="1"){sk$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(sk)]} skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) 
datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra} } #------------------------------------- # Desdobramento de F2 dentro de F1 #------------------------------------- cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Analyzing ", fac[2], " inside of the level of ",fac[1]))) cat(green(bold("\n-----------------------------------------------------------------\n"))) l1 <- names(summary(Fator1)) sq <- numeric(0) for (k in 1:nv1) { soma <- numeric(0) for (j in 1:nv2) { parc <- resp[Fator1 == levels(Fator1)[k] & Fator2 == levels(Fator2)[j]] q.som <- length(parc) soma <- c(soma, sum(parc))} sq <- c(sq, sum(soma^2)/q.som-sum(soma)^2/(q.som*length(soma)))} gl.f2f1 <- c(rep(nv2 - 1, nv1), tab[5, 1]) sq <- c(sq, as.numeric(tab[5, 2])) qm.f2f1 <- sq/gl.f2f1 fc.f2f1 <- c(qm.f2f1[1:nv1]/num(tab[5, 3]), NA) p.f2f1 <- c(1 - pf(fc.f2f1, gl.f2f1, num(tab[5,1]))) tab.f2f1 <- data.frame(GL=gl.f2f1, SQ=sq, QM=qm.f2f1, Fc=fc.f2f1, "p-value"=p.f2f1) nome.f2f1 <- numeric(0) for (j in 1:nv1) {nome.f2f1 <- c(nome.f2f1, paste(fac[2], " : ",fac[1], " ", l1[j], " ", sep = ""))} nome.f2f1 <- c(nome.f2f1, "Error b") rownames(tab.f2f1) <- nome.f2f1 tab.f2f1 <- round(tab.f2f1, 6) tab.f2f1[nv1 + 1, 5] <- tab.f2f1[nv1 + 1, 4] <- "" rn<-numeric(0) for (i in 1:nv1) { rn <- c(rn, paste(paste(names.fat[2], ":", names.fat[1], sep = ""), lf1[i])) } rownames(tab.f2f1)=c(rn,"Error b") print(tab.f2f1) desdobramento2=tab.f2f1 #------------------------------------- # Teste de Tukey #------------------------------------- if(quali[1]==TRUE && quali[2]==TRUE){ if (mcomp == "tukey"){ tukeygrafico1=c() for (i in 1:nv1) { tukey=TUKEY(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(tukey$groups)=c("resp","groups") if(transf 
!="1"){tukey$groups$respo=tapply(response[Fator1 == l1[i]],fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(tukey$groups)]} tukeygrafico1[[i]]=tukey$groups[as.character(unique(fat[,2][fat[, 1] == l1[i]])),2] } letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if (mcomp == "duncan"){ duncangrafico1=c() for (i in 1:nv1) { duncan=duncan(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(duncan$groups)=c("resp","groups") if(transf !="1"){duncan$groups$respo=tapply(response[Fator1 == l1[i]],fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(duncan$groups)]} duncangrafico1[[i]]=duncan$groups[as.character(unique(fat[,2][fat[, 1] == l1[i]])),2] } letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if (mcomp == "lsd"){ lsdgrafico1=c() for (i in 1:nv1) { lsd=LSD(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(lsd$groups)=c("resp","groups") if(transf !="1"){lsd$groups$respo=tapply(response[Fator1 == l1[i]],fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(lsd$groups)]} lsdgrafico1[[i]]=lsd$groups[as.character(unique(fat[,2][fat[, 1] == l1[i]])),2] } letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv1) { respi=resp[fat[, 1] == l1[i]] trati=fat[,2][fat[, 1] == l1[i]] trati=factor(trati,unique(trati)) nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = num(tab.f2f1[nv1 +1, 1]), nrep = nrep, QME = num(tab.f2f1[nv1 + 1, 3]), alpha = alpha.t) sk=data.frame(resp=medias,groups=sk) # sk=sk(respi,trati, # num(tab.f2f1[nv1 +1, 1]), # num(tab.f2f1[nv1 + 1, 2]),alpha.t) if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati, mean, na.rm=TRUE)[rownames(sk)]} skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) letra1=toupper(letra1)} } # 
----------------------------- # Gráfico de colunas #------------------------------ if(quali[1]==TRUE & quali[2]==TRUE){ media=tapply(response,list(Fator1,Fator2), mean, na.rm=TRUE) # desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE) if(point=="mean_sd"){desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE)} if(point=="mean_se"){desvio=tapply(response,list(Fator1,Fator2), sd, na.rm=TRUE)/ sqrt(tapply(response,list(Fator1,Fator2), length))} graph=data.frame(f1=rep(rownames(media),length(colnames(media))), f2=rep(colnames(media),e=length(rownames(media))), media=as.vector(media), desvio=as.vector(desvio)) limite=graph$media+graph$desvio lim.y=limite[which.max(abs(limite))] if(is.na(ylim[1])==TRUE && lim.y<0){ylim=c(1.5*lim.y,0)} if(is.na(ylim[1])==TRUE && lim.y>0){ylim=c(0,1.5*lim.y)} rownames(graph)=paste(graph$f1,graph$f2) graph=graph[paste(rep(unique(Fator1),e=length(colnames(media))), rep(unique(Fator2),length(rownames(media)))),] graph$letra=letra graph$letra1=letra1 graph$f1=factor(graph$f1,levels = unique(Fator1)) graph$f2=factor(graph$f2,levels = unique(Fator2)) if(addmean==TRUE){graph$numero=paste(format(graph$media,digits = dec), graph$letra, graph$letra1, sep="")} if(addmean==FALSE){graph$numero=format(graph$media, digits = dec)} f1=graph$f1 f2=graph$f2 media=graph$media desvio=graph$desvio numero=graph$numero colint=ggplot(graph, aes(x=f1, y=media, fill=f2))+ geom_col(position = "dodge",color="black")+ ylab(ylab)+xlab(xlab)+theme+ylim(ylim) if(errorbar==TRUE){colint=colint+ geom_errorbar(data=graph, aes(ymin=media-desvio, ymax=media+desvio), width=0.3, position = position_dodge(width=0.9))} if(errorbar==TRUE){colint=colint+ geom_text(aes(y=media+sup+ if(sup<0){-desvio}else{desvio}, label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)} if(errorbar==FALSE){colint=colint+ geom_text(aes(y=media+sup, label=numero), position = position_dodge(width=0.9),angle=angle.label, hjust=hjust,size=labelsize)} 
colint=colint+ theme(text=element_text(size=textsize), axis.text = element_text(size=textsize,color="black"), axis.title = element_text(size=textsize,color="black"), legend.position = posi)+labs(fill=legend) if(angle !=0){colint=colint+theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} if(color=="gray"){colint=colint+scale_fill_grey()} print(colint) letras=paste(graph$letra,graph$letra1,sep="") matriz=data.frame(t(matrix(paste(format(graph$media, digits = dec), letras), ncol = length(levels(Fator1))))) rownames(matriz)=levels(Fator1) colnames(matriz)=levels(Fator2) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(matriz) message(black("\n\nAverages followed by the same lowercase letter in the column and uppercase in the row do not differ by the",mcomp,"(p<",alpha.t,")")) grafico=colint } if(quali[1]==FALSE && color=="gray"| quali[2]==FALSE && color=="gray"){ if(quali[2]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv2) { tukey=TUKEY(resp[fat[,2]==l2[i]],fat[,1][fat[,2]==l2[i]],num(tab.f1f2[nv2+1,1]),num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]),alpha.t) colnames(tukey$groups)=c("resp","groups") if(transf !="1"){tukey$groups$respo=tapply(response[Fator2 == l2[i]],fat[,1][fat[,2]==l2[i]],mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(tukey$groups) }} if (mcomp == "duncan"){ for (i in 1:nv2) { duncan=duncan(resp[fat[,2]==l2[i]],fat[,1][fat[,2]==l2[i]],num(tab.f1f2[nv2+1,1]),num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]),alpha.t) colnames(duncan$groups)=c("resp","groups") if(transf !="1"){duncan$groups$respo=tapply(response[Fator2 == l2[i]],fat[,1][fat[,2]==l2[i]],mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison 
of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(duncan$groups) } } if (mcomp == "lsd"){ for (i in 1:nv2) { lsd=LSD(resp[fat[,2]==l2[i]],fat[,1][fat[,2]==l2[i]], num(tab.f1f2[nv2+1,1]),num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]),alpha.t) colnames(lsd$groups)=c("resp","groups") if(transf !="1"){lsd$groups$respo=tapply(response[Fator2 == l2[i]],fat[,1][fat[,2]==l2[i]],mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(lsd$groups) }} if (mcomp == "sk"){ for (i in 1:nv2) { respi=resp[fat[,2]==l2[i]] trati=fat[,1][fat[,2]==l2[i]] trati=factor(trati,levels = unique(trati)) nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = num(tab.f1f2[nv2 +1, 1]), nrep = nrep, QME = num(tab.f1f2[nv2 + 1, 3]), alpha = alpha.t) sk=data.frame(resp=medias,groups=sk) # sk=sk(respi,trati, # num(tab.f2f1[nv1+1,1]), # num(tab.f2f1[nv1+1,2]),alpha.t) if(transf !="1"){sk$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(sk) }} } if(quali[2]==FALSE){ fator2a=fator2a#as.numeric(as.character(fator2)) grafico=polynomial2(fator2a, resp, fator1, grau = grau21, ylab=ylab, xlab=xlab, theme=theme, point=point, posi=posi, ylim=ylim, textsize=textsize, family=family, DFres = num(tab.f2f1[nv1+1,1]), SSq = num(tab.f2f1[nv1+1,2])) if(quali[1]==FALSE & quali[2]==FALSE){ graf=list(grafico,NA)} } if(quali[1]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv1) { tukey=TUKEY(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(tukey$groups)=c("resp","groups") if(transf !="1"){tukey$groups$respo=tapply(response[Fator1 == l1[i]],fat[,2][fat[, 1] == 
l1[i]],mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(tukey$groups) } } if (mcomp == "duncan"){ for (i in 1:nv1) { duncan=duncan(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(duncan$groups)=c("resp","groups") if(transf !="1"){duncan$groups$respo=tapply(response[Fator1 == l1[i]],fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(duncan$groups) } } if (mcomp == "lsd"){ for (i in 1:nv1) { lsd=LSD(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(lsd$groups)=c("resp","groups") if(transf !="1"){lsd$groups$respo=tapply(response[Fator1 == l1[i]],fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(lsd$groups) } } if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv1) { respi=resp[fat[, 1] == l1[i]] trati=fat[,2][fat[, 1] == l1[i]] trati=factor(trati,unique(trati)) nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = num(tab.f2f1[nv1 +1, 1]), nrep = nrep, QME = num(tab.f2f1[nv1 + 1, 3]), alpha = alpha.t) sk=data.frame(resp=medias,groups=sk) # sk=sk(respi,trati, # num(tab.f2f1[nv1 +1, 1]), # num(tab.f2f1[nv1 + 1, 2]),alpha.t) if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati, mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(sk) }} } if(quali[1]==FALSE){ 
fator1a=fator1a#as.numeric(as.character(fator1)) grafico=polynomial2(fator1a, resp, fator2, grau = grau12, ylab=ylab, xlab=xlab, theme=theme, point=point, posi=posi, ylim=ylim, textsize=textsize, family=family, DFres = num(tab.f1f2[nv2+1,1]), SSq = num(tab.f1f2[nv2+1,2])) if(quali[1]==FALSE & quali[2]==FALSE){ graf[[2]]=grafico grafico=graf} } } if(quali[1]==FALSE && color=="rainbow"| quali[2]==FALSE && color=="rainbow"){ if(quali[2]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv2) { tukey=TUKEY(resp[fat[,2]==l2[i]],fat[,1][fat[,2]==l2[i]], num(tab.f1f2[nv2+1,1]),num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]),alpha.t) colnames(tukey$groups)=c("resp","groups") if(transf !="1"){tukey$groups$respo=tapply(response[Fator2 == l2[i]],fat[,1][fat[,2]==l2[i]],mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(tukey$groups) }} if (mcomp == "duncan"){ for (i in 1:nv2) { duncan=duncan(resp[fat[,2]==l2[i]],fat[,1][fat[,2]==l2[i]],num(tab.f1f2[nv2+1,1]),num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]),alpha.t) colnames(duncan$groups)=c("resp","groups") if(transf !="1"){duncan$groups$respo=tapply(response[Fator2 == l2[i]],fat[,1][fat[,2]==l2[i]],mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(duncan$groups) } } if (mcomp == "lsd"){ for (i in 1:nv2) { lsd=LSD(resp[fat[,2]==l2[i]],fat[,1][fat[,2]==l2[i]],num(tab.f1f2[nv2+1,1]),num(tab.f1f2[nv2+1,2])/num(tab.f1f2[nv2+1,1]),alpha.t) colnames(lsd$groups)=c("resp","groups") if(transf !="1"){lsd$groups$respo=tapply(response[Fator2 == l2[i]],fat[,1][fat[,2]==l2[i]],mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(lsd$groups) }} if (mcomp == "sk"){ for (i in 1:nv2) 
{ respi=resp[fat[,2]==l2[i]] trati=fat[,1][fat[,2]==l2[i]] trati=factor(trati,levels = unique(trati)) nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = num(tab.f1f2[nv2 +1, 1]), nrep = nrep, QME = num(tab.f1f2[nv2 + 1, 3]), alpha = alpha.t) sk=data.frame(resp=medias,groups=sk) # sk=sk(respi,trati, # num(tab.f1f2[nv2+1,1]), # num(tab.f1f2[nv2+1,2]),alpha.t) if(transf !="1"){sk$respo=tapply(response[Fator2 == lf2[i]], trati,mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F1 within level",lf2[i],"of F2") cat("\n----------------------\n") print(sk)}} } if(quali[2]==FALSE){ fator2a=fator2a#as.numeric(as.character(fator2)) grafico=polynomial2_color(fator2a, resp, fator1, grau = grau21, ylab=ylab, xlab=xlab, theme=theme, point=point, posi=posi, ylim=ylim, textsize=textsize, family=family, DFres = num(tab.f2f1[nv1 +1, 1]), SSq = num(tab.f2f1[nv1 + 1, 2])) if(quali[1]==FALSE & quali[2]==FALSE){ graf=list(grafico,NA)} } if(quali[1]==FALSE){ if (mcomp == "tukey"){ for (i in 1:nv1) { tukey=TUKEY(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(tukey$groups)=c("resp","groups") if(transf !="1"){tukey$groups$respo=tapply(response[Fator1 == l1[i]],fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(tukey$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(tukey$groups) } } if (mcomp == "duncan"){ for (i in 1:nv1) { duncan=duncan(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(duncan$groups)=c("resp","groups") if(transf !="1"){duncan$groups$respo=tapply(response[Fator1 == l1[i]],fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(duncan$groups)]} cat("\n----------------------\n") 
cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(duncan$groups) } } if (mcomp == "lsd"){ for (i in 1:nv1) { lsd=LSD(resp[fat[, 1] == l1[i]], fat[,2][fat[, 1] == l1[i]], num(tab.f2f1[nv1 +1, 1]), num(tab.f2f1[nv1 + 1, 2])/num(tab.f2f1[nv1 +1, 1]),alpha.t) colnames(lsd$groups)=c("resp","groups") if(transf !="1"){lsd$groups$respo=tapply(response[Fator1 == l1[i]],fat[,2][fat[, 1] == l1[i]],mean, na.rm=TRUE)[rownames(lsd$groups)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(lsd$groups) } } if (mcomp == "sk"){ skgrafico1=c() for (i in 1:nv1) { respi=resp[fat[, 1] == l1[i]] trati=fat[,2][fat[, 1] == l1[i]] trati=factor(trati,unique(trati)) nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = num(tab.f2f1[nv1 +1, 1]), nrep = nrep, QME = num(tab.f2f1[nv1 + 1, 3]), alpha = alpha.t) sk=data.frame(resp=medias,groups=sk) # sk=sk(respi,trati, # num(tab.f2f1[nv1 +1, 1]), # num(tab.f2f1[nv1 + 1, 2]),alpha.t) if(transf !=1){sk$respo=tapply(response[Fator1 == lf1[i]],trati, mean, na.rm=TRUE)[rownames(sk)]} cat("\n----------------------\n") cat("Multiple comparison of F2 within level",lf1[i],"of F1") cat("\n----------------------\n") print(sk) }} } if(quali[1]==FALSE){ fator1a=fator1a#as.numeric(as.character(fator1)) grafico=polynomial2_color(fator1a, resp, fator2, grau = grau12, ylab=ylab, xlab=xlab, theme=theme, point=point, posi=posi, ylim=ylim, textsize=textsize, family=family, DFres = num(tab.f1f2[nv2 +1, 1]), SSq = num(tab.f1f2[nv2 + 1, 2])) if(quali[1]==FALSE & quali[2]==FALSE){ graf[[2]]=grafico grafico=graf} } } colints=grafico } if(as.numeric(tab[4, 5])>alpha.f){ names(graficos)=c("residplot","graph1","graph2") graficos}else{colints=list(residplot,colints)} }
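The final two-way table printed by the routine above pastes each cell mean together with a lowercase letter (comparisons within columns) and an uppercase letter (comparisons within rows), as in `paste(format(graph$media, digits = dec), letras)`. A minimal self-contained sketch of that assembly — the means and letters here are hypothetical example values, not package output:

```r
# Hypothetical cell means for a 2 x 2 (F1 x F2) interaction
media  <- c(10.2, 11.5, 9.8, 12.1)
letra  <- c("a", "a", "b", "a")   # lowercase: comparisons within columns
letra1 <- c("A", "B", "A", "A")   # uppercase: comparisons within rows
# Paste mean and letters, then fill a labeled matrix column-wise
tab <- matrix(paste(format(media, digits = 3), paste0(letra, letra1)),
              ncol = 2, dimnames = list(c("A1", "A2"), c("B1", "B2")))
tab
```

Reading the result: means sharing a lowercase letter within a column, or an uppercase letter within a row, do not differ by the chosen multiple comparison test.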
# ---- /scratch/gouwar.j/cran-all/cranData/AgroR/R/PSUBDIC_function.R ----
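The split-plot decompositions in the function below pool the plot error (Error A) and the subplot residual with Satterthwaite's approximation, e.g. `qmresf1f2 = (qmres[1] + (nv2 - 1) * qmres[2]) / nv2` with the matching degrees-of-freedom formula. A minimal sketch of that computation as a standalone helper — the function name `satterthwaite` and the numeric inputs are illustrative, not part of the package:

```r
# Combine two mean squares (qm1 with gl1 df, qm2 with gl2 df) for a
# comparison that crosses the plot and subplot strata; b is the number
# of levels of the factor averaged over (nv2 or nv3 in the code below).
satterthwaite <- function(qm1, gl1, qm2, gl2, b) {
  qmc <- (qm1 + (b - 1) * qm2) / b                    # combined mean square
  glc <- (qm1 + (b - 1) * qm2)^2 /
    (qm1^2 / gl1 + ((b - 1) * qm2)^2 / gl2)           # approximate df
  list(QM = qmc, DF = round(glc))
}
satterthwaite(12.4, 8, 6.1, 54, b = 4)
```

The rounded `DF` is what the function feeds to `pf()` when recomputing the p-values of the "Residuals combined" rows.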
#' Analysis: Plot subdivided into randomized blocks with a subplot in a double factorial scheme
#'
#' @description This function performs the analysis of a randomized block design in a split-plot with a subplot in a double factorial scheme.
#'
#' @param f1 Numeric or complex vector with plot levels
#' @param f2 Numeric or complex vector with split-plot levels
#' @param f3 Numeric or complex vector with split-split-plot levels
#' @param block Numeric or complex vector with blocks
#' @param resp Numeric vector with responses
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott and Duncan)
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param homog Homogeneity test of variances (\emph{default} is Bartlett)
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#'
#' @export
#' @return Analysis of variance of fixed effects and multiple comparison test of Tukey, Scott-Knott, LSD or Duncan.
#'
#' @examples
#' f1=rep(c("PD","PDE","C"), e = 40);f1=factor(f1,unique(f1))
#' f2=rep(c(300,400), e = 20,3);f2=factor(f2,unique(f2))
#' f3=rep(c("c1", "c2", "c3", "c4"), e = 5,6);f3=factor(f3,unique(f3))
#' bloco=rep(paste("B",1:5),24); bloco=factor(bloco,unique(bloco))
#' set.seed(10)
#' resp=rnorm(120,50,5)
#' PSUBFAT2DBC(f1,f2,f3,bloco,resp,alpha.f = 0.5) # force triple interaction
#' PSUBFAT2DBC(f1,f2,f3,bloco,resp,alpha.f = 0.4) # force double interaction

PSUBFAT2DBC=function(f1, f2, f3, block, resp,
                     alpha.f=0.05, alpha.t=0.05,
                     norm="sw", homog="bt", mcomp="tukey"){
  requireNamespace("crayon")
  fac.names=c("F1","F2","F3")
  Fator1=f1
  Fator2=f2
  Fator3=f3
  f1<-factor(f1,levels=unique(f1))
  f2<-factor(f2,levels=unique(f2))
  f3<-factor(f3,levels=unique(f3))
  nv1=length(levels(f1))
  nv2 = length(levels(f2))
  nv3 = length(levels(f3))
  bloco=factor(block,levels = unique(block))
  nbl<-length(summary(bloco))
  j<-(length(resp))/(nv1*nv2*nv3)
  lf1<-levels(f1); lf2<-levels(f2); lf3<-levels(f3)
  mod=aov(resp ~ f1 * f2 * f3 + Error(bloco:f1)+bloco)
  modres=aov(resp ~ f1 * f2 * f3 + bloco:f1 + bloco)
  a = summary(mod)
  if(norm=="sw"){norm1 = shapiro.test(modres$res)}
  if(norm=="li"){norm1=lillie.test(modres$residuals)}
  if(norm=="ad"){norm1=ad.test(modres$residuals)}
  if(norm=="cvm"){norm1=cvm.test(modres$residuals)}
  if(norm=="pearson"){norm1=pearson.test(modres$residuals)}
  if(norm=="sf"){norm1=sf.test(modres$residuals)}
  trat=as.factor(paste(f1,f2,f3))
  if(homog=="bt"){
    homog1 = bartlett.test(modres$res ~ trat)
    statistic=homog1$statistic
    phomog=homog1$p.value
    method=paste("Bartlett test","(",names(statistic),")",sep="")
  }
  if(homog=="levene"){
    homog1 = levenehomog(modres$res~trat)[1,]
    statistic=homog1$`F value`[1]
    phomog=homog1$`Pr(>F)`[1]
    method="Levene's Test (center = median)(F)"
    names(homog1)=c("Df", "statistic","p.value")}
  modresa=anova(modres)
  resids=modres$residuals/sqrt(modresa$`Mean Sq`[10])
  Ids=ifelse(resids>3 | resids<(-3), "darkblue","black")
residplot=ggplot(data=data.frame(resids,Ids), aes(y=resids,x=1:length(resids)))+ geom_point(shape=21,color="gray",fill="gray",size=3)+ labs(x="",y="Standardized residuals")+ geom_text(x=1:length(resids),label=1:length(resids), color=Ids,size=4.5)+ scale_x_continuous(breaks=1:length(resids))+ theme_classic()+theme(axis.text.y = element_text(size=12), axis.text.x = element_blank())+ geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Normality of errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""), Statistic=norm1$statistic, "p-value"=norm1$p.value) rownames(normal)="" print(normal) cat("\n") message(if(norm1$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors do not follow a normal distribution"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Homogeneity of Variances"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) homoge=data.frame(Method=method, Statistic=statistic, "p-value"=phomog) rownames(homoge)="" print(homoge) cat("\n") respad=modres$residuals/sqrt(modresa$`Mean Sq`[10]) out=respad[respad>3 | respad<(-3)] out=names(out) out=if(length(out)==0)("No discrepant point")else{out} message(if(homog1$p.value[1]>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, the variances are not homogeneous"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Additional Information"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) error1=data.frame(a$`Error: bloco:f1`[[1]]) error2=data.frame(a$`Error: Within`[[1]]) cat(paste("\nCV 1 (%) = ",round(sqrt(error1$Mean.Sq[3])/ mean(resp,na.rm=TRUE)*100,2))) cat(paste("\nCV 2 (%) = ",round(sqrt(error2$Mean.Sq[7])/ mean(resp,na.rm=TRUE)*100,2))) cat(paste("\nMean = ",round(mean(resp,na.rm=TRUE),4))) cat(paste("\nMedian = ",round(median(resp,na.rm=TRUE),4))) cat("\nPossible outliers = ", out) cat("\n") anava = rbind(data.frame(a$`Error: bloco:f1`[[1]]), data.frame(a$`Error: Within`[[1]])) rownames(anava) = c("F1", "block", "Error A", "F2", "F3", "F1 x F2", "F1 x F3", "F2 x F3", "F1 x F2 x F3", "Residuals") colnames(anava) = c("df", "SS", "MS", "F-value", "p") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Analysis of Variance"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(as.matrix(anava), na.print = "") qmres = c(anava$MS[3], anava$MS[10]) GLres = c(anava$df[3], anava$df[10]) if (anava$p[9] > alpha.f) { teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(italic(paste("Multiple Comparison Test:",teste,"\n")))) if (anava$p[1] < alpha.f & anava$p[6] > alpha.f & anava$p[7] > alpha.f){ if(mcomp=="tukey"){comp=TUKEY(resp, f1, DFerror = anava$df[3], MSerror = anava$MS[3]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="lsd"){comp=LSD(resp, f1, DFerror = anava$df[3], MSerror = anava$MS[3]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="duncan"){comp=duncan(resp, f1, DFerror = anava$df[3], MSerror = anava$MS[3]) comp=comp$groups 
colnames(comp)=c("resp","groups")} if(mcomp=="sk"){ nrep=table(f1)[1] medias=sort(tapply(resp,f1,mean),decreasing = TRUE) comp=scottknott(medias,alpha = alpha.t, df1 = anava$df[3], nrep = nrep, QME = anava$MS[3]) comp=data.frame(resp=medias,groups=comp)} cat("\n==================================\n") cat("F1") cat("\n==================================\n") print(comp)} if (anava$p[4] < alpha.f& anava$p[6] > alpha.f& anava$p[8] > alpha.f){ if(mcomp=="tukey"){comp=TUKEY(resp, f2, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="lsd"){comp=LSD(resp, f2, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="duncan"){comp=duncan(resp, f2, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="sk"){ nrep=table(f2)[1] medias=sort(tapply(resp,f2,mean),decreasing = TRUE) comp=scottknott(medias,alpha = alpha.t, df1 = anava$df[10], nrep = nrep, QME = anava$MS[10]) comp=data.frame(resp=medias,groups=comp)} cat("\n==================================\n") cat("F2") cat("\n==================================\n") print(comp)} if (anava$p[5] < alpha.f& anava$p[7] > alpha.f& anava$p[8] > alpha.f){ if(mcomp=="tukey"){comp=TUKEY(resp, f3, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="lsd"){comp=LSD(resp, f3, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="duncan"){comp=duncan(resp, f3, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="sk"){ nrep=table(f3)[1] medias=sort(tapply(resp,f3,mean),decreasing = TRUE) comp=scottknott(medias,alpha = alpha.t, df1 = anava$df[10], nrep = nrep, QME = anava$MS[10]) comp=data.frame(resp=medias,groups=comp)} cat("\n==================================\n") cat("F3") 
cat("\n==================================\n") print(comp)} cat("\n") } #============================================= # desdobramento de f1 x f2 #============================================= if (anava$p[9] > alpha.f & anava$p[6] < alpha.f) { mod = aov(resp ~ f1 / f2 + f2:f3 + f1:f3 + f1:f2:f3 + Error(bloco:f1)+bloco) l2 <- vector('list', nv1) names(l2) <- names(summary(f1)) v <- numeric(0) for (j in 1:nv1) { for (i in 0:(nv2 - 2)) v <- cbind(v, i * nv1 + j) l2[[j]] <- v v <- numeric(0) } des1.tab <- summary(mod, split = list('f1:f2' = l2)) desdf2f1 = data.frame(des1.tab$`Error: Within`[[1]]) colnames(desdf2f1) = c("Df", "Sum sq", "Mean Sq", "F value", "Pr(>F)") nlin = nrow(desdf2f1) desdf2f1 = desdf2f1[-c(nlin - 1, nlin - 2, nlin - 3), ] cat(green(bold("\n-----------------------------------------------------\n"))) cat("Analyzing ", fac.names[2], ' inside of each level of ', fac.names[1]) cat(green(bold("\n-----------------------------------------------------\n"))) print(as.matrix(desdf2f1), na.print = "") compf1f2=c() letterf1f2=c() for (i in 1:nv1) { trat1 = f2[f1 == levels(f1)[i]] resp1 = resp[f1 == levels(f1)[i]] nrep=table(trat1)[1] if(mcomp=="tukey"){comp = TUKEY(resp1, trat1, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="lsd"){comp = LSD(resp1, trat1, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="duncan"){comp = duncan(resp1, trat1, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="sk"){ medias=sort(tapply(resp1,trat1,mean),decreasing = TRUE) comp = scottknott(medias,df1 = anava$df[10], QME = anava$MS[10],nrep = nrep) comp=data.frame(resp=medias,groups=comp)} comp=comp[unique(as.character(f2)),] compf1f2[[i]]=comp$resp letterf1f2[[i]]=comp$groups # cat("\n======================\n") # cat(levels(f1)[i]) # cat("\n======================\n") # print(comp) } 
#=========================================== mod = aov(resp ~ f2 / f1 + f2:f3 + f1:f3 + f1:f2:f3 + Error(bloco / f2)) summary(mod) l1 <- vector('list', nv2) names(l1) <- names(summary(f2)) v <- numeric(0) for (j in 1:nv2) { for (i in 0:(nv1 - 2)) v <- cbind(v, i * nv2 + j) l1[[j]] <- v v <- numeric(0) } desd1.tab <- summary(mod, split = list('f2:f1' = l1)) desd = data.frame(desd1.tab$`Error: Within`[[1]]) nlinhas = nrow(desd) desd = desd[-c(1, nlinhas - 3, nlinhas - 2, nlinhas - 1, nlinhas), ] qmresf1f2 = (qmres[1] + (nv2 - 1) * qmres[2]) / nv2 nf1f2 = ((qmres[1] + (nv2 - 1) * qmres[2]) ^ 2) / ((qmres[1] ^ 2) / GLres[1] + (((nv2 - 1) * qmres[2]) ^ 2) / GLres[2]) nf1f2 = round(nf1f2) desd$F.value = desd$Mean.Sq / qmresf1f2 nline = nrow(desd) for (i in 1:nline) { desd$Pr..F.[i] = 1 - pf(desd$F.value[i], desd$Df[i], nf1f2) } f1f2 = data.frame(desd1.tab$`Error: Within`[[1]])[1, ] desdf1f2 = rbind(f1f2, desd, c(nf1f2, qmresf1f2 / nf1f2, qmresf1f2, NA, NA)) nline1 = nrow(desdf1f2) rownames(desdf1f2)[nline1] = "Residuals combined" colnames(desdf1f2) = c("Df", "Sum sq", "Mean Sq", "F value", "Pr(>F)") cat(green(bold("\n-----------------------------------------------------\n"))) cat("Analyzing ", fac.names[1], ' inside of each level of ', fac.names[2]) cat(green(bold("\n-----------------------------------------------------\n"))) print(as.matrix(desdf1f2), na.print = "") compf2f1=c() letterf2f1=c() for (i in 1:nv2) { trat1 = f1[f2 == levels(f2)[i]] resp1 = resp[f2 == levels(f2)[i]] if(mcomp=="tukey"){comp = TUKEY(resp1, trat1, DFerror = desdf1f2[nrow(desdf1f2), 1], MSerror = desdf1f2[nrow(desdf1f2), 3]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="lsd"){comp = LSD(resp1, trat1, DFerror = desdf1f2[nrow(desdf1f2), 1], MSerror = desdf1f2[nrow(desdf1f2), 3]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="duncan"){comp = duncan(resp1, trat1, DFerror = desdf1f2[nrow(desdf1f2), 1], MSerror = desdf1f2[nrow(desdf1f2), 3]) comp=comp$groups 
colnames(comp)=c("resp","groups")} if(mcomp=="sk"){ medias=sort(tapply(resp1,trat1,mean),decreasing = TRUE) comp = scottknott(medias,df1 = desdf1f2[nrow(desdf1f2), 1], QME = desdf1f2[nrow(desdf1f2), 3],nrep = nrep) comp=data.frame(resp=medias,groups=comp)} comp=comp[unique(as.character(f1)),] compf2f1[[i]]=comp$resp letterf2f1[[i]]=comp$groups # cat("\n======================\n") # cat(levels(f2)[i]) # cat("\n======================\n") # print(comp) } final=paste(round(unlist(compf1f2),3), paste(unlist(letterf1f2), toupper(c(t(matrix(unlist(letterf2f1),ncol=length(levels(f2)))))),sep = "")) final=data.frame(matrix(final,ncol=length(unique(f1)))) colnames(final)=as.character(unique(f1)) rownames(final)=as.character(unique(f2)) # cat("\n======================\n") # cat("Multiple comparasion") # cat("\n======================\n") # print(final) } #========================================================= # desdobramento de f1 x f3 #========================================================= if (anava$p[9] > alpha.f & anava$p[7] < alpha.f) { mod = aov(resp ~ f1 / f3 + f2:f3 + f1:f2 + f1:f2:f3 + Error(bloco / f1)) l3 <- vector('list', nv1) names(l3) <- names(summary(f1)) v <- numeric(0) for (j in 1:nv1) { for (i in 0:(nv3 - 2)) v <- cbind(v, i * nv1 + j) l3[[j]] <- v v <- numeric(0) } des1.tab <- summary(mod, split = list('f1:f3' = l3)) desdf3f1 = data.frame(des1.tab$`Error: Within`[[1]]) colnames(desdf3f1) = c("Df", "Sum sq", "Mean Sq", "F value", "Pr(>F)") nlin = nrow(desdf3f1) desdf3f1 = desdf3f1[-c(nlin - 1, nlin - 2, nlin - 3), ] cat(green(bold("\n-----------------------------------------------------\n"))) cat("Analyzing ", fac.names[3], ' inside of each level of ', fac.names[1]) cat(green(bold("\n-----------------------------------------------------\n"))) print(as.matrix(desdf3f1), na.print = "") compf1f3=c() letterf1f3=c() for (i in 1:nv1) { trat1 = f3[f1 == levels(f1)[i]] resp1 = resp[f1 == levels(f1)[i]] nrep=table(trat1)[1] if(mcomp=="tukey"){comp = TUKEY(resp1, 
trat1, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="lsd"){comp = LSD(resp1, trat1, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="duncan"){comp = duncan(resp1, trat1, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="sk"){ medias=sort(tapply(resp1,trat1,mean),decreasing = TRUE) comp = scottknott(medias,df1 = anava$df[10], QME = anava$MS[10],nrep = nrep) comp=data.frame(resp=medias,groups=comp)} comp=comp[unique(as.character(f3)),] compf1f3[[i]]=comp$resp letterf1f3[[i]]=comp$groups # cat("\n======================\n") # cat(levels(f1)[i]) # cat("\n======================\n") # print(comp) } #=========================================== mod = aov(resp ~ f3 / f1 + f1:f2 + f2:f3 + f1:f2:f3 + Error(bloco / f3)) summary(mod) l1 <- vector('list', nv3) names(l1) <- names(summary(f3)) v <- numeric(0) for (j in 1:nv3) { for (i in 0:(nv1 - 2)) v <- cbind(v, i * nv3 + j) l1[[j]] <- v v <- numeric(0) } desd1.tab <- summary(mod, split = list('f3:f1' = l1)) desd = data.frame(desd1.tab$`Error: Within`[[1]]) nlinhas = nrow(desd) desd = desd[-c(1, nlinhas - 3, nlinhas - 2, nlinhas - 1, nlinhas), ] qmresf1f3 = (qmres[1] + (nv3 - 1) * qmres[2]) / nv3 nf1f3 = ((qmres[1] + (nv3 - 1) * qmres[2]) ^ 2) / ((qmres[1] ^ 2) / GLres[1] + (((nv3 - 1) * qmres[2]) ^ 2) / GLres[2]) nf1f3 = round(nf1f3) desd$F.value = desd$Mean.Sq / qmresf1f3 nline = nrow(desd) for (i in 1:nline) { desd$Pr..F.[i] = 1 - pf(desd$F.value[i], desd$Df[i], nf1f3) } f1f3 = data.frame(desd1.tab$`Error: Within`[[1]])[1, ] desdf1f3 = rbind(f1f3, desd, c(nf1f3, qmresf1f3 / nf1f3, qmresf1f3, NA, NA)) nline1 = nrow(desdf1f3) rownames(desdf1f3)[nline1] = "Residuals combined" colnames(desdf1f3) = c("Df", "Sum sq", "Mean Sq", "F value", "Pr(>F)") cat(green(bold("\n-----------------------------------------------------\n"))) cat("Analyzing ", 
      fac.names[1], ' inside of each level of ', fac.names[3])
  cat(green(bold("\n-----------------------------------------------------\n")))
  print(as.matrix(desdf1f3), na.print = "")
  compf3f1=c()
  letterf3f1=c()
  for (i in 1:nv3) {
    trat1 = f1[f3 == levels(f3)[i]]
    resp1 = resp[f3 == levels(f3)[i]]
    nrep=table(trat1)[1]
    if(mcomp=="tukey"){comp = TUKEY(resp1, trat1,
                                    DFerror = desdf1f3[nrow(desdf1f3), 1],
                                    MSerror = desdf1f3[nrow(desdf1f3), 3])
    comp=comp$groups
    colnames(comp)=c("resp","groups")}
    if(mcomp=="lsd"){comp = LSD(resp1, trat1,
                                DFerror = desdf1f3[nrow(desdf1f3), 1],
                                MSerror = desdf1f3[nrow(desdf1f3), 3])
    comp=comp$groups
    colnames(comp)=c("resp","groups")}
    if(mcomp=="duncan"){comp = duncan(resp1, trat1,
                                      DFerror = desdf1f3[nrow(desdf1f3), 1],
                                      MSerror = desdf1f3[nrow(desdf1f3), 3])
    comp=comp$groups
    colnames(comp)=c("resp","groups")}
    if(mcomp=="sk"){
      medias=sort(tapply(resp1,trat1,mean),decreasing = TRUE)
      comp = scottknott(medias,df1 = desdf1f3[nrow(desdf1f3), 1],
                        QME = desdf1f3[nrow(desdf1f3), 3],nrep = nrep)
      comp=data.frame(resp=medias,groups=comp)}
    comp=comp[unique(as.character(f1)),]
    compf3f1[[i]]=comp$resp
    letterf3f1[[i]]=comp$groups }
  final=paste(round(unlist(compf1f3),3),
              paste(unlist(letterf1f3),
                    toupper(c(t(matrix(unlist(letterf3f1),ncol=length(levels(f3)))))),sep = ""))
  final=data.frame(matrix(final,ncol=length(unique(f1))))
  colnames(final)=as.character(unique(f1))
  rownames(final)=as.character(unique(f3))
  cat("\n======================\n")
  cat("Multiple comparison")
  cat("\n======================\n")
  print(final) }
  # decomposition of f2 x f3
  if (anava$p[9] > alpha.f & anava$p[8] < alpha.f) {
    mod = aov(resp ~ f2 / f3 + f1:f2 + f1:f3 + f1:f2:f3 + Error(bloco / f1))
    l3 <- vector('list', nv2)
    names(l3) <- names(summary(f2))
    v <- numeric(0)
    for (j in 1:nv2) {
      for (i in 0:(nv3 - 2)) v <- cbind(v, i * nv2 + j)
      l3[[j]] <- v
      v <- numeric(0) }
    des1.tab <- summary(mod, split = list('f2:f3' = l3))
    desdf3f2 = data.frame(des1.tab$`Error: Within`[[1]])
    colnames(desdf3f2) =
c("Df", "Sum sq", "Mean Sq", "F value", "Pr(>F)") nlin = nrow(desdf3f2) desdf3f2 = desdf3f2[-c(nlin - 1, nlin - 2, nlin - 3), ] cat(green(bold("\n-----------------------------------------------------\n"))) cat("Analyzing ", fac.names[3], ' inside of each level of ', fac.names[2]) cat(green(bold("\n-----------------------------------------------------\n"))) print(as.matrix(desdf3f2), na.print = "") compf2f3=c() letterf2f3=c() for (i in 1:nv2) { trat1 = f3[f2 == levels(f2)[i]] resp1 = resp[f2 == levels(f2)[i]] nrep=table(trat1)[1] if(mcomp=="tukey"){comp = TUKEY(resp1, trat1, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="lsd"){comp = LSD(resp1, trat1, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="duncan"){comp = duncan(resp1, trat1, DFerror = anava$df[10], MSerror = anava$MS[10]) comp=comp$groups colnames(comp)=c("resp","groups")} if(mcomp=="sk"){ medias=sort(tapply(resp1,trat1,mean),decreasing = TRUE) comp = scottknott(medias,df1 = anava$df[10], QME = anava$MS[10],nrep = nrep) comp=data.frame(resp=medias,groups=comp)} comp=comp[unique(as.character(f3)),] compf2f3[[i]]=comp$resp letterf2f3[[i]]=comp$groups } mod = aov(resp ~ f3 / f2 + f1:f2 + f1:f3 + f1:f2:f3 + Error(bloco / f1)) l2 <- vector('list', nv3) names(l2) <- names(summary(f3)) v <- numeric(0) for (j in 1:nv3) { for (i in 0:(nv2 - 2)) v <- cbind(v, i * nv3 + j) l2[[j]] <- v v <- numeric(0) } des1.tab <- summary(mod, split = list('f3:f2' = l2)) desdf2f3 = data.frame(des1.tab$`Error: Within`[[1]]) colnames(desdf2f3) = c("Df", "Sum sq", "Mean Sq", "F value", "Pr(>F)") nlin = nrow(desdf2f3) desdf2f3 = desdf2f3[-c(nlin - 1, nlin - 2, nlin - 3), ] cat(green(bold("\n-----------------------------------------------------\n"))) cat("Analyzing ", fac.names[2], ' inside of each level of ', fac.names[3]) cat(green(bold("\n-----------------------------------------------------\n"))) 
    print(as.matrix(desdf2f3), na.print = "")
    compf3f2=c()
    letterf3f2=c()
    for (i in 1:nv3) {
      trat1 = f2[f3 == levels(f3)[i]]
      resp1 = resp[f3 == levels(f3)[i]]
      nrep=table(trat1)[1]
      if(mcomp=="tukey"){comp = TUKEY(resp1, trat1,
                                      DFerror = anava$df[10],
                                      MSerror = anava$MS[10])
      comp=comp$groups
      colnames(comp)=c("resp","groups")}
      if(mcomp=="lsd"){comp = LSD(resp1, trat1,
                                  DFerror = anava$df[10],
                                  MSerror = anava$MS[10])
      comp=comp$groups
      colnames(comp)=c("resp","groups")}
      if(mcomp=="duncan"){comp = duncan(resp1, trat1,
                                        DFerror = anava$df[10],
                                        MSerror = anava$MS[10])
      comp=comp$groups
      colnames(comp)=c("resp","groups")}
      if(mcomp=="sk"){
        medias=sort(tapply(resp1,trat1,mean),decreasing = TRUE)
        comp = scottknott(medias,df1 = anava$df[10],
                          QME = anava$MS[10],nrep = nrep)
        comp=data.frame(resp=medias,groups=comp)}
      comp=comp[unique(as.character(f2)),]
      compf3f2[[i]]=comp$resp
      letterf3f2[[i]]=comp$groups }
    final=paste(round(unlist(compf2f3),3),
                paste(unlist(letterf2f3),
                      toupper(c(t(matrix(unlist(letterf3f2),ncol=length(levels(f3)))))),sep = ""))
    final=data.frame(matrix(final,ncol=length(unique(f2))))
    colnames(final)=as.character(unique(f2))
    rownames(final)=as.character(unique(f3))
    cat("\n======================\n")
    cat("Multiple comparison")
    cat("\n======================\n")
    print(final) }
  #==================================================
  # decomposition of f1 x f2 x f3
  #==================================================
  if (anava$p[9] < alpha.f) {
    # decomposition of f2
    m1=aov(resp~(f1*f3)/f2+Error(bloco/f1))
    summary(m1)
    pattern <- c(outer(levels(f1), levels(f3),
                       function(x,y) paste("f1",x,":f3",y,":",sep="")))
    des.tab <- sapply(pattern, simplify=FALSE, grep,
                      x=names(coef(m1$Within)[m1$Within$assign==4]))
    des1.tab <- summary(m1, split = list("f1:f3:f2" = des.tab))
    des1.tab=data.frame(des1.tab$`Error: Within`[[1]])
    nomes=expand.grid(levels(f1),levels(f3))
    nomes=paste(nomes$Var1,nomes$Var2)
    nomes=c("f3","f1:f3","f1:f3:f2",
            paste(" f1:f3:f2",nomes),"residuals")
    colnames(des1.tab) = c("Df",
                           "Sum sq", "Mean Sq", "F value", "Pr(>F)")
    rownames(des1.tab)=nomes
    cat(green(bold("\n-----------------------------------------------------\n")))
    cat("Analyzing ", fac.names[2], ' inside of each level of ',
        fac.names[1], 'and',fac.names[3])
    cat(green(bold("\n-----------------------------------------------------\n")))
    print(as.matrix(des1.tab[-c(1,2),]), na.print = "")
    fatores=data.frame(f1,f2,f3)
    ii<-0
    for(k in 1:nv1) {
      for(i in 1:nv3) {
        ii<-ii+1
        cat("\n\n------------------------------------------")
        cat('\n',fac.names[2],' within the combination of levels ',lf1[k],
            ' of ',fac.names[1],' and ',lf3[i],' of ',fac.names[3],'\n')
        cat("------------------------------------------\n")
        respi=resp[fatores[,1]==lf1[k] & fatores[,3]==lf3[i]]
        trati=fatores[,2][fatores[,1]==lf1[k] & fatores[,3]==lf3[i]]
        nlinhas=nrow(des1.tab)
        nrep=table(trati)[1]
        if(mcomp=="tukey"){comp = TUKEY(respi, trati,
                                        DFerror = des1.tab[nlinhas,1],
                                        MSerror = des1.tab[nlinhas,3])
        comp=comp$groups
        colnames(comp)=c("resp","groups")}
        if(mcomp=="lsd"){comp = LSD(respi, trati,
                                    DFerror = des1.tab[nlinhas,1],
                                    MSerror = des1.tab[nlinhas,3])
        comp=comp$groups
        colnames(comp)=c("resp","groups")}
        if(mcomp=="duncan"){comp = duncan(respi, trati,
                                          DFerror = des1.tab[nlinhas,1],
                                          MSerror = des1.tab[nlinhas,3])
        comp=comp$groups
        colnames(comp)=c("resp","groups")}
        if(mcomp=="sk"){
          medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
          comp = scottknott(medias,df1 = des1.tab[nlinhas,1],
                            QME = des1.tab[nlinhas,3],nrep = nrep)
          comp=data.frame(resp=medias,groups=comp)}
        print(comp)} }
    ####################################################
    # f3 inside each f1 x f2 combination - uses the ordinary residual MS
    m1=aov(resp~(f1*f2)/f3)
    anova(m1)
    pattern <- c(outer(levels(f1), levels(f2),
                       function(x,y) paste("f1",x,":f2",y,":",sep="")))
    des.tab <- sapply(pattern, simplify=FALSE, grep,
                      x=names(coef(m1)[m1$assign==4]))
    des1.tab <- summary(m1, split = list("f1:f2:f3" = des.tab))
    des1.tab=data.frame(des1.tab[[1]])
    nomes=expand.grid(levels(f1),levels(f2))
nomes=paste(nomes$Var1,nomes$Var2)
nomes=c("f1","f2","f1:f2","f1:f2:f3",
        paste(" f1:f2:f3:",nomes),"residuals")
colnames(des1.tab) = c("Df", "Sum sq", "Mean Sq", "F value", "Pr(>F)")
rownames(des1.tab)=nomes
cat(green(bold("\n-----------------------------------------------------\n")))
cat("Analyzing ", fac.names[3], ' inside of each level of ', fac.names[1], 'and',fac.names[2])
cat(green(bold("\n-----------------------------------------------------\n")))
print(as.matrix(des1.tab[-c(1,2,3),]), na.print = "")
ii<-0
for(k in 1:nv1) {
for(j in 1:nv2) {
ii<-ii+1
cat("\n\n------------------------------------------")
cat('\n',fac.names[3],' within the combination of levels ',lf1[k],
    ' of ',fac.names[1],' and ',lf2[j],' of ',fac.names[2],'\n')
cat("------------------------------------------\n")
trati=fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[j]]
respi=resp[fatores[,1]==lf1[k] & fatores[,2]==lf2[j]]
nrep=table(trati)[1]
nlinhas=nrow(des1.tab)
if(mcomp=="tukey"){comp = TUKEY(respi, trati, DFerror = des1.tab[nlinhas,1], MSerror = des1.tab[nlinhas,3])
comp=comp$groups
colnames(comp)=c("resp","groups")}
if(mcomp=="lsd"){comp = LSD(respi, trati, DFerror = des1.tab[nlinhas,1], MSerror = des1.tab[nlinhas,3])
comp=comp$groups
colnames(comp)=c("resp","groups")}
if(mcomp=="duncan"){comp = duncan(respi, trati, DFerror = des1.tab[nlinhas,1], MSerror = des1.tab[nlinhas,3])
comp=comp$groups
colnames(comp)=c("resp","groups")}
if(mcomp=="sk"){
medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
comp = scottknott(medias,df1 = des1.tab[nlinhas,1], QME = des1.tab[nlinhas,3],nrep = nrep)
comp=data.frame(resp=medias,groups=comp)}
print(comp)}
}
#=======================
# test of f1: unfolded within f2 and f3 - ordinary residual MS
m1=aov(resp~(f2*f3)/f1)
anova(m1)
pattern <- c(outer(levels(f2), levels(f3),
                   function(x,y) paste("f2",x,":f3",y,":",sep="")))
des.tab <- sapply(pattern, simplify=FALSE, grep,
                  x=names(coef(m1)[m1$assign==4]))
des1.tab <- summary(m1, split = list("f2:f3:f1" = des.tab))
nv23 = nv3 * nv2
qmresf1f3 = (qmres[1] + (nv23 - 1) * qmres[2]) / nv23
nf1f3 = ((qmres[1] + (nv23 - 1) * qmres[2]) ^ 2) /
  ((qmres[1] ^ 2) / GLres[1] + (((nv23 - 1) * qmres[2]) ^ 2) / GLres[2])
nf1f3 = round(nf1f3)
des1.tab=data.frame(des1.tab[[1]])
nl = nrow(des1.tab)
dtf2 = des1.tab[-c(1, 2, 3, 4,nl), ]
nline = nrow(dtf2)
for (i in 1:nline) {
  dtf2$F.value[i] = dtf2$Mean.Sq[i] / qmresf1f3
  dtf2$Pr..F.[i] = 1 - pf(dtf2$F.value[i], dtf2$Df[i], nf1f3)
}
f11 = dtf2[3, ]
desd = rbind(f11, dtf2, c(nf1f3, qmresf1f3 * nf1f3, qmresf1f3, NA, NA))
nline1 = nrow(desd)
rownames(desd)[nline1] = "Residuals combined"
colnames(desd) = c("Df", "Sum sq", "Mean Sq", "F value", "Pr(>F)")
nomes=expand.grid(levels(f2),levels(f3))
nomes=paste(nomes$Var1,nomes$Var2)
nomes=c("f3:f2:f1", paste(" f3:f2:f1:",nomes),"Residuals combined")
rownames(desd)=nomes
cat(green(bold("\n-----------------------------------------------------\n")))
cat("Analyzing ", fac.names[1], ' inside of each level of ', fac.names[2], 'and',fac.names[3])
cat(green(bold("\n-----------------------------------------------------\n")))
print(as.matrix(desd), na.print = "")
ii<-0
for(i in 1:nv2) {
for(j in 1:nv3) {
ii<-ii+1
cat("\n\n------------------------------------------")
cat('\n',fac.names[1],' within the combination of levels ',lf2[i],' of ',fac.names[2],' and ',lf3[j],' of ',fac.names[3],"\n")
cat("------------------------------------------\n")
respi=resp[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]]
trati=fatores[,1][fatores[,2]==lf2[i] & fatores[,3]==lf3[j]]
nrep=table(trati)[1]
nlinhas=nrow(desd)
if(mcomp=="tukey"){comp = TUKEY(respi, trati, DFerror = desd[nlinhas,1], MSerror = desd[nlinhas,3])
comp=comp$groups
colnames(comp)=c("resp","groups")}
if(mcomp=="lsd"){comp = LSD(respi, trati, DFerror = desd[nlinhas,1], MSerror = desd[nlinhas,3])
comp=comp$groups
colnames(comp)=c("resp","groups")}
if(mcomp=="duncan"){comp = duncan(respi, trati, DFerror = desd[nlinhas,1], MSerror = desd[nlinhas,3])
comp=comp$groups
colnames(comp)=c("resp","groups")}
if(mcomp=="sk"){
medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
comp = scottknott(medias,df1 = desd[nlinhas,1], QME = desd[nlinhas,3],nrep = nrep)
comp=data.frame(resp=medias,groups=comp)}
print(comp)}
}
}
}
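# Every interaction unfolding above pools two error strata into a combined
# mean square and approximates its degrees of freedom with Satterthwaite's
# formula:
#   MS_comb = (MS_a + (k - 1) * MS_b) / k
#   df_comb = (MS_a + (k - 1) * MS_b)^2 /
#             (MS_a^2 / df_a + ((k - 1) * MS_b)^2 / df_b)
# A minimal commented sketch of that calculation (the helper name and the
# toy mean squares below are illustrative only and not part of the package):
#
# satterthwaite_df <- function(ms_a, df_a, ms_b, df_b, k) {
#   ms_comb <- (ms_a + (k - 1) * ms_b) / k
#   df_comb <- (ms_a + (k - 1) * ms_b)^2 /
#     (ms_a^2 / df_a + ((k - 1) * ms_b)^2 / df_b)
#   c(ms = ms_comb, df = round(df_comb))
# }
# satterthwaite_df(ms_a = 4, df_a = 6, ms_b = 2, df_b = 18, k = 3)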
# End of file: PSUBFAT2DBC_function.R
#' Analysis: DBC experiments in split-split-plot
#' @description Analysis of an experiment conducted in a randomized block design in a split-split-plot scheme using analysis of variance of fixed effects.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param f1 Numeric or complex vector with plot levels
#' @param f2 Numeric or complex vector with split-plot levels
#' @param f3 Numeric or complex vector with split-split-plot levels
#' @param block Numeric or complex vector with blocks
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Duncan or Scott-Knott)
#' @param response Numeric vector with responses
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param dec Number of decimal places (\emph{default} is 3)
#' @note The PSUBSUBDBC function does not provide residual analysis plots, graphs, or implementations of other multiple comparison or regression tests. The function only returns the analysis of variance and the Tukey, LSD, Duncan or Scott-Knott multiple comparison test.
#' @return Analysis of variance of fixed effects and the Tukey, LSD, Duncan or Scott-Knott multiple comparison test.
#' @keywords DBC #' @export #' @examples #' library(AgroR) #' data(enxofre) #' with(enxofre, PSUBSUBDBC(f1, f2, f3, bloco, resp)) PSUBSUBDBC=function(f1, f2, f3, block, response, alpha.f=0.05, alpha.t=0.05, dec=3, mcomp="tukey"){ fac.names=c("F1","F2","F3") fator1=as.factor(f1) fator2=as.factor(f2) fator3=as.factor(f3) bloco=as.factor(block) resp=response fatores<-data.frame(fator1,fator2,fator3) Fator1<-factor(fator1,levels=unique(fator1)) Fator2<-factor(fator2,levels=unique(fator2)) Fator3<-factor(fator3,levels=unique(fator3)) nv1<-length(summary(Fator1)) nv2<-length(summary(Fator2)) nv3<-length(summary(Fator3)) nbl<-length(summary(bloco)) J<-(length(response))/(nv1*nv2*nv3) lf1<-levels(Fator1); lf2<-levels(Fator2); lf3<-levels(Fator3) # pressupostos m1=aov(response~Fator1*Fator2*Fator3+bloco/Fator1+bloco/Fator1/Fator2) summary(m1) norm=shapiro.test(m1$residuals) homog=bartlett.test(m1$residuals~paste(Fator1,Fator2,Fator3)) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Normality of errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) normal=data.frame(Method=paste(norm$method,"(",names(norm$statistic),")",sep=""), Statistic=norm$statistic, "p-value"=norm$p.value) rownames(normal)="" print(normal) cat("\n") message(if(norm$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors do not follow a normal distribution"}) cat(green(bold("\n\n-----------------------------------------------------------------\n"))) cat(green(bold("Homogeneity of Variances"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) statistic1=homog$statistic phomog1=homog$p.value method1=paste("Bartlett test","(",names(statistic1),")",sep="") homoge1=data.frame(Method=method1, Statistic=statistic1, "p-value"=phomog1) rownames(homoge1)="" print(homoge1) cat("\n") message(if(homog$p.value[1]>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"}) mod=aov(response~Fator1*Fator2*Fator3+ Error(bloco/Fator1/paste(Fator1,Fator2))) a=summary(mod) anava=rbind(data.frame(a$`Error: bloco:Fator1`[[1]]), data.frame(a$`Error: bloco:Fator1:paste(Fator1, Fator2)`[[1]]), data.frame(a$`Error: Within`[[1]])) anavap=anava anava$F.value=ifelse(is.na(anava$F.value)==TRUE,"",round(anava$F.value,5)) anava$Pr..F.=ifelse(is.na(anava$Pr..F.)==TRUE,"",round(anava$Pr..F.,5)) rownames(anava)=c("F1","Error A","F2","F1 x F2", "Error B", "F3", "F1 x F3", "F2 x F3", "F1 x F2 x F3","Residuals") colnames(anava)=c("Df","Sum Sq","Mean Sq","F value","Pr(>F)") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Additional Information"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(paste("\nCV plot (%) = ",round(sqrt(anava$`Mean Sq`[2])/mean(resp,na.rm=TRUE)*100,2))) cat(paste("\nCV split plot (%) = ",round(sqrt(anava$`Mean Sq`[5])/mean(resp,na.rm=TRUE)*100,2))) cat(paste("\nCV split split plot (%) = ",round(sqrt(anava$`Mean Sq`[10])/mean(resp,na.rm=TRUE)*100,2))) cat(paste("\nMean = 
",round(mean(response,na.rm=TRUE),4))) cat(paste("\nMedian = ",round(median(response,na.rm=TRUE),4))) cat("\n") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Analysis of Variance"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(anava) fatores<-data.frame('fator 1'=fator1, 'fator 2' = fator2, 'fator 3' = fator3) qmres=c(as.numeric(anavap[2,3]), as.numeric(anavap[5,3]), as.numeric(anavap[10,3])) GL=c(as.numeric(anavap[2,1]), as.numeric(anavap[5,1]), as.numeric(anavap[10,1])) pvalor=c(as.numeric(anavap[1,5]), as.numeric(anavap[3,5]), as.numeric(anavap[6,5])) ################################################################################################ # Efeitos simples ################################################################################################ if(as.numeric(anavap[9,5])>alpha.f && as.numeric(anavap[8,5])>alpha.f && as.numeric(anavap[7,5])>alpha.f && as.numeric(anavap[4,5])>alpha.f) { graficos=list(1,2,3) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold('Non-significant interaction: analyzing the simple effects'))) cat(green(bold("\n------------------------------------------\n"))) for(i in 1:3){ if(pvalor[i]<=alpha.f) { cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) cat(green(bold("\n------------------------------------------\n"))) if(mcomp=="tukey"){ letra=TUKEY(response, fatores[,i], GL[i], qmres[i],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") print(letra1)} if(mcomp=="lsd"){ letra=LSD(response, fatores[,i], GL[i], qmres[i],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") print(letra1)} if(mcomp=="duncan"){ letra=duncan(response, fatores[,i], GL[i], qmres[i],alpha.t) letra1 <- letra$groups; colnames(letra1)=c("resp","groups") print(letra1)} if(mcomp=="sk"){ nrep=table(fatores[, i])[1] medias=sort(tapply(resp,fatores[, 
i],mean),decreasing = TRUE) letra=scottknott(means = medias, df1 = GL[i], nrep = nrep, QME = qmres[i], alpha = alpha.t) letra1=data.frame(resp=medias,groups=letra) teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(italic(paste("Multiple Comparison Test:",teste,"\n")))) print(letra1)} } if(pvalor[i]>alpha.f) { cat(fac.names[i]) cat(green(bold("\n------------------------------------------\n"))) mean.table<-mean_stat(response,fatores[,i],mean) colnames(mean.table)<-c('Levels','Mean') print(mean.table) grafico=NA cat(green(bold("\n------------------------------------------")))} cat('\n') } } ##################################################################### #Interacao Fator1*Fator2 + Fator3 ##################################################################### # corrigir para variancia complexa qmresf1f2=(qmres[1]+(nv2-1)*qmres[2])/nv2 # gl de Satterthwaite (nf1f2=((qmres[1]+(nv2-1)*qmres[2])^2)/ ((qmres[1]^2)/GL[1]+(((nv2-1)*qmres[2])^2)/GL[2])) nf1f2=round(nf1f2) if(as.numeric(anavap[9,5])>alpha.f && as.numeric(anavap[4,5])<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Interaction",paste(fac.names[1],'*',fac.names[2],sep='')," significant: unfolding the interaction"))) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[2], " inside of the level of ",fac.names[1]) cat(green(bold("\n------------------------------------------\n"))) ################################################################# #### desdobramento f2 dentro de f1 ################################################################# mod=aov(response~Fator1/Fator2+Fator2:Fator3+Fator1:Fator3+Fator1:Fator2:Fator3+ Error(bloco/Fator1/Fator2)) summary(mod) l2<-vector('list',nv1) names(l2)<-names(summary(Fator1)) v<-numeric(0) for(j in 1:nv1) { 
for(i in 0:(nv2-2)) v<-cbind(v,i*nv1+j) l2[[j]]<-v v<-numeric(0) } des1.tab<-summary(mod,split=list('Fator1:Fator2'=l2)) desdf2f1=data.frame(des1.tab$`Error: bloco:Fator1:Fator2`[[1]]) colnames(desdf2f1)=c("Df","Sum sq","Mean Sq", "F value","Pr(>F)") print(as.matrix(desdf2f1),na.print="") ############################################### #### desdobramento f1 dentro de f2 ############################################### cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[1], " inside of the level of ",fac.names[2]) cat(green(bold("\n------------------------------------------\n"))) mod=aov(response~Fator2/Fator1+Fator2:Fator3+ Fator1:Fator3+Fator1:Fator2:Fator3+ Error(bloco/Fator2)) summary(mod) l1<-vector('list',nv2) names(l1)<-names(summary(Fator2)) v<-numeric(0) for(j in 1:nv2) { for(i in 0:(nv1-2)) v<-cbind(v,i*nv2+j) l1[[j]]<-v v<-numeric(0) } desd1.tab<-summary(mod,split=list('Fator2:Fator1'=l1)) desd1.tab desd=data.frame(desd1.tab$`Error: Within`[[1]]) desd nlinhas=nrow(desd) desd=desd[-c(1,nlinhas-3,nlinhas-2,nlinhas-1,nlinhas),] qmresf1f2=(qmres[1]+(nv2-1)*qmres[2])/nv2 nf1f2=((qmres[1]+(nv2-1)*qmres[2])^2)/ ((qmres[1]^2)/GL[1]+(((nv2-1)*qmres[2])^2)/GL[2]) nf1f2=round(nf1f2) desd$F.value=desd$Mean.Sq/qmresf1f2 nline=nrow(desd) for(i in 1:nline){ desd$Pr..F.[i]=1-pf(desd$F.value[i],desd$Df[i],nf1f2)} f1f2=data.frame(desd1.tab$`Error: Within`[[1]])[1,] desdf1f2=rbind(f1f2,desd,c(nf1f2,qmresf1f2/nf1f2,qmresf1f2,NA,NA)) nline1=nrow(desdf1f2) rownames(desdf1f2)[nline1]="Residuals combined" colnames(desdf1f2)=c("Df","Sum sq","Mean Sq", "F value","Pr(>F)") print(as.matrix(desdf1f2),na.print = "") cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n------------------------------------------\n"))) if(mcomp=="tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator2 
== lf2[i]] tukey=TUKEY(respi,trati,nf1f2,qmresf1f2,alpha.t) tukeygrafico[[i]]=tukey$groups[levels(trati),2] ordem[[i]]=rownames(tukey$groups[levels(trati),]) } letra=unlist(tukeygrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra tukeygrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator1 == lf1[i]] tukey=TUKEY(respi,trati,GL[2],qmres[2],alpha.t) tukeygrafico1[[i]]=tukey$groups[levels(trati),2] } letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if(mcomp=="lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator2 == lf2[i]] lsd=LSD(respi,trati,nf1f2,qmresf1f2,alpha.t) lsdgrafico[[i]]=lsd$groups[levels(trati),2] ordem[[i]]=rownames(lsd$groups[levels(trati),]) } letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra lsdgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator1 == lf1[i]] lsd=LSD(respi,trati,GL[2],qmres[2],alpha.t) lsdgrafico1[[i]]=lsd$groups[levels(trati),2] } letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if(mcomp=="duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator2 == lf2[i]] duncan=duncan(respi,trati,nf1f2,qmresf1f2,alpha.t) duncangrafico[[i]]=duncan$groups[levels(trati),2] ordem[[i]]=rownames(duncan$groups[levels(trati),]) } letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra duncangrafico1=c() for (i in 1:nv1) { 
trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator1 == lf1[i]] duncan=duncan(respi,trati,GL[2],qmres[2],alpha.t) duncangrafico1[[i]]=duncan$groups[levels(trati),2] } letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if(mcomp=="sk"){ skgrafico=c() ordem=c() for (i in 1:nv2) { trati=fatores[, 1][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = nf1f2, nrep = nrep, QME = qmresf1f2, alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) # sk=sk(respi,trati,nf1f2,qmresf1f2/nf1f2,alpha.t) skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),]) } letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra skgrafico1=c() for (i in 1:nv1) { trati=fatores[, 2][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = GL[2], nrep = nrep, QME = qmres[2], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) skgrafico1[[i]]=sk[levels(trati),2] } letra1=unlist(skgrafico1) letra1=toupper(letra1)} f1=rep(levels(Fator1),e=length(levels(Fator2))) f2=rep(unique(as.character(Fator2)),length(levels(Fator1))) media=tapply(response,paste(Fator1,Fator2), mean, na.rm=TRUE)[unique(paste(f1,f2))] desvio=tapply(response,paste(Fator1,Fator2), sd, na.rm=TRUE)[unique(paste(f1,f2))] f1=factor(f1,levels = unique(f1)) f2=factor(f2,levels = unique(f2)) graph=data.frame(f1=f1, f2=f2, media, desvio, letra,letra1, numero=format(media,digits = dec)) numero=graph$numero letras=paste(graph$letra, graph$letra1, sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras), ncol = 
length(levels(Fator1)))))
rownames(matriz)=levels(Fator1)
colnames(matriz)=levels(Fator2)
print(matriz)
message(black("\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the", mcomp, "(p<",alpha.t,")\n"))
# Check Fator3
if(as.numeric(anavap[7,5])>alpha.f && as.numeric(anavap[8,5])>alpha.f) {
i=3
if(pvalor[3]<=alpha.f) {
cat(green(bold("\n------------------------------------------\n")))
cat(green(italic('Analyzing the simple effects of the factor ',fac.names[3])))
cat(green(bold("\n------------------------------------------\n")))
cat(fac.names[i])
if(mcomp=="tukey"){letra=TUKEY(response,fatores[,i],GL[3],qmres[3],alpha.t)}
if(mcomp=="lsd"){letra=LSD(response,fatores[,i],GL[3],qmres[3],alpha.t)}
if(mcomp=="duncan"){letra=duncan(response,fatores[,i],GL[3],qmres[3],alpha.t)}
letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
print(letra1)
cat(green(bold("\n-----------------------------------------------------------------")))
}
}
}
#####################################################################################################
# Interaction Fator1*Fator3 + Fator2
#####################################################################################################
# correct for the complex (pooled) variance
qmresf1f3=(qmres[1]+(nv3-1)*qmres[3])/nv3
# Satterthwaite degrees of freedom
(nf1f3=((qmres[1]+(nv3-1)*qmres[3])^2)/
   ((qmres[1]^2)/GL[1]+(((nv3-1)*qmres[3])^2)/GL[3]))
nf1f3=round(nf1f3)
if(as.numeric(anavap[9,5])>alpha.f && as.numeric(anavap[7,5])<=alpha.f){
cat(green(bold("\n------------------------------------------\n")))
cat(green(bold("\nInteraction",paste(fac.names[1],'*',fac.names[3],sep='')," significant: unfolding the interaction\n")))
cat(green(bold("\n------------------------------------------\n")))
#################################################################
#### unfolding f3 within f1
#################################################################
cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[3], " inside of the level of ",fac.names[1]) cat(green(bold("\n------------------------------------------\n"))) mod=aov(response~Fator1/Fator3+Fator1*Fator2+ Fator1:Fator2:Fator3+ Error(bloco/Fator1/Fator2)) summary(mod) l3<-vector('list',nv1) names(l3)<-names(summary(Fator1)) v<-numeric(0) for(j in 1:nv1) { for(i in 0:(nv3-2)) v<-cbind(v,i*nv1+j) l3[[j]]<-v v<-numeric(0) } desd1.tab<-summary(mod,split=list('Fator1:Fator3'=l3)) desdf3f1=data.frame(desd1.tab$`Error: Within`[[1]]) nlines=nrow(desdf3f1) desdf3f1=desdf3f1[-(nlines-1),] colnames(desdf3f1)=c("Df","Sum sq","Mean Sq", "F value","Pr(>F)") print(as.matrix(desdf3f1),na.print="") ############################################### #### desdobramento f1 dentro de f3 ############################################### cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[1], " inside of the level of ",fac.names[3]) cat(green(bold("\n------------------------------------------\n"))) mod=aov(response~Fator3/Fator1+Fator2*Fator3+Fator1:Fator2:Fator3+ Error(bloco/Fator2)) summary(mod) l1<-vector('list',nv3) names(l1)<-names(summary(Fator3)) v<-numeric(0) for(j in 1:nv3) { for(i in 0:(nv1-2)) v<-cbind(v,i*nv3+j) l1[[j]]<-v v<-numeric(0) } desd1.tab<-summary(mod,split=list('Fator3:Fator1'=l1)) desd1.tab desd=data.frame(desd1.tab$`Error: Within`[[1]]) nlinhas=nrow(desd) desd=desd[-c(1,2,nlinhas-2,nlinhas-1,nlinhas),] qmresf1f3=(qmres[1]+(nv3-1)*qmres[3])/nv3 nf1f3=((qmres[1]+(nv3-1)*qmres[3])^2)/ ((qmres[1]^2)/GL[1]+(((nv3-1)*qmres[3])^2)/GL[3]) nf1f3=round(nf1f3) desd$F.value=desd$Mean.Sq/qmresf1f3 nline=nrow(desd) for(i in 1:nline){ desd$Pr..F.[i]=1-pf(desd$F.value[i],desd$Df[i],nf1f3)} f1f3=data.frame(desd1.tab$`Error: Within`[[1]])[2,] desdf1f3=rbind(f1f3,desd,c(nf1f3,qmresf1f3/nf1f3,qmresf1f3,NA,NA)) nline1=nrow(desdf1f3) rownames(desdf1f3)[nline1]="Residuals combined" 
colnames(desdf1f3)=c("Df","Sum sq","Mean Sq", "F value","Pr(>F)") print(as.matrix(desdf1f3),na.print = "") cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n------------------------------------------\n"))) if(mcomp=="tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator3 == lf3[i]] tukey=TUKEY(respi,trati,nf1f3,qmresf1f3,alpha.t) tukeygrafico[[i]]=tukey$groups[levels(trati),2] ordem[[i]]=rownames(tukey$groups[levels(trati),])} letra=unlist(tukeygrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra tukeygrafico1=c() for (i in 1:nv1) { trati=fatores[, 3][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator1 == lf1[i]] tukey=TUKEY(respi,trati,GL[3],qmres[3],alpha.t) tukeygrafico1[[i]]=tukey$groups[levels(trati),2]} letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if(mcomp=="lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator3 == lf3[i]] lsd=LSD(respi,trati,nf1f3,qmresf1f3,alpha.t) lsdgrafico[[i]]=lsd$groups[levels(trati),2] ordem[[i]]=rownames(lsd$groups[levels(trati),])} letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra lsdgrafico1=c() for (i in 1:nv1) { trati=fatores[, 3][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator1 == lf1[i]] lsd=LSD(respi,trati,GL[3],qmres[3],alpha.t) lsdgrafico1[[i]]=lsd$groups[levels(trati),2]} letra1=unlist(lsdgrafico1) letra1=toupper(letra1)} if(mcomp=="duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = 
unique(trati)) respi=response[Fator3 == lf3[i]] duncan=duncan(respi,trati,nf1f3,qmresf1f3,alpha.t) duncangrafico[[i]]=duncan$groups[levels(trati),2] ordem[[i]]=rownames(duncan$groups[levels(trati),])} letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra duncangrafico1=c() for (i in 1:nv1) { trati=fatores[, 3][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator1 == lf1[i]] duncan=duncan(respi,trati,GL[3],qmres[3],alpha.t) duncangrafico1[[i]]=duncan$groups[levels(trati),2]} letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if(mcomp=="sk"){ skgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 1][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator3 == lf3[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = nf1f3, nrep = nrep, QME = qmresf1f3, alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),])} letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra skgrafico1=c() for (i in 1:nv1) { trati=fatores[, 3][Fator1 == lf1[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator1 == lf1[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = GL[3], nrep = nrep, QME = qmres[3], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) skgrafico1[[i]]=sk[levels(trati),2]} letra1=unlist(skgrafico1) letra1=toupper(letra1)} f1=rep(levels(Fator1),e=length(levels(Fator3))) f3=rep(unique(as.character(Fator3)),length(levels(Fator1))) media=tapply(response,paste(Fator1,Fator3), mean, na.rm=TRUE)[unique(paste(f1,f3))] 
desvio=tapply(response,paste(Fator1,Fator3), sd, na.rm=TRUE)[unique(paste(f1,f3))] f1=factor(f1,levels = unique(f1)) f3=factor(f3,levels = unique(f3)) graph=data.frame(f1=f1, f3=f3, media, desvio, letra,letra1, numero=format(media,digits = dec)) numero=graph$numero letras=paste(graph$letra,graph$letra1,sep="") matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),ncol = length(levels(Fator1))))) rownames(matriz)=levels(Fator1) colnames(matriz)=levels(Fator3) print(matriz) message(black("\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the", mcomp, "(p<",alpha.t,")\n")) #Checar o Fator2 if(as.numeric(anavap[4,5])>alpha.f && as.numeric(anavap[6,5])>alpha.f) { i=2 cat(green(bold("\n------------------------------------------\n"))) cat(green(italic('Analyzing the simple effects of the factor ',fac.names[2]))) cat(green(bold("\n------------------------------------------\n"))) cat(fac.names[i]) if(mcomp=="tukey"){letra=TUKEY(response,fatores[,i], GL[2], qmres[2],alpha.t)} if(mcomp=="lsd"){letra=LSD(response,fatores[,i], GL[2], qmres[2],alpha.t)} if(mcomp=="duncan"){letra=duncan(response,fatores[,i], GL[2], qmres[2],alpha.t)} letra1 <- letra$groups; colnames(letra1)=c("resp","groups") print(letra1) cat(green(bold("\n-----------------------------------------------------------------"))) } } ###################################################################################################################### #Interacao Fator2*Fator3 + fator1 ###################################################################################################################### # corrigir para variancia complexa qmresf2f3=(qmres[2]+(nv3-1)*qmres[3])/nv3 # gl de Satterthwaite (nf2f3=((qmres[2]+(nv3-1)*qmres[3])^2)/ ((qmres[2]^2)/GL[2]+(((nv3-1)*qmres[3])^2)/GL[3])) nf2f3=round(nf2f3) if(as.numeric(anavap[9,5])>alpha.f && as.numeric(anavap[8,5])<=alpha.f){ 
cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Interaction",paste(fac.names[2],'*',fac.names[3],sep='')," significant: unfolding the interaction"))) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[2], ' inside of each level of ', fac.names[3]) cat(green(bold("\n------------------------------------------\n"))) #### desdobramento f3 dentro de f2 mod=aov(response~Fator1*Fator2+Fator2/Fator3+Fator1:Fator2:Fator3+ Error(bloco/Fator1/Fator2)) summary(mod) l3<-vector('list',nv2) names(l3)<-names(summary(Fator2)) v<-numeric(0) for(j in 1:nv2) { for(i in 0:(nv3-2)) v<-cbind(v,i*nv2+j) l3[[j]]<-v v<-numeric(0) } des1.tab<-summary(mod,split=list('Fator2:Fator3'=l3)) desd=data.frame(des1.tab$`Error: Within`[[1]]) nlinhas=nrow(desd) desdf3f2=desd[-c(nlinhas-1),] colnames(desdf3f2)=c("Df","Sum sq","Mean Sq", "F value","Pr(>F)") print(as.matrix(desdf3f2),na.print="") #### desdobramento f2 dentro de f3 cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[3], ' inside of each level of ', fac.names[2]) cat(green(bold("\n------------------------------------------\n"))) mod=aov(response~Fator1+Fator3/Fator2+Fator1:Fator2+Fator1:Fator2:Fator3+ Error(bloco/Fator1)) l2<-vector('list',nv3) names(l2)<-names(summary(Fator3)) v<-numeric(0) for(j in 1:nv3) { for(i in 0:(nv2-2)) v<-cbind(v,i*nv3+j) l2[[j]]<-v v<-numeric(0) } des1.tab<-summary(mod,split=list('Fator3:Fator2'=l2)) desd=data.frame(des1.tab$`Error: Within`[[1]]) nlinhas=nrow(desd) desd=desd[-c(1,2,nlinhas-1,nlinhas-2,nlinhas),] qmresf2f3=(qmres[2]+(nv3-1)*qmres[3])/nv3 nf2f3=((qmres[2]+(nv3-1)*qmres[3])^2)/ ((qmres[2]^2)/GL[2]+(((nv3-1)*qmres[3])^2)/GL[3]) nf2f3=round(nf2f3) desd$F.value=desd$Mean.Sq/qmresf2f3 nline=nrow(desd) for(i in 1:nline){ desd$Pr..F.[i]=1-pf(desd$F.value[i],desd$Df[i],nf2f3)} f3f2=data.frame(des1.tab$`Error: 
Within`[[1]])[2,] desdf2f3=rbind(f3f2,desd,c(nf2f3,qmresf2f3/nf2f3,qmresf2f3,NA,NA)) nline1=nrow(desdf2f3) rownames(desdf2f3)[nline1]="Residuals combined" colnames(desdf2f3)=c("Df","Sum sq","Mean Sq", "F value","Pr(>F)") print(as.matrix(desdf2f3),na.print ="") cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Final table"))) cat(green(bold("\n------------------------------------------\n"))) if(mcomp=="tukey"){ tukeygrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator3 == lf3[i]] tukey=TUKEY(respi,trati,nf2f3,qmresf2f3,alpha.t) tukeygrafico[[i]]=tukey$groups[levels(trati),2] ordem[[i]]=rownames(tukey$groups[levels(trati),])} letra=unlist(tukeygrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra tukeygrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator2 == lf2[i]] tukey=TUKEY(respi,trati,GL[3],qmres[3],alpha.t) tukeygrafico1[[i]]=tukey$groups[levels(trati),2]} letra1=unlist(tukeygrafico1) letra1=toupper(letra1)} if(mcomp=="lsd"){ lsdgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator3 == lf3[i]] lsd=LSD(respi,trati,nf2f3,qmresf2f3,alpha.t) lsdgrafico[[i]]=lsd$groups[levels(trati),2] ordem[[i]]=rownames(lsd$groups[levels(trati),])} letra=unlist(lsdgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra lsdgrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator2 == lf2[i]] lsd=LSD(respi,trati,GL[3],qmres[3],alpha.t) lsdgrafico1[[i]]=lsd$groups[levels(trati),2]} letra1=unlist(lsdgrafico1) 
letra1=toupper(letra1)} if(mcomp=="duncan"){ duncangrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator3 == lf3[i]] duncan=duncan(respi,trati,nf2f3,qmresf2f3,alpha.t) duncangrafico[[i]]=duncan$groups[levels(trati),2] ordem[[i]]=rownames(duncan$groups[levels(trati),])} letra=unlist(duncangrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra duncangrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator2 == lf2[i]] duncan=duncan(respi,trati,GL[3],qmres[3],alpha.t) duncangrafico1[[i]]=duncan$groups[levels(trati),2]} letra1=unlist(duncangrafico1) letra1=toupper(letra1)} if(mcomp=="sk"){ skgrafico=c() ordem=c() for (i in 1:nv3) { trati=fatores[, 2][Fator3 == lf3[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator3 == lf3[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = nf2f3, nrep = nrep, QME = qmresf2f3, alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) skgrafico[[i]]=sk[levels(trati),2] ordem[[i]]=rownames(sk[levels(trati),])} letra=unlist(skgrafico) datag=data.frame(letra,ordem=unlist(ordem)) datag$ordem=factor(datag$ordem,levels = unique(datag$ordem)) datag=datag[order(datag$ordem),] letra=datag$letra skgrafico1=c() for (i in 1:nv2) { trati=fatores[, 3][Fator2 == lf2[i]] trati=factor(trati,levels = unique(trati)) respi=response[Fator2 == lf2[i]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = GL[3], nrep = nrep, QME = qmres[3], alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) skgrafico1[[i]]=sk[levels(trati),2]} letra1=unlist(skgrafico1) letra1=toupper(letra1)} f2=rep(levels(Fator2),e=length(levels(Fator3))) 
f3=rep(unique(as.character(Fator3)),length(levels(Fator2)))
media=tapply(response,paste(Fator2,Fator3), mean, na.rm=TRUE)[unique(paste(f2,f3))]
desvio=tapply(response,paste(Fator2,Fator3), sd, na.rm=TRUE)[unique(paste(f2,f3))]
f2=factor(f2,levels = unique(f2))
f3=factor(f3,levels = unique(f3))
graph=data.frame(f2=f2, f3=f3, media, desvio, letra, letra1,
                 numero=format(media,digits = dec))
numero=graph$numero
letras=paste(graph$letra,graph$letra1,sep="")
matriz=data.frame(t(matrix(paste(format(graph$media,digits = dec),letras),
                           ncol = length(levels(Fator2)))))
rownames(matriz)=levels(Fator2)
colnames(matriz)=levels(Fator3)
print(matriz)
message(black("\nAverages followed by the same lowercase letter in the column and \nuppercase in the row do not differ by the multiple comparisons test (p<",alpha.t,")\n"))
# Check Fator1
if(as.numeric(anavap[4,5])>alpha.f && as.numeric(anavap[7,5])>alpha.f) {
  i<-1
  if(pvalor[i]<=alpha.f) {
    cat(green(bold("\n------------------------------------------\n")))
    cat(green(italic('Analyzing the simple effects of the factor ',fac.names[1])))
    cat(green(bold("\n------------------------------------------\n")))
    cat(fac.names[i])
    if(mcomp=="tukey"){letra=TUKEY(response,fatores[,i],GL[1],qmres[1],alpha.t)}
    if(mcomp=="lsd"){letra=LSD(response,fatores[,i],GL[1],qmres[1],alpha.t)}
    if(mcomp=="duncan"){letra=duncan(response,fatores[,i],GL[1],qmres[1],alpha.t)}
    letra1 <- letra$groups; colnames(letra1)=c("resp","groups")
    print(letra1)
    cat(green(bold("\n------------------------------------------\n")))
  }
}
}
#########################################################################################################################
# For a significant triple interaction, unfolding
#########################################################################################################################
qmresf2f3=(qmres[2]+(nv3-1)*qmres[3])/nv3
(nf2f3=((qmres[2]+(nv3-1)*qmres[3])^2)/
    ((qmres[2]^2)/GL[2]+(((nv3-1)*qmres[3])^2)/GL[3]))
nf2f3=round(nf2f3)
glconj=c(nf1f2,nf1f3,nf2f3)
qmconj=c(qmresf1f2,qmresf1f3,qmresf2f3) if(as.numeric(anavap[9,5])<=alpha.f){ cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("Interaction",paste(fac.names[1],'*',fac.names[2],'*',fac.names[3],sep='')," significant: unfolding the interaction"))) cat(green(bold("\n------------------------------------------\n"))) cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[1], ' inside of each level of ', fac.names[3], 'and',fac.names[2]) cat(green(bold("\n------------------------------------------\n"))) # testando f2 # substituir qmres qmresf1f3=(qmres[1]+(nv3-1)*qmres[3])/nv3 nf1f3=((qmres[1]+(nv3-1)*qmres[3])^2)/ ((qmres[1]^2)/GL[1]+(((nv3-1)*qmres[3])^2)/GL[3]) nf1f3=round(nf1f3) mod=aov(response~Fator3/Fator2/Fator1) summary(mod) l1<-vector('list',(nv2*nv3)) nomes=expand.grid(names(summary(Fator2)), names(summary(Fator3))) names(l1)<-paste(nomes$Var1,nomes$Var2) v<-numeric(0) for(j in 1:(nv3*nv2)) { for(i in 0:(nv1-2)) v<-cbind(v,i*nv3*nv2+j) l1[[j]]<-v v<-numeric(0) } dtf2a=data.frame(summary(mod,split=list('Fator3:Fator2:Fator1'=l1))[[1]]) nl=nrow(dtf2a) dtf2=dtf2a[-c(1,2,3,nl),] nline=nrow(dtf2) for(i in 1:nline){ dtf2$F.value[i]=dtf2$Mean.Sq[i]/qmresf1f3 dtf2$Pr..F.[i]=1-pf(dtf2$F.value[i],dtf2$Df[i],nf1f3) } f11=dtf2a[3,] desd=rbind(f11, dtf2, c(nf2f3,qmresf1f3*nf1f3,qmresf1f3,NA,NA)) nline1=nrow(desd) rownames(desd)[nline1]="Residuals combined" colnames(desd)=c("Df","Sum sq","Mean Sq", "F value","Pr(>F)") print(as.matrix(desd),na.print="") ii<-0 for(i in 1:nv2) { for(j in 1:nv3) { ii<-ii+1 cat("\n\n------------------------------------------") cat('\n',fac.names[1],' within the combination of levels ',lf2[i],' of ',fac.names[2],' and ',lf3[j],' of ',fac.names[3],"\n") cat("------------------------------------------\n") if(mcomp=="tukey"){ tukey=TUKEY(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], nf1f3, qmresf1f3, alpha.t) 
tukey=tukey$groups;colnames(tukey)=c("resp","letters") print(tukey)} if(mcomp=="lsd"){ lsd=LSD(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], nf1f3, qmresf1f3, alpha.t) lsd=lsd$groups;colnames(lsd)=c("resp","letters") print(lsd)} if(mcomp=="duncan"){ duncan=duncan(response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]], fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]], nf1f3, qmresf1f3, alpha.t) duncan=duncan$groups;colnames(duncan)=c("resp","letters") print(duncan)} if(mcomp=="sk"){ respi=response[fatores[,2]==lf2[i] & fatores[,3]==lf3[j]] trati=fatores[,1][Fator2==lf2[i] & Fator3==lf3[j]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = nf1f3, nrep = nrep, QME = qmresf1f3, alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) print(sk)} } } cat('\n\n') cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[2], ' inside of each level of ', fac.names[1], 'and',fac.names[3]) cat(green(bold("\n------------------------------------------\n"))) # desdobrando f1 qmresf2f3=(qmres[2]+(nv3-1)*qmres[3])/nv3 nf2f3=((qmres[2]+(nv3-1)*qmres[3])^2)/ ((qmres[2]^2)/GL[2]+(((nv3-1)*qmres[3])^2)/GL[3]) nf2f3=round(nf2f3) mod=aov(response~Fator2/Fator1/Fator3) summary(mod) l2<-vector('list',(nv1*nv3)) nomes=expand.grid(names(summary(Fator1)), names(summary(Fator3))) names(l2)<-paste(nomes$Var1,nomes$Var2) v<-numeric(0) for(j in 1:(nv3*nv1)) { for(i in 0:(nv2-2)) v<-cbind(v,i*nv1*nv3+j) l2[[j]]<-v v<-numeric(0) } dtf1a=data.frame(summary(mod,split=list('Fator2:Fator1:Fator3'=l2))[[1]]) nl=nrow(dtf1a) dtf1=dtf1a[-c(1,2,3,nl),] nline=nrow(dtf1) for(i in 1:nline){ dtf1$F.value[i]=dtf1$Mean.Sq[i]/qmresf2f3 dtf1$Pr..F.[i]=1-pf(dtf1$F.value[i],dtf1$Df[i],nf2f3) } f11=dtf1a[3,] desd=rbind(f11, dtf1, c(nf2f3,qmresf2f3*nf2f3,qmresf2f3,NA,NA)) nline1=nrow(desd) rownames(desd)[nline1]="Residuals combined" colnames(desd)=c("Df","Sum sq","Mean Sq", "F 
value","Pr(>F)") print(as.matrix(desd),na.print = "") ii<-0 for(k in 1:nv1) { for(j in 1:nv3) { ii<-ii+1 cat("\n\n------------------------------------------") cat('\n',fac.names[2],' within the combination of levels ',lf1[k],' of ',fac.names[1],' and ',lf3[j],' of ',fac.names[3],'\n') cat("------------------------------------------\n") if(mcomp=="tukey"){ tukey=TUKEY(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], nf2f3, qmresf2f3, alpha.t) tukey=tukey$groups;colnames(tukey)=c("resp","letters") print(tukey)} if(mcomp=="lsd"){ lsd=LSD(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], nf2f3, qmresf2f3, alpha.t) lsd=lsd$groups;colnames(lsd)=c("resp","letters") print(lsd)} if(mcomp=="duncan"){ duncan=duncan(response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]], fatores[,2][Fator1==lf1[k] & fatores[,3]==lf3[j]], nf2f3, qmresf2f3, alpha.t) duncan=duncan$groups;colnames(duncan)=c("resp","letters") print(duncan)} if(mcomp=="sk"){ respi=response[fatores[,1]==lf1[k] & fatores[,3]==lf3[j]] trati=fatores[,2][Fator1==lf1[k] & Fator3==lf3[j]] nrep=table(trati)[1] medias=sort(tapply(respi,trati,mean),decreasing = TRUE) sk=scottknott(means = medias, df1 = nf2f3, nrep = nrep, QME = qmresf2f3, alpha = alpha.t) sk=data.frame(respi=medias,groups=sk) print(sk)} } } cat(green(bold("\n------------------------------------------\n"))) cat("Analyzing ", fac.names[3], ' inside of each level of ', fac.names[1], 'and',fac.names[2]) cat(green(bold("\n------------------------------------------\n"))) mod=aov(response~Fator1/Fator2/Fator3+bloco+Error(bloco/fator1/fator2)) summary(mod) l3<-vector('list',(nv2*nv1)) nomes=expand.grid(names(summary(Fator1)), names(summary(Fator2))) names(l3)<-paste(nomes$Var1,nomes$Var2) v<-numeric(0) for(j in 1:(nv1*nv2)) { for(i in 0:(nv3-2)) v<-cbind(v,i*nv2*nv1+j) l3[[j]]<-v v<-numeric(0) } dtf33=summary(mod,split=list('Fator1:Fator2:Fator3'=l3)) 
dtf3=data.frame(dtf33$`Error: Within`[[1]])
colnames(dtf3)=c("Df","Sum sq","Mean Sq", "F value","Pr(>F)")
print(as.matrix(dtf3),na.print = "")
ii<-0
for(k in 1:nv1) {
  for(i in 1:nv2) {
    ii<-ii+1
    cat("\n\n------------------------------------------")
    cat('\n',fac.names[3],' within the combination of levels ',lf1[k],' of ',fac.names[1],' and ',lf2[i],' of ',fac.names[2],'\n')
    cat("------------------------------------------\n")
    if(mcomp=="tukey"){
      tukey=TUKEY(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                  fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                  GL[3], qmres[3], alpha.t)
      tukey=tukey$groups;colnames(tukey)=c("resp","letters")
      print(tukey)}
    if(mcomp=="lsd"){
      lsd=LSD(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
              fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
              GL[3], qmres[3], alpha.t)
      lsd=lsd$groups;colnames(lsd)=c("resp","letters")
      print(lsd)}
    if(mcomp=="duncan"){
      duncan=duncan(response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                    fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]],
                    GL[3], qmres[3], alpha.t)
      duncan=duncan$groups;colnames(duncan)=c("resp","letters")
      print(duncan)}
    if(mcomp=="sk"){
      respi=response[fatores[,1]==lf1[k] & fatores[,2]==lf2[i]]
      trati=fatores[,3][fatores[,1]==lf1[k] & fatores[,2]==lf2[i]]
      nrep=table(trati)[1]
      medias=sort(tapply(respi,trati,mean),decreasing = TRUE)
      # Use the subplot-level residual (GL[3], qmres[3]), consistent with the
      # Tukey, LSD and Duncan branches of this same unfolding
      sk=scottknott(means = medias, df1 = GL[3], nrep = nrep,
                    QME = qmres[3], alpha = alpha.t)
      sk=data.frame(respi=medias,groups=sk)
      print(sk)}
  }
}
}
}
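The unfoldings above repeatedly pool two error strata into a combined residual mean square, with its degrees of freedom obtained by a Satterthwaite-type approximation (e.g. `qmresf2f3`/`nf2f3`). A minimal standalone sketch of that pooling, with a hypothetical helper name and toy mean squares (not part of the package):

```r
# Satterthwaite-style pooling of two error strata, mirroring
# qmresf2f3 = (QM_b + (k-1)*QM_c)/k and the squared-sum df formula above.
satterthwaite_pool <- function(qm_b, df_b, qm_c, df_c, k) {
  qm_comb <- (qm_b + (k - 1) * qm_c) / k
  df_comb <- (qm_b + (k - 1) * qm_c)^2 /
    (qm_b^2 / df_b + ((k - 1) * qm_c)^2 / df_c)
  list(QM = qm_comb, Df = round(df_comb))
}

# Toy values: subplot-stratum QM = 2.5 (6 df), sub-subplot QM = 1.2 (18 df), k = 3
satterthwaite_pool(qm_b = 2.5, df_b = 6, qm_c = 1.2, df_c = 18, k = 3)
```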
#' Utils: Area under the curve
#' @description Performs the calculation of the area under the progress curve.
#' Initially created for plant disease assessment, where it is known as the
#' "area under the disease progress curve" (AUDPC), it can be adapted to
#' various areas of agrarian science.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @param data Data.frame containing the evaluations in columns. Column names
#' must be numeric, not dates or characters
#' @note Just enter the data. Exclude treatment columns. See example.
#' @return Returns a vector with the area values under the curve
#' @references
#'
#' Campbell, C. L., and Madden, L. V. (1990). Introduction to plant disease epidemiology. John Wiley and Sons.
#'
#' @seealso \link{transf}, \link{sketch}
#' @examples
#'
#' #=======================================
#' # Using the simulate1 dataset
#' #=======================================
#' data("simulate1")
#'
#' # Converting to a readable format for the function
#' dados=cbind(simulate1[simulate1$tempo==1,3],
#'             simulate1[simulate1$tempo==2,3],
#'             simulate1[simulate1$tempo==3,3],
#'             simulate1[simulate1$tempo==4,3],
#'             simulate1[simulate1$tempo==5,3],
#'             simulate1[simulate1$tempo==6,3])
#' colnames(dados)=c(1,2,3,4,5,6)
#' dados
#'
#' # Creating the treatment vector
#' resp=aacp(dados)
#' trat=simulate1$trat[simulate1$tempo==1]
#'
#' # Analyzing by the DIC function
#' DIC(trat,resp)
#' @export
aacp=function(data){
  # Trapezoidal rule: sum of interval widths times mid-heights
  aac <- function(x, y){
    ox <- order(x)
    x <- x[ox]
    y <- y[ox]
    alt <- diff(x)
    bas <- y[-length(y)] + diff(y)/2
    a <- sum(alt*bas)
    return(a)}
  tempo=as.numeric(colnames(data))
  aacp=c(1:nrow(data))
  for(i in 1:nrow(data)){
    aacp[i]=aac(tempo, as.numeric(data[i,]))
  }
  print(aacp)
}
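As a quick sanity check of the trapezoidal rule implemented in `aacp()` above, a toy one-row dataset (this snippet assumes `aacp()` is already loaded; the data are made up):

```r
# One progress curve: y = 0, 10, 20 at times 1, 2, 3.
# Trapezoids: (0+10)/2 * 1 + (10+20)/2 * 1 = 5 + 15 = 20
dados <- rbind(c(0, 10, 20))
colnames(dados) <- c(1, 2, 3)
aacp(dados)  # prints 20
```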
#' Dataset: Germination of seeds of \emph{Aristolochia} sp. as a function of temperature
#'
#' The data come from an experiment conducted at the Seed Analysis
#' Laboratory of the Agricultural Sciences Center of the State
#' University of Londrina, in which five temperatures (15, 20, 25,
#' 30 and 35 C) were evaluated in the germination of \emph{Aristolochia elegans}.
#' The experiment was conducted in a completely randomized
#' design with four replications of 25 seeds each.
#'
#' @docType data
#'
#' @usage data("aristolochia")
#'
#' @format data.frame containing data set
#' \describe{
#'   \item{\code{trat}}{numeric vector with factor 1}
#'   \item{\code{resp}}{Numeric vector with response}
#'   }
#' @keywords datasets
#' @seealso \link{cloro}, \link{laranja}, \link{enxofre}, \link{mirtilo}, \link{passiflora}, \link{phao}, \link{porco}, \link{pomegranate}, \link{simulate1}, \link{simulate2}, \link{simulate3}, \link{tomate}, \link{weather}
#' @examples
#' data(aristolochia)
"aristolochia"
# funcoes auxiliares para as funcoes de analise # algumas sao do pacote agricolae # Mendiburu, F., and de Mendiburu, M. F. (2019). Package ‘agricolae’. R Package, Version, 1-2. mean_stat <- function (y, x, stat = "mean") {k<-0 numerico<- NULL if(is.null(ncol(x))){ if(is.numeric(x)){ k<-1 numerico[1]<-1}} else{ ncolx<-ncol(x) for (i in 1:ncolx) { if(is.numeric(x[,i])){ k<-k+1 numerico[k]<-i }}} cx <- deparse(substitute(x)) cy <- deparse(substitute(y)) x <- data.frame(c1 = 1, x) y <- data.frame(v1 = 1, y) nx <- ncol(x) ny <- ncol(y) namex <- names(x) namey <- names(y) if (nx == 2) namex <- c("c1", cx) if (ny == 2) namey <- c("v1", cy) namexy <- c(namex, namey) for (i in 1:nx) { x[, i] <- as.character(x[, i])} z <- NULL for (i in 1:nx){z <- paste(z, x[, i], sep = "&")} w <- NULL for (i in 1:ny) { m <- tapply(y[, i], z, stat) m <- as.matrix(m) w <- cbind(w, m)} nw <- nrow(w) c <- rownames(w) v <- rep("", nw * nx) dim(v) <- c(nw, nx) for (i in 1:nw) { for (j in 1:nx) { v[i, j] <- strsplit(c[i], "&")[[1]][j + 1]}} rownames(w) <- NULL junto <- data.frame(v[, -1], w) junto <- junto[, -nx] names(junto) <- namexy[c(-1, -(nx + 1))] if(k==1 & nx==2) { junto[,numerico[1]]<-as.character(junto[,numerico[1]]) junto[,numerico[1]]<-as.numeric(junto[,numerico[1]]) junto<-junto[order(junto[,1]),]} if (k>0 & nx > 2) { for (i in 1:k){ junto[,numerico[i]]<-as.character(junto[,numerico[i]]) junto[,numerico[i]]<-as.numeric(junto[,numerico[i]])} junto<-junto[do.call("order", c(junto[,1:(nx-1)])),]} rownames(junto)<-1:(nrow(junto)) return(junto)} levenehomog <- function (y, ...) { UseMethod("levenehomog")} levenehomog.default <- function (y, group, center=median, ...) { if (!is.numeric(y)) stop(deparse(substitute(y)), " is not a numeric variable") if (!is.factor(group)){warning(deparse(substitute(group)), " coerced to factor.") group <- as.factor(group)} valid <- complete.cases(y, group) meds <- tapply(y[valid], group[valid], center, ...) 
resp <- abs(y - meds[group]) table <- anova(lm(resp ~ group))[, c(1, 4, 5)] rownames(table)[2] <- " " dots <- deparse(substitute(...)) attr(table, "heading") <- paste("Levene's Test (center = ", deparse(substitute(center)), if(!(dots == "NULL")) paste(":", dots), ")", sep="") table} levenehomog.formula <- function(y, data, ...) { form <- y mf <- if (missing(data)) model.frame(form) else model.frame(form, data) if (any(sapply(2:dim(mf)[2], function(j) is.numeric(mf[[j]])))) stop("Levene's test is not appropriate with quantitative explanatory variables.") y <- mf[,1] if(dim(mf)[2]==2) group <- mf[,2] else { if (length(grep("\\+ | \\| | \\^ | \\:",form))>0) stop("Model must be completely crossed formula only.") group <- interaction(mf[,2:dim(mf)[2]])} levenehomog.default(y=y, group=group, ...)} levenehomog.lm <- function(y, ...) { m <- model.frame(y) m$..y <- model.response(m) f <- formula(y) f[2] <- expression(..y) levenehomog.formula(f, data=m, ...)} ordenacao=function (treatment, means, alpha, pvalue, console){ n <- length(means) z <- data.frame(treatment, means) letras<-c(letters[1:26],LETTERS[1:26],1:9, c(".","+","-","*","/","#","$","%","&","^","[","]",":", "@",";","_","?","!","=","#",rep(" ",2000))) w <- z[order(z[, 2], decreasing = TRUE), ] M<-rep("",n) k<-1 k1<-0 j<-1 i<-1 cambio<-n cambio1<-0 chequeo=0 M[1]<-letras[k] q <- as.numeric(rownames(w)) #Check while(j<n) { chequeo<-chequeo+1 if (chequeo > n) break for(i in j:n) { s<-pvalue[q[i],q[j]]>alpha if(s) { if(lastC(M[i]) != letras[k])M[i]<-paste(M[i],letras[k],sep="") } else { k<-k+1 cambio<-i cambio1<-0 ja<-j for(jj in cambio:n) M[jj]<-paste(M[jj],"",sep="") # El espacio M[cambio]<-paste(M[cambio],letras[k],sep="") for( v in ja:cambio) { if(pvalue[q[v],q[cambio]]<=alpha) {j<-j+1 cambio1<-1 } else break } break } } if (cambio1 ==0 )j<-j+1 } w<-data.frame(w,stat=M) trt <- as.character(w$treatment) means <- as.numeric(w$means) output <- data.frame(means, groups=M) rownames(output)<-trt if(k>81) 
cat("\n",k,"groups are estimated.The number of groups exceeded the maximum of 81 labels. change to group=FALSE.\n") invisible(output) } lastC <- function(x) { y<-sub(" +$", "",x) p1<-nchar(y) cc<-substr(y,p1,p1) return(cc)} duncan <- function(y, trt, DFerror, MSerror, alpha = 0.05, group = TRUE, main = NULL, console = FALSE) {name.y <- paste(deparse(substitute(y))) name.t <- paste(deparse(substitute(trt))) if(is.null(main))main<-paste(name.y,"~", name.t) clase<-c("aov","lm") if("aov"%in%class(y) | "lm"%in%class(y)){ if(is.null(main))main<-y$call A<-y$model DFerror<-df.residual(y) MSerror<-deviance(y)/DFerror y<-A[,1] ipch<-pmatch(trt,names(A)) nipch<- length(ipch) for(i in 1:nipch){ if (is.na(ipch[i])) return(if(console)cat("Name: ", trt, "\n", names(A)[-1], "\n"))} name.t<- names(A)[ipch][1] trt <- A[, ipch] if (nipch > 1){ trt <- A[, ipch[1]] for(i in 2:nipch){ name.t <- paste(name.t,names(A)[ipch][i],sep=":") trt <- paste(trt,A[,ipch[i]],sep=":") }} name.y <- names(A)[1] } junto <- subset(data.frame(y, trt), is.na(y) == FALSE) Mean<-mean(junto[,1]) CV<-sqrt(MSerror)*100/Mean medians<-mean_stat(junto[,1],junto[,2],stat="median") for(i in c(1,5,2:4)) { x <- mean_stat(junto[,1],junto[,2],function(x)quantile(x)[i]) medians<-cbind(medians,x[,2]) } medians<-medians[,3:7] names(medians)<-c("Min","Max","Q25","Q50","Q75") means <- mean_stat(junto[,1],junto[,2],stat="mean") # change sds <- mean_stat(junto[,1],junto[,2],stat="sd") #change nn <- mean_stat(junto[,1],junto[,2],stat="length") # change means<-data.frame(means,std=sds[,2],r=nn[,2],medians) names(means)[1:2]<-c(name.t,name.y) ntr<-nrow(means) Tprob<-NULL k<-0 for(i in 2:ntr){ k<-k+1 x <- suppressWarnings(warning(qtukey((1-alpha)^(i-1), i, DFerror))) if(x=="NaN")break else Tprob[k]<-x } if(k<(ntr-1)){ for(i in k:(ntr-1)){ f <- Vectorize(function(x)ptukey(x,i+1,DFerror)-(1-alpha)^i) Tprob[i]<-uniroot(f, c(0,100))$root } } Tprob<-as.numeric(Tprob) nr <- unique(nn[,2]) if(console){ cat("\nStudy:", main) 
cat("\n\nDuncan's new multiple range test\nfor",name.y,"\n") cat("\nMean Square Error: ",MSerror,"\n\n") cat(paste(name.t,",",sep="")," means\n\n") print(data.frame(row.names = means[,1], means[,2:6])) } if(length(nr) == 1 ) sdtdif <- sqrt(MSerror/nr) else { nr1 <- 1/mean(1/nn[,2]) sdtdif <- sqrt(MSerror/nr1) } DUNCAN <- Tprob * sdtdif names(DUNCAN)<-2:ntr duncan<-data.frame(Table=Tprob,CriticalRange=DUNCAN) if ( group & length(nr) == 1 & console){ cat("\nAlpha:",alpha,"; DF Error:",DFerror,"\n") cat("\nCritical Range\n") print(DUNCAN)} if ( group & length(nr) != 1 & console) cat("\nGroups according to probability of means differences and alpha level(",alpha,")\n") if ( length(nr) != 1) duncan<-NULL Omeans<-order(means[,2],decreasing = TRUE) #correccion 2019, 1 abril. Ordindex<-order(Omeans) comb <-utils::combn(ntr,2) nn<-ncol(comb) dif<-rep(0,nn) DIF<-dif LCL<-dif UCL<-dif pvalue<-dif odif<-dif sig<-NULL for (k in 1:nn) { i<-comb[1,k] j<-comb[2,k] dif[k]<-means[i,2]-means[j,2] DIF[k]<-abs(dif[k]) nx<-abs(i-j)+1 odif[k] <- abs(Ordindex[i]- Ordindex[j])+1 pvalue[k]<- round(1-ptukey(DIF[k]/sdtdif,odif[k],DFerror)^(1/(odif[k]-1)),4) LCL[k] <- dif[k] - DUNCAN[odif[k]-1] UCL[k] <- dif[k] + DUNCAN[odif[k]-1] sig[k]<-" " if (pvalue[k] <= 0.001) sig[k]<-"***" else if (pvalue[k] <= 0.01) sig[k]<-"**" else if (pvalue[k] <= 0.05) sig[k]<-"*" else if (pvalue[k] <= 0.1) sig[k]<-"." 
} if(!group){ tr.i <- means[comb[1, ],1] tr.j <- means[comb[2, ],1] comparison<-data.frame("difference" = dif, pvalue=pvalue,"signif."=sig,LCL,UCL) rownames(comparison)<-paste(tr.i,tr.j,sep=" - ") if(console){cat("\nComparison between treatments means\n\n") print(comparison)} groups=NULL } if (group) { comparison=NULL Q<-matrix(1,ncol=ntr,nrow=ntr) p<-pvalue k<-0 for(i in 1:(ntr-1)){ for(j in (i+1):ntr){ k<-k+1 Q[i,j]<-p[k] Q[j,i]<-p[k] } } groups <- ordenacao(means[, 1], means[, 2],alpha, Q,console) names(groups)[1]<-name.y if(console) { cat("\nMeans with the same letter are not significantly different.\n\n") print(groups) } } parameters<-data.frame(test="Duncan",name.t=name.t,ntr = ntr,alpha=alpha) statistics<-data.frame(MSerror=MSerror,Df=DFerror,Mean=Mean,CV=CV) rownames(parameters)<-" " rownames(statistics)<-" " rownames(means)<-means[,1] means<-means[,-1] output<-list(statistics=statistics,parameters=parameters, duncan=duncan, means=means,comparison=comparison,groups=groups) class(output)<-"group" invisible(output) } TUKEY <- function(y, trt, DFerror, MSerror, alpha=0.05, group=TRUE, main = NULL,unbalanced=FALSE,console=FALSE){ name.y <- paste(deparse(substitute(y))) name.t <- paste(deparse(substitute(trt))) if(is.null(main))main<-paste(name.y,"~", name.t) clase<-c("aov","lm") if("aov"%in%class(y) | "lm"%in%class(y)){ if(is.null(main))main<-y$call A<-y$model DFerror<-df.residual(y) MSerror<-deviance(y)/DFerror y<-A[,1] ipch<-pmatch(trt,names(A)) nipch<- length(ipch) for(i in 1:nipch){ if (is.na(ipch[i])) return(if(console)cat("Name: ", trt, "\n", names(A)[-1], "\n")) } name.t<- names(A)[ipch][1] trt <- A[, ipch] if (nipch > 1){ trt <- A[, ipch[1]] for(i in 2:nipch){ name.t <- paste(name.t,names(A)[ipch][i],sep=":") trt <- paste(trt,A[,ipch[i]],sep=":") }} name.y <- names(A)[1] } junto <- subset(data.frame(y, trt), is.na(y) == FALSE) Mean<-mean(junto[,1]) CV<-sqrt(MSerror)*100/Mean medians<-mean_stat(junto[,1],junto[,2],stat="median") for(i in c(1,5,2:4)) { x 
<- mean_stat(junto[,1],junto[,2],function(x)quantile(x)[i]) medians<-cbind(medians,x[,2]) } medians<-medians[,3:7] names(medians)<-c("Min","Max","Q25","Q50","Q75") means <- mean_stat(junto[,1],junto[,2],stat="mean") sds <- mean_stat(junto[,1],junto[,2],stat="sd") nn <- mean_stat(junto[,1],junto[,2],stat="length") means<-data.frame(means,std=sds[,2],r=nn[,2],medians) names(means)[1:2]<-c(name.t,name.y) ntr<-nrow(means) Tprob <- qtukey(1-alpha,ntr, DFerror) nr<-unique(nn[, 2]) nr1<-1/mean(1/nn[,2]) if(console){ cat("\nStudy:", main) cat("\n\nHSD Test for",name.y,"\n") cat("\nMean Square Error: ",MSerror,"\n\n") cat(paste(name.t,",",sep="")," means\n\n") print(data.frame(row.names = means[,1], means[,2:6])) cat("\nAlpha:",alpha,"; DF Error:",DFerror,"\n") cat("Critical Value of Studentized Range:", Tprob,"\n") } HSD <- Tprob * sqrt(MSerror/nr) statistics<-data.frame(MSerror=MSerror,Df=DFerror,Mean=Mean,CV=CV,MSD=HSD) if ( group & length(nr) == 1 & console) cat("\nMinimun Significant Difference:",HSD,"\n") if ( group & length(nr) != 1 & console) cat("\nGroups according to probability of means differences and alpha level(",alpha,")\n") if ( length(nr) != 1) statistics<-data.frame(MSerror=MSerror,Df=DFerror,Mean=Mean,CV=CV) comb <-utils::combn(ntr,2) nn<-ncol(comb) dif<-rep(0,nn) sig<-NULL LCL<-dif UCL<-dif pvalue<-rep(0,nn) for (k in 1:nn) { i<-comb[1,k] j<-comb[2,k] dif[k]<-means[i,2]-means[j,2] sdtdif<-sqrt(MSerror * 0.5*(1/means[i,4] + 1/means[j,4])) if(unbalanced)sdtdif<-sqrt(MSerror /nr1) pvalue[k]<- round(1-ptukey(abs(dif[k])/sdtdif,ntr,DFerror),4) LCL[k] <- dif[k] - Tprob*sdtdif UCL[k] <- dif[k] + Tprob*sdtdif sig[k]<-" " if (pvalue[k] <= 0.001) sig[k]<-"***" else if (pvalue[k] <= 0.01) sig[k]<-"**" else if (pvalue[k] <= 0.05) sig[k]<-"*" else if (pvalue[k] <= 0.1) sig[k]<-"." 
} if(!group){ tr.i <- means[comb[1, ],1] tr.j <- means[comb[2, ],1] comparison<-data.frame("difference" = dif, pvalue=pvalue,"signif."=sig,LCL,UCL) rownames(comparison)<-paste(tr.i,tr.j,sep=" - ") if(console){cat("\nComparison between treatments means\n\n") print(comparison)} groups=NULL } if (group) { comparison=NULL Q<-matrix(1,ncol=ntr,nrow=ntr) p<-pvalue k<-0 for(i in 1:(ntr-1)){ for(j in (i+1):ntr){ k<-k+1 Q[i,j]<-p[k] Q[j,i]<-p[k] } } groups <- ordenacao(means[, 1], means[, 2],alpha, Q,console) names(groups)[1]<-name.y if(console) { cat("\nTreatments with the same letter are not significantly different.\n\n") print(groups) } } parameters<-data.frame(test="Tukey",name.t=name.t,ntr = ntr, StudentizedRange=Tprob,alpha=alpha) rownames(parameters)<-" " rownames(statistics)<-" " rownames(means)<-means[,1] means<-means[,-1] output<-list(statistics=statistics,parameters=parameters, means=means,comparison=comparison,groups=groups) class(output)<-"group" invisible(output) } LSD = function(y, trt, DFerror, MSerror, alpha = 0.05, p.adj = c("none", "holm", "hommel", "hochberg", "bonferroni", "BH", "BY", "fdr"), group = TRUE, main = NULL, console = FALSE) { p.adj <- match.arg(p.adj) clase <- c("aov", "lm") name.y <- paste(deparse(substitute(y))) name.t <- paste(deparse(substitute(trt))) if(is.null(main))main<-paste(name.y,"~", name.t) if ("aov" %in% class(y) | "lm" %in% class(y)) { if(is.null(main))main<-y$call A <- y$model DFerror <- df.residual(y) MSerror <- deviance(y)/DFerror y <- A[, 1] ipch <- pmatch(trt, names(A)) nipch<- length(ipch) for(i in 1:nipch){ if (is.na(ipch[i])) return(if(console)cat("Name: ", trt, "\n", names(A)[-1], "\n"))} name.t<- names(A)[ipch][1] trt <- A[, ipch] if (nipch > 1){ trt <- A[, ipch[1]] for(i in 2:nipch){ name.t <- paste(name.t,names(A)[ipch][i],sep=":") trt <- paste(trt,A[,ipch[i]],sep=":") }} name.y <- names(A)[1]} junto <- subset(data.frame(y, trt), is.na(y) == FALSE) Mean<-mean(junto[,1]) CV<-sqrt(MSerror)*100/Mean 
medians<-mean_stat(junto[,1],junto[,2],stat="median") for(i in c(1,5,2:4)) { x <- mean_stat(junto[,1],junto[,2],function(x)quantile(x)[i]) medians<-cbind(medians,x[,2])} medians<-medians[,3:7] names(medians)<-c("Min","Max","Q25","Q50","Q75") means <- mean_stat(junto[, 1], junto[, 2], stat = "mean") sds <- mean_stat(junto[, 1], junto[, 2], stat = "sd") nn <- mean_stat(junto[, 1], junto[, 2], stat = "length") std.err <- sqrt(MSerror)/sqrt(nn[, 2]) Tprob <- qt(1 - alpha/2, DFerror) LCL <- means[, 2] - Tprob * std.err UCL <- means[, 2] + Tprob * std.err means <- data.frame(means, std=sds[,2], r = nn[, 2], LCL, UCL,medians) names(means)[1:2] <- c(name.t, name.y) ntr <- nrow(means) nk <- choose(ntr, 2) if (p.adj != "none") { a <- 1e-06 b <- 1 for (i in 1:100) { x <- (b + a)/2 xr <- rep(x, nk) d <- p.adjust(xr, p.adj)[1] - alpha ar <- rep(a, nk) fa <- p.adjust(ar, p.adj)[1] - alpha if (d * fa < 0) b <- x if (d * fa > 0) a <- x} Tprob <- qt(1 - x/2, DFerror) } nr <- unique(nn[, 2]) if(console){ cat("\nStudy:", main) if(console)cat("\n\nLSD t Test for", name.y, "\n") if (p.adj != "none")cat("P value adjustment method:", p.adj, "\n") cat("\nMean Square Error: ", MSerror, "\n\n") cat(paste(name.t, ",", sep = ""), " means and individual (", (1 - alpha) * 100, "%) CI\n\n") print(data.frame(row.names = means[, 1], means[, 2:8])) cat("\nAlpha:", alpha, "; DF Error:", DFerror) cat("\nCritical Value of t:", Tprob, "\n")} statistics<-data.frame(MSerror=MSerror,Df=DFerror,Mean=Mean,CV=CV) if (length(nr) == 1) LSD <- Tprob * sqrt(2 * MSerror/nr) if ( group & length(nr) == 1 & console) { if(p.adj=="none") cat("\nleast Significant Difference:",LSD,"\n") else cat("\nMinimum Significant Difference:",LSD,"\n")} if ( group & length(nr) != 1 & console) cat("\nGroups according to probability of means differences and alpha level(",alpha,")\n") if ( length(nr) == 1 & p.adj=="none") statistics<-data.frame(statistics, t.value=Tprob,LSD=LSD) if ( length(nr) == 1 & p.adj!="none") 
statistics<-data.frame(statistics, t.value=Tprob,MSD=LSD) LSD=" " comb <- utils::combn(ntr, 2) nn <- ncol(comb) dif <- rep(0, nn) pvalue <- dif sdtdif <- dif sig <- rep(" ", nn) for (k in 1:nn) { i <- comb[1, k] j <- comb[2, k] dif[k] <-means[i, 2] - means[j, 2] sdtdif[k] <- sqrt(MSerror * (1/means[i, 4] + 1/means[j,4])) pvalue[k] <- 2 * (1 - pt(abs(dif[k])/sdtdif[k], DFerror))} if (p.adj != "none") pvalue <- p.adjust(pvalue, p.adj) pvalue <- round(pvalue,4) for (k in 1:nn) { if (pvalue[k] <= 0.001) sig[k] <- "***" else if (pvalue[k] <= 0.01) sig[k] <- "**" else if (pvalue[k] <= 0.05) sig[k] <- "*" else if (pvalue[k] <= 0.1) sig[k] <- "."} tr.i <- means[comb[1, ], 1] tr.j <- means[comb[2, ], 1] LCL <- dif - Tprob * sdtdif UCL <- dif + Tprob * sdtdif comparison <- data.frame(difference = dif, pvalue = pvalue, "signif."=sig, LCL, UCL) if (p.adj !="bonferroni" & p.adj !="none"){ comparison<-comparison[,1:3] } rownames(comparison) <- paste(tr.i, tr.j, sep = " - ") if (!group) { if(console){ cat("\nComparison between treatments means\n\n") print(comparison)} groups <- NULL} if (group){ comparison=NULL Q<-matrix(1,ncol=ntr,nrow=ntr) p<-pvalue k<-0 for(i in 1:(ntr-1)){ for(j in (i+1):ntr){ k<-k+1 Q[i,j]<-p[k] Q[j,i]<-p[k]}} groups <- ordenacao(means[, 1], means[, 2],alpha, Q,console) names(groups)[1]<-name.y if(console) { cat("\nTreatments with the same letter are not significantly different.\n\n") print(groups)} } parameters<-data.frame(test="Fisher-LSD",p.ajusted=p.adj,name.t=name.t,ntr = ntr,alpha=alpha) rownames(parameters)<-" " rownames(statistics)<-" " rownames(means)<-means[,1] means<-means[,-1] output<-list(statistics=statistics,parameters=parameters, means=means,comparison=comparison,groups=groups) class(output)<-"group" invisible(output) } sk<-function(y, trt, DFerror, SSerror, alpha = 0.05, group = TRUE, main = NULL){ sk <- function(medias,s2,dfr,prob){ bo <- 0 si2 <- s2 defr <- dfr parou <- 1 np <- length(medias) - 1 for (i in 1:np){ g1 <- medias[1:i] g2 <- 
medias[(i+1):length(medias)] B0 <- sum(g1)^2/length(g1) + sum(g2)^2/length(g2) - (sum(g1) + sum(g2))^2/length(c(g1,g2)) if (B0 > bo) {bo <- B0 parou <- i} } g1 <- medias[1:parou] g2 <- medias[(parou+1):length(medias)] teste <- c(g1,g2) sigm2 <- (sum(teste^2) - sum(teste)^2/length(teste) + defr*si2)/(length(teste) + defr) lamb <- pi*bo/(2*sigm2*(pi-2)) v0 <- length(teste)/(pi-2) p <- pchisq(lamb,v0,lower.tail = FALSE) if (p < prob) { for (i in 1:length(g1)){ cat(names(g1[i]),"\n",file="sk_groups",append=TRUE)} cat("*","\n",file="sk_groups",append=TRUE)} if (length(g1)>1){sk(g1,s2,dfr,prob)} if (length(g2)>1){sk(g2,s2,dfr,prob)} } trt=factor(trt,unique(trt)) trt1=trt levels(trt)=paste("T",1:length(levels(trt)),sep = "") medias <- sort(tapply(y,trt,mean),decreasing=TRUE) dfr <- DFerror rep <- tapply(y,trt,length) s0 <- MSerror <-SSerror/DFerror s2 <- s0/rep[1] prob <- alpha sk(medias,s2,dfr,prob) f <- names(medias) names(medias) <- 1:length(medias) resultado <- data.frame("r"=0,"f"=f,"m"=medias) if (file.exists("sk_groups") == FALSE) {stop} else{ xx <- read.table("sk_groups") file.remove("sk_groups") x <- xx[[1]] x <- as.vector(x) z <- 1 for (j in 1:length(x)){ if (x[j] == "*") {z <- z+1} for (i in 1:length(resultado$f)){ if (resultado$f[i]==x[j]){ resultado$r[i] <- z;} } } } letras<-letters if(length(resultado$r)>26) { l<-floor(length(resultado$r)/26) for(i in 1:l) letras<-c(letras,paste(letters,i,sep='')) } res <- 1 for (i in 1:(length(resultado$r)-1)) { if (resultado$r[i] != resultado$r[i+1]){ resultado$r[i] <- letras[res] res <- res+1 if (i == (length(resultado$r)-1)){ resultado$r[i+1] <- letras[res]} } else{ resultado$r[i] <- letras[res] if (i == (length(resultado$r)-1)){ resultado$r[i+1] <- letras[res] } } } names(resultado) <- c("groups","Tratamentos","Means") resultado1=resultado[,c(3,1)] rownames(resultado1)=resultado$Tratamentos final=list(resultado1)[[1]] final=final[as.character(unique(trt)),] rownames(final)=as.character(unique(trt1)) final } 
scottknott=function(means, df1, QME, nrep, alpha=0.05){ sk1=function(means, df1, QME, nrep, alpha=alpha) { means=sort(means,decreasing=TRUE) n=1:(length(means)-1) n=as.list(n) f=function(n){list(means[c(1:n)],means[-c(1:n)])} g=lapply(n, f) b1=function(x){(sum(g[[x]][[1]])^2)/length(g[[x]][[1]]) + (sum(g[[x]][[2]])^2)/length(g[[x]][[2]])- (sum(c(g[[x]][[1]],g[[x]][[2]]))^2)/length(c(g[[x]][[1]],g[[x]][[2]]))} p=1:length(g) values=sapply(p,b1) minimo=min(values); maximo=max(values) alfa=(1/(length(means)+df1))*(sum((means-mean(means))^2)+(df1*QME/nrep)) lambda=(pi/(2*(pi-2)))*(maximo/alfa) vq=qchisq((alpha),lower.tail=FALSE, df=length(means)/(pi-2)) ll=1:length(values); da=data.frame(ll,values); da=da[order(-values),] ran=da$ll[1] r=g[[ran]]; r=as.list(r) i=ifelse(vq>lambda|length(means)==1, 1,2) means=list(means) res=list(means, r) return(res[[i]]) } u=sk1(means, df1, QME, nrep, alpha=alpha) u=lapply(u, sk1, df1=df1, QME=QME, nrep=nrep, alpha=alpha) sk2=function(u){ v1=function(...){c(u[[1]])} v2=function(...){c(u[[1]],u[[2]])} v3=function(...){c(u[[1]],u[[2]],u[[3]])} v4=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]])} v5=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]],u[[5]])} v6=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]],u[[5]],u[[6]])} v7=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]],u[[5]],u[[6]],u[[7]])} v8=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]],u[[5]],u[[6]],u[[7]],u[[8]])} v9=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]],u[[5]],u[[6]],u[[7]],u[[8]],u[[9]])} v10=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]],u[[5]],u[[6]],u[[7]],u[[8]],u[[9]],u[[10]])} lv=list(v1,v2,v3,v4,v5,v6,v7,v8,v9,v10) l=length(u) ti=lv[[l]] u=ti() u=lapply(u, sk1, df1=df1, QME=QME, nrep=nrep, alpha=alpha) return(u) } u=sk2(u);u=sk2(u);u=sk2(u);u=sk2(u);u=sk2(u) u=sk2(u);u=sk2(u);u=sk2(u);u=sk2(u);u=sk2(u) v1=function(...){c(u[[1]])} v2=function(...){c(u[[1]],u[[2]])} v3=function(...){c(u[[1]],u[[2]],u[[3]])} v4=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]])} 
v5=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]],u[[5]])} v6=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]],u[[5]],u[[6]])} v7=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]],u[[5]],u[[6]],u[[7]])} v8=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]],u[[5]],u[[6]],u[[7]],u[[8]])} v9=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]],u[[5]],u[[6]],u[[7]],u[[8]],u[[9]])} v10=function(...){c(u[[1]],u[[2]],u[[3]],u[[4]],u[[5]],u[[6]],u[[7]],u[[8]],u[[9]],u[[10]])} lv=list(v1,v2,v3,v4,v5,v6,v7,v8,v9,v10) l=length(u) ti=lv[[l]] u=ti() rp=u l2=lapply(rp, length) l2=unlist(l2) rp2=rep(letters[1:length(rp)], l2) return(rp2)}
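The sk() and scottknott() helpers above share one core criterion: among all cut points of the means sorted in decreasing order, choose the split that maximizes the between-group sum of squares (B0 in sk(), values in scottknott()). A minimal standalone sketch of that criterion — the function name and the toy means are illustrative only, not part of the package:

```r
# Scott-Knott split criterion: maximize the between-group sum of squares (B0)
# over all cut points of the means sorted in decreasing order.
# Toy means below are illustrative only.
best_split <- function(means) {
  means <- sort(means, decreasing = TRUE)
  n <- length(means)
  b0 <- sapply(1:(n - 1), function(i) {
    g1 <- means[1:i]; g2 <- means[(i + 1):n]
    sum(g1)^2 / length(g1) + sum(g2)^2 / length(g2) - sum(means)^2 / n
  })
  cut <- which.max(b0)
  list(g1 = means[1:cut], g2 = means[(cut + 1):n], B0 = max(b0))
}

res <- best_split(c(10.2, 9.8, 7.1, 6.9, 6.8))
res$B0
```

In the full procedure this split is then tested against a chi-squared threshold and applied recursively to each resulting group.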
# File: AgroR/R/auxiliar_function.R
#' Graph: Barplot for Dunnett test
#' @export
#' @description The function builds a column chart from the result of Dunnett's test.
#' @param output.dunnett Output object returned by the \link{dunnett} function
#' @param sup Number of units above the standard deviation or average bar on the graph
#' @param ylab Variable response name (Accepts the \emph{expression}() function)
#' @param xlab Treatments name (Accepts the \emph{expression}() function)
#' @param fill Fill column. Use a vector with two elements c(control, treatment different from control)
#' @param add.mean Plot the average value on the graph (\emph{default} is TRUE)
#' @param round Number of decimal places
#' @return Returns a column chart of Dunnett's test. The colors indicate difference from the control.
#' @importFrom multcomp glht
#' @importFrom multcomp mcp
#' @examples
#'
#' #====================================================
#' # randomized block design in double factorial
#' #====================================================
#' library(AgroR)
#' data(cloro)
#' attach(cloro)
#' respAd=c(268, 322, 275, 350, 320)
#' a=FAT2DBC.ad(f1, f2, bloco, resp, respAd,
#'              ylab="Number of nodules",
#'              legend = "Stages",mcomp="sk")
#' data=rbind(data.frame(trat=paste(f1,f2,sep = ""),bloco=bloco,resp=resp),
#'            data.frame(trat=c("Test","Test","Test","Test","Test"),
#'                       bloco=unique(bloco),resp=respAd))
#' a= with(data,dunnett(trat = trat,
#'                      resp = resp,
#'                      control = "Test",
#'                      block=bloco,model = "DBC"))
#' bar_dunnett(a)
bar_dunnett=function(output.dunnett,
                     ylab="Response",
                     xlab="",
                     fill=c("#F8766D","#00BFC4"),
                     sup=NA,
                     add.mean=TRUE,
                     round=2){
  resp=output.dunnett$plot$resp
  trat=output.dunnett$plot$trat
  if(is.na(sup[1])==TRUE){sup=0.1*mean(resp)}
  controle=output.dunnett$plot$control
  medias=tapply(resp,trat,mean)
  medias=medias[order(medias,decreasing = TRUE)]
  ordem=as.vector(names(medias))
  ordem1=c(controle,ordem[!ordem==controle])
  trat=factor(trat,ordem1)
  medias=tapply(resp,trat,mean)
  trat1=factor(names(medias),levels = levels(trat))
requireNamespace("ggplot2") estimativa=output.dunnett$plot$data[order(output.dunnett$plot$data$Estimate,decreasing = TRUE),] if(add.mean==FALSE){ggplot(data.frame(trat1,medias))+output.dunnett$plot$graph$theme+ geom_col(aes(x=trat1,y=medias, fill=c("a",ifelse(estimativa$sig=="*","b","a"))), color="black",show.legend = FALSE)+ labs(x=xlab,y=ylab)+ geom_label(aes(x=trat1,y=medias+sup, label=c("a",ifelse(estimativa$sig=="*","b","a"))), family=output.dunnett$plot$fontfamily)+ scale_fill_manual(values=fill)} if(add.mean==TRUE){ggplot(data.frame(trat1,medias))+output.dunnett$plot$graph$theme+ geom_col(aes(x=trat1,y=medias, fill=c("a",ifelse(estimativa$sig=="*","b","a"))), color="black",show.legend = FALSE)+ labs(x=xlab,y=ylab)+ geom_label(aes(x=trat1,y=medias+sup, label=paste(round(medias,round), c("a",ifelse(estimativa$sig=="*","b","a")))), family=output.dunnett$plot$fontfamily)+ scale_fill_manual(values=fill)} }
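bar_dunnett() reorders the treatments so the control comes first and then maps the Dunnett significance flag to a two-level fill group: "a" for the control and for treatments not different from it, "b" for treatments flagged "*". A minimal sketch of that mapping step (the sig flags below are illustrative only):

```r
# Map Dunnett significance flags to fill groups: the control (prepended) is
# always group "a"; treatments flagged "*" get group "b". Illustrative flags.
sig <- c("ns", "*", "*", "ns")            # one flag per non-control treatment
fill_group <- c("a", ifelse(sig == "*", "b", "a"))
fill_group
# → "a" "a" "b" "b" "a"
```

The same vector is reused both for geom_col(fill = ...) and for the text labels, which is why the two bar colors in the plot read directly as "different / not different from the control".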
# File: AgroR/R/bar_dunnett.R
#' Graph: Bar graph for one factor with facets #' #' @description This is a function of the bar graph for one factor with facets #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @param model DIC, DBC or DQL object #' @param facet vector with facets #' @param theme ggplot2 theme #' @param fill fill bars #' @param horiz horizontal bar or point (\emph{default} is FALSE) #' @param geom graph type (columns or segments) #' @param width.bar width of the error bars of a regression graph. #' @param pointsize Point size #' @param facet.background Color background in facet #' @export #' @return Returns a bar chart for one factor #' #' @examples #' library(AgroR) #' data("laranja") #' a=with(laranja, DBC(trat, bloco, resp, #' mcomp = "sk",angle=45,sup = 10,family = "serif", #' ylab = "Number of fruits/plants")) #' barfacet(a,c("S1","S1","S1","S1","S1", #' "S2","S2","S3","S3")) barfacet=function(model, facet=NULL, theme=theme_bw(), horiz=FALSE, geom="bar", fill="lightblue", pointsize=4.5, width.bar=0.15, facet.background="gray80"){ requireNamespace("ggplot2") data=model[[1]]$data media=data$media desvio=data$desvio trats=data$trats limite=data$limite letra=data$letra groups=data$groups if(is.null(facet[1])==FALSE){ data$trats=as.character(data$trats) fac=factor(facet,unique(facet)) nomes=unique(facet) comp=tapply(fac,fac,length) n=length(levels(fac)) graph=as.list(1:n) data$fac=fac sup=model[[1]]$plot$sup if(geom=="point" & horiz==FALSE){ graph=ggplot(data, aes(x=trats, y=media))+ theme+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio),width=model[[1]]$plot$width.bar)+ geom_point(fill=fill,shape=21,size=pointsize,color="black")+ geom_text(aes(y=media+desvio+sup, x=trats, label = letra),vjust=0)+ labs(x=model[[1]]$labels$x, y=model[[1]]$labels$y)+ facet_grid(~fac,scales = "free", space='free')+ theme(axis.text = element_text(size=model[[1]]$plot$textsize,color="black"), 
strip.text = element_text(size=model[[1]]$plot$textsize), legend.position = "none", # axis.text.x = element_text(angle = model[[1]]$plot$angle), strip.background = element_rect(fill=facet.background))+ ylim(layer_scales(model[[1]])$y$range$range*1.1)} if(geom=="bar" & horiz==FALSE){ graph=ggplot(data, aes(x=trats, y=media))+ theme+ geom_col(fill=fill,size=0.3,color="black")+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio),width=model[[1]]$plot$width.bar)+ geom_text(aes(y=media+desvio+sup, x=trats, label = letra),vjust=0)+ labs(x=model[[1]]$labels$x, y=model[[1]]$labels$y)+ facet_grid(~fac,scales = "free", space='free')+ theme(axis.text = element_text(size=model[[1]]$plot$textsize,color="black"), strip.text = element_text(size=model[[1]]$plot$textsize), legend.position = "none", # axis.text.x = element_text(angle = model[[1]]$plot$angle), strip.background = element_rect(fill=facet.background))+ ylim(layer_scales(model[[1]])$y$range$range*1.1)}} if(geom=="point" & horiz==TRUE){ graph=ggplot(data, aes(y=trats, x=media))+ theme+ geom_errorbar(aes(xmin=media-desvio, xmax=media+desvio),width=model[[1]]$plot$width.bar)+ geom_point(fill=fill,shape=21,size=pointsize,color="black")+ geom_text(aes(x=media+desvio+sup, y=trats, label = letra),hjust=0)+ labs(y=model[[1]]$labels$x, x=model[[1]]$labels$y)+ facet_grid(fac,scales = "free", space='free')+ theme(axis.text = element_text(size=model[[1]]$plot$textsize,color="black"), strip.text = element_text(size=model[[1]]$plot$textsize), legend.position = "none",strip.background = element_rect(fill=facet.background))+ xlim(layer_scales(model[[1]])$y$range$range*1.1)} if(geom=="bar" & horiz==TRUE){ graph=ggplot(data, aes(y=trats, x=media))+ theme+ geom_col(fill=fill,size=0.3,color="black")+ geom_errorbar(aes(xmin=media-desvio, xmax=media+desvio),width=model[[1]]$plot$width.bar)+ geom_text(aes(x=media+desvio+sup, y=trats, label = letra),hjust=0)+ labs(y=model[[1]]$labels$x, x=model[[1]]$labels$y)+ facet_grid(fac,scales = 
"free", space='free')+ theme(axis.text = element_text(size=model[[1]]$plot$textsize,color="black"), strip.text = element_text(size=model[[1]]$plot$textsize), legend.position = "none",strip.background = element_rect(fill=facet.background))+ xlim(layer_scales(model[[1]])$y$range$range*1.1)} graph}
# File: AgroR/R/barfacets_function.R
#' Graph: Bar graph for one factor model 2 #' #' @description This is a function of the bar graph for one factor #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @param model DIC, DBC or DQL object #' @param fill Fill bars #' @param point.color Point color #' @param point.size Point size #' @param point.shape Format point #' @param text.color Text color #' @param label.color Label color #' @param bar.color Errorbar color #' @param title.size Title size #' @param y.text Y-axis height for x-axis legend #' @param add.info Add other information #' @param y.info Y-axis height for other information #' @param color.info Color text information #' @param width.bar Width error bar #' @param width.col Width Column #' @export #' @return Returns a bar chart for one factor #' @seealso \link{radargraph}, \link{barplot_positive}, \link{plot_TH}, \link{plot_TH1}, \link{corgraph}, \link{spider_graph}, \link{line_plot}, \link{plot_cor}, \link{plot_interaction}, \link{plot_jitter}, \link{seg_graph}, \link{TBARPLOT.reverse} #' @examples #' data("laranja") #'a=with(laranja, DBC(trat, bloco, resp, #' mcomp = "sk",angle=45,sup = 10, #' family = "serif", #' ylab = "Number of fruits/plants")) #'bar_graph2(a) #'bar_graph2(a,fill="darkblue",point.color="orange",text.color='white') bar_graph2=function(model, point.color="black", point.size=2, point.shape=16, text.color="black", label.color="black", bar.color="black", title.size=14, y.text=0, add.info=NA, y.info=0, width.col=0.9, width.bar=0, color.info="black", fill="lightblue"){ requireNamespace("ggplot2") data=model[[1]]$data media=data$media desvio=data$desvio trats=data$trats limite=data$limite letra=data$letra groups=data$groups sup=model[[1]]$plot$sup graph=ggplot(data,aes(x=trats, y=media))+ model[[1]]$theme+ geom_col(fill=fill,size=0.3,color="black",width = width.col)+ geom_errorbar(aes(ymin=media-desvio, 
ymax=media+desvio),color=bar.color,width=width.bar)+ geom_point(size=point.size,color=point.color,fill=point.color)+ geom_text(aes(y=media+desvio+sup, x=trats, label = letra),vjust=0,size=model[[1]]$plot$labelsize,color=label.color,family=model[[1]]$plot$family)+ geom_text(aes(y=y.text, x=trats, label = trats),hjust=0,angle=90,size=model[[1]]$plot$labelsize,color=text.color,family=model[[1]]$plot$family)+ labs(x=model[[1]]$labels$x, y=model[[1]]$labels$y)+ theme(axis.text = element_text(size=model[[1]]$plot$textsize,color="black"), axis.title = element_text(size=title.size,color="black"), axis.text.x = element_blank(), strip.text = element_text(size=model[[1]]$plot$textsize), legend.position = "none")+ scale_x_discrete(limits=trats)+ ylim(layer_scales(model[[1]])$y$range$range*1.1) if(is.na(add.info[1])==FALSE){ graph=graph+geom_text(aes(y=y.info,x=trats,label=add.info),hjust=0, size=model[[1]]$plot$labelsize,color=color.info, family=model[[1]]$plot$family) } graph }
# File: AgroR/R/bargraph2_function.R
#' Graph: Bar graph for one factor #' #' @description This is a function of the bar graph for one factor #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @param model DIC, DBC or DQL object #' @param fill fill bars #' @param horiz Horizontal Column (\emph{default} is TRUE) #' @param width.col Width Column #' @export #' @return Returns a bar chart for one factor #' @seealso \link{radargraph}, \link{barplot_positive}, \link{plot_TH}, \link{plot_TH1}, \link{corgraph}, \link{spider_graph}, \link{line_plot}, \link{plot_cor}, \link{plot_interaction}, \link{plot_jitter}, \link{seg_graph}, \link{TBARPLOT.reverse} #' @examples #' data("laranja") #'a=with(laranja, DBC(trat, bloco, resp, #' mcomp = "sk",angle=45, #' ylab = "Number of fruits/plants")) #'bar_graph(a,horiz = FALSE) bar_graph=function(model, fill="lightblue", horiz=TRUE, width.col=0.9){ requireNamespace("ggplot2") data=model[[1]]$data media=data$media desvio=data$desvio trats=data$trats limite=data$limite letra=data$letra groups=data$groups sup=model[[1]]$plot$sup if(horiz==TRUE){ graph=ggplot(data,aes(y=trats, x=media))+ model[[1]]$theme+ geom_col(size=0.3,fill=fill, color="black",width = width.col)+ geom_errorbar(aes(xmin=media-desvio, xmax=media+desvio),width=model[[1]]$plot$width.bar)+ geom_text(aes(x=media+desvio+sup, y=trats, label = letra),hjust=0)+ labs(y=model[[1]]$labels$x, x=model[[1]]$labels$y)+ theme(axis.text = element_text(size=model[[1]]$plot$textsize,color="black"), strip.text = element_text(size=model[[1]]$plot$textsize), legend.position = "none")+ scale_y_discrete(limits=trats)+ xlim(layer_scales(model[[1]])$y$range$range*1.1)} if(horiz==FALSE){ graph=ggplot(data,aes(x=trats, y=media))+ model[[1]]$theme+ geom_col(fill=fill,size=0.3,color="black",width = width.col)+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio),width=model[[1]]$plot$width.bar)+ geom_text(aes(y=media+desvio+sup, 
x=trats, label = letra),vjust=0)+
      labs(x=model[[1]]$labels$x,
           y=model[[1]]$labels$y)+
      theme(axis.text = element_text(size=model[[1]]$plot$textsize,color="black"),
            strip.text = element_text(size=model[[1]]$plot$textsize),
            legend.position = "none")+
      scale_x_discrete(limits=trats)+
      ylim(layer_scales(model[[1]])$y$range$range*1.1)}
  graph
}
# File: AgroR/R/bargraph_function.R
#' Graph: Group DIC, DBC and DQL functions column charts #' #' @description Groups two or more column charts exported from DIC, DBC or DQL function #' @param analysis List with DIC, DBC or DQL object #' @param labels Vector with the name of the facets #' @param ocult.facet Hide facets #' @param ocult.box Hide box #' @param facet.size Font size facets #' @param ylab Y-axis name #' @param width.bar Width error bar #' @param width.col Width Column #' @param sup Number of units above the standard deviation or average bar on the graph #' #' @return Returns a column chart grouped by facets #' #' @export #' @examples #' library(AgroR) #' data("laranja") #' a=with(laranja, DBC(trat, bloco, resp, ylab = "Number of fruits/plants")) #' b=with(laranja, DBC(trat, bloco, resp, ylab = "Number of fruits/plants")) #' c=with(laranja, DBC(trat, bloco, resp, ylab = "Number of fruits/plants")) #' bargraph_onefactor(analysis = list(a,b,c), labels = c("One","Two","Three"),ocult.box = TRUE) bargraph_onefactor=function(analysis, labels=NULL, ocult.facet=FALSE, ocult.box=FALSE, facet.size=14, ylab=NULL, width.bar=0.3, width.col=0.9, sup=NULL){ requireNamespace("ggplot2") results=as.list(1:length(analysis)) for(i in 1:length(analysis)){ if(is.null(labels)==TRUE){analysis[[i]][[1]]$plot$dadosm$facet=rep(i, e=nrow(analysis[[i]][[1]]$plot$dadosm))}else{ analysis[[i]][[1]]$plot$dadosm$facet=rep(labels[i],e=nrow(analysis[[i]][[1]]$plot$dadosm))} results[[i]]=analysis[[i]][[1]]$plot$dadosm} tabela=do.call("rbind",results) if(is.null(sup)==TRUE){sup=0.1*mean(tabela$media)} media=tabela$media desvio=tabela$desvio trats=tabela$trats letra=tabela$letra graph=ggplot(tabela,aes(y=media,x=trats))+ geom_col(color="black",fill="lightblue",width = width.col)+ facet_grid(~facet,scales = "free", space='free')+ geom_errorbar(aes(ymin=media-desvio,ymax=media+desvio),width=width.bar)+ geom_text(aes(y=media+desvio+sup,x=trats,label=letra))+xlab("")+ analysis[[1]][[1]]$theme+theme(strip.text = 
element_text(size=facet.size)) if(is.null(ylab)==TRUE){graph=graph+ylab(analysis[[1]][[1]]$plot$ylab)}else{graph=graph+ylab(ylab)} if(ocult.facet==TRUE){graph=graph+theme(strip.text = element_blank())} if(ocult.box==TRUE){graph=graph+theme(strip.background = element_blank())} list(graph)[[1]]}
# File: AgroR/R/bargraph_onefactor.R
#' Graph: Group FAT2DIC, FAT2DBC, PSUBDIC or PSUBDBC functions column charts #' #' @description Groups two or more column charts exported from FAT2DIC, FAT2DBC, PSUBDIC or PSUBDBC function #' @param analysis List with DIC, DBC or DQL object #' @param labels Vector with the name of the facets #' @param ocult.facet Hide facets #' @param ocult.box Hide box #' @param facet.size Font size facets #' @param ylab Y-axis name #' @param width.bar Width bar #' @param sup Number of units above the standard deviation or average bar on the graph #' #' @return Returns a column chart grouped by facets #' #' @export #' @examples #' library(AgroR) #' data(corn) #' a=with(corn, FAT2DIC(A, B, Resp, quali=c(TRUE, TRUE),ylab="Heigth (cm)")) #' b=with(corn, FAT2DIC(A, B, Resp, mcomp="sk", quali=c(TRUE, TRUE),ylab="Heigth (cm)")) #' bargraph_twofactor(analysis = list(a,b), labels = c("One","Two"),ocult.box = TRUE) bargraph_twofactor=function(analysis, labels=NULL, ocult.facet=FALSE, ocult.box=FALSE, facet.size=14, ylab=NULL, width.bar=0.3, sup=NULL){ requireNamespace("ggplot2") results=as.list(1:length(analysis)) for(i in 1:length(analysis)){ if(is.null(labels)==TRUE){analysis[[i]][[2]]$plot$graph$facet=rep(i, e=nrow(analysis[[i]][[2]]$plot$graph))}else{ analysis[[i]][[2]]$plot$graph$facet=rep(labels[i],e=nrow(analysis[[i]][[2]]$plot$graph))} results[[i]]=analysis[[i]][[2]]$plot$graph} tabela=do.call("rbind",results) if(is.null(sup)==TRUE){sup=0.1*mean(tabela$media)} media=tabela$media desvio=tabela$desvio f1=tabela$f1 f2=tabela$f2 numero=tabela$numero letra=tabela$letra graph=ggplot(tabela,aes(y=media,x=f1,fill=f2))+ geom_col(color="black",position = position_dodge(width = 0.9))+ facet_grid(~facet,scales = "free", space='free')+ geom_errorbar(aes(ymin=media-desvio,ymax=media+desvio),width=width.bar,position = position_dodge(width=0.9))+ geom_text(aes(y=media+desvio+sup,x=f1,label=numero),position = position_dodge(width=0.9))+xlab("")+ analysis[[1]][[2]]$theme+theme(strip.text = 
element_text(size=facet.size)) if(is.null(ylab)==TRUE){graph=graph+ylab(analysis[[1]][[2]]$plot$ylab)}else{graph=graph+ylab(ylab)} if(ocult.facet==TRUE){graph=graph+theme(strip.text = element_blank())} if(ocult.box==TRUE){graph=graph+theme(strip.background = element_blank())} list(graph)[[1]]}
# File: AgroR/R/bargraph_twofactor.R
#' Graph: Positive barplot
#'
#' @description Column chart for two variables that both assume positive values,
#' drawn on opposite sides of the axis, such as shoot dry mass and root dry mass
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @param a Object of DIC, DBC or DQL functions
#' @param b Object of DIC, DBC or DQL functions
#' @param ylab Y axis names
#' @param var_name Name of the variable
#' @param fill_color Bar fill color
#' @param legend.title Legend title
#' @param width.bar Width error bar
#' @param width.col Width Column
#' @seealso \link{radargraph}, \link{sk_graph}, \link{plot_TH}, \link{corgraph}, \link{spider_graph}, \link{line_plot}
#' @return The function returns a column chart with two positive sides
#' @note When there is only an effect of the isolated factor in the case of factorial or split-plot designs, it is possible to use the barplot_positive function.
#' @export
#' @examples
#' data("passiflora")
#' attach(passiflora)
#' a=with(passiflora, DBC(trat, bloco, MSPA))
#' b=with(passiflora, DBC(trat, bloco, MSR))
#' barplot_positive(a, b, var_name = c("DMAP","DRM"), ylab = "Dry root (g)")
#'
#' a=with(passiflora, DIC(trat, MSPA,test = "noparametric"))
#' b=with(passiflora, DIC(trat, MSR))
#' barplot_positive(a, b, var_name = c("DMAP","DRM"), ylab = "Dry root (g)")
barplot_positive=function(a, b,
                          ylab="Response",
                          var_name=c("Var1","Var2"),
                          legend.title="Variable",
                          fill_color=c("darkgreen", "brown"),
                          width.col=0.9,
                          width.bar=0.2){
  requireNamespace("ggplot2")
  if(a[[1]]$plot$test=="parametric" & b[[1]]$plot$test=="parametric"){
    dataA=a[[1]]$data
    dataB=b[[1]]$data
    dataB$media=dataB$media*-1
    dataB$desvio=dataB$desvio*-1
    dataB$limite=dataB$limite*-1.1
    dataA$limite=dataA$limite*1.1
    if(colnames(dataA)[3]=="respO"){dataA=dataA[,-3]}
    if(colnames(dataB)[3]=="respO"){dataB=dataB[,-3]}
    data=rbind(dataA, dataB)
    data$vari=rep(c("Var1","Var2"), e=length(rownames(dataA)))
    data$vari=as.factor(data$vari)
    levels(data$vari)=var_name
trats=data$trats media=data$media vari=data$vari desvio=data$desvio limite=data$limite letra=data$letra graph=ggplot(data,aes(x=trats, y=media, fill=vari))+ geom_col(color="black",width = width.col)+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=width.bar)+ scale_y_continuous(breaks = pretty(media*1.5), labels = abs(pretty(media*1.5)))+ theme_classic()+xlab("")+ylab(ylab)+ geom_text(aes(y=limite, label=letra),family = a[[1]]$plot$family)+ scale_fill_manual(values=fill_color, labels = c(var_name))+ geom_hline(yintercept=0)+ labs(fill=legend.title)+ theme(axis.text = element_text(size=a[[1]]$theme$axis.text$size, color = a[[1]]$theme$axis.text$colour,family = a[[1]]$plot$family), axis.title = element_text(size=a[[1]]$plot$textsize,family = a[[1]]$plot$family), legend.text = element_text(size=a[[1]]$plot$textsize,family = a[[1]]$plot$family), legend.title = element_text(size=a[[1]]$plot$textsize,family = a[[1]]$plot$family))} if(a[[1]]$plot$test=="noparametric" & b[[1]]$plot$test=="noparametric"){ dataA=a[[1]]$data dataB=b[[1]]$data dataB$media=dataB$media*-1 dataB$std=dataB$std*-1 dataB$limite=dataB$limite*-1.1 dataA$limite=dataA$limite*1.1 # if(colnames(dataA)[3]=="respO"){dataA=dataA[,-3]} # if(colnames(dataB)[3]=="respO"){dataB=dataB[,-3]} data=rbind(dataA, dataB) data$vari=rep(c("Var1","Var2"), e=length(rownames(dataA))) data$vari=as.factor(data$vari) levels(data$vari)=var_name trats=data$trats media=data$media vari=data$vari desvio=data$std limite=data$limite letra=data$letra graph=ggplot(data,aes(x=trats, y=media, fill=vari))+ geom_col(color="black",width = width.col)+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=width.bar)+ scale_y_continuous(breaks = pretty(media*1.5), labels = abs(pretty(media*1.5)))+ theme_classic()+xlab("")+ylab(ylab)+ geom_text(aes(y=limite, label=letra),family = a[[1]]$plot$family)+ scale_fill_manual(values=fill_color, labels = c(var_name))+ geom_hline(yintercept=0)+ labs(fill=legend.title)+ 
theme(axis.text = element_text(size=a[[1]]$theme$axis.text$size, color = a[[1]]$theme$axis.text$colour,family = a[[1]]$plot$family), axis.title = element_text(size=a[[1]]$plot$textsize,family = a[[1]]$plot$family), legend.text = element_text(size=a[[1]]$plot$textsize,family = a[[1]]$plot$family), legend.title = element_text(size=a[[1]]$plot$textsize,family = a[[1]]$plot$family))} if(a[[1]]$plot$test=="parametric" & b[[1]]$plot$test=="noparametric"){ dataA=a[[1]]$data dataB=b[[1]]$data dataB$media=dataB$media*-1 dataB$desvio=dataB$std*-1 dataB$limite=dataB$limite*-1.1 dataA$limite=dataA$limite*1.1 dataB=dataB[,c(12,13,14,15,16)] if(a[[1]]$plot$transf == 1){dataA=dataA[,c(5,3,6,7,4)]} if(a[[1]]$plot$transf != 1){dataA=dataA[,c(6,4,7,8,5)]} data=rbind(dataA, dataB) data$vari=rep(c("Var1","Var2"), e=length(rownames(dataA))) data$vari=as.factor(data$vari) levels(data$vari)=var_name trats=data$trats media=data$media vari=data$vari desvio=data$desvio limite=data$limite letra=data$letra graph=ggplot(data,aes(x=trats, y=media, fill=vari))+ geom_col(color="black",width = width.col)+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=width.bar)+ scale_y_continuous(breaks = pretty(media*1.5), labels = abs(pretty(media*1.5)))+ theme_classic()+xlab("")+ylab(ylab)+ geom_text(aes(y=limite, label=letra),family = a[[1]]$plot$family)+ scale_fill_manual(values=fill_color, labels = c(var_name))+ geom_hline(yintercept=0)+ labs(fill=legend.title)+ theme(axis.text = element_text(size=a[[1]]$theme$axis.text$size, color = a[[1]]$theme$axis.text$colour,family = a[[1]]$plot$family), axis.title = element_text(size=a[[1]]$plot$textsize,family = a[[1]]$plot$family), legend.text = element_text(size=a[[1]]$plot$textsize,family = a[[1]]$plot$family), legend.title = element_text(size=a[[1]]$plot$textsize,family = a[[1]]$plot$family)) } if(a[[1]]$plot$test=="noparametric" & b[[1]]$plot$test=="parametric"){ dataA=a[[1]]$data dataB=b[[1]]$data dataB$media=dataB$media*-1 
dataB$desvio=dataB$desvio*-1 dataB$limite=dataB$limite*-1.1 dataA$limite=dataA$limite*1.1 dataA=dataA[,c(12,13,14,15,3)] colnames(dataA)[5]="desvio" if(a[[1]]$plot$transf == 1){dataB=dataB[,c(5,3,6,7,4)]} if(a[[1]]$plot$transf != 1){dataB=dataB[,c(6,4,7,8,5)]} data=rbind(dataA, dataB) data$vari=rep(c("Var1","Var2"), e=length(rownames(dataA))) data$vari=as.factor(data$vari) levels(data$vari)=var_name trats=data$trats media=data$media vari=data$vari desvio=data$desvio limite=data$limite letra=data$letra graph=ggplot(data,aes(x=trats, y=media, fill=vari))+ geom_col(color="black",width = width.col)+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=width.bar)+ scale_y_continuous(breaks = pretty(media*1.5), labels = abs(pretty(media*1.5)))+ theme_classic()+xlab("")+ylab(ylab)+ geom_text(aes(y=limite, label=letra),family = a[[1]]$plot$family)+ scale_fill_manual(values=fill_color, labels = c(var_name))+ geom_hline(yintercept=0)+ labs(fill=legend.title)+ theme(axis.text = element_text(size=a[[1]]$theme$axis.text$size, color = a[[1]]$theme$axis.text$colour,family = a[[1]]$plot$family), axis.title = element_text(size=a[[1]]$plot$textsize,family = a[[1]]$plot$family), legend.text = element_text(size=a[[1]]$plot$textsize,family = a[[1]]$plot$family), legend.title = element_text(size=a[[1]]$plot$textsize,family = a[[1]]$plot$family)) } graph}
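barplot_positive() obtains its two-sided layout by negating the second variable's means, deviations, and label limits before binding the two summary data frames; the y-axis tick labels are then drawn with abs(pretty(...)) so both sides read as positive. A minimal sketch of that mirroring step (the column names follow the function above; the values are illustrative only):

```r
# Mirror the second variable below the x-axis by negating its summary
# statistics; the axis labels later use abs() so both sides read positive.
dataA <- data.frame(trats = c("T1", "T2"), media = c(4.2, 5.1), desvio = c(0.3, 0.4))
dataB <- data.frame(trats = c("T1", "T2"), media = c(2.0, 2.6), desvio = c(0.2, 0.2))
dataB$media  <- dataB$media * -1
dataB$desvio <- dataB$desvio * -1
data <- rbind(dataA, dataB)
data$vari <- rep(c("Var1", "Var2"), each = 2)
# symmetric axis labels, all shown as positive values
labs <- abs(pretty(data$media * 1.5))
```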
# File: AgroR/R/barplot_positive_function.R
#' Dataset: Bean
#'
#' @description
#' An experiment was carried out in a greenhouse to evaluate the effect of
#' different strains of Azospirillum on the common bean cultivar IPR Sabia.
#' A completely randomized design with five strains of Azospirillum
#' (treatments) and five replications was used. The response variable
#' analyzed was grain production per plant (g plant-1).
#'
#' @docType data
#'
#' @usage data("bean")
#'
#' @format data.frame containing data set
#' \describe{
#'   \item{\code{trat}}{Numeric vector with treatment}
#'   \item{\code{prod}}{Numeric vector with grain production per plant}
#'   }
#' @keywords datasets
#' @seealso \link{aristolochia}, \link{cloro}, \link{laranja}, \link{enxofre}, \link{mirtilo}, \link{passiflora}, \link{phao}, \link{porco}, \link{pomegranate}, \link{simulate1}, \link{simulate2}, \link{simulate3}, \link{tomate}, \link{weather}
#' @examples
#' data(bean)
"bean"
# File: AgroR/R/bean_dataset.R
#' Dataset: Sodium dichloroisocyanurate in soybean #' #' @description An experiment was conducted in a greenhouse in pots at the State #' University of Londrina. The work has the objective of evaluating #' the application of sodium dichloroisocyanurate (DUP) in soybean in #' 4 periods of application in soybean inoculated or not with Rhizobium #' and its influence on the number of nodules. The experiment was #' conducted in a completely randomized design with five replications. #' #' @docType data #' #' @usage data(cloro) #' #' @format data.frame containing data set #' \describe{ #' \item{\code{f1}}{Categorical vector with factor 1} #' \item{\code{f2}}{Categorical vector with factor 2} #' \item{\code{bloco}}{Categorical vector with block} #' \item{\code{resp}}{Numeric vector with number nodules} #' } #' @keywords datasets #' @seealso \link{enxofre}, \link{laranja}, \link{mirtilo}, \link{pomegranate}, \link{porco}, \link{sensorial}, \link{simulate1}, \link{simulate2}, \link{simulate3}, \link{tomate}, \link{weather}, \link{phao}, \link{passiflora}, \link{aristolochia} #' #' @references Rony Kauling Tonelli. Efeito do uso de dicloroisocianurato de sodio sobre a nodulacao em raizes de soja. 2016. Trabalho de Conclusao de Curso. (Graduacao em Agronomia) - Universidade Estadual de Londrina. #' #' @examples #' data(cloro) "cloro"
# File: AgroR/R/cloro.R
#' Utils: Interval of confidence for groups #' #' @description Calculates confidence interval for groups #' @param resp numeric vector with responses #' @param group vector with groups or list with two factors #' @param alpha confidence level of the interval #' @param type lower or upper range #' @export #' @return returns a numeric vector with confidence interval grouped by treatment. #' @examples #' #' #=================================== #' # One factor #' #=================================== #' #' dados=rnorm(100,10,1) #' trat=rep(paste("T",1:10),10) #' confinterval(dados,trat) #' #' #=================================== #' # Two factor #' #=================================== #' f1=rep(c("A","B"),e=50) #' f2=rep(paste("T",1:5),e=10,2) #' confinterval(dados,list(f1,f2)) confinterval=function(resp, group, alpha=0.95, type="upper"){ if(is.list(group)==FALSE){ lower=c() upper=c() for(i in 1:length(unique(group))){ group=factor(group,unique(group)) ic=t.test(resp[group==levels(group)[i]]) lower[i]=ic$conf.int[1] upper[i]=ic$conf.int[2]} names(lower)=levels(group) names(upper)=levels(group)} if(is.list(group)==TRUE){ f1=group[[1]] f2=group[[2]] group=paste(f1,f2) f1=factor(f1,unique(f1)) f2=factor(f2,unique(f2)) lower=c() upper=c() for(i in 1:length(unique(group))){ group=factor(group,unique(group)) ic=t.test(resp[group==levels(group)[i]]) lower[i]=ic$conf.int[1] upper[i]=ic$conf.int[2]} lower=matrix(lower,nrow=length(levels(f2))) colnames(lower)=levels(f1) rownames(lower)=levels(f2) upper=matrix(upper,nrow=length(levels(f2))) colnames(upper)=levels(f1) rownames(upper)=levels(f2)} if(type=="upper"){saida=upper} if(type=="lower"){saida=lower} saida}
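confinterval() delegates to t.test() within each group, so the interval it extracts is the usual t-based one, mean ± t(1-α/2, n-1) · s/√n. A minimal sketch of that computation, checked against t.test() itself (the function name and toy data are illustrative only):

```r
# t-based confidence interval for one group's mean, matching what
# t.test(x)$conf.int returns. Toy data only.
ci_mean <- function(x, conf = 0.95) {
  n <- length(x)
  m <- mean(x)
  half <- qt(1 - (1 - conf) / 2, df = n - 1) * sd(x) / sqrt(n)
  c(lower = m - half, upper = m + half)
}

x <- c(9.8, 10.1, 10.4, 9.9, 10.3)
ci_mean(x)
all.equal(unname(ci_mean(x)), as.numeric(t.test(x)$conf.int))
```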
# File: AgroR/R/confinterval_function.R
#' Analysis: Joint analysis of experiments in randomized block design
#'
#' @description Function of the AgroR package for the joint analysis of experiments conducted in a randomized block design, with a single qualitative or quantitative factor and balanced data.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param trat Numerical or complex vector with treatments
#' @param block Numerical or complex vector with blocks
#' @param local Numeric or complex vector with locations or times
#' @param response Numerical vector containing the response of the experiment.
#' @param transf Applies data transformation (default is 1; for log consider 0)
#' @param constant Add a constant for transformation (enter value)
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param homog Homogeneity test of variances (\emph{default} is Bartlett)
#' @param homog.value Reference value for homogeneity of experiments.
By default, this ratio should not be greater than 7 #' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott and Duncan) #' @param quali Defines whether the factor is quantitative or qualitative (\emph{default} is qualitative) #' @param alpha.f Level of significance of the F test (\emph{default} is 0.05) #' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05) #' @param grau Degree of polynomial in case of quantitative factor (\emph{default} is 1) #' @param ylab Variable response name (Accepts the \emph{expression}() function) #' @param xlab Treatments name (Accepts the \emph{expression}() function) #' @param title Graph title #' @param theme ggplot2 theme (\emph{default} is theme_classic()) #' @param dec Number of cells #' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat") #' @param angulo x-axis scale text rotation #' @param textsize Font size #' @param family Font family #' @param errorbar Plot the standard deviation bar on the graph (In the case of a segment and column graph) - \emph{default} is TRUE #' @note In this function there are three possible outcomes. When the ratio between the experiments is greater than 7, the separate analyzes are returned, without however using the square of the joint residue. When the ratio is less than 7, but with significant interaction, the effects are tested using the square of the joint residual. When there is no significant interaction and the ratio is less than 7, the joint analysis between the experiments is returned. #' @note The ordering of the graph is according to the sequence in which the factor levels are arranged in the data sheet. The bars of the column and segment graphs are standard deviation. 
#' @note In the final output when transformation (transf argument) is different from 1, the columns resp and respo in the mean test are returned, indicating transformed and non-transformed mean, respectively. #' @return Returns the assumptions of the analysis of variance, the assumption of the joint analysis by means of a QMres ratio matrix, the analysis of variance, the multiple comparison test or regression. #' @references #' #' Ferreira, P. V. Estatistica experimental aplicada a agronomia. Edufal, 2018. #' #' Principles and procedures of statistics a biometrical approach Steel, Torry and Dickey. Third Edition 1997 #' #' Multiple comparisons theory and methods. Departament of statistics the Ohio State University. USA, 1996. Jason C. Hsu. Chapman Hall/CRC. #' #' Practical Nonparametrics Statistics. W.J. Conover, 1999 #' #' Ramalho M.A.P., Ferreira D.F., Oliveira A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA. #' #' Scott R.J., Knott M. 1974. A cluster analysis method for grouping mans in the analysis of variance. Biometrics, 30, 507-512. 
#' @keywords DBC #' @keywords Joint Analysis #' @export #' @examples #' library(AgroR) #' data(mirtilo) #' #' #=================================== #' # No significant interaction #' #=================================== #' with(mirtilo, conjdbc(trat, bloco, exp, resp)) #' #' #=================================== #' # Significant interaction #' #=================================== #' data(eucalyptus) #' with(eucalyptus, conjdbc(trati, bloc, exp, resp)) conjdbc=function(trat, block, local, response, transf=1, constant = 0, norm="sw", homog="bt", homog.value=7, theme=theme_classic(), mcomp="tukey", quali=TRUE, alpha.f=0.05, alpha.t=0.05, grau=NA, ylab="response", title="", xlab="", fill="lightblue", angulo=0, textsize=12, dec=3, family="sans", errorbar=TRUE){ sup=0.2*mean(response, na.rm=TRUE) requireNamespace("crayon") requireNamespace("ggplot2") if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}} # if(transf==1){resp=response+constant}else{resp=((response+constant)^transf-1)/transf} if(transf==0){resp=log(response+constant)} if(transf==0.5){resp=sqrt(response+constant)} if(transf==-0.5){resp=1/sqrt(response+constant)} if(transf==-1){resp=1/(response+constant)} if(transf=="angular"){resp=asin(sqrt((response+constant)/100))} tratnum=trat tratamento=factor(trat,levels=unique(trat)) bloco=as.factor(block) local=as.factor(local) a = anova(aov(resp ~ local + local:bloco + tratamento + local:tratamento))[c(4:5), ] b = summary(aov(resp ~ bloco+local + local:bloco + tratamento + Error(local:(bloco + tratamento)))) c = aov(resp ~ local + local:bloco + tratamento + local:tratamento) dados=data.frame(resp,response,tratamento,local,bloco,tratnum) anova=c() tukey=c() graficos=list() nlocal=length(levels(local)) qmres=data.frame(QM=1:nlocal) for(i in 1:length(levels(local))){ anova[[i]]=anova(aov(resp~tratamento+bloco, data=dados[dados$local==levels(dados$local)[i],])) qm=anova[[i]]$`Mean Sq`[3] qmres[i,1]=c(qm) 
names(anova)[i]=levels(local)[i] aov1=aov(resp~tratamento+bloco, data=dados[dados$local==levels(dados$local)[i],])} matriza=matrix(rep(qmres[[1]],e=length(qmres[[1]])), ncol=length(qmres[[1]])) matrizb=matrix(rep(qmres[[1]],length(qmres[[1]])), ncol=length(qmres[[1]])) ratio=matriza/matrizb rownames(ratio)=levels(local) colnames(ratio)=levels(local) razao=data.frame(resp1=c(ratio), var1=rep(rownames(ratio),e=length(rownames(ratio))), var2=rep(colnames(ratio),length(colnames(ratio)))) var1=razao$var1 var2=razao$var2 resp1=razao$resp1 ratioplot=ggplot(razao, aes(x=var2, y=var1, fill=resp1))+ geom_tile(color="gray50",size=1)+ scale_x_discrete(position = "top")+ scale_fill_distiller(palette = "RdBu",direction = 1)+ ylab("Numerator")+ xlab("Denominator")+ geom_label(aes(label=format(resp1,digits=2)),fill="white")+ labs(fill="ratio")+ theme(axis.text = element_text(size=12,color="black"), legend.text = element_text(size=12), axis.ticks = element_blank(), panel.background = element_blank(), panel.grid.major = element_blank(), panel.grid.minor = element_blank())+ labs(caption = "The ratio must be less than 7 (Ferreira et al., 2018)", title="Matrix of average square of the residue") print(ratioplot) QMRES=as.vector(qmres$QM) qmresmedio=max(QMRES)/min(QMRES) b1=matrix(unlist(b$`Error: local:tratamento`), ncol=5,2) b2=matrix(c(unlist(b$`Error: local:bloco`),NA,NA,NA,NA,NA,NA), ncol=5,3)[2:3,] datas=rbind(b1[1,],b2);colnames(datas)=colnames(a) datas=rbind(datas,a[1,]) nexp=length(unique(local)) ntrat=length(unique(trat)) nrep=table(trat)/nexp GL=a$Df[2] resmed=data.frame(rbind(c(GL,NA,mean(QMRES),NA,NA))) colnames(resmed)=colnames(datas) datas=rbind(datas,resmed) rownames(datas)=c("Trat","Exp","Block/Local","Exp:Trat","Average residue") datas[2,4]=datas[2,3]/datas[4,3] datas[3,4]=datas[3,3]/datas[4,3] datas[2,5]=1-pf(datas[2,4],datas[2,1],datas[4,1]) datas[3,5]=1-pf(datas[3,4],datas[3,1],datas[4,1]) datas[5,2]=datas[5,3]*datas[5,1] 
d=aov(resp~tratamento*local+bloco+local/bloco) if(norm=="sw"){norm1 = shapiro.test(d$res)} if(norm=="li"){norm1=nortest::lillie.test(d$residuals)} if(norm=="ad"){norm1=nortest::ad.test(d$residuals)} if(norm=="cvm"){norm1=nortest::cvm.test(d$residuals)} if(norm=="pearson"){norm1=nortest::pearson.test(d$residuals)} if(norm=="sf"){norm1=nortest::sf.test(d$residuals)} if(homog=="bt"){ homog1 = bartlett.test(d$res ~ trat) statistic=homog1$statistic phomog=homog1$p.value method=paste("Bartlett test","(",names(statistic),")",sep="") } if(homog=="levene"){ homog1 = levenehomog(d$res~trat) statistic=homog1$`F value`[1] phomog=homog1$`Pr(>F)`[1] method="Levene's Test (center = median)(F)" names(homog1)=c("Df", "F value","p.value")} indep = dwtest(d) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Normality of errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""), Statistic=norm1$statistic, "p-value"=norm1$p.value) rownames(normal)="" print(normal) cat("\n") cat(if(norm1$p.value>0.05){black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors do not follow a normal distribution"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Homogeneity of Variances"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) homoge=data.frame(Method=method, Statistic=statistic, "p-value"=phomog) rownames(homoge)="" print(homoge) cat("\n") cat(if(homog1$p.value>0.05){black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. 
Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous\n"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Independence from errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) indepe=data.frame(Method=paste(indep$method,"(", names(indep$statistic),")",sep=""), Statistic=indep$statistic, "p-value"=indep$p.value) rownames(indepe)="" print(indepe) cat("\n") cat(black(if(indep$p.value>0.05){black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors are not independent"})) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Test Homogeneity of experiments"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(qmresmedio) cat("\nBased on the analysis of variance and homogeneity of experiments, it can be concluded that: ") if(qmresmedio<homog.value && a$`Pr(>F)`[1]>alpha.f){ message(black("The experiments can be analyzed together"))}else{ message("Experiments cannot be analyzed together (Separate by experiment)")} cat("\n\n") modres=anova(d) respad=d$res/sqrt(modres$`Mean Sq`[6]) out=respad[respad>3 | respad<(-3)] out=names(out) out=if(length(out)==0)("No discrepant point")else{out} resids=d$res/sqrt(modres$`Mean Sq`[6]) Ids=ifelse(resids>3 | resids<(-3), "darkblue","black") residplot=ggplot(data=data.frame(resids,Ids),aes(y=resids,x=1:length(resids)))+ geom_point(shape=21,color="gray",fill="gray",size=3)+ labs(x="",y="Standardized residuals")+ geom_text(x=1:length(resids),label=1:length(resids),color=Ids,size=4)+ 
scale_x_continuous(breaks=1:length(resids))+ theme_classic()+theme(axis.text.y = element_text(size=12), axis.text.x = element_blank())+ geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1) print(residplot) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Analysis of variance"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(as.matrix(datas),na.print = "") cat(green(bold("\n-----------------------------------------------------------------\n"))) # anova=as.list(1:length(levels(local))) # tukey=as.list(1:length(levels(local))) anova=c() tukey=c() if(qmresmedio > homog.value){ if(quali==TRUE){ for(i in 1:length(levels(local))){ anova[[i]]=anova(aov(resp~tratamento+bloco, data=dados[dados$local==levels(dados$local)[i],])) aov1=aov(resp~tratamento+bloco, data=dados[dados$local==levels(dados$local)[i],]) anova[[i]]=as.matrix(data.frame(anova[[i]])) colnames(anova[[i]])=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)" ) rownames(anova[[i]])=c("Trat","Block","Residuals") if(mcomp=="tukey"){tukey[[i]]=TUKEY(aov1,"tratamento", alpha = alpha.t)$groups[unique(as.character(trat)),] comp=TUKEY(aov1,"tratamento")$groups} if(mcomp=="duncan"){tukey[[i]]=duncan(aov1,"tratamento", alpha = alpha.t)$groups[unique(as.character(trat)),] comp=duncan(aov1,"tratamento")$groups} if(mcomp=="lsd"){tukey[[i]]=LSD(aov1,"tratamento", alpha = alpha.t)$groups[unique(as.character(trat)),] comp=LSD(aov1,"tratamento")$groups} if(mcomp=="sk"){ anova=anova(aov1) data=dados[dados$local==levels(dados$local)[i],] nrep=table(data$trat)[1] medias=sort(tapply(data$resp,data$trat,mean),decreasing = TRUE) letra=scottknott(means = medias, df1 = anova$Df[3], nrep = nrep, QME = anova$`Mean Sq`[3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=letra)[unique(as.character(trat)),] tukey[[i]]=letra1 comp=letra1 } 
if(transf=="1"){}else{tukey[[i]]$respo=with(dados[dados$local==levels(dados$local)[i],], tapply(response, tratamento, mean, na.rm=TRUE))[rownames(comp)]} names(tukey)[i]=levels(local)[i] names(anova)[i]=levels(local)[i] dadosm=data.frame(comp, media=with(dados[dados$local==levels(dados$local)[i],], tapply(response, tratamento, mean, na.rm=TRUE))[rownames(comp)], desvio=with(dados[dados$local==levels(dados$local)[i],], tapply(response, tratamento, sd, na.rm=TRUE))[rownames(comp)]) dadosm$trats=factor(rownames(dadosm),unique(trat)) dadosm$limite=dadosm$media+dadosm$desvio dadosm$letra=paste(format(dadosm$media,digits = dec), dadosm$groups) dadosm=dadosm[unique(as.character(trat)),] trats=dadosm$trats limite=dadosm$limite letra=dadosm$letra media=dadosm$media desvio=dadosm$desvio grafico=ggplot(dadosm, aes(x=trats, y=media))+ geom_col(aes(fill=trats),fill=fill,color=1)+ theme_classic()+ ylab(ylab)+ xlab(xlab)+ylim(0,1.5*max(limite))+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio),color="black", width=0.3)+ geom_text(aes(y=media+desvio+sup, label=letra))+ theme(text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), legend.position = "none") graficos[[i]]=grafico} print(anova,na.print = "") teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(italic(paste("Multiple Comparison Test:",teste,"\n")))) cat(green(bold("-----------------------------------------------------------------\n"))) print(tukey) } if(quali==FALSE){ for(i in 1:length(levels(local))){ data=dados[dados$local==levels(dados$local)[i],] dose1=data$tratnum resp=data$response grafico=polynomial(dose1, resp,grau = grau, textsize=textsize, family=family, 
ylab=ylab, xlab=xlab, theme=theme, posi="top", se=errorbar)[[1]] graficos[[i]]=grafico} }} if(a$`Pr(>F)`[1] < alpha.f && qmresmedio < homog.value){ GLconj=datas$Df[5] SQconj=datas$`Sum Sq`[5] QMconj=datas$`Mean Sq`[5] if(quali==TRUE){ for(i in 1:length(levels(local))){ anova[[i]]=anova(aov(resp~tratamento+bloco, data=dados[dados$local==levels(dados$local)[i],])) anova[[i]][3,]=c(GLconj,SQconj,QMconj,NA,NA) anova[[i]][1,4]=anova[[i]][1,3]/anova[[i]][3,3] anova[[i]][2,4]=anova[[i]][2,3]/anova[[i]][3,3] anova[[i]][1,5]=1-pf(anova[[i]][1,4],anova[[i]][1,1],anova[[i]][3,1]) anova[[i]][2,5]=1-pf(anova[[i]][2,4],anova[[i]][2,1],anova[[i]][3,1]) anova[[i]]=as.matrix(data.frame(anova[[i]])) colnames(anova[[i]])=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)" ) rownames(anova[[i]])=c("Trat","Block","Average Residual") trat1=dados[dados$local==levels(dados$local)[i],]$tratamento resp1=dados[dados$local==levels(dados$local)[i],]$resp # anova1=anova(aov(resp~tratamento+bloco, # data=dados[dados$local==levels(dados$local)[i],])) # aov1=aov(resp~tratamento+bloco, data=dados[dados$local==levels(dados$local)[i],]) if(quali==TRUE){ if(mcomp=="tukey"){tukey[[i]]=TUKEY(resp1,trat1,DFerror = GLconj,MSerror = QMconj, alpha = alpha.t)$groups[unique(as.character(trat)),] comp=TUKEY(resp1,trat1,DFerror = GLconj,MSerror = QMconj, alpha = alpha.t)$groups} if(mcomp=="duncan"){tukey[[i]]=duncan(resp1,trat1,DFerror = GLconj,MSerror = QMconj, alpha = alpha.t)$groups[unique(as.character(trat)),] comp=duncan(resp1,trat1,DFerror = GLconj,MSerror = QMconj, alpha = alpha.t)$groups} if(mcomp=="lsd"){tukey[[i]]=LSD(resp1,trat1,DFerror = GLconj,MSerror = QMconj, alpha = alpha.t)$groups[unique(as.character(trat)),] comp=LSD(resp1,trat1,DFerror = GLconj,MSerror = QMconj, alpha = alpha.t)$groups} if(mcomp=="sk"){ anova=anova(aov1) data=dados[dados$local==levels(dados$local)[i],] nrep=table(data$trat)[1] medias=sort(tapply(data$resp,data$trat,mean),decreasing = TRUE) letra=scottknott(means = medias, df1 = 
GLconj, nrep = nrep, QME = QMconj, alpha = alpha.t) letra1=data.frame(resp=medias,groups=letra)[unique(as.character(trat)),] tukey[[i]]=letra1 comp=letra1 } if(transf=="1"){}else{tukey[[i]]$respo=with(dados[dados$local==levels(dados$local)[i],], tapply(response, tratamento, mean, na.rm=TRUE))[rownames(comp)]} names(tukey)[i]=levels(local)[i] names(anova)[i]=levels(local)[i] dadosm=data.frame(comp, media=with(dados[dados$local==levels(dados$local)[i],], tapply(response, tratamento, mean, na.rm=TRUE))[rownames(comp)], desvio=with(dados[dados$local==levels(dados$local)[i],], tapply(response, tratamento, sd, na.rm=TRUE))[rownames(comp)]) dadosm$trats=factor(rownames(dadosm),unique(trat)) dadosm$limite=dadosm$media+dadosm$desvio dadosm$letra=paste(format(dadosm$media,digits = dec), dadosm$groups) dadosm=dadosm[unique(as.character(trat)),] trats=dadosm$trats limite=dadosm$limite letra=dadosm$letra media=dadosm$media desvio=dadosm$desvio grafico=ggplot(dadosm, aes(x=trats, y=media))+ geom_col(aes(fill=trats),fill=fill,color=1)+ theme_classic()+ ylab(ylab)+ xlab(xlab)+ylim(0,1.5*max(limite))+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio),color="black", width=0.3)+ geom_text(aes(y=media+desvio+sup, label=letra))+ theme(text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), legend.position = "none") graficos[[i]]=grafico} } print(anova,na.print = "") teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(italic(paste("Multiple Comparison Test:",teste,"\n")))) cat(green(bold("-----------------------------------------------------------------\n"))) print(tukey)} if(quali==FALSE){ for(i in 1:length(levels(local))){ 
data=dados[dados$local==levels(dados$local)[i],] dose1=data$tratnum resp=data$response grafico=polynomial(dose1, resp,grau = grau, textsize=textsize, family=family, ylab=ylab, xlab=xlab, theme=theme, posi="top", SSq = SQconj, DFres = GLconj, se=errorbar)[[1]] graficos[[i]]=grafico} } } if(a$`Pr(>F)`[1] > alpha.f && qmresmedio < homog.value){ if(quali==TRUE){ if(mcomp=="tukey"){ tukeyjuntos=(TUKEY(resp,tratamento,a$Df[1], a$`Mean Sq`[1], alpha = alpha.t)) if(transf!="1"){tukeyjuntos$groups$respo=tapply(response, tratamento, mean, na.rm=TRUE)[rownames(tukeyjuntos$groups)]} tukeyjuntos=tukeyjuntos$groups} if(mcomp=="duncan"){ tukeyjuntos=duncan(resp,tratamento,a$Df[1], a$`Mean Sq`[1], alpha = alpha.t) if(transf!="1"){tukeyjuntos$groups$respo=tapply(response, tratamento, mean, na.rm=TRUE)[rownames(tukeyjuntos$groups)]} tukeyjuntos=tukeyjuntos$groups} if(mcomp=="lsd"){ tukeyjuntos=LSD(resp,tratamento,a$Df[1], a$`Mean Sq`[1], alpha = alpha.t) if(transf!="1"){tukeyjuntos$groups$respo=tapply(response, tratamento, mean, na.rm=TRUE)[rownames(tukeyjuntos$groups)]} tukeyjuntos=tukeyjuntos$groups} if(mcomp=="sk"){ nrep=table(tratamento)[1] medias=sort(tapply(resp,tratamento,mean),decreasing = TRUE) letra=scottknott(means = medias, df1 = a$Df[1], nrep = nrep, QME = a$`Mean Sq`[1], alpha = alpha.t) tukeyjuntos=data.frame(resp=medias,groups=letra) if(transf!="1"){tukeyjuntos$respo=tapply(response, tratamento, mean, na.rm=TRUE)[rownames(tukeyjuntos)]}} dadosm=data.frame(tukeyjuntos, media=tapply(response, tratamento, mean, na.rm=TRUE)[rownames(tukeyjuntos)], desvio=tapply(response, tratamento, sd, na.rm=TRUE)[rownames(tukeyjuntos)]) dadosm$trats=factor(rownames(dadosm),unique(trat)) dadosm$limite=dadosm$media+dadosm$desvio dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups) dadosm=dadosm[unique(as.character(trat)),] media=dadosm$media desvio=dadosm$desvio limite=dadosm$limite trats=dadosm$trats letra=dadosm$letra grafico1=ggplot(dadosm,aes(x=trats,y=media)) 
if(fill=="trat"){grafico1=grafico+ geom_col(aes(fill=trats),color=1)} else{grafico1=grafico1+ geom_col(aes(fill=trats),fill=fill,color=1)} if(errorbar==TRUE){grafico1=grafico1+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family)} if(errorbar==FALSE){grafico1=grafico1+ geom_text(aes(y=media+sup,label=letra),family=family)} if(errorbar==TRUE){grafico1=grafico1+ geom_errorbar(data=dadosm,aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black", width=0.3)} grafico1=grafico1+theme+ ylab(ylab)+ xlab(xlab)+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family), legend.position = "none") if(angulo !=0){grafico1=grafico1+theme(axis.text.x=element_text(hjust = 1.01,angle = angulo))} teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(italic(paste("Multiple Comparison Test:",teste,"\n")))) cat(green(bold("-----------------------------------------------------------------\n"))) print(tukeyjuntos) } if(quali==FALSE){grafico1=polynomial(tratnum, response,grau = grau, textsize=textsize, family=family, ylab=ylab, xlab=xlab, theme=theme, posi="top",SSq = SQconj,DFres = GLconj, se=errorbar) } graficos=list(grafico1) } cat(if(transf=="1"){}else{blue("\nNOTE: resp = transformed means; respO = averages without transforming\n")}) if(transf==1 && norm1$p.value<0.05 | transf==1 && indep$p.value<0.05 | transf==1 && homog1$p.value<0.05){cat(red("\n \nWarning!!! Your analysis is not valid, suggests using a non-parametric test and try to transform the data"))} if(transf != 1 && norm1$p.value<0.05 | transf!=1 && indep$p.value<0.05 | transf!=1 && homog1$p.value<0.05){cat(red("\n \nWarning!!! 
Your analysis is not valid; consider using a non-parametric test"))}
  # print(graficos)
  graph=as.list(graficos)
}
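# Sketch of the homogeneity-of-experiments rule applied above (illustrative
# only, not an exported function): the experiments are pooled only when the
# ratio between the largest and the smallest residual mean square does not
# exceed homog.value (7 by default). The vector qm below is a hypothetical
# set of residual mean squares, one per experiment.
#
# qm = c(1.8, 2.4, 3.1)
# max(qm)/min(qm)        # about 1.72, less than 7 -> joint analysis allowed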
# File: AgroR/R/conjdbc_function.R
#' Analysis: Joint analysis of experiments in completely randomized design
#'
#' @description Function of the AgroR package for the joint analysis of experiments conducted in a completely randomized design with a single qualitative or quantitative factor and balanced data.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param trat Numerical or complex vector with treatments
#' @param repet Numerical or complex vector with repetitions
#' @param local Numeric or complex vector with locations or times
#' @param response Numerical vector containing the response of the experiment.
#' @param transf Applies data transformation (default is 1; for log consider 0)
#' @param constant Add a constant for transformation (enter value)
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param homog Homogeneity test of variances (\emph{default} is Bartlett)
#' @param homog.value Reference value for the homogeneity of experiments. By default, this ratio should not be greater than 7
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott and Duncan)
#' @param quali Defines whether the factor is quantitative or qualitative (\emph{default} is qualitative)
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param grau Degree of the polynomial in the case of a quantitative factor (\emph{default} is 1)
#' @param ylab Variable response name (accepts the \emph{expression}() function)
#' @param xlab Treatments name (accepts the \emph{expression}() function)
#' @param title Graph title
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param dec Number of decimal places (\emph{default} is 3)
#' @param color Color of the columns when fill is set to "trat"
#' @param fill Defines the chart color (to generate different colors for different treatments, define fill = "trat")
#' @param angulo x-axis scale text rotation
#' @param textsize Font size
#' @param family Font family
#' @param errorbar Plot the standard deviation bar on the graph (in the case of segment and column graphs) - \emph{default} is TRUE
#' @note In this function there are three possible outcomes. When the ratio between the residual mean squares of the experiments is greater than 7, the separate analyses are returned, without using the joint residual mean square. When the ratio is less than 7 but the interaction is significant, the effects are tested using the joint residual mean square. When there is no significant interaction and the ratio is less than 7, the joint analysis of the experiments is returned.
#' @note The ordering of the graph follows the sequence in which the factor levels are arranged in the data sheet. The bars of the column and segment graphs are the standard deviation.
#' @note In the final output, when the transformation (transf argument) is different from 1, the columns resp and respo are returned in the mean test, indicating the transformed and non-transformed means, respectively.
#' @return Returns the assumptions of the analysis of variance, the assumption of the joint analysis by means of a matrix of residual mean square ratios, the analysis of variance, and the multiple comparison test or regression.
#' @keywords DIC
#' @keywords Joint Analysis
#' @references
#'
#' Ferreira, P. V. Estatistica experimental aplicada a agronomia. Edufal, 2018.
#'
#' Steel, R. G. D., Torrie, J. H., Dickey, D. A. Principles and Procedures of Statistics: A Biometrical Approach. Third Edition, 1997.
#'
#' Hsu, J. C. Multiple Comparisons: Theory and Methods. Department of Statistics, The Ohio State University. Chapman & Hall/CRC, 1996.
#'
#' Conover, W. J. Practical Nonparametric Statistics, 1999.
#'
#' Ramalho, M. A. P., Ferreira, D. F., Oliveira, A. C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott, A. J., Knott, M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#' @export #' @examples #' library(AgroR) #' data(mirtilo) #' with(mirtilo, conjdic(trat, bloco, exp, resp)) conjdic=function(trat, repet, local, response, transf=1, constant = 0, norm="sw", homog="bt", mcomp="tukey", homog.value=7, quali=TRUE, alpha.f=0.05, alpha.t=0.05, grau=NA, theme=theme_classic(), ylab="response", title="", xlab="", color="rainbow", fill="lightblue", angulo=0, textsize=12, dec=3, family="sans", errorbar=TRUE){ sup=0.2*mean(response, na.rm=TRUE) requireNamespace("crayon") requireNamespace("ggplot2") if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}} # if(transf==1){resp=response+constant}else{resp=((response+constant)^transf-1)/transf} if(transf==0){resp=log(response+constant)} if(transf==0.5){resp=sqrt(response+constant)} if(transf==-0.5){resp=1/sqrt(response+constant)} if(transf==-1){resp=1/(response+constant)} if(transf=="angular"){resp=asin(sqrt((response+constant)/100))} tratnum=trat tratamento=factor(trat,levels = unique(trat)) bloco=as.factor(repet) local=as.factor(local) a = anova(aov(resp ~ local + tratamento + local:tratamento))[c(3:4), ] b = summary(aov(resp ~ local + local:bloco + tratamento + Error(local/tratamento))) c = aov(resp ~ local + local:bloco + tratamento + local:tratamento) dados=data.frame(resp,response,tratamento,local,bloco,tratnum) anova=c() tukey=c() graficos=list() nlocal=length(levels(local)) qmres=data.frame(QM=1:nlocal) for(i in 1:length(levels(local))){ anova[[i]]=anova(aov(resp~tratamento, data=dados[dados$local==levels(dados$local)[i],])) qm=anova[[i]]$`Mean Sq`[2] qmres[i,1]=c(qm) names(anova)[i]=levels(local)[i] aov1=aov(resp~tratamento, data=dados[dados$local==levels(dados$local)[i],])} matriza=matrix(rep(qmres[[1]],e=length(qmres[[1]])), ncol=length(qmres[[1]])) matrizb=matrix(rep(qmres[[1]],length(qmres[[1]])), ncol=length(qmres[[1]])) ratio=matriza/matrizb rownames(ratio)=levels(local) colnames(ratio)=levels(local) 
razao=data.frame(resp1=c(ratio), var1=rep(rownames(ratio),e=length(rownames(ratio))), var2=rep(colnames(ratio),length(colnames(ratio)))) var1=razao$var1 var2=razao$var2 resp1=razao$resp1 ratioplot=ggplot(razao, aes(x=var2, y=var1, fill=resp1))+ geom_tile(color="gray50",size=1)+ scale_x_discrete(position = "top")+ scale_fill_distiller(palette = "RdBu",direction = 1)+ ylab("Numerator")+xlab("Denominator")+ geom_label(aes(label=format(resp1,digits=2)),fill="white")+ labs(fill="ratio")+ theme(axis.text = element_text(size=12,color="black"), legend.text = element_text(size=12), axis.ticks = element_blank(), panel.background = element_blank(), panel.grid.major = element_blank(), panel.grid.minor = element_blank())+ labs(caption = "The ratio must be less than 7 (Ferreira et al., 2018)", title="Matrix of average square of the residue") print(ratioplot) (QMRES=as.vector(qmres$QM)) (qmresmedio=max(QMRES)/min(QMRES)) b1=matrix(unlist(b$`Error: local:tratamento`), ncol=5,2) b2=matrix(c(unlist(b$`Error: local`),NA,NA), ncol=5,1) datas=rbind(b1[1,],b2[1,]);colnames(datas)=colnames(a) datas=rbind(datas,a[1,]) nexp=length(unique(local)) ntrat=length(unique(trat)) nrep=table(trat)/nexp GL=a$Df[2]#nexp*(ntrat*nrep[1]-(ntrat-1)) resmed=data.frame(rbind(c(GL,NA,mean(QMRES),NA,NA))) colnames(resmed)=colnames(datas) datas=rbind(datas,resmed) rownames(datas)=c("Trat","Exp","Exp:Trat","Average residue") datas[2,4]=datas[2,3]/datas[3,3] datas[2,5]=1-pf(datas[2,4],datas[2,1],datas[3,1]) datas[4,2]=datas[4,3]*datas[4,1] d=aov(resp~tratamento*local+bloco:local) if(norm=="sw"){norm1 = shapiro.test(d$res)} if(norm=="li"){norm1=nortest::lillie.test(d$residuals)} if(norm=="ad"){norm1=nortest::ad.test(d$residuals)} if(norm=="cvm"){norm1=nortest::cvm.test(d$residuals)} if(norm=="pearson"){norm1=nortest::pearson.test(d$residuals)} if(norm=="sf"){norm1=nortest::sf.test(d$residuals)} if(homog=="bt"){ homog1 = bartlett.test(d$res ~ trat) statistic=homog1$statistic phomog=homog1$p.value 
method=paste("Bartlett test","(",names(statistic),")",sep="") } if(homog=="levene"){ homog1 = levenehomog(d$res~trat) statistic=homog1$`F value`[1] phomog=homog1$`Pr(>F)`[1] method="Levene's Test (center = median)(F)" names(homog1)=c("Df", "F value","p.value")} indep = dwtest(d) modres=anova(d) resids=d$res/sqrt(modres$`Mean Sq`[5]) out=resids[resids>3 | resids<(-3)] out=names(out) out=if(length(out)==0)("No discrepant point")else{out} Ids=ifelse(resids>3 | resids<(-3), "darkblue","black") residplot=ggplot(data=data.frame(resids,Ids),aes(y=resids,x=1:length(resids)))+ geom_point(shape=21,color="gray",fill="gray",size=3)+ labs(x="",y="Standardized residuals")+ geom_text(x=1:length(resids),label=1:length(resids),color=Ids,size=4)+ scale_x_continuous(breaks=1:length(resids))+ theme_classic()+theme(axis.text.y = element_text(size=12), axis.text.x = element_blank())+ geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1) print(residplot) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Normality of errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""), Statistic=norm1$statistic, "p-value"=norm1$p.value) rownames(normal)="" print(normal) cat("\n") message(if(norm1$p.value>0.05){black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors do not follow a normal distribution"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Homogeneity of Variances"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) homoge=data.frame(Method=method, Statistic=statistic, "p-value"=phomog) rownames(homoge)="" print(homoge) cat("\n") message(if(homog1$p.value>0.05){black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous\n"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Independence from errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) indepe=data.frame(Method=paste(indep$method,"(", names(indep$statistic),")",sep=""), Statistic=indep$statistic, "p-value"=indep$p.value) rownames(indepe)="" print(indepe) cat("\n") message(if(indep$p.value>0.05){black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors are not independent"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Test Homogeneity of experiments"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(qmresmedio) message(blue("\nBased on the analysis of variance and homogeneity of experiments, it can be concluded that: ")) if(qmresmedio<homog.value && a$`Pr(>F)`[1]>alpha.f){ message(black("The experiments can be analyzed together"))}else{ message("Experiments cannot be analyzed together (Separate by experiment)")} cat("\n\n") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Analysis of variance"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(as.matrix(datas),na.print = "") cat(green(bold("\n-----------------------------------------------------------------\n"))) if(qmresmedio > homog.value){ if(a$`Pr(>F)`[1] < alpha.f && quali==TRUE | qmresmedio > homog.value && quali==TRUE){ for(i in 1:length(levels(local))){ anova[[i]]=anova(aov(resp~tratamento, data=dados[dados$local==levels(dados$local)[i],])) aov1=aov(resp~tratamento, data=dados[dados$local==levels(dados$local)[i],]) anova[[i]]=as.matrix(data.frame(anova[[i]])) colnames(anova[[i]])=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)" ) rownames(anova[[i]])=c("Trat","Residuals") if(quali==TRUE){ if(mcomp=="tukey"){tukey[[i]]=TUKEY(aov1,"tratamento",alpha = alpha.t)$groups comp=TUKEY(aov1,"tratamento",alpha = alpha.t)$groups} if(mcomp=="duncan"){tukey[[i]]=duncan(aov1,"tratamento",alpha = alpha.t)$groups comp=duncan(aov1,"tratamento",alpha = alpha.t)$groups} if(mcomp=="lsd"){tukey[[i]]=LSD(aov1,"tratamento",alpha = alpha.t)$groups comp=LSD(aov1,"tratamento",alpha = alpha.t)$groups} if(mcomp=="sk"){ anova=anova(aov1) data=dados[dados$local==levels(dados$local)[i],] nrep=table(data$trat)[1] 
medias=sort(tapply(data$resp,data$trat,mean),decreasing = TRUE) letra=scottknott(means = medias, df1 = anova$Df[2], nrep = nrep, QME = anova$`Mean Sq`[2], alpha = alpha.t) letra1=data.frame(resp=medias,groups=letra) tukey[[i]]=letra1 comp=letra1} if(transf=="1"){}else{tukey[[i]]$respo=with(dados[dados$local==levels(dados$local)[i],], tapply(response, tratamento, mean, na.rm=TRUE))[rownames(comp)]} names(tukey)[i]=levels(local)[i] dadosm=data.frame(comp, media=with(dados[dados$local==levels(dados$local)[i],], tapply(response, tratamento, mean, na.rm=TRUE))[rownames(comp)], desvio=with(dados[dados$local==levels(dados$local)[i],], tapply(response, tratamento, sd, na.rm=TRUE))[rownames(comp)]) dadosm$trats=factor(rownames(dadosm),unique(trat)) dadosm$limite=dadosm$media+dadosm$desvio dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups) dadosm=dadosm[unique(as.character(trat)),] media=dadosm$media desvio=dadosm$desvio limite=dadosm$limite trats=dadosm$trats letra=dadosm$letra grafico=ggplot(dadosm, aes(x=trats, y=media))+ geom_col(aes(fill=trats),fill=fill,color=1)+ theme_classic()+ ylab(ylab)+ xlab(xlab)+ylim(0,1.5*max(limite))+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), color="black",width=0.3)+ geom_text(aes(y=media+desvio+sup, label=letra))+ theme(text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), legend.position = "none")} graficos[[i]]=grafico} print(anova,na.print = "") teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(italic(paste("Multiple Comparison Test:",teste,"\n")))) cat(green(bold("-----------------------------------------------------------------\n"))) print(tukey)} 
if(a$`Pr(>F)`[1] < alpha.f && quali==FALSE | qmresmedio > homog.value && quali==FALSE){ for(i in 1:length(levels(local))){ data=dados[dados$local==levels(dados$local)[i],] dose1=data$tratnum resp=data$response grafico=polynomial(dose1, resp,grau = grau, textsize=textsize, family=family, ylab=ylab, xlab=xlab, theme=theme, posi="top", se=errorbar)[[1]] graficos[[i]]=grafico[[1]]}} } if(a$`Pr(>F)`[1] < alpha.f && qmresmedio < homog.value){ GLconj=datas$Df[4] SQconj=datas$`Sum Sq`[4] QMconj=datas$`Mean Sq`[4] if(a$`Pr(>F)`[1] < alpha.f && quali==TRUE | qmresmedio > homog.value && quali==TRUE){ for(i in 1:length(levels(local))){ anova[[i]]=anova(aov(resp~tratamento, data=dados[dados$local==levels(dados$local)[i],])) anova[[i]][2,]=c(GLconj,SQconj,QMconj,NA,NA) anova[[i]][1,4]=anova[[i]][1,3]/anova[[i]][2,3] anova[[i]][1,5]=1-pf(anova[[i]][1,4],anova[[i]][1,1],anova[[i]][2,1]) anova[[i]]=as.matrix(data.frame(anova[[i]])) colnames(anova[[i]])=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)" ) rownames(anova[[i]])=c("Trat","Average Residual") trat1=dados[dados$local==levels(dados$local)[i],]$tratamento resp1=dados[dados$local==levels(dados$local)[i],]$resp if(quali==TRUE){ if(mcomp=="tukey"){tukey[[i]]=TUKEY(resp1,trat1,DFerror = GLconj,MSerror = QMconj,alpha = alpha.t)$groups comp=TUKEY(resp1,trat1,DFerror = GLconj,MSerror = QMconj,alpha = alpha.t)$groups} if(mcomp=="duncan"){tukey[[i]]=duncan(resp1,trat1,DFerror = GLconj,MSerror = QMconj,alpha = alpha.t)$groups comp=duncan(resp1,trat1,DFerror = GLconj,MSerror = QMconj,alpha = alpha.t)$groups} if(mcomp=="lsd"){tukey[[i]]=LSD(resp1,trat1,DFerror = GLconj,MSerror = QMconj,alpha = alpha.t)$groups comp=LSD(resp1,trat1,DFerror = GLconj,MSerror = QMconj,alpha = alpha.t)$groups} if(mcomp=="sk"){ data=dados[dados$local==levels(dados$local)[i],] nrep=table(data$trat)[1] medias=sort(tapply(data$resp,data$trat,mean),decreasing = TRUE) letra=scottknott(means = medias, df1 = GLconj, nrep = nrep, QME = QMconj, alpha = alpha.t) 
letra1=data.frame(resp=medias,groups=letra) tukey[[i]]=letra1 comp=letra1} if(transf=="1"){}else{tukey[[i]]$respo=with(dados[dados$local==levels(dados$local)[i],], tapply(response, tratamento, mean, na.rm=TRUE))[rownames(comp)]} names(tukey)[i]=levels(local)[i] dadosm=data.frame(comp, media=with(dados[dados$local==levels(dados$local)[i],], tapply(response, tratamento, mean, na.rm=TRUE))[rownames(comp)], desvio=with(dados[dados$local==levels(dados$local)[i],], tapply(response, tratamento, sd, na.rm=TRUE))[rownames(comp)]) dadosm$trats=factor(rownames(dadosm),unique(trat)) dadosm$limite=dadosm$media+dadosm$desvio dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups) dadosm=dadosm[unique(as.character(trat)),] media=dadosm$media desvio=dadosm$desvio limite=dadosm$limite trats=dadosm$trats letra=dadosm$letra grafico=ggplot(dadosm, aes(x=trats, y=media))+ geom_col(aes(fill=trats),fill=fill,color=1)+ theme_classic()+ ylab(ylab)+ xlab(xlab)+ylim(0,1.5*max(limite))+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), color="black",width=0.3)+ geom_text(aes(y=media+desvio+sup, label=letra))+ theme(text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), legend.position = "none")} graficos[[i]]=grafico} print(anova,na.print = "") teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(italic(paste("Multiple Comparison Test:",teste,"\n")))) cat(green(bold("-----------------------------------------------------------------\n"))) print(tukey)} if(a$`Pr(>F)`[1] < alpha.f && quali==FALSE | qmresmedio > homog.value && quali==FALSE){ for(i in 1:length(levels(local))){ data=dados[dados$local==levels(dados$local)[i],] 
dose1=data$tratnum resp=data$response grafico=polynomial(dose1, resp,grau = grau, textsize=textsize, family=family, ylab=ylab, xlab=xlab, theme=theme,SSq = SQconj, DFres = GLconj, posi="top", se=errorbar)[[1]] graficos[[i]]=grafico[[1]]}} } if(a$`Pr(>F)`[1] > alpha.f && qmresmedio < homog.value){ GLconj=datas$Df[4] SQconj=datas$`Sum Sq`[4] QMconj=datas$`Mean Sq`[4] if(quali==TRUE){ if(mcomp=="tukey"){ tukeyjuntos=TUKEY(resp,tratamento,a$Df[1], a$`Mean Sq`[1], alpha = alpha.t) if(transf!="1"){tukeyjuntos$groups$repo=tapply(response, tratamento, mean, na.rm=TRUE)[rownames(tukeyjuntos$groups)]} tukeyjuntos=tukeyjuntos$groups} if(mcomp=="duncan"){ tukeyjuntos=duncan(resp,tratamento,a$Df[1], a$`Mean Sq`[1], alpha = alpha.t) if(transf!="1"){tukeyjuntos$groups$repo=tapply(response, tratamento, mean, na.rm=TRUE)[rownames(tukeyjuntos$groups)]} tukeyjuntos=tukeyjuntos$groups} if(mcomp=="lsd"){ tukeyjuntos=LSD(resp,tratamento,a$Df[1], a$`Mean Sq`[1], alpha = alpha.t) if(transf!="1"){tukeyjuntos$groups$repo=tapply(response, tratamento, mean, na.rm=TRUE)[rownames(tukeyjuntos$groups)]} tukeyjuntos=tukeyjuntos$groups} if(mcomp=="sk"){ nrep=table(tratamento)[1] medias=sort(tapply(resp,tratamento,mean),decreasing = TRUE) letra=scottknott(means = medias, df1 = a$Df[1], nrep = nrep, QME = a$`Mean Sq`[1], alpha = alpha.t) tukeyjuntos=data.frame(resp=medias,groups=letra) if(transf!="1"){tukeyjuntos$respo=tapply(response, tratamento, mean, na.rm=TRUE)[rownames(tukeyjuntos)]}} dadosm=data.frame(tukeyjuntos, media=tapply(response, tratamento, mean, na.rm=TRUE)[rownames(tukeyjuntos)], desvio=tapply(response, tratamento, sd, na.rm=TRUE)[rownames(tukeyjuntos)]) dadosm$trats=factor(rownames(dadosm),unique(trat)) dadosm$limite=dadosm$media+dadosm$desvio dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups) dadosm=dadosm[unique(as.character(trat)),] media=dadosm$media desvio=dadosm$desvio limite=dadosm$limite trats=dadosm$trats letra=dadosm$letra grafico1=ggplot(dadosm, 
aes(x=trats,y=media)) if(fill=="trat"){grafico1=grafico1+ geom_col(aes(fill=trats),color=1)}else{grafico1=grafico1+ geom_col(aes(fill=trats),fill=fill,color=1)} if(errorbar==TRUE){grafico1=grafico1+ geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio}, label=letra),family=family)} if(errorbar==FALSE){grafico1=grafico1+ geom_text(aes(y=media+sup, label=letra),family=family)} if(errorbar==TRUE){grafico1=grafico1+ geom_errorbar(data=dadosm,aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black", width=0.3)} grafico1=grafico1+theme+ ylab(ylab)+ xlab(xlab)+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family), legend.position = "none") if(angulo !=0){grafico1=grafico1+theme(axis.text.x=element_text(hjust = 1.01,angle = angulo))} teste=if(mcomp=="tukey"){"Tukey HSD"}else{ if(mcomp=="sk"){"Scott-Knott"}else{ if(mcomp=="lsd"){"LSD-Fischer"}else{ if(mcomp=="duncan"){"Duncan"}}}} cat(green(italic(paste("Multiple Comparison Test:",teste,"\n")))) cat(green(bold("-----------------------------------------------------------------\n"))) print(tukeyjuntos) print(grafico1) graficos=list(grafico1) } if(quali==FALSE){grafico1=polynomial(tratnum, response,grau = grau, textsize=textsize, family=family, ylab=ylab, xlab=xlab, theme=theme, SSq = SQconj, DFres = GLconj, posi="top", se=errorbar) } graficos=list(grafico1[[1]]) } cat(if(transf=="1"){}else{blue("\nNOTE: resp = transformed means; respO = averages without transforming\n")}) if(transf==1 && norm1$p.value<0.05 | transf==1 && indep$p.value<0.05 | transf==1 && homog1$p.value<0.05){cat(red("\n \nWarning!!! Your analysis is not valid, suggests using a non-parametric test and try to transform the data"))} if(transf != 1 && norm1$p.value<0.05 | transf!=1 && indep$p.value<0.05 | transf!=1 && homog1$p.value<0.05){cat(red("\n \nWarning!!! 
Your analysis is not valid, suggests using a non-parametric test"))} # print(graficos) graph=as.list(graficos) }
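The homogeneity criterion applied above (the ratio between the largest and smallest residual mean squares across experiments must not exceed 7, per Ferreira et al., 2018) can be illustrated on its own. The following is a standalone sketch with simulated data, not part of the package; the object names (`dados`, `qmres`, `qmresmedio`) mirror the ones used in the function but are local to the example:

```r
# Standalone sketch of the QMres homogeneity criterion used above.
# Data are simulated; the threshold 7 follows Ferreira et al. (2018).
set.seed(1)
dados = data.frame(local = rep(c("E1","E2","E3"), each = 12),
                   trat  = rep(rep(c("T1","T2","T3"), each = 4), 3),
                   resp  = rnorm(36, 10, 1))
# Residual mean square of a one-way ANOVA fitted within each experiment
qmres = sapply(split(dados, dados$local), function(d)
  anova(aov(resp ~ trat, data = d))$`Mean Sq`[2])
# Ratio of the largest to the smallest residual mean square;
# joint analysis is admissible when this stays below homog.value (7)
qmresmedio = max(qmres)/min(qmres)
```

If `qmresmedio` exceeds the reference value, the function above falls back to analyzing each experiment separately.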
# Source file: AgroR/R/conjdic_function.R
#' Analysis: Joint analysis of experiments in randomized block design in a double factorial scheme
#'
#' @description Function of the AgroR package for the joint analysis of experiments conducted in a double factorial scheme in a randomized block design with balanced data. The function generates the joint analysis through two models. Model 1: the F-tests of the effects of Factor 1, Factor 2 and the F1 x F2 interaction use the mean square of the interaction with the year as the error term. Model 2: the F-tests of the Factor 1, Factor 2 and F1 x F2 interaction effects use the mean square of the residual as the error term.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param f1 Numeric or complex vector with factor 1 levels
#' @param f2 Numeric or complex vector with factor 2 levels
#' @param block Numerical or complex vector with blocks
#' @param experiment Numeric or complex vector with locations or times
#' @param response Numerical vector containing the response of the experiment.
#' @param transf Applies data transformation (default is 1; for log consider 0)
#' @param constant Add a constant for transformation (enter value)
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param homog Homogeneity test of variances (\emph{default} is Bartlett)
#' @param homog.value Reference value for homogeneity of experiments. By default, this ratio should not be greater than 7
#' @param model Define model of the analysis of variance
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @return Returns the assumptions of the analysis of variance, the assumption of the joint analysis by means of a matrix of residual mean square ratios, and the analysis of variance
#'
#' @note The function is still limited to the analysis of variance and its assumptions only.
#'
#' @references
#'
#' Ferreira, P. V. Estatistica experimental aplicada a agronomia. Edufal, 2018.
#'
#' Steel, R. G. D., Torrie, J. H., Dickey, D. A. Principles and Procedures of Statistics: A Biometrical Approach. Third Edition, 1997.
#'
#' Hsu, J. C. Multiple Comparisons: Theory and Methods. Department of Statistics, The Ohio State University. Chapman & Hall/CRC, 1996.
#'
#' Conover, W. J. Practical Nonparametric Statistics. 1999.
#'
#' Ramalho, M. A. P., Ferreira, D. F., Oliveira, A. C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' @keywords Double factorial
#' @keywords Joint Analysis
#' @export
#' @examples
#' library(AgroR)
#' ano=factor(rep(c(2018,2019,2020),e=48))
#' f1=rep(rep(c("A","B","C"),e=16),3)
#' f2=rep(rep(rep(c("a1","a2","a3","a4"),e=4),3),3)
#' resp=rnorm(48*3,10,1)
#' bloco=rep(c("b1","b2","b3","b4"),36)
#' dados=data.frame(ano,f1,f2,resp,bloco)
#' with(dados,conjfat2dbc(f1,f2,bloco,ano,resp, model=1))

conjfat2dbc=function(f1,
                     f2,
                     block,
                     experiment,
                     response,
                     transf = 1,
                     constant = 0,
                     model = 1,
                     norm = "sw",
                     homog = "bt",
                     homog.value = 7,
                     alpha.f = 0.05,
                     alpha.t = 0.05) {
  sup=0.2*mean(response, na.rm=TRUE)
  requireNamespace("crayon")
  requireNamespace("ggplot2")
  if(transf==1){resp=response+constant}else{
    if(transf!="angular"){resp=((response+constant)^transf-1)/transf}}
  if(transf==0){resp=log(response+constant)}
  if(transf==0.5){resp=sqrt(response+constant)}
  if(transf==-0.5){resp=1/sqrt(response+constant)}
  if(transf==-1){resp=1/(response+constant)}
  if(transf=="angular"){resp=asin(sqrt((response+constant)/100))}
  f1=factor(f1,unique(f1))
  f2=factor(f2,unique(f2))
  bloco=as.factor(block)
  local=as.factor(experiment)
  #============================================================
  dados=data.frame(f1,f2,bloco,local,resp,response)
  anova=c()
  nlocal=length(levels(local))
  qmres=data.frame(QM=1:nlocal)
  #============================================================
  for(i in
1:length(levels(local))){ anova[[i]]=anova(aov(resp~f1*f2+bloco, data=dados[dados$local==levels(dados$local)[i],])) qm=anova[[i]]$`Mean Sq`[5] qmres[i,1]=c(qm) names(anova)[i]=levels(local)[i]} matriza=matrix(rep(qmres[[1]],e=length(qmres[[1]])), ncol=length(qmres[[1]])) matrizb=matrix(rep(qmres[[1]],length(qmres[[1]])), ncol=length(qmres[[1]])) ratio=matriza/matrizb rownames(ratio)=levels(local) colnames(ratio)=levels(local) razao=data.frame(resp1=c(ratio), var1=rep(rownames(ratio),e=length(rownames(ratio))), var2=rep(colnames(ratio),length(colnames(ratio)))) var1=razao$var1 var2=razao$var2 resp1=razao$resp1 ratioplot=ggplot(razao, aes(x=var2, y=var1, fill=resp1))+ geom_tile(color="gray50",size=1)+ scale_x_discrete(position = "top")+ scale_fill_distiller(palette = "RdBu",direction = 1)+ ylab("Numerator")+ xlab("Denominator")+ geom_label(aes(label=format(resp1,digits=2)),fill="white")+ labs(fill="ratio")+ theme(axis.text = element_text(size=12,color="black"), legend.text = element_text(size=12), axis.ticks = element_blank(), panel.background = element_blank(), panel.grid.major = element_blank(), panel.grid.minor = element_blank())+ labs(caption = "The ratio must be less than 7 (Ferreira et al., 2018)", title="Matrix of average square of the residue") print(ratioplot) QMRES=as.vector(qmres$QM) qmresmedio=max(QMRES)/min(QMRES) # modelo 1 # modelo tratando f1, f2 e interacao em relação a interacao (f1 x f2 x ano) if(model==1){interacao=anova(aov(resp~f1+f2+f1:f2+local:bloco+f1:f2:local)) interacao$`Mean Sq`[6]=mean(QMRES) interacao$`Sum Sq`[6]=interacao$`Mean Sq`[6]*interacao$Df[6] interacao$`F value`[5]=interacao$`Mean Sq`[5]/interacao$`Mean Sq`[6] interacao$`Pr(>F)`[5]=1-pf(interacao$`F value`[5],interacao$Df[6],interacao$Df[6]) interacao$`F value`[1:4]=interacao$`Mean Sq`[1:4]/interacao$`Mean Sq`[5] interacao$`Pr(>F)`[1:4]=1-pf(interacao$`F value`[1:4],interacao$Df[1:4],interacao$Df[5]) pfint=interacao$`Pr(>F)`[5]} # modelo 2 # todos fixos if(model==2){ interacao 
<- anova(aov(resp ~ f1*f2+local:bloco + f1:f2:local)) pfint=interacao$`Pr(>F)`[5]} #======================================================================= d=aov(resp~f1+f2+f1:f2+local:bloco+f1:f2:local) if(norm=="sw"){norm1 = shapiro.test(d$res)} if(norm=="li"){norm1=nortest::lillie.test(d$residuals)} if(norm=="ad"){norm1=nortest::ad.test(d$residuals)} if(norm=="cvm"){norm1=nortest::cvm.test(d$residuals)} if(norm=="pearson"){norm1=nortest::pearson.test(d$residuals)} if(norm=="sf"){norm1=nortest::sf.test(d$residuals)} if(homog=="bt"){ homog1 = bartlett.test(d$res ~ paste(f1,f2)) statistic=homog1$statistic phomog=homog1$p.value method=paste("Bartlett test","(",names(statistic),")",sep="") } if(homog=="levene"){ homog1 = levenehomog(d$res~paste(f1,f2)) statistic=homog1$`F value`[1] phomog=homog1$`Pr(>F)`[1] method="Levene's Test (center = median)(F)" names(homog1)=c("Df", "F value","p.value")} indep = dwtest(d) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Normality of errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""), Statistic=norm1$statistic, "p-value"=norm1$p.value) rownames(normal)="" print(normal) cat("\n") cat(if(norm1$p.value>0.05){black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors do not follow a normal distribution"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Homogeneity of Variances"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) homoge=data.frame(Method=method, Statistic=statistic, "p-value"=phomog) rownames(homoge)="" print(homoge) cat("\n") cat(if(homog1$p.value>0.05){black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous\n"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Independence from errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) indepe=data.frame(Method=paste(indep$method,"(", names(indep$statistic),")",sep=""), Statistic=indep$statistic, "p-value"=indep$p.value) rownames(indepe)="" print(indepe) cat("\n") cat(black(if(indep$p.value>0.05){black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors are not independent"})) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Test Homogeneity of experiments"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(qmresmedio) cat("\nBased on the analysis of variance and homogeneity of experiments, it can be concluded that: ") if(qmresmedio<homog.value && pfint[1]>alpha.f){ message(black("The experiments can be analyzed together"))}else{ message("Experiments cannot be analyzed together (Separate by experiment)")} cat("\n\n") modres=anova(d) respad=d$res/sqrt(modres$`Mean Sq`[6]) out=respad[respad>3 | respad<(-3)] out=names(out) out=if(length(out)==0)("No discrepant point")else{out} resids=d$res/sqrt(modres$`Mean Sq`[6]) Ids=ifelse(resids>3 | resids<(-3), "darkblue","black") residplot=ggplot(data=data.frame(resids,Ids),aes(y=resids,x=1:length(resids)))+ geom_point(shape=21,color="gray",fill="gray",size=3)+ labs(x="",y="Standardized residuals")+ geom_text(x=1:length(resids),label=1:length(resids),color=Ids,size=4)+ scale_x_continuous(breaks=1:length(resids))+ theme_classic()+theme(axis.text.y = element_text(size=12), axis.text.x = element_blank())+ geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1) print(residplot) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Analysis of variance"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(as.matrix(interacao),na.print = "") cat(green(bold("\n-----------------------------------------------------------------\n"))) }
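Model 1 above re-derives the F tests of F1, F2 and the F1 x F2 interaction against a different error stratum by dividing mean squares and recomputing the p-value with `pf()`. The same recomputation pattern can be shown on any `anova()` table. This is a standalone sketch with simulated data, not the package function; here the F of `f1` is tested against the `f1:f2` mean square purely for illustration:

```r
# Recomputing an F test against a chosen error stratum,
# the same mean-square-division pattern used by model = 1.
set.seed(2)
d = data.frame(f1 = rep(c("A","B"), each = 18),
               f2 = rep(rep(c("x","y","z"), each = 6), 2),
               resp = rnorm(36, 10, 1))
tab = anova(aov(resp ~ f1*f2, data = d))
# Rows of tab: f1, f2, f1:f2, Residuals
Fval = unname(tab$`Mean Sq`[1]/tab$`Mean Sq`[3])        # MS(f1)/MS(f1:f2)
pval = 1 - pf(Fval, tab$Df[1], tab$Df[3])               # p-value for that F
```

Swapping the denominator mean square and its degrees of freedom is all that distinguishes the two models.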
# Source file: AgroR/R/conjfatdbc_function.R
#' Graph: Plot Pearson correlation with confidence interval
#'
#' @description Plot Pearson correlation with confidence interval
#' @param data data.frame with responses
#' @param background background fill (\emph{default} is TRUE)
#' @param axis.size Axes font size (\emph{default} is 12)
#' @param ylab y-axis label (Accepts the \emph{expression}() function; \emph{default} is "")
#' @param xlab x-axis label (Accepts the \emph{expression}() function; \emph{default} is "Correlation (r)")
#' @param theme ggplot theme (\emph{default} is theme_classic())
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @return The function returns a new graphical approach to correlation.
#' @importFrom RColorBrewer brewer.pal
#' @importFrom grDevices colorRampPalette
#' @importFrom utils combn
#' @importFrom grDevices blues9
#' @export
#' @examples
#' data("pomegranate")
#' cor_ic(pomegranate[,-1])

cor_ic=function(data,
                background=TRUE,
                axis.size=12,
                ylab="",
                xlab="Correlation (r)",
                theme=theme_classic()){
  method="pearson"
  requireNamespace("RColorBrewer")
  requireNamespace("ggplot2")
  make_gradient <- function(deg = 45, n = 100, cols = blues9) {
    cols <- colorRampPalette(cols)(n + 1)
    rad <- deg / (180 / pi)
    mat <- matrix(data = rep(seq(0, 1, length.out = n) * sin(rad), n),
                  byrow = FALSE, ncol = n) +
      matrix(data = rep(seq(0, 1, length.out = n) * cos(rad), n),
             byrow = TRUE, ncol = n)
    mat <- mat - min(mat)
    mat <- mat / max(mat)
    mat <- 1 + mat * n
    mat <- matrix(data = cols[round(mat)], ncol = n)
    grid::rasterGrob(image = mat,
                     width = unit(1, "npc"),
                     height = unit(1, "npc"),
                     interpolate = TRUE)
  }
  g <- make_gradient(deg = 180, n = 500, cols = brewer.pal(9, "RdBu")[9:1])
  df_list <- lapply(1:(ncol(combn(1:ncol(data), m = 2))),
                    function(y) data[, combn(1:ncol(data), m = 2)[,y]])
  combs=length(df_list)
  combin=1:combs
  combin1=1:combs
  combin2=1:combs
  vari=1:combs
  pvalor=1:combs
  for(i in 1:combs){
    vari[i]=paste(colnames(df_list[[i]])[1],"x", colnames(df_list[[i]])[2])
    combin[i]=cor.test(unlist(df_list[[i]][,1]), unlist(df_list[[i]][,2]),method = method)$estimate
    combin1[i]=cor.test(unlist(df_list[[i]][,1]), unlist(df_list[[i]][,2]),method = method)$conf.int[1]
    combin2[i]=cor.test(unlist(df_list[[i]][,1]), unlist(df_list[[i]][,2]),method = method)$conf.int[2]
    pvalor[i]=cor.test(unlist(df_list[[i]][,1]), unlist(df_list[[i]][,2]),method = method)$p.value
  }
  pvalue=ifelse(pvalor<0.01,"**",ifelse(pvalor<0.05,"*",""))
  data=data.frame(combin,combin1,combin2,vari)
  graph=ggplot(data,aes(x=combin,y=vari))
  if(background==TRUE){graph=graph+
    annotation_custom(grob = g, xmin = -1, xmax = 1, ymin = -Inf, ymax = Inf)}
  graph=graph+geom_vline(xintercept = c(-1,0,1),
                         lty=c(2,2,2),color=c("red","black","blue"),size=1)+
    geom_errorbar(aes(xmin=combin2,xmax=combin1),size=1,width=0.1)+
    geom_point(size=5,shape=21,color="black",fill="gray")+theme+
    geom_label(aes(label=paste(round(combin,2),pvalue,sep = "")),
               vjust=-0.5)+
    theme(axis.text = element_text(size=axis.size))+
    labs(y=ylab, x=xlab)
  print(graph)
}
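For each pair of columns, cor_ic() plots the Pearson r together with the confidence interval that `cor.test()` reports via its `conf.int` component. Extracting that interval for a single pair looks like this (standalone sketch with simulated data, not part of the package):

```r
# Pearson r with its confidence interval, as plotted by cor_ic().
set.seed(4)
x = rnorm(30)
y = x + rnorm(30)
ct = cor.test(x, y, method = "pearson")
r  = unname(ct$estimate)   # point estimate of the correlation
ic = ct$conf.int           # lower and upper limits (95% by default)
```

The error bars in the plot are exactly these two limits, drawn around the point estimate.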
# Source file: AgroR/R/coric_function.R
#' Dataset: Corn
#'
#' @description A 3 x 2 factorial experiment was carried out to compare three
#' new corn hybrids considering the change in sowing density, being
#' 55 thousand or 65 thousand seeds per hectare. For this case,
#' the researcher is not interested in estimating values for other
#' densities, but only in verifying if one density differs from
#' the other. The experiment was carried out according to a
#' completely randomized design with 4 repetitions of each treatment.
#'
#' @docType data
#'
#' @usage data(corn)
#'
#' @format data.frame containing data set
#' \describe{
#'   \item{\code{A}}{Categorical vector with hybrids}
#'   \item{\code{B}}{Categorical vector with density}
#'   \item{\code{resp}}{Numeric vector with response}
#'   }
#' @keywords datasets
#' @seealso \link{enxofre}, \link{laranja}, \link{mirtilo}, \link{pomegranate}, \link{porco}, \link{sensorial}, \link{simulate1}, \link{simulate2}, \link{simulate3}, \link{tomate}, \link{weather}, \link{phao}, \link{passiflora}, \link{aristolochia}
#'
#' @examples
#' data(corn)
"corn"
# Source file: AgroR/R/corn_dataset.R
#' Graph: Correlogram
#'
#' @description Correlation analysis function (Pearson or Spearman)
#' @param data data.frame with responses
#' @param axissize Axes font size (\emph{default} is 12)
#' @param legendsize Legend font size (\emph{default} is 12)
#' @param legendposition Legend position (\emph{default} is c(0.9,0.2))
#' @param legendtitle Legend title (\emph{default} is "Correlation")
#' @param method Correlation method (\emph{default} is "pearson")
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @return The function prints a table of the pairwise correlations with their p-values and draws a correlogram.
#' @export
#' @examples
#' data("pomegranate")
#' corgraph(pomegranate[,-1])

corgraph=function(data,
                  axissize=12,
                  legendsize=12,
                  legendposition=c(0.9,0.2),
                  legendtitle="Correlation",
                  method="pearson"){
  dm=data
  requireNamespace("ggplot2")
  pearson=function(data,method="pearson"){
    corre=cor(data,method=method)
    col_combinations = expand.grid(names(data), names(data))
    cor_test_wrapper = function(col_name1, col_name2, data_frame) {
      cor.test(data_frame[[col_name1]], data_frame[[col_name2]],method=method,exact=FALSE)$p.value}
    p_vals = mapply(cor_test_wrapper,
                    col_name1 = col_combinations[[1]],
                    col_name2 = col_combinations[[2]],
                    MoreArgs = list(data_frame = data))
    pvalue=matrix(p_vals, ncol(data), ncol(data),
                  dimnames = list(names(data), names(data)))
    list(r=corre,P=pvalue)}
  cr <- cor(dm,method = method)
  cr[upper.tri(cr, diag=TRUE)] <- NA
  dnovo=expand.grid(rownames(cr),colnames(cr))
  dnovo$cor=c(cr)
  dados=dnovo[!is.na(dnovo$cor),]
  pvalor=pearson(dm,method=method)
  pvalor=pvalor$P
  pvalor[upper.tri(pvalor, diag=TRUE)] <- NA
  dnovo1=expand.grid(rownames(pvalor),colnames(pvalor))
  dnovo1$p=c(pvalor)
  pvalor=dnovo1[!is.na(dnovo1$p),]
  p=ifelse(unlist(pvalor$p)<0.01,"**",
           ifelse(unlist(pvalor$p)<0.05,"*"," "))
  dados$p=p
  dados1=data.frame(dados[,1:3],p=pvalor$p)
  print(dados1)
  Var2=dados$Var2
  Var1=dados$Var1
  cor=dados$cor
  p=dados$p
  grafico=ggplot(dados,aes(x=Var2, y=Var1, fill=cor))+
    geom_tile(color="gray50",size=1)+
    scale_x_discrete(position = "top")+
    scale_fill_distiller(palette = "RdBu",direction = 1,limits=c(-1,1))+
    geom_label(aes(label=paste(format(cor,digits=2),p)),
               fill="lightyellow",label.size = 1)+
    ylab("")+xlab("")+
    labs(fill=legendtitle)+
    theme(axis.text = element_text(size=axissize,color="black"),
          legend.text = element_text(size=legendsize),
          legend.position = legendposition,
          axis.ticks = element_blank(),
          panel.background = element_blank(),
          panel.grid.major = element_blank(),
          panel.grid.minor = element_blank())+
    labs(caption = "*p<0.05; **p<0.01")
  print(grafico)
}
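The significance stars that corgraph() prints come from pairwise `cor.test()` p-values over the lower triangle of the correlation matrix. The same table can be assembled directly; this is a standalone sketch with simulated data, and the names `pares`, `res` and `tabela` are local to the example:

```r
# Pairwise Pearson r with significance flags, as assembled inside corgraph().
set.seed(3)
dat = data.frame(a = rnorm(20), b = rnorm(20), c = rnorm(20))
pares = combn(names(dat), 2)                 # all column pairs
res = apply(pares, 2, function(v){
  ct = cor.test(dat[[v[1]]], dat[[v[2]]], method = "pearson")
  c(r = unname(ct$estimate), p = ct$p.value)})
tabela = data.frame(t(res),
                    sig = ifelse(res["p",] < 0.01, "**",
                          ifelse(res["p",] < 0.05, "*", "")))
rownames(tabela) = paste(pares[1,], "x", pares[2,])
```

Each row of `tabela` corresponds to one tile of the correlogram: the estimate, its p-value, and the star code shown in the label.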
# Source file: AgroR/R/correlation_function.R
#' Dataset: Covercrops
#'
#' @description Consider a 3 x 3 factorial experiment in randomized blocks, with
#' 4 replications, on the influence of three new soybean cultivars (A1, A2 and A3)
#' and the use of three types of green manure (B1, B2 and B3) on yield in 100 m2 plots.
#'
#' @docType data
#'
#' @usage data(covercrops)
#'
#' @format data.frame containing data set
#' \describe{
#'   \item{\code{A}}{Categorical vector with cultivars}
#'   \item{\code{B}}{Categorical vector with green manure}
#'   \item{\code{Bloco}}{Categorical vector with block}
#'   \item{\code{Resp}}{Numeric vector with yield}
#'   }
#' @keywords datasets
#' @seealso \link{enxofre}, \link{laranja}, \link{mirtilo}, \link{pomegranate}, \link{porco}, \link{sensorial}, \link{simulate1}, \link{simulate2}, \link{simulate3}, \link{tomate}, \link{weather}, \link{phao}, \link{passiflora}, \link{aristolochia}
#'
#' @examples
#' data(covercrops)
"covercrops"
#' Utils: Experimental sketch
#'
#' @description Experimental sketching function
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @name sketch
#' @param trat Vector with factor A levels
#' @param trat1 Vector with levels of factor B (set to NULL if not factorial or split-plot)
#' @param trat2 Vector with levels of factor C (set to NULL if not factorial)
#' @param r Number of replications
#' @param design Experimental design (see note)
#' @param pos Replicate position ("line" or "column")
#' @param color.sep Criterion for coloring the boxes (see note)
#' @param ID If TRUE, plots only the plot identification (ID) in the sketch
#' @param print.ID Print the table with the plot IDs
#' @param label.x X-axis label
#' @param label.y Y-axis label
#' @param labelsize Label size
#' @param legendsize Legend title size
#' @param axissize Axis size
#' @param add.streets.x Adds streets separating treatments by row or column. The user must supply a numeric vector grouping the rows or columns that must stay together. See the example.
#' @param add.streets.y Adds streets separating treatments by row or column. The user must supply a numeric vector grouping the rows or columns that must stay together. See the example.
#' @param export.csv Save a table template based on the sketch as a csv file
#' @param comment.caption Add a comment in the caption
#' @importFrom utils write.csv
#' @keywords croqui
#' @keywords experimental
#' @return Returns an experimental sketch according to the specified design.
#' @note The sketches only have a rectangular shape, and the blocks (in the case of randomized blocks) can be arranged in a line or in a column.
#' @note For the design argument, you can choose from the following options:
#' \describe{
#' \item{\code{design="DIC"}}{Completely randomized design}
#' \item{\code{design="DBC"}}{Randomized block design}
#' \item{\code{design="DQL"}}{Latin square design}
#' \item{\code{design="FAT2DIC"}}{DIC experiments in double factorial}
#' \item{\code{design="FAT2DBC"}}{DBC experiments in double factorial}
#' \item{\code{design="FAT3DIC"}}{DIC experiments in triple factorial}
#' \item{\code{design="FAT3DBC"}}{DBC experiments in triple factorial}
#' \item{\code{design="PSUBDIC"}}{DIC experiments in split-plot}
#' \item{\code{design="PSUBDBC"}}{DBC experiments in split-plot}
#' \item{\code{design="PSUBSUBDBC"}}{DBC experiments in split-split-plot}
#' \item{\code{design="STRIP-PLOT"}}{Strip-plot DBC experiments}
#' }
#' @note For the color.sep argument, you can choose from the following options:
#' \describe{
#' \item{\code{design="DIC"}}{use "all" or "none"}
#' \item{\code{design="DBC"}}{use "all", "block" or "none"}
#' \item{\code{design="DQL"}}{use "all", "column", "line" or "none"}
#' \item{\code{design="FAT2DIC"}}{use "all", "f1", "f2" or "none"}
#' \item{\code{design="FAT2DBC"}}{use "all", "f1", "f2", "block" or "none"}
#' \item{\code{design="FAT3DIC"}}{use "all", "f1", "f2", "f3" or "none"}
#' \item{\code{design="FAT3DBC"}}{use "all", "f1", "f2", "f3", "block" or "none"}
#' \item{\code{design="PSUBDIC"}}{use "all", "f1", "f2" or "none"}
#' \item{\code{design="PSUBDBC"}}{use "all", "f1", "f2", "block" or "none"}
#' \item{\code{design="PSUBSUBDBC"}}{use "all", "f1", "f2", "f3", "block" or "none"}
#' }
#'
#' @references
#' Mendiburu, F., & de Mendiburu, M. F. (2019). Package ‘agricolae’. R Package, Version, 1-2.
#' @export #' @examples #' Trat=paste("Tr",1:6) #' #' #============================= #' # Completely randomized design #' #============================= #' sketch(Trat,r=3) #' sketch(Trat,r=3,pos="column") #' sketch(Trat,r=3,color.sep="none") #' sketch(Trat,r=3,color.sep="none",ID=TRUE,print.ID=TRUE) #' sketch(Trat,r=3,pos="column",add.streets.x=c(1,1,2,2,3,3)) #' #' #============================= #' # Randomized block design #' #============================= #' sketch(Trat, r=3, design="DBC") #' sketch(Trat, r=3, design="DBC",pos="column") #' sketch(Trat, r=3, design="DBC",pos="column",add.streets.x=c(1,1,2)) #' sketch(Trat, r=3, design="DBC",pos="column",add.streets.x=c(1,2,3), add.streets.y=1:6) #' sketch(Trat, r=3, design="DBC",pos="line",add.streets.y=c(1,2,3), add.streets.x=1:6) #' #' #============================= #' # Completely randomized experiments in double factorial #' #============================= #' sketch(trat=c("A","B"), #' trat1=c("A","B","C"), #' design = "FAT2DIC", #' r=3) #' #' sketch(trat=c("A","B"), #' trat1=c("A","B","C"), #' design = "FAT2DIC", #' r=3, #' pos="column") sketch=function(trat, trat1=NULL, trat2=NULL, r, design="DIC", pos="line", color.sep="all", ID=FALSE, print.ID=TRUE, add.streets.y=NA, add.streets.x=NA, label.x="", label.y="", axissize=12, legendsize=12, labelsize=4, export.csv=FALSE, comment.caption=NULL){ requireNamespace("ggplot2") design.crd <-function(trt,r,serie=2,seed=0,kinds="Super-Duper",randomization=TRUE) { number<-0 if(serie>0) number<-10^serie junto<-data.frame(trt,r) junto<-junto[order(junto[,1]),] TR<-as.character(junto[,1]) r<-as.numeric(junto[,2]) y <- rep(TR[1], r[1]) tr <- length(TR) if (seed == 0) { genera<-runif(1) seed <-.Random.seed[3] } set.seed(seed,kinds) parameters<-list(design="crd",trt=trt,r=r,serie=serie,seed=seed,kinds=kinds,randomization) for (i in 2:tr) y <- c(y, rep(TR[i], r[i])) trat<-y if(randomization)trat <- sample(y, length(y), replace = FALSE) plots <- number+1:length(trat) 
dca<-data.frame(plots, trat) dca[,1]<-as.numeric(dca[,1]) xx<-dca[order(dca[,2],dca[,1]),] r1<-seq(1,r[1]) for (i in 2:length(r)) { r1<-c(r1,seq(1,r[i])) } yy<-data.frame(plots=xx[,1],r=r1,xx[,2]) book<-yy[order(yy[,1]),] rownames(book)<-rownames(yy) names(book)[3]<-c(paste(deparse(substitute(trt)))) outdesign<-list(parameters=parameters,book=book) return(outdesign) } design.rcbd <-function (trt, r,serie=2,seed=0, kinds="Super-Duper",first=TRUE, continue=FALSE,randomization=TRUE){ number<-10 if(serie>0) number<-10^serie ntr <- length(trt) if (seed == 0) { genera<-runif(1) seed <-.Random.seed[3] } set.seed(seed,kinds) parameters<-list(design="rcbd",trt=trt,r=r,serie=serie,seed=seed,kinds=kinds,randomization) mtr <-trt if(randomization)mtr <- sample(trt, ntr, replace = FALSE) block <- c(rep(1, ntr)) for (y in 2:r) { block <- c(block, rep(y, ntr)) if(randomization)mtr <- c(mtr, sample(trt, ntr, replace = FALSE)) } if(randomization){ if(!first) mtr[1:ntr]<-trt } plots <- block*number+(1:ntr) book <- data.frame(plots, block = as.factor(block), trt = as.factor(mtr)) names(book)[3] <- c(paste(deparse(substitute(trt)))) names(book)[3]<-c(paste(deparse(substitute(trt)))) if(continue){ start0<-10^serie if(serie==0) start0<-0 book$plots<-start0+1:nrow(book) } outdesign<-list(parameters=parameters,sketch=matrix(book[,3], byrow = TRUE, ncol = ntr),book=book) return(outdesign) } design.lsd <-function (trt,serie=2,seed=0, kinds="Super-Duper",first=TRUE, randomization=TRUE){ number<-10 if(serie>0) number<-10^serie r <- length(trt) if (seed == 0) { genera<-runif(1) seed <-.Random.seed[3] } set.seed(seed,kinds) parameters<-list(design="lsd",trt=trt,r=r,serie=serie,seed=seed,kinds=kinds,randomization) a <- 1:(r * r) dim(a) <- c(r, r) for (i in 1:r) { for (j in 1:r) { k <- i + j - 1 if (k > r) k <- i + j - r - 1 a[i, j] <- k } } m<-2:r if(randomization)m<-sample(2:r,r-1) a<-a[,c(1,m)] if(randomization){ if (first) { m<-sample(1:r,r) a<-a[m,] }} trat<-trt[a] columna <- rep(gl(r, 1), r) 
fila <- gl(r, r) fila <- as.character(fila) fila <- as.numeric(fila) plots <- fila*number+(1:r) book <- data.frame(plots, row = as.factor(fila), col = as.factor(columna), trat = as.factor(trat)) names(book)[4] <- c(paste(deparse(substitute(trt)))) outdesign<-list(parameters=parameters,sketch=matrix(book[,4], byrow = TRUE, ncol = r),book=book) return(outdesign) } design.split <-function (trt1, trt2,r=NULL, design=c("rcbd","crd","lsd"),serie = 2, seed = 0, kinds = "Super-Duper", first=TRUE,randomization=TRUE){ n1<-length(trt1) n2<-length(trt2) if (seed == 0) { genera<-runif(1) seed <-.Random.seed[3] } set.seed(seed,kinds) design <- match.arg(design) number<-10^serie +1 if (design == "crd") { plan<-design.crd(trt1,r,serie, seed, kinds,randomization) k<-3 } if (design == "rcbd"){ plan<-design.rcbd(trt1,r,serie, seed, kinds, first,randomization) k<-3 } if (design == "lsd") { plan<-design.lsd(trt1,serie, seed, kinds, first,randomization) r<-n1 k<-4 } book<-plan$book parameters<-plan$parameters names(parameters)[2]<-"trt1" parameters$applied<-parameters$design parameters$design<-"split" parameters$trt2<-trt2 j<-0 B<-list() for(i in c(1,7,2,8,3:6)){ j<-j+1 B[[j]]<-parameters[[i]] names(B)[j]<-names(parameters)[i] } nplot<-nrow(book) d<-NULL if(randomization){ for(i in 1:nplot)d<-rbind(d,sample(trt2,n2)) } else{ d<-rbind(d,trt2[1:n2]) } aa<-data.frame(book,trt2=d[,1]) for(j in 2:n2) aa<-rbind(aa,data.frame(book,trt2=d[,j])) aa<-aa[order(aa[,1]),] splots<-rep(gl(n2,1),nplot) book <- data.frame(plots=aa[,1],splots,aa[,-1]) rownames(book)<-1:(nrow(book)) names(book)[k+1] <- c(paste(deparse(substitute(trt1)))) names(book)[k+2] <- c(paste(deparse(substitute(trt2)))) outdesign<-list(parameters=B,book=book) return(outdesign) } design.ab <-function(trt, r=NULL,serie=2,design=c("rcbd","crd","lsd"),seed=0,kinds="Super-Duper", first=TRUE,randomization=TRUE ){ design <- match.arg(design) if( design=="rcbd" | design=="crd") posicion <- 3 else posicion <- 4 serie<-serie; seed<-seed; 
kinds<-kinds; first<-first; ntr<-length(trt) fact<-NULL tr0<-1:trt[1] k<-0 a<-trt[1];b<-trt[2] for(i in 1:a){ for(j in 1:b){ k<-k+1 fact[k]<-paste(tr0[i],j) } } if(ntr >2) { for(m in 3:ntr){ k<-0 tr0<-fact fact<-NULL a<-a*b b<-trt[m] for(i in 1:a){ for(j in 1:b){ k<-k+1 fact[k]<-paste(tr0[i],j) } } } } if(design=="rcbd")plan<-design.rcbd(trt=fact, r, serie, seed, kinds, first,randomization ) if(design=="crd")plan<-design.crd(trt=fact, r, serie, seed, kinds,randomization) if(design=="lsd")plan<-design.lsd(trt=fact, serie, seed, kinds, first,randomization ) parameters<-plan$parameters parameters$applied<-parameters$design parameters$design<-"factorial" plan<-plan$book trt<-as.character(plan[,posicion]) nplan<-nrow(plan) A<-rep(" ",nplan*ntr) dim(A)<-c(nplan,ntr) colnames(A)<-LETTERS[1:ntr] for(i in 1:nplan) { A[i,]<-unlist(strsplit(trt[i], " ")) } A<-as.data.frame(A) book<-data.frame(plan[,1:(posicion-1)],A) outdesign<-list(parameters=parameters,book=book) return(outdesign) } #================= if(design=="DIC" | design=="dic"){sort=design.crd(trat,r,serie=0) data=sort$book data$x=rep(1:length(unique(data$trat)),r) data$x=factor(data$x,unique(data$x)) data$y=rep(1:r,e=length(unique(data$trat))) data$y=factor(data$y,unique(data$y)) x=data$x y=data$y if(color.sep=="all"){separate=data$trat} if(color.sep=="none"){separate=rep("white",e=length(data$trat))} if(pos=="line"){graph=ggplot(data,aes(x=x,y=y))+ geom_tile(aes(fill=separate),color="black")+ labs(x=label.x,y=label.y,fill="Treatments")+ theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize),legend.text=element_text(size=axissize),legend.title=element_text(size=legendsize))} if(pos=="column"){graph=ggplot(data,aes(y=x,x=y,fill=separate))+ geom_tile(color="black")+labs(x=label.x,y=label.y,fill="Treatments")+ theme_classic()+theme(axis.line = 
element_blank())+theme(axis.text=element_text(size=axissize),legend.text=element_text(size=axissize),legend.title=element_text(size=legendsize))} if(is.na(add.streets.x[1])==FALSE | is.na(add.streets.y[1])==FALSE){ if(pos=="line"){ data$y=factor(data$y,levels = rev(unique(data$y))) graph=ggplot(data,aes(x=x,y=y,fill=separate))+ geom_tile(color="black")+ labs(x=label.x,y=label.y,fill="Treatments")+ theme_classic()+theme(axis.text=element_text(size=axissize),legend.text=element_text(size=axissize),legend.title=element_text(size=legendsize)) if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){ ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")} if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){ ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")} if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){ ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")} graph=graph+ theme(axis.text = element_blank(), strip.background = element_blank(), strip.text = element_blank(), line = element_blank())} if(pos=="column"){ graph=ggplot(data,aes(y=x,x=y,fill=separate))+ geom_tile(color="black")+ labs(x=label.x,y=label.y,fill="Treatments")+ theme_classic()+theme(axis.text=element_text(size=axissize),legend.text=element_text(size=axissize),legend.title=element_text(size=legendsize)) if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){ ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")} if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){ 
ruas2=rep(add.streets.y,(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")} if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){ ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 ruas2=rep(add.streets.y,(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")} graph=graph+ theme(axis.text = element_blank(), strip.background = element_blank(), strip.text = element_blank(), line = element_blank())}} if(color.sep=="none"){graph=graph+ scale_fill_manual(values = "white",label="plots")+ labs(fill="")} if(ID==FALSE){graph=graph+geom_text(aes(label=trat),size=labelsize)} if(ID==TRUE){graph=graph+geom_text(aes(label=1:length(trat)),size=labelsize)} tabela=data.frame("ID"=data$plots, "trat"=data$trat)} #================= if(design=="DBC" | design=="dbc"){sort=design.rcbd(trat,r,serie=0) data=sort$book data$x=rep(1:length(unique(data$trat)),r) data$x=factor(data$x,unique(data$x)) x=data$x block=data$block if(color.sep=="all"){separate=data$trat} if(color.sep=="block"){separate=data$block} if(color.sep=="none"){separate=rep("white",e=length(data$trat))} #====================================== # if(line.divisor.block>=2){ # quantcoluna=length(trat)/line.divisor.block # if(is.integer(quantcoluna)==FALSE){ # quant=c(rep(ceiling(quantcoluna),line.divisor.block-1),floor(quantcoluna)) # sublinhas=rep(1:line.divisor.block, # quant) # data$x=rep(rep(1:ceiling(length(unique(data$trat))/line.divisor.block), # line.divisor.block)[1:length(trat)], # ceiling(length(unique(block))/line.divisor.block)) # }else{sublinhas=rep(1:line.divisor.block, # quantcoluna)} # data$block=paste(block,"L",sublinhas)} #=========================================== if(pos=="line"){graph=ggplot(data,aes(x=x,y=block,fill=separate))+ geom_tile(color="black")+labs(y="Block",x=label.x,fill="Treatments")+ 
theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize),axis.title=element_text(size=axissize),legend.title=element_text(size=legendsize),legend.text=element_text(size=axissize))} if(pos=="column"){graph=ggplot(data,aes(y=x,x=block,fill=separate))+ geom_tile(color="black")+labs(y=label.y,x="Block",fill="Treatments")+ theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize),axis.title=element_text(size=axissize),legend.title=element_text(size=legendsize),legend.text=element_text(size=axissize))} if(is.na(add.streets.x[1])==FALSE | is.na(add.streets.y[1])==FALSE){ data$block=factor(data$block,levels = rev(unique(data$block))) if(pos=="line"){ graph=ggplot(data,aes(x=x,y=block,fill=separate))+ geom_tile(color="black")+ labs(x=label.x,y="Block",fill="Treatments")+ theme_classic()+theme(axis.text=element_text(size=axissize), legend.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.title=element_text(size=legendsize)) if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){ ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")} if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){ ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")} if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){ ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")} graph=graph+ theme(axis.text.x = element_blank(), strip.background = element_blank(), strip.text = element_blank(), line = element_blank())} if(pos=="column"){ data$block=factor(data$block,levels = unique(data$block)) 
graph=ggplot(data,aes(y=x,x=block,fill=separate))+ geom_tile(color="black")+ labs(x="Block",y=label.y,fill="Treatments")+ theme_classic()+theme(axis.text=element_text(size=axissize),axis.title=element_text(size=axissize),legend.text=element_text(size=axissize),legend.title=element_text(size=legendsize)) if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){ ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")} if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){ ruas2=rep(add.streets.y,(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")} if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){ ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 ruas2=rep(add.streets.y,(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")} graph=graph+ theme(axis.text.y = element_blank(), strip.background = element_blank(), strip.text = element_blank(), line = element_blank())}} if(color.sep=="none"){graph=graph+ scale_fill_manual(values = "white",label="plots")+ labs(fill="")} if(color.sep=="block"){graph=graph+labs(fill="block")} if(ID==FALSE){graph=graph+geom_text(aes(label=trat),size=labelsize)} if(ID==TRUE){graph=graph+geom_text(aes(label=1:length(trat)),size=labelsize)} tabela=data.frame("ID"=data$plots, "block"=data$block, "trat"=data$trat)} #================= if(design=="DQL" | design=="dql"){sort=design.lsd(trat,r,serie=0) data=sort$book if(color.sep=="all"){separate=data$trat} if(color.sep=="line"){separate=data$row} if(color.sep=="column"){separate=data$col} if(color.sep=="none"){separate=rep("white",e=length(data$trat))} graph=ggplot(data,aes(x=row,y=col,fill=separate))+ geom_tile(color="black")+labs(x="Row",y="Column",fill="Treatments")+ 
theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.text=element_text(size=axissize), legend.title=element_text(size=legendsize)) if(is.na(add.streets.x[1])==FALSE | is.na(add.streets.y[1])==FALSE){ data$row=factor(data$row,levels = rev(unique(data$row))) if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){ ruas1=rep(add.streets.x,e=(length(data$row)/length(add.streets.x))) data$ruas1=ruas1 graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")} if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){ ruas2=rep(rev(add.streets.y),(length(data$row)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")} if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){ ruas1=rep(add.streets.x,e=(length(data$row)/length(add.streets.x))) data$ruas1=ruas1 ruas2=rep(rev(add.streets.y),(length(data$row)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")} graph=graph+ theme(strip.background = element_blank(), strip.text = element_blank(), line = element_blank())} if(color.sep=="none"){graph=graph+ scale_fill_manual(values = "white",label="plots")+ labs(fill="")} if(color.sep=="line"){graph=graph+labs(fill="Line")} if(color.sep=="column"){graph=graph+labs(fill="Column")} if(ID==FALSE){graph=graph+geom_text(aes(label=trat),size=labelsize)} if(ID==TRUE){graph=graph+geom_text(aes(label=1:length(trat)),size=labelsize)} tabela=data.frame("ID"=data$plots, "line"=data$row, "column"=data$col, "trat"=data$trat)} #================= if(design=="PSUBDIC" | design=="psubdic"){sort=design.split(trat,trat1,r,design = "crd",serie=0) data=sort$book data$x=rep(1:length(unique(paste(data$trat,data$trat1))),r) data$x=factor(data$x,unique(data$x)) data$y=rep(1:r,e=length(unique(paste(data$trat,data$trat1)))) data$y=factor(data$y,unique(data$y)) 
x=data$x y=data$y if(color.sep=="all"){separate=paste(data$trat,data$trat1)} if(color.sep=="f1"){separate=data$trat} if(color.sep=="f2"){separate=data$trat1} if(color.sep=="none"){separate=rep("white",e=length(data$trat))} if(pos=="column"){graph=ggplot(data,aes(x=y,y=x,fill=separate))+ geom_tile(color="black")+labs(x=label.x,y=label.y,fill="Treatments")+ theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.text=element_text(size=legendsize), legend.title=element_text(size=legendsize))} if(pos=="line"){graph=ggplot(data,aes(x=x,y=y,fill=separate))+ geom_tile(color="black")+labs(x=label.x,y=label.y,fill="Treatments")+ theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.text=element_text(size=legendsize), legend.title=element_text(size=legendsize))} if(is.na(add.streets.x[1])==FALSE | is.na(add.streets.y[1])==FALSE){ if(pos=="line"){ data$y=factor(data$y,levels = rev(unique(data$y))) graph=ggplot(data,aes(x=x,y=y,fill=separate))+ geom_tile(color="black")+labs(x=label.x,y=label.y,fill="Treatments")+ theme_classic()+theme(axis.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.text=element_text(size=legendsize), legend.title=element_text(size=legendsize)) if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){ ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")} if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){ ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")} if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){ ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 
ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")} graph=graph+ theme(axis.text.x = element_blank(), strip.background = element_blank(), strip.text = element_blank(), line = element_blank())} if(pos=="column"){ data$y=factor(data$y,levels = unique(data$y)) graph=ggplot(data,aes(y=x,x=y,fill=separate))+ geom_tile(color="black")+ labs(x=label.x,y=label.y,fill="Treatments")+ theme_classic()+theme(axis.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.text=element_text(size=legendsize), legend.title=element_text(size=legendsize)) if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){ ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")} if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){ ruas2=rep(rev(add.streets.y),(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")} if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){ ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 ruas2=rep(rev(add.streets.y),(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")} graph=graph+ theme(axis.text.y = element_blank(), strip.background = element_blank(), strip.text = element_blank(), line = element_blank())}} if(color.sep=="none"){graph=graph+ scale_fill_manual(values = "white",label="plots")+ labs(fill="")} if(ID==FALSE){graph=graph+geom_text(aes(label=paste(trat,trat1)),size=labelsize)} if(ID==TRUE){graph=graph+geom_text(aes(label=1:length(paste(trat,trat1))),size=labelsize)} tabela=data.frame("ID"=1:length(data$plots), "plot"=data$trat, "split_plot"=data$trat1, "Repetition"=data$r)} #================ if(design=="PSUBDBC" | 
design=="psubdbc"){sort=design.split(trat,trat1,r,design = "rcbd",serie=0) data=sort$book data$x=rep(1:length(unique(paste(data$trat,data$trat1))),r) data$x=factor(data$x,unique(data$x)) x=data$x block=data$block if(color.sep=="all"){separate=paste(data$trat,data$trat1)} if(color.sep=="block"){separate=data$block} if(color.sep=="f1"){separate=data$trat} if(color.sep=="f2"){separate=data$trat1} if(color.sep=="none"){separate=rep("white",e=length(data$trat))} #====================================== # if(line.divisor.block>=2){ # quantcoluna=length(unique(paste(data$trat,data$trat1)))/line.divisor.block # if(is.integer(quantcoluna)==FALSE){ # quant=c(rep(ceiling(quantcoluna),line.divisor.block-1),floor(quantcoluna)) # sublinhas=rep(1:line.divisor.block, # quant) # data$x=rep(rep(1:ceiling(length(unique(paste(data$trat,data$trat1)))/ # line.divisor.block), # line.divisor.block)[1:length(unique(paste(data$trat,data$trat1)))], # length(unique(block))/line.divisor.block) # }else{sublinhas=rep(1:line.divisor.block, # quantcoluna)} # data$block=paste(block,"L",sublinhas)} if(pos=="column"){graph=ggplot(data,aes(y=x,x=block,fill=separate))+ geom_tile(color="black")+labs(x="Block",y=label.y,fill="Treatments")+ theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.text=element_text(size=legendsize), legend.title=element_text(size=legendsize))} if(pos=="line"){graph=ggplot(data,aes(y=block,x=x,fill=separate))+ geom_tile(color="black")+labs(x=label.x,y="Block",fill="Treatments")+ theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.text=element_text(size=legendsize), legend.title=element_text(size=legendsize))} if(is.na(add.streets.x[1])==FALSE | is.na(add.streets.y[1])==FALSE){ if(pos=="line"){ data$y=factor(data$block,levels = rev(unique(data$block))) graph=ggplot(data,aes(x=x,y=y,fill=separate))+ 
geom_tile(color="black")+labs(x=label.x,y="Block",fill="Treatments")+ theme_classic()+theme(axis.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.text=element_text(size=legendsize), legend.title=element_text(size=legendsize)) if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){ ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")} if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){ ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")} if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){ ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")} graph=graph+ theme(axis.text.x = element_blank(), strip.background = element_blank(), strip.text = element_blank(), line = element_blank())} if(pos=="column"){ data$y=factor(data$block,levels = unique(data$block)) graph=ggplot(data,aes(y=x,x=y,fill=separate))+ geom_tile(color="black")+ labs(x="Block",y=label.y,fill="Treatments")+ theme_classic()+theme(axis.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.text=element_text(size=legendsize), legend.title=element_text(size=legendsize)) if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){ ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")} if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){ ruas2=rep(rev(add.streets.y),(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")} if(is.na(add.streets.x[1])==FALSE & 
is.na(add.streets.y[1])==FALSE){ ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 ruas2=rep(rev(add.streets.y),(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")} graph=graph+ theme(axis.text.y = element_blank(), strip.background = element_blank(), strip.text = element_blank(), line = element_blank())}} if(color.sep=="none"){graph=graph+ scale_fill_manual(values = "white",label="plots")+ labs(fill="")} if(color.sep=="block"){graph=graph+labs(fill="block")} if(ID==FALSE){graph=graph+geom_text(aes(label=paste(trat,trat1)),size=labelsize)} if(ID==TRUE){graph=graph+geom_text(aes(label=1:length(paste(trat,trat1))),size=labelsize)} tabela=data.frame("ID"=1:length(data$plots), "plot"=data$trat, "split_plot"=data$trat1, "Block"=data$block)} faixas=function(t1,t2,r){ faixas.design=function (trt1, trt2, r, serie = 2, seed = 0, kinds = "Super-Duper", randomization = TRUE){ number <- 10 if (serie > 0) number <- 10^serie n1 <- length(trt1) n2 <- length(trt2) if (seed == 0) { genera <- runif(1) seed <- .Random.seed[3]} set.seed(seed, kinds) a <- trt1[1:n1] b <- trt2[1:n2] if (randomization) { a <- sample(trt1, n1) b <- sample(trt2, n2)} fila <- rep(b, n1) columna <- a[gl(n1, n2)] block <- rep(1, n1 * n2) if (r > 1) { for (i in 2:r) { a <- trt1[1:n1] b <- trt2[1:n2] if (randomization){ a <- sample(trt1, n1) b <- sample(trt2, n2)} fila <- c(fila, rep(b, n1)) columna <- c(columna, a[gl(n1, n2)]) block <- c(block, rep(i, n1 * n2)) } } parameters <- list(design = "strip", trt1 = trt1, trt2 = trt2, r = r, serie = serie, seed = seed, kinds = kinds) plots <- block * number + 1:(n1 * n2) book <- data.frame(plots, block = as.factor(block), column = as.factor(columna), row = as.factor(fila)) names(book)[3] <- c(paste(deparse(substitute(trt1)))) names(book)[4] <- c(paste(deparse(substitute(trt2)))) outdesign <- list(parameters = parameters, book = book) return(outdesign) } 
outdesign <-faixas.design(t1,t2,r, serie=2,seed=45,kinds ="Super-Duper") # seed = 45 book <-outdesign$book # field book book$block=factor(book$block,levels = unique(book$block)) graphs=as.list(1:length(levels(book$block))) for(i in 1:length(levels(book$block))){ d1=book[book$block==levels(book$block)[i],] d1$t1=factor(d1$t1,unique(d1$t1)) d1$t2=factor(d1$t2,unique(d1$t2)) graphs[[i]]=ggplot(d1,aes(x=t1,y=t2,fill=paste(t1,t2)))+geom_tile(color="black",show.legend = FALSE)+ facet_wrap(~paste("Block",block))+ ylab("")+xlab("")+ geom_text(aes(label=paste(t1,t2)))+ theme_classic()+theme(axis.line = element_blank(), axis.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.text=element_text(size=legendsize), legend.title=element_text(size=legendsize), strip.text = element_text(size=12))} requireNamespace("cowplot") graph=do.call("plot_grid", c(graphs, ncol=length(levels(book$block)))) print(graph)} if(design=="STRIP-PLOT" | design=="stripplot"){(graph=faixas(trat,trat1,r))} #================= if(design=="FAT2DIC" | design=="fat2dic"){sort=design.ab(c(length(trat),length(trat1)),r,design = "crd",serie=0) sort$book$A=as.factor(sort$book$A) sort$book$B=as.factor(sort$book$B) levels(sort$book$A)=trat levels(sort$book$B)=trat1 sort$book$trat=paste(sort$book$A,sort$book$B) data=sort$book data$x=rep(1:length(unique(paste(data$trat,data$trat1))),r) data$x=factor(data$x,unique(data$x)) data$y=rep(1:r,e=length(unique(paste(data$trat,data$trat1)))) data$y=factor(data$y,unique(data$y)) A=data$A B=data$B x=data$x y=data$y if(color.sep=="all"){separate=paste(data$A,data$B)} if(color.sep=="f1"){separate=data$A} if(color.sep=="f2"){separate=data$B} if(color.sep=="none"){separate=rep("white",e=length(data$A))} if(pos=="column"){graph=ggplot(data,aes(x=y,y=x,fill=separate))+ geom_tile(color="black")+labs(x=label.x,y=label.y,fill="Treatments")+ theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize), 
axis.title=element_text(size=axissize),
                                                             legend.text=element_text(size=legendsize),
                                                             legend.title=element_text(size=legendsize))}
  if(pos=="line"){graph=ggplot(data,aes(x=x,y=y,fill=separate))+
    geom_tile(color="black")+labs(x=label.x,y=label.y,fill="Treatments")+
    theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize),
                                                             axis.title=element_text(size=axissize),
                                                             legend.text=element_text(size=legendsize),
                                                             legend.title=element_text(size=legendsize))}
  if(is.na(add.streets.x[1])==FALSE | is.na(add.streets.y[1])==FALSE){
    if(pos=="line"){
      data$y=factor(data$y,levels = rev(unique(data$y)))
      graph=ggplot(data,aes(x=x,y=y,fill=separate))+
        geom_tile(color="black")+labs(x=label.x,y=label.y,fill="Treatments")+
        theme_classic()+theme(axis.text=element_text(size=axissize),
                              axis.title=element_text(size=axissize),
                              legend.text=element_text(size=legendsize),
                              legend.title=element_text(size=legendsize))
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){
        ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")}
      if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){
        ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")}
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){
        ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")}
      graph=graph+
        theme(axis.text.x = element_blank(),
              strip.background = element_blank(),
              strip.text = element_blank(),
              line = element_blank())}
    if(pos=="column"){
      data$y=factor(data$y,levels = unique(data$y))
      graph=ggplot(data,aes(y=x,x=y,fill=separate))+
        geom_tile(color="black")+
        labs(x=label.x,y=label.y,fill="Treatments")+
theme_classic()+theme(axis.text=element_text(size=axissize),
                              axis.title=element_text(size=axissize),
                              legend.text=element_text(size=legendsize),
                              legend.title=element_text(size=legendsize))
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){
        ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")}
      if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){
        ruas2=rep(rev(add.streets.y),(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")}
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){
        ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        ruas2=rep(rev(add.streets.y),(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")}
      graph=graph+
        theme(axis.text.y = element_blank(),
              strip.background = element_blank(),
              strip.text = element_blank(),
              line = element_blank())}}
  if(color.sep=="none"){graph=graph+
    scale_fill_manual(values = "white",label="plots")+
    labs(fill="")}
  if(ID==FALSE){graph=graph+geom_text(aes(label=paste(A,B)),size=labelsize)}
  if(ID==TRUE){graph=graph+geom_text(aes(label=1:length(paste(A,B))),size=labelsize)}
  tabela=data.frame("ID"=1:length(data$plots),
                    "Factor 1"=data$A,
                    "Factor 2"=data$B)}
#=================
if(design=="FAT2DBC" | design=="fat2dbc"){
  sort=design.ab(c(length(trat),length(trat1)),r,design = "rcbd",serie=0)
  sort$book$A=as.factor(sort$book$A)
  sort$book$B=as.factor(sort$book$B)
  levels(sort$book$A)=trat
  levels(sort$book$B)=trat1
  sort$book$trat=paste(sort$book$A,sort$book$B)
  data=sort$book
  data$x=rep(1:length(unique(paste(data$trat,data$trat1))),r)
  data$x=factor(data$x,unique(data$x))
  A=data$A
  B=data$B
  x=data$x
  block=data$block
  if(color.sep=="all"){separate=paste(data$A,data$B)}
  if(color.sep=="block"){separate=data$block}
if(color.sep=="f1"){separate=data$A} if(color.sep=="f2"){separate=data$B} if(color.sep=="none"){separate=rep("white",e=length(data$A))} #====================================== # if(line.divisor.block>=2){ # quantcoluna=length(unique(paste(data$trat,data$trat1)))/line.divisor.block # if(is.integer(quantcoluna)==FALSE){ # quant=c(rep(ceiling(quantcoluna),line.divisor.block-1),floor(quantcoluna)) # sublinhas=rep(1:line.divisor.block, # quant) # data$x=rep(rep(1:ceiling(length(unique(paste(data$trat,data$trat1)))/ # line.divisor.block), # line.divisor.block)[1:length(unique(paste(data$trat,data$trat1)))], # length(unique(block))/line.divisor.block) # }else{sublinhas=rep(1:line.divisor.block, # quantcoluna)} # data$block=paste(block,"L",sublinhas)} #========================================= if(pos=="column"){graph=ggplot(data,aes(y=x,x=block,fill=separate))+ geom_tile(color="black")+labs(x="Block",y=label.y,fill="Treatments")+ theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.text=element_text(size=legendsize), legend.title=element_text(size=legendsize))} if(pos=="line"){graph=ggplot(data,aes(y=block,x=x,fill=separate))+ geom_tile(color="black")+labs(x=label.x,y="Block",fill="Treatments")+ theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.text=element_text(size=legendsize), legend.title=element_text(size=legendsize))} if(is.na(add.streets.x[1])==FALSE | is.na(add.streets.y[1])==FALSE){ if(pos=="line"){ data$y=factor(data$block,levels = rev(unique(data$block))) graph=ggplot(data,aes(x=x,y=y,fill=separate))+ geom_tile(color="black")+labs(x=label.x,y="Block",fill="Treatments")+ theme_classic()+theme(axis.text=element_text(size=axissize), legend.text=element_text(size=legendsize), axis.title=element_text(size=axissize), legend.title=element_text(size=legendsize)) 
if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){
        ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")}
      if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){
        ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")}
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){
        ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")}
      graph=graph+
        theme(axis.text.x = element_blank(),
              strip.background = element_blank(),
              strip.text = element_blank(),
              line = element_blank())}
    if(pos=="column"){
      data$y=factor(data$block,levels = unique(data$block))
      graph=ggplot(data,aes(y=x,x=y,fill=separate))+
        geom_tile(color="black")+
        labs(x="Block",y=label.y,fill="Treatments")+
        theme_classic()+theme(axis.text=element_text(size=axissize),
                              axis.title=element_text(size=axissize),
                              legend.text=element_text(size=legendsize),
                              legend.title=element_text(size=legendsize))
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){
        ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")}
      if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){
        ruas2=rep(rev(add.streets.y),(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")}
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){
        ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        ruas2=rep(rev(add.streets.y),(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")}
      graph=graph+
        theme(axis.text.y = element_blank(),
              strip.background = element_blank(),
              strip.text = element_blank(),
              line = element_blank())}}
  if(color.sep=="none"){graph=graph+
    scale_fill_manual(values = "white",label="plots")+
    labs(fill="")}
  if(color.sep=="block"){graph=graph+labs(fill="block")}
  if(ID==FALSE){graph=graph+geom_text(aes(label=paste(A,B)),size=labelsize)}
  if(ID==TRUE){graph=graph+geom_text(aes(label=1:length(paste(A,B))),size=labelsize)}
  tabela=data.frame("ID"=1:length(data$plots),
                    "Factor 1"=data$A,
                    "Factor 2"=data$B,
                    "Block"=data$block)}
#######################################################
if(design=="FAT3DIC" | design=="fat3dic"){
  trat=expand.grid(trat,trat1,trat2)
  tr=paste(trat$Var1,"@#",trat$Var2,"@#",trat$Var3)
  trats=rep(tr,r)
  sorteio=sample(trats)
  sortd=data.frame(t(matrix(unlist(strsplit(sorteio,"@#")),nrow=3)))
  sorteio=paste(sortd$X1,sortd$X2,sortd$X3)
  x=rep(1:(length(sorteio)/r),r)
  y=rep(1:r,e=(length(sorteio)/r))
  data=data.frame(x,y,sorteio)
  data$x=factor(data$x,unique(data$x))
  data$y=factor(data$y,unique(data$y))
  if(color.sep=="all"){separate=sorteio}
  if(color.sep=="f1"){separate=sortd$X1}
  if(color.sep=="f2"){separate=sortd$X2}
  if(color.sep=="f3"){separate=sortd$X3}
  if(color.sep=="none"){separate=rep("white",e=length(sorteio))}
  if(pos=="line"){graph=ggplot(data,aes(x=y,y=x,fill=separate))+
    geom_tile(color="black")+labs(x=label.x,y=label.y,fill="Treatments")+
    theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize),
                                                             axis.title=element_text(size=axissize),
                                                             legend.text=element_text(size=legendsize),
                                                             legend.title=element_text(size=legendsize))}
  if(pos=="column"){graph=ggplot(data,aes(x=x,y=y,fill=separate))+
    geom_tile(color="black")+labs(x=label.x,y=label.y,fill="Treatments")+
    theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize),
                                                             axis.title=element_text(size=axissize),
legend.text=element_text(size=legendsize),
                                                             legend.title=element_text(size=legendsize))}
  if(is.na(add.streets.x[1])==FALSE | is.na(add.streets.y[1])==FALSE){
    if(pos=="line"){
      data$y=factor(data$y,levels = rev(unique(data$y)))
      graph=ggplot(data,aes(x=x,y=y,fill=separate))+
        geom_tile(color="black")+labs(x=label.x,y=label.y,fill="Treatments")+
        theme_classic()+theme(axis.text=element_text(size=axissize),
                              axis.title=element_text(size=axissize),
                              legend.text=element_text(size=legendsize),
                              legend.title=element_text(size=legendsize))
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){
        ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")}
      if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){
        ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")}
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){
        ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")}
      graph=graph+
        theme(axis.text.x = element_blank(),
              strip.background = element_blank(),
              strip.text = element_blank(),
              line = element_blank())}
    if(pos=="column"){
      data$y=factor(data$y,levels = unique(data$y))
      graph=ggplot(data,aes(y=x,x=y,fill=separate))+
        geom_tile(color="black")+
        labs(x=label.x,y=label.y,fill="Treatments")+
        theme_classic()+theme(axis.text=element_text(size=axissize),
                              axis.title=element_text(size=axissize),
                              legend.text=element_text(size=legendsize),
                              legend.title=element_text(size=legendsize))
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){
        ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        graph=graph+facet_grid(~data$ruas1,scales="free",space =
"free")} if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){ ruas2=rep(rev(add.streets.y),(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")} if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){ ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 ruas2=rep(rev(add.streets.y),(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")} graph=graph+ theme(axis.text.y = element_blank(), strip.background = element_blank(), strip.text = element_blank(), line = element_blank())}} if(color.sep=="none"){graph=graph+ scale_fill_manual(values = "white",label="plots")+ labs(fill="")} if(ID==FALSE){graph=graph+geom_text(aes(label=sorteio),size=labelsize)} if(ID==TRUE){graph=graph+geom_text(aes(label=1:length(sorteio)),size=labelsize)} tabela=data.frame("ID"=1:length(data$plots), "Factor 1"=sortd$X1, "Factor 2"=sortd$X2, "Factor 3"=sortd$X3)} if(design=="FAT3DBC" | design=="fat3dbc"){ trat=expand.grid(trat,trat1,trat2) tr=paste(trat$Var1,"@#",trat$Var2,"@#",trat$Var3) sorteio=matrix(NA,ncol=length(tr),nrow=r) for(i in 1:r){ sorteio[i,]=sample(tr) } sorteio=as.vector(sorteio) sortd=data.frame(t(matrix(unlist(strsplit(sorteio,"@#")),nrow=3))) sorteio=paste(sortd$X1,sortd$X2,sortd$X3) x=rep(1:(length(sorteio)/r),e=r) y=rep(1:r,(length(sorteio)/r)) data=data.frame(x,y,sorteio) data$x=factor(data$x,unique(data$x)) data$y=factor(data$y,unique(data$y)) if(color.sep=="all"){separate=sorteio} if(color.sep=="f1"){separate=sortd$X1} if(color.sep=="f2"){separate=sortd$X2} if(color.sep=="f3"){separate=sortd$X3} if(color.sep=="none"){separate=rep("white",e=length(sorteio))} if(pos=="line"){graph=ggplot(data,aes(x=y,y=x,fill=separate))+ geom_tile(color="black")+labs(x="block",y=label.y,fill="Treatments")+ theme_classic()+theme(axis.line = 
element_blank())+theme(axis.text=element_text(size=axissize),
                           axis.title=element_text(size=axissize),
                           legend.text=element_text(size=legendsize),
                           legend.title=element_text(size=legendsize))}
  if(pos=="column"){graph=ggplot(data,aes(x=x,y=y,fill=separate))+
    geom_tile(color="black")+labs(x=label.x,y="block",fill="Treatments")+
    theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize),
                                                             axis.title=element_text(size=axissize),
                                                             legend.text=element_text(size=legendsize),
                                                             legend.title=element_text(size=legendsize))}
  if(is.na(add.streets.x[1])==FALSE | is.na(add.streets.y[1])==FALSE){
    if(pos=="line"){
      data$y=factor(data$y,levels = rev(unique(data$y)))
      graph=ggplot(data,aes(x=x,y=y,fill=separate))+
        geom_tile(color="black")+labs(x=label.x,y="Block",fill="Treatments")+
        theme_classic()+theme(axis.text=element_text(size=axissize),
                              axis.title=element_text(size=axissize),
                              legend.text=element_text(size=legendsize),
                              legend.title=element_text(size=legendsize))
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){
        ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")}
      if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){
        ruas2=rep(add.streets.y,(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")}
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){
        ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        ruas2=rep(add.streets.y,(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")}
      graph=graph+
        theme(axis.text.x = element_blank(),
              strip.background = element_blank(),
              strip.text = element_blank(),
              line = element_blank())}
    if(pos=="column"){
      data$y=factor(data$y,levels = unique(data$y))
      graph=ggplot(data,aes(y=x,x=y,fill=separate))+
geom_tile(color="black")+ labs(x="Block",y=label.y,fill="Treatments")+ theme_classic()+theme(axis.text=element_text(size=axissize), axis.title=element_text(size=axissize), legend.text=element_text(size=legendsize), legend.title=element_text(size=legendsize)) if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){ ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")} if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){ ruas2=rep(rev(add.streets.y),e=(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")} if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){ ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x))) data$ruas1=ruas1 ruas2=rep(rev(add.streets.y),e=(length(data$x)/length(add.streets.y))) data$ruas2=ruas2 graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")} graph=graph+ theme(axis.text.y = element_blank(), strip.background = element_blank(), strip.text = element_blank(), line = element_blank())}} if(color.sep=="none"){graph=graph+ scale_fill_manual(values = "white",label="plots")+ labs(fill="")} if(color.sep=="block"){graph=graph+labs(fill="block")} if(ID==FALSE){graph=graph+geom_text(aes(label=sorteio),size=labelsize)} if(ID==TRUE){graph=graph+geom_text(aes(label=1:length(sorteio)),size=labelsize)} } if(design=="PSUBSUBDBC" | design=="psubsubdbc"){ sorteio=list() for(i in 1:r){ sorteio[[i]]=sample(trat)} nv1=length(trat) nv2=length(trat1) nv3=length(trat2) sorteiof1=rep(unlist(sorteio),e=nv2*nv3) sorteiof2=list() for(i in 1:(r*nv1)){ sorteiof2[[i]]=sample(trat1)} sorteiof2=rep(unlist(sorteiof2),e=nv3) sorteiof3=list() for(i in 1:(r*nv1*nv2)){ sorteiof3[[i]]=sample(trat2)} sorteiof3=unlist(sorteiof3) data.frame(sorteiof1,sorteiof2,sorteiof3) tr=paste(sorteiof1," x ",sorteiof2," x ",sorteiof3) sorteio=as.vector(tr) 
x=rep(1:(length(sorteio)/r),r)
  y=rep(1:r,e=(length(sorteio)/r))
  data=data.frame(x,y,sorteio)
  data$x=factor(data$x,unique(data$x))
  data$y=factor(data$y,unique(data$y))
  if(color.sep=="all"){separate=sorteio}
  if(color.sep=="f1"){separate=sorteiof1}
  if(color.sep=="f2"){separate=sorteiof2}
  if(color.sep=="f3"){separate=sorteiof3}
  if(color.sep=="none"){separate=rep("white",e=length(sorteio))}
  if(pos=="line"){graph=ggplot(data,aes(x=y,y=x,fill=separate))+
    geom_tile(color="black")+labs(x="block",y=label.y,fill="Treatments")+
    theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize),
                                                             axis.title=element_text(size=axissize),
                                                             legend.text=element_text(size=legendsize),
                                                             legend.title=element_text(size=legendsize))}
  if(pos=="column"){graph=ggplot(data,aes(x=x,y=y,fill=separate))+
    geom_tile(color="black")+labs(x=label.x,y="block",fill="Treatments")+
    theme_classic()+theme(axis.line = element_blank())+theme(axis.text=element_text(size=axissize),
                                                             axis.title=element_text(size=axissize),
                                                             legend.text=element_text(size=legendsize),
                                                             legend.title=element_text(size=legendsize))}
  if(is.na(add.streets.x[1])==FALSE | is.na(add.streets.y[1])==FALSE){
    if(pos=="line"){
      data$y=factor(data$y,levels = rev(unique(data$y)))
      graph=ggplot(data,aes(x=x,y=y,fill=separate))+
        geom_tile(color="black")+labs(x=label.x,y="Block",fill="Treatments")+
        theme_classic()+theme(axis.text=element_text(size=axissize),
                              axis.title=element_text(size=axissize),
                              legend.text=element_text(size=legendsize),
                              legend.title=element_text(size=legendsize))
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){
        ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")}
      if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){
        ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")}
if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){
        ruas1=rep(add.streets.x,(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        ruas2=rep(add.streets.y,e=(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")}
      graph=graph+
        theme(axis.text.x = element_blank(),
              strip.background = element_blank(),
              strip.text = element_blank(),
              line = element_blank())}
    if(pos=="column"){
      data$y=factor(data$y,levels = unique(data$y))
      graph=ggplot(data,aes(y=x,x=y,fill=separate))+
        geom_tile(color="black")+
        labs(x="Block",y=label.y,fill="Treatments")+
        theme_classic()+theme(axis.text=element_text(size=axissize),
                              axis.title=element_text(size=axissize),
                              legend.text=element_text(size=legendsize),
                              legend.title=element_text(size=legendsize))
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==TRUE){
        ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        graph=graph+facet_grid(~data$ruas1,scales="free",space = "free")}
      if(is.na(add.streets.y[1])==FALSE & is.na(add.streets.x[1])==TRUE){
        ruas2=rep(rev(add.streets.y),(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~.,scales="free",space = "free")}
      if(is.na(add.streets.x[1])==FALSE & is.na(add.streets.y[1])==FALSE){
        ruas1=rep(add.streets.x,e=(length(data$x)/length(add.streets.x)))
        data$ruas1=ruas1
        ruas2=rep(rev(add.streets.y),(length(data$x)/length(add.streets.y)))
        data$ruas2=ruas2
        graph=graph+facet_grid(data$ruas2~data$ruas1,scales="free",space = "free")}
      graph=graph+
        theme(axis.text.y = element_blank(),
              strip.background = element_blank(),
              strip.text = element_blank(),
              line = element_blank())}}
  if(color.sep=="none"){graph=graph+
    scale_fill_manual(values = "white",label="plots")+
    labs(fill="")}
  if(color.sep=="block"){graph=graph+labs(fill="block")}
  if(ID==FALSE){graph=graph+geom_text(aes(label=sorteio),size=labelsize)}
if(ID==TRUE){graph=graph+geom_text(aes(label=1:length(sorteio)),size=labelsize)}
  tabela=data.frame("ID"=1:length(data$plots),
                    "plot"=sorteiof1,
                    "split_plot"=sorteiof2,
                    "split_split_plot"=sorteiof3)}
if(isTRUE(ID) & isTRUE(print.ID)){print(data)}
if(export.csv==TRUE){write.csv(tabela,"dataset.csv")}
#=================
if(design!="STRIP-PLOT"){print(graph+labs(caption = comment.caption))}
}
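#=================
# Usage sketch (illustrative only, not part of the package source): assuming
# the function defined above is the one exported as sketch() in AgroR, a
# randomized block layout for four treatments and three blocks could be
# drawn as below. The call is left commented so that sourcing this file has
# no side effects; run it interactively with AgroR and ggplot2 loaded.
#
# trat_ex = paste("T", 1:4)                       # hypothetical treatment labels
# sketch(trat_ex, r = 3, design = "DBC", pos = "line", ID = FALSE)
#
# The printed object is a ggplot2 panel; setting export.csv = TRUE also
# writes the field book (tabela) to "dataset.csv" in the working directory.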
#' Analysis: Randomized block design with an additional treatment for quantitative factor
#'
#' @description Statistical analysis of experiments conducted in a randomized block design with an additional treatment and balanced design with a factor considering the fixed model.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param trat Numerical or complex vector with treatments
#' @param block Numerical or complex vector with blocks
#' @param response Numerical vector containing the response of the experiment.
#' @param responsead Numerical vector with additional treatment responses
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param homog Homogeneity test of variances (\emph{default} is Bartlett)
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param grau Degree of polynomial in case of quantitative factor (\emph{default} is 1)
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param ylab Variable response name (Accepts the \emph{expression}() function)
#' @param xlab Treatments name (Accepts the \emph{expression}() function)
#' @param family Font family
#' @param pointsize Point size
#' @param linesize Line size (trend line and error bar)
#' @param width.bar Width of the error bars of a regression graph.
#' @param posi Legend position
#' @param point Defines whether to plot mean ("mean"), mean with standard deviation ("mean_sd" - \emph{default}) or mean with standard error ("mean_se").
#' @note In some experiments, the researcher may study a quantitative factor, such as fertilizer doses, and present a control, such as a reference fertilizer, treated as a qualitative control.
#' In these cases, there is a difference between considering only the residual in the unfolding of the polynomial, removing or keeping the qualitative treatment, or excluding that treatment from the analysis altogether. In this approach, the residual also takes the qualitative treatment into account, a method similar to the factorial scheme with additional control.
#' @return The table of analysis of variance, the test of normality of errors (Shapiro-Wilk ("sw"), Lilliefors ("li"), Anderson-Darling ("ad"), Cramer-von Mises ("cvm"), Pearson ("pearson") and Shapiro-Francia ("sf")), the test of homogeneity of variances (Bartlett ("bt") or Levene ("levene")), the test of independence of Durbin-Watson errors, adjustment of regression models up to grade 3 polynomial. The function also returns a standardized residual plot.
#' @keywords DBC
#' @keywords additional treatment
#' @export
#' @examples
#' doses = c(rep(c(1:5),e=3))
#' resp = c(3, 4, 3, 5, 5, 6, 7, 7, 8, 4, 4, 5, 2, 2, 3)
#' bloco = rep(c("B1","B2","B3","B4","B5"),3)
#' dbc.ad(doses, bloco, resp, responsead=rnorm(3,6,0.1),grau=2)
dbc.ad=function(trat, block, response, responsead, grau = 1, norm="sw", homog="bt",
                alpha.f=0.05, theme=theme_classic(), ylab="response", xlab="independent",
                family="sans", posi="top", pointsize=4.5, linesize=0.8,
                width.bar=NA, point="mean_sd"){
  if(is.na(width.bar)==TRUE){width.bar=0.1*mean(trat)}
  if(is.na(grau)==TRUE){grau=1}
  trat1=as.factor(trat)
  bloco1=as.factor(block)
  mod=aov(response~trat1+bloco1)
  an=anova(mod)
  trati=as.factor(c(trat,rep("Controle",length(responsead))))
  blocoi=as.factor(c(block,block[1:length(responsead)]))
  mod1=aov(c(response,responsead)~blocoi+trati)
  an1=anova(mod1)
  anava1=rbind(an[1,],an1[1,],an1[2,],an1[3,])
  anava1$Df[3]=1
  anava1$`Sum Sq`[3]=anava1$`Sum Sq`[3]-sum(anava1$`Sum Sq`[1])
  anava1$`Mean Sq`[3]=anava1$`Sum Sq`[3]/anava1$Df[3]
  anava1$`F value`[1:3]=anava1$`Mean Sq`[1:3]/anava1$`Mean Sq`[4]
  rownames(anava1)[1:3]=c("Factor","Block","Factor vs control")
  for(i in
1:(nrow(anava1)-1)){ # fixed precedence: 1:nrow(anava1)-1 produced indices starting at 0
    anava1$`Pr(>F)`[i]=1-pf(anava1$`F value`[i],anava1$Df[i],anava1$Df[4])
  }
  respad=mod1$residuals/sqrt(anava1$`Mean Sq`[4])
  out=respad[respad>3 | respad<(-3)]
  out=names(out)
  out=if(length(out)==0)("No discrepant point")else{out}
  if(norm=="sw"){norm1 = shapiro.test(mod1$res)}
  if(norm=="li"){norm1=nortest::lillie.test(mod1$residuals)}
  if(norm=="ad"){norm1=nortest::ad.test(mod1$residuals)}
  if(norm=="cvm"){norm1=nortest::cvm.test(mod1$residuals)}
  if(norm=="pearson"){norm1=nortest::pearson.test(mod1$residuals)}
  if(norm=="sf"){norm1=nortest::sf.test(mod1$residuals)}
  if(homog=="bt"){
    homog1 = bartlett.test(mod1$res ~ trati)
    statistic=homog1$statistic
    phomog=homog1$p.value
    method=paste("Bartlett test","(",names(statistic),")",sep="")
  }
  if(homog=="levene"){
    homog1 = levenehomog(mod1$res~trati)[1,]
    statistic=homog1$`F value`[1]
    phomog=homog1$`Pr(>F)`[1]
    method="Levene's Test (center = median)(F)"
    names(homog1)=c("Df", "statistic","p.value")}
  indep = dwtest(mod1)
  Ids=ifelse(respad>3 | respad<(-3), "darkblue","black")
  residplot=ggplot(data=data.frame(respad,Ids),aes(y=respad,x=1:length(respad)))+
    geom_point(shape=21,color="gray",fill="gray",size=3)+
    labs(x="",y="Standardized residuals")+
    geom_text(x=1:length(respad),label=1:length(respad),color=Ids,size=4)+
    scale_x_continuous(breaks=1:length(respad))+
    theme_classic()+theme(axis.text.y = element_text(size=12),
                          axis.text.x = element_blank())+
    geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1)
  print(residplot)
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(green(bold("Normality of errors")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""),
                    Statistic=norm1$statistic,
                    "p-value"=norm1$p.value)
  rownames(normal)=""
  print(normal)
  cat("\n")
  message(if(norm1$p.value>0.05){
    black("As the calculated p-value is greater than the 5%
significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")}
    else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors do not follow a normal distribution"})
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(green(bold("Homogeneity of Variances")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  homoge=data.frame(Method=method,
                    Statistic=statistic,
                    "p-value"=phomog)
  rownames(homoge)=""
  print(homoge)
  cat("\n")
  message(if(homog1$p.value>0.05){
    black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")}
    else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"})
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(green(bold("Independence from errors")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  indepe=data.frame(Method=paste(indep$method,"(",
                                 names(indep$statistic),")",sep=""),
                    Statistic=indep$statistic,
                    "p-value"=indep$p.value)
  rownames(indepe)=""
  print(indepe)
  cat("\n")
  message(if(indep$p.value>0.05){
    black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected.
Therefore, errors can be considered independent")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected.Therefore, errors are not independent"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Additional Information"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(paste("\nCV (%) = ",round(sqrt(anava1$`Mean Sq`[4])/mean(response,na.rm=TRUE)*100,2))) cat(paste("\nMean = ",round(mean(response,na.rm=TRUE),4))) cat(paste("\nMedian = ",round(median(response,na.rm=TRUE),4))) cat("\nPossible outliers = ", out) cat("\n") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Analysis of Variance"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(anava1) a=AgroR::polynomial(trat,response,DFres = anava1$Df[4], SSq = anava1$`Sum Sq`[4], ylab = ylab, xlab=xlab, theme = theme, point = point, grau = grau, posi = posi, family = family, pointsize = pointsize, linesize = linesize, width.bar = width.bar) print(a[[1]])}
#' Analysis: Randomized block design
#'
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @description This is a function of the AgroR package for statistical analysis of experiments conducted in a randomized block and balanced design with a factor considering the fixed model. The function presents the option to use non-parametric method or transform the dataset.
#' @param trat Numerical or complex vector with treatments
#' @param block Numerical or complex vector with blocks
#' @param response Numerical vector containing the response of the experiment.
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param homog Homogeneity test of variances (\emph{default} is Bartlett)
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott and Duncan)
#' @param quali Defines whether the factor is quantitative or qualitative (\emph{default} is qualitative)
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param grau Degree of polynomial in case of quantitative factor (\emph{default} is 1)
#' @param transf Applies data transformation (default is 1; for log consider 0; `angular` for angular transformation)
#' @param constant Add a constant for transformation (enter value)
#' @param test "parametric" - Parametric test or "noparametric" - non-parametric test
#' @param geom Graph type (columns, boxes or segments)
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param sup Number of units above the standard deviation or average bar on the graph
#' @param CV Plotting the coefficient of variation and p-value of Anova (\emph{default} is TRUE)
#' @param ylab Variable response name (Accepts the \emph{expression}() function)
#' @param xlab Treatments name (Accepts the \emph{expression}() function)
#' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat")
#' @param angle x-axis scale text rotation
#' @param family Font family
#' @param textsize Font size
#' @param labelsize Label size
#' @param dec Number of decimal places
#' @param width.column Column width when geom = "bar"
#' @param width.bar Error bar width
#' @param addmean Plot the average value on the graph (\emph{default} is TRUE)
#' @param errorbar Plot the standard deviation bar on the graph (in the case of segment and column graphs) - \emph{default} is TRUE
#' @param posi Legend position
#' @param point Defines whether to plot the mean ("mean"), the mean with standard deviation ("mean_sd" - \emph{default}) or the mean with standard error ("mean_se"). For the parametric test it is also possible to plot the square root of the residual mean square ("mean_qmres").
#' @param angle.label Label angle
#' @param ylim Numerical sequence defining the y scale. You can use a vector or the `seq` command.
#' @note Enable the ggplot2 package to change the theme argument.
#' @note The ordering of the graph follows the sequence in which the factor levels are arranged in the data sheet. The bars of the column and segment graphs are the standard deviation.
#' @note The CV and p-value shown in the graph indicate the coefficient of variation and the p-value of the F test of the analysis of variance.
#' @note In the final output, when the transformation (transf argument) is different from 1, the columns resp and respO are returned in the mean test, indicating the transformed and non-transformed means, respectively.
#' @keywords DBC
#' @keywords Experimental
#' @references
#'
#' Principles and procedures of statistics: a biometrical approach. Steel, Torrie and Dickey. Third Edition, 1997.
#'
#' Multiple comparisons: theory and methods. Department of Statistics, the Ohio State University, USA, 1996. Jason C. Hsu. Chapman Hall/CRC.
#'
#' Practical Nonparametric Statistics. W.J.
#' Conover, 1999.
#'
#' Ramalho, M.A.P., Ferreira, D.F., Oliveira, A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott, A.J., Knott, M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#'
#' Mendiburu, F., and de Mendiburu, M. F. (2019). Package 'agricolae'. R package, version 1-2.
#'
#' @seealso \link{DIC}, \link{DQL}
#' @export
#' @return The table of analysis of variance; the test of normality of errors (Shapiro-Wilk ("sw"), Lilliefors ("li"), Anderson-Darling ("ad"), Cramer-von Mises ("cvm"), Pearson ("pearson") and Shapiro-Francia ("sf")); the test of homogeneity of variances (Bartlett ("bt") or Levene ("levene")); the Durbin-Watson test of error independence; the multiple comparison test (Tukey ("tukey"), LSD ("lsd"), Scott-Knott ("sk") or Duncan ("duncan")) or, in the case of quantitative treatments, the fit of polynomial regression models up to degree 3. Non-parametric analysis can be performed with the Friedman test. The column, segment or box chart for qualitative treatments is also returned. The function also returns a standardized residual plot.
#' @examples
#' library(AgroR)
#'
#' #=============================
#' # Example laranja
#' #=============================
#' data(laranja)
#' attach(laranja)
#' DBC(trat, bloco, resp, mcomp = "sk", angle=45, ylab = "Number of fruits/plants")
#'
#' #=============================
#' # Friedman test
#' #=============================
#' DBC(trat, bloco, resp, test="noparametric", ylab = "Number of fruits/plants")
#'
#' #=============================
#' # Example soybean
#' #=============================
#' data(soybean)
#' with(soybean, DBC(cult, bloc, prod,
#'      ylab=expression("Grain yield"~(kg~ha^-1))))
DBC=function(trat, block, response, norm="sw", homog="bt", alpha.f=0.05, alpha.t=0.05,
             quali=TRUE, mcomp="tukey", grau=1, transf=1, constant=0, test="parametric",
             geom="bar", theme=theme_classic(), sup=NA, CV=TRUE, ylab="response", xlab="",
             textsize=12, labelsize=4, fill="lightblue", angle=0, family="sans", dec=3,
             width.column=NULL, width.bar=0.3, addmean=TRUE, errorbar=TRUE, posi="top",
             point="mean_sd", angle.label=0, ylim=NA){
  requireNamespace("crayon")
  requireNamespace("ggplot2")
  requireNamespace("nortest")
  if(is.na(sup)){sup=0.2*mean(response, na.rm=TRUE)}
  if(angle.label==0){hjust=0.5}else{hjust=0}
  if(test=="parametric"){
    if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}}
    if(transf==0){resp=log(response+constant)}
    if(transf==0.5){resp=sqrt(response+constant)}
    if(transf==-0.5){resp=1/sqrt(response+constant)}
    if(transf==-1){resp=1/(response+constant)}
    if(transf=="angular"){resp=asin(sqrt((response+constant)/100))}
    trat1=trat
    trat=as.factor(trat)
    bloco=as.factor(block)
    a = anova(aov(resp ~ trat + bloco))
    b = aov(resp ~ trat + bloco)
    anava=a
    colnames(anava)=c("GL","SQ","QM","Fcal","p-value")
    respad=b$residuals/sqrt(a$`Mean Sq`[3])
    out=respad[respad>3 | respad<(-3)]
    out=names(out)
    out=if(length(out)==0)("No discrepant
point")else{out} if(norm=="sw"){norm1 = shapiro.test(b$res)} if(norm=="li"){norm1=lillie.test(b$residuals)} if(norm=="ad"){norm1=ad.test(b$residuals)} if(norm=="cvm"){norm1=cvm.test(b$residuals)} if(norm=="pearson"){norm1=pearson.test(b$residuals)} if(norm=="sf"){norm1=sf.test(b$residuals)} if(homog=="bt"){ homog1 = bartlett.test(b$res ~ trat) statistic=homog1$statistic phomog=homog1$p.value method=paste("Bartlett test","(",names(statistic),")",sep="") } if(homog=="levene"){ homog1 = levenehomog(b$res~trat)[1,] statistic=homog1$`F value`[1] phomog=homog1$`Pr(>F)`[1] method="Levene's Test (center = median)(F)" names(homog1)=c("Df", "statistic","p.value")} indep = dwtest(b) resids=b$residuals/sqrt(a$`Mean Sq`[3]) Ids=ifelse(resids>3 | resids<(-3), "darkblue","black") residplot=ggplot(data=data.frame(resids,Ids),aes(y=resids,x=1:length(resids)))+ geom_point(shape=21,color="gray",fill="gray",size=3)+ labs(x="",y="Standardized residuals")+ geom_text(x=1:length(resids),label=1:length(resids),color=Ids,size=4)+ scale_x_continuous(breaks=1:length(resids))+ theme_classic()+ theme(axis.text.y = element_text(size=12), axis.text.x = element_blank())+ geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1) print(residplot) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Normality of errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""), Statistic=norm1$statistic, "p-value"=norm1$p.value) rownames(normal)="" print(normal) cat("\n") message(if(norm1$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors do not follow a normal distribution"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Homogeneity of Variances"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) homoge=data.frame(Method=method, Statistic=statistic, "p-value"=phomog) rownames(homoge)="" print(homoge) cat("\n") message(if(homog1$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"}) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("Independence from errors"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) indepe=data.frame(Method=paste(indep$method,"(", names(indep$statistic),")",sep=""), Statistic=indep$statistic, "p-value"=indep$p.value) rownames(indepe)="" print(indepe) cat("\n") message(if(indep$p.value>0.05){ black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")} else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. 
Therefore, errors are not independent"})
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Additional Information")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(paste("\nCV (%) = ",round(sqrt(a$`Mean Sq`[3])/mean(resp,na.rm=TRUE)*100,2)))
cat(paste("\nMStrat/MST = ",round(a$`Mean Sq`[1]/(a$`Mean Sq`[3]+a$`Mean Sq`[2]+a$`Mean Sq`[1]),2)))
cat(paste("\nMean = ",round(mean(response,na.rm=TRUE),4)))
cat(paste("\nMedian = ",round(median(response,na.rm=TRUE),4)))
cat("\nPossible outliers = ", out)
cat("\n")
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Analysis of Variance")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
anava1=as.matrix(data.frame(anava))
colnames(anava1)=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)")
rownames(anava1)=c("Treatment","Block","Residuals")
print(anava1,na.print = "")
cat("\n")
message(if (a$`Pr(>F)`[1]<alpha.f){
black("As the calculated p-value is less than the 5% significance level, the hypothesis H0 of equality of means is rejected.
Therefore, at least two treatments differ")}
else {"As the calculated p-value is greater than the 5% significance level, H0 is not rejected"})
cat(green(bold("\n-----------------------------------------------------------------\n")))
if(quali==TRUE){
teste=if(mcomp=="tukey"){"Tukey HSD"}else{
if(mcomp=="sk"){"Scott-Knott"}else{
if(mcomp=="lsd"){"LSD-Fisher"}else{
if(mcomp=="duncan"){"Duncan"}}}}
cat(green(italic(paste("Multiple Comparison Test:",teste))))
}else{cat(green(bold("Regression")))}
cat(green(bold("\n-----------------------------------------------------------------\n")))
if(quali==TRUE){
if(mcomp=="tukey"){
letra <- TUKEY(b, "trat", alpha=alpha.t)
letra1 <- letra$groups; colnames(letra1)=c("resp","groups")}
if(mcomp=="sk"){
nrep=table(trat)[1]
medias=sort(tapply(resp,trat,mean),decreasing = TRUE)
letra=scottknott(means = medias, df1 = a$Df[3], nrep = nrep, QME = a$`Mean Sq`[3], alpha = alpha.t)
letra1=data.frame(resp=medias,groups=letra)}
if(mcomp=="duncan"){
letra <- duncan(b, "trat", alpha=alpha.t)
letra1 <- letra$groups; colnames(letra1)=c("resp","groups")}
if(mcomp=="lsd"){
letra <- LSD(b, "trat", alpha=alpha.t)
letra1 <- letra$groups; colnames(letra1)=c("resp","groups")}
media = tapply(response, trat, mean, na.rm=TRUE)
if(transf=="1"){letra1}else{letra1$respO=media[rownames(letra1)]}
print(if(a$`Pr(>F)`[1]<alpha.f){letra1}else{"H0 is not rejected"})
cat("\n")
cat(if(transf=="1"){}else{blue("resp = transformed means; respO = averages without transforming")})
if(transf==1 && norm1$p.value<0.05 | transf==1 && indep$p.value<0.05 | transf==1 && homog1$p.value<0.05){
message("\nYour analysis is not valid; we suggest using a non-parametric \ntest or transforming the data")}
else{}
if(transf != 1 && norm1$p.value<0.05 | transf!=1 && indep$p.value<0.05 | transf!=1 && homog1$p.value<0.05){cat(red("\n \nWarning!!!
Your analysis is not valid; we suggest using a non-parametric \ntest"))}else{}
if(point=="mean_sd"){
dadosm=data.frame(letra1,
media=tapply(response, trat, mean, na.rm=TRUE)[rownames(letra1)],
desvio=tapply(response, trat, sd, na.rm=TRUE)[rownames(letra1)])}
if(point=="mean_se"){
dadosm=data.frame(letra1,
media=tapply(response, trat, mean, na.rm=TRUE)[rownames(letra1)],
desvio=(tapply(response, trat, sd, na.rm=TRUE)/sqrt(tapply(response, trat, length)))[rownames(letra1)])}
if(point=="mean_qmres"){
dadosm=data.frame(letra1,
media=tapply(response, trat, mean, na.rm=TRUE)[rownames(letra1)],
desvio=rep(sqrt(a$`Mean Sq`[3]),e=length(levels(trat))))}
dadosm$trats=factor(rownames(dadosm),levels = unique(trat))
dadosm$limite=dadosm$media+dadosm$desvio
dadosm=dadosm[unique(as.character(trat)),]
if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)}
if(addmean==FALSE){dadosm$letra=dadosm$groups}
trats=dadosm$trats
limite=dadosm$limite
media=dadosm$media
desvio=dadosm$desvio
letra=dadosm$letra
if(geom=="bar"){grafico=ggplot(dadosm,aes(x=trats, y=media))
if(fill=="trat"){grafico=grafico+
geom_col(aes(fill=trats),color=1,width = width.column)}
else{grafico=grafico+geom_col(aes(fill=trats), fill=fill,color=1,width = width.column)}
if(errorbar==TRUE){grafico=grafico+
geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},label=letra),
family=family,angle=angle.label, size=labelsize,hjust=hjust)}
if(errorbar==FALSE){grafico=grafico+
geom_text(aes(y=media+sup,label=letra),
family=family,angle=angle.label,size=labelsize, hjust=hjust)}
if(errorbar==TRUE){grafico=grafico+
geom_errorbar(data=dadosm,aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black", width=width.bar)}}
if(geom=="point"){grafico=ggplot(dadosm,aes(x=trats, y=media))
if(errorbar==TRUE){grafico=grafico+
geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},label=letra),
family=family,angle=angle.label, size=labelsize,hjust=hjust)}
if(errorbar==FALSE){grafico=grafico+
geom_text(aes(y=media+sup,label=letra),family=family,size=labelsize,angle=angle.label, hjust=hjust)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm,aes(ymin=media-desvio, ymax=media+desvio,color=1), color="black", width=width.bar)} if(fill=="trat"){grafico=grafico+ geom_point(aes(color=trats),size=5)} else{grafico=grafico+ geom_point(aes(color=trats),fill="gray",pch=21,color="black",size=5)}} if(geom=="box"){ datam1=data.frame(trats=factor(trat,levels = unique(as.character(trat))),response) dadosm2=data.frame(letra1, superior=tapply(response, trat, mean, na.rm=TRUE)[rownames(letra1)]) dadosm2$trats=rownames(dadosm2) dadosm2=dadosm2[unique(as.character(trat)),] dadosm2$limite=dadosm$media+dadosm$desvio # dadosm2$letra=paste(format(dadosm$media,digits = dec),dadosm$groups) if(addmean==TRUE){dadosm2$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm2$letra=dadosm$groups} trats=dadosm2$trats limite=dadosm2$limite superior=dadosm2$superior letra=dadosm2$letra stat_box=ggplot(datam1,aes(x=trats,y=response))+geom_boxplot() superior=ggplot_build(stat_box)$data[[1]]$ymax dadosm2$superior=superior+sup grafico=ggplot(datam1,aes(x=trats, y=response)) if(fill=="trat"){grafico=grafico+ geom_boxplot(aes(fill=trats))} else{grafico=grafico+ geom_boxplot(aes(fill=trats),fill=fill)} grafico=grafico+ geom_text(data=dadosm2,aes(y=superior,label=letra), family=family,size=labelsize,angle=angle.label, hjust=hjust)} grafico=grafico+theme+ ylab(ylab)+ xlab(xlab)+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family), legend.position = "none") if(is.na(ylim[1])==FALSE){ grafico=grafico+scale_y_continuous(breaks = ylim, limits = c(min(ylim),max(ylim)))} if(angle !=0){grafico=grafico+theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} 
if(CV==TRUE){grafico=grafico+labs(caption=paste("p-value", if(a$`Pr(>F)`[1]<0.0001){paste("<", 0.0001)} else{paste("=", round(a$`Pr(>F)`[1],4))},"; CV = ", round(abs(sqrt(a$`Mean Sq`[3])/mean(resp))*100,2),"%"))} } if(quali==FALSE){ trat=trat1 if(grau==1){graph=polynomial(trat,response, grau = 1,xlab=xlab,ylab=ylab,textsize=textsize, family=family,posi=posi,point=point,SSq = a$`Sum Sq`[3],DFres = a$Df[3])} if(grau==2){graph=polynomial(trat,response, grau = 2,xlab=xlab,ylab=ylab,textsize=textsize, family=family,posi=posi,point=point,SSq = a$`Sum Sq`[3],DFres = a$Df[3])} if(grau==3){graph=polynomial(trat,response, grau = 3,xlab=xlab,ylab=ylab,textsize=textsize, family=family,posi=posi,point=point,SSq = a$`Sum Sq`[3],DFres = a$Df[3])} grafico=graph[[1]] if(is.na(ylim[1])==FALSE){ grafico=grafico+scale_y_continuous(breaks = ylim, limits = c(min(ylim),max(ylim)))} print(grafico) }} if(test=="noparametric"){ friedman=function(judge,trt,evaluation,alpha=0.05,group=TRUE,main=NULL,console=FALSE){ name.x <- paste(deparse(substitute(judge))) name.y <- paste(deparse(substitute(evaluation))) name.t <- paste(deparse(substitute(trt))) name.j <- paste(deparse(substitute(judge))) if(is.null(main))main<-paste(name.y,"~", name.j,"+",name.t) datos <- data.frame(judge, trt, evaluation) matriz <- by(datos[,3], datos[,1:2], function(x) mean(x,na.rm=TRUE)) matriz <-as.data.frame(matriz[,]) name<-as.character(colnames(matriz)) ntr <-length(name) m<-dim(matriz) v<-array(0,m) for (i in 1:m[1]){ v[i,]<-rank(matriz[i,]) } vv<-as.numeric(v) junto <- data.frame(evaluation, trt) medians<-mean_stat(junto[,1],junto[,2],stat="median") for(i in c(1,5,2:4)) { x <- mean_stat(junto[,1],junto[,2],function(x)quantile(x)[i]) medians<-cbind(medians,x[,2])} medians<-medians[,3:7] names(medians)<-c("Min","Max","Q25","Q50","Q75") Means <- mean_stat(junto[,1],junto[,2],stat="mean") sds <- mean_stat(junto[,1],junto[,2],stat="sd") nn <- mean_stat(junto[,1],junto[,2],stat="length") nr<-unique(nn[,2]) 
s<-array(0,m[2]) for (j in 1:m[2]){ s[j]<-sum(v[,j])} Means<-data.frame(Means,std=sds[,2],r=nn[,2],medians) names(Means)[1:2]<-c(name.t,name.y) means<-Means[,c(1:2,4)] rownames(Means)<-Means[,1] Means<-Means[,-1] means[,2]<-s rs<-array(0,m[2]) rs<-s-m[1]*(m[2]+1)/2 T1<-12*t(rs)%*%rs/(m[1]*m[2]*(m[2]+1)) T2<-(m[1]-1)*T1/(m[1]*(m[2]-1)-T1) if(console){ cat("\nStudy:",main,"\n\n") cat(paste(name.t,",",sep="")," Sum of the ranks\n\n") print(data.frame(row.names = means[,1], means[,-1])) cat("\nFriedman") cat("\n===============")} A1<-0 for (i in 1:m[1]) A1 <- A1 + t(v[i,])%*%v[i,] DFerror <-(m[1]-1)*(m[2]-1) Tprob<-qt(1-alpha/2,DFerror) LSD<-as.numeric(Tprob*sqrt(2*(m[1]*A1-t(s)%*%s)/DFerror)) C1 <-m[1]*m[2]*(m[2]+1)^2/4 T1.aj <-(m[2]-1)*(t(s)%*%s-m[1]*C1)/(A1-C1) T2.aj <-(m[1]-1)*T1.aj/(m[1]*(m[2]-1)-T1.aj) p.value<-1-pchisq(T1.aj,m[2]-1) p.noadj<-1-pchisq(T1,m[2]-1) PF<-1-pf(T2.aj, ntr-1, (ntr-1)*(nr-1) ) if(console){ cat("\nAdjusted for ties") cat("\nCritical Value:",T1.aj) cat("\nP.Value Chisq:",p.value) cat("\nF Value:",T2.aj) cat("\nP.Value F:",PF,"\n") cat("\nPost Hoc Analysis\n") } statistics<-data.frame(Chisq=T1.aj,Df=ntr-1,p.chisq=p.value,F=T2.aj,DFerror=DFerror,p.F=PF,t.value=Tprob,LSD) if ( group & length(nr) == 1 & console){ cat("\nAlpha:",alpha,"; DF Error:",DFerror) cat("\nt-Student:",Tprob) cat("\nLSD:", LSD,"\n") } if ( group & length(nr) != 1 & console) cat("\nGroups according to probability of treatment differences and alpha level(",alpha,")\n") if ( length(nr) != 1) statistics<-data.frame(Chisq=T1.aj,Df=ntr-1,p.chisq=p.value,F=T2.aj,DFerror=DFerror,p.F=PF) comb <-utils::combn(ntr,2) nn<-ncol(comb) dif<-rep(0,nn) pvalue<-rep(0,nn) LCL<-dif UCL<-dif sig<-NULL LSD<-rep(0,nn) stat<-rep("ns",nn) for (k in 1:nn) { i<-comb[1,k] j<-comb[2,k] dif[k]<-s[comb[1,k]]-s[comb[2,k]] sdtdif<- sqrt(2*(m[1]*A1-t(s)%*%s)/DFerror) pvalue[k]<- round(2*(1-pt(abs(dif[k])/sdtdif,DFerror)),4) LSD[k]<-round(Tprob*sdtdif,2) LCL[k] <- dif[k] - LSD[k] UCL[k] <- dif[k] + LSD[k] 
sig[k]<-" " if (pvalue[k] <= 0.001) sig[k]<-"***" else if (pvalue[k] <= 0.01) sig[k]<-"**" else if (pvalue[k] <= 0.05) sig[k]<-"*" else if (pvalue[k] <= 0.1) sig[k]<-"." } if(!group){ tr.i <- means[comb[1, ],1] tr.j <- means[comb[2, ],1] comparison<-data.frame("difference" = dif, pvalue=pvalue,"signif."=sig,LCL,UCL) rownames(comparison)<-paste(tr.i,tr.j,sep=" - ") if(console){cat("\nComparison between treatments\nSum of the ranks\n\n") print(comparison)} groups=NULL } if (group) { Q<-matrix(1,ncol=ntr,nrow=ntr) p<-pvalue k<-0 for(i in 1:(ntr-1)){ for(j in (i+1):ntr){ k<-k+1 Q[i,j]<-p[k] Q[j,i]<-p[k] } } groups <- ordenacao(means[, 1], means[, 2],alpha, Q,console) names(groups)[1]<-"Sum of ranks" if(console) { cat("\nTreatments with the same letter are not significantly different.\n\n") print(groups) } comparison<-NULL } parameters<-data.frame(test="Friedman",name.t=name.t,ntr = ntr,alpha=alpha) rownames(parameters)<-" " rownames(statistics)<-" " Means<-data.frame(rankSum=means[,2],Means) Means<-Means[,c(2,1,3:9)] output<-list(statistics=statistics,parameters=parameters, means=Means,comparison=comparison,groups=groups) class(output)<-"group" invisible(output) } trat=trat bloco=block fried=friedman(bloco,trat,response,alpha=alpha.t) cat(green(bold("\n\n-----------------------------------------------------------------\n"))) cat(green(italic("Statistics"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(fried$statistics) cat(green(bold("\n\n-----------------------------------------------------------------\n"))) cat(green(italic("Parameters"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(fried$parameters) cat(green(bold("\n\n-----------------------------------------------------------------\n"))) cat(green(italic(paste("Multiple Comparison Test:","LSD")))) cat(green(bold("\n-----------------------------------------------------------------\n"))) 
saida=cbind(fried$means[rownames(fried$means), c(1,3)],fried$groups[rownames(fried$means),]) colnames(saida)=c("Mean","SD","Rank","Groups") print(saida) dadosm=data.frame(fried$means,fried$groups[rownames(fried$means),]) dadosm$trats=factor(rownames(dadosm),levels = unique(trat)) dadosm$media=tapply(response,trat,mean, na.rm=TRUE)[rownames(fried$means)] if(point=="mean_sd"){dadosm$std=tapply(response,trat,sd, na.rm=TRUE)[rownames(fried$means)]} if(point=="mean_se"){dadosm$std=tapply(response,trat,sd, na.rm=TRUE)/ sqrt(tapply(response,trat,length))[rownames(fried$means)]} dadosm$limite=dadosm$response+dadosm$std dadosm$letra=paste(format(dadosm$response,digits = dec),dadosm$groups) if(addmean==TRUE){dadosm$letra=paste(format(dadosm$response,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} # dadosm=dadosm[unique(trat),] trats=dadosm$trats limite=dadosm$limite media=dadosm$media std=dadosm$std letra=dadosm$letra if(geom=="bar"){grafico=ggplot(dadosm,aes(x=trats, y=response)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=trats),color=1,width = width.column)} else{grafico=grafico+ geom_col(aes(fill=trats),fill=fill,color=1,width = width.column)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-std}else{std},label=letra), family=family,size=labelsize,angle=angle.label, hjust=hjust)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),size=labelsize,family=family,angle=angle.label, hjust=hjust)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(aes(ymin=response-std, ymax=response+std), color="black", width=width.bar)}} if(geom=="point"){grafico=ggplot(dadosm, aes(x=trats, y=response)) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-std}else{std},label=letra), family=family,size=labelsize,angle=angle.label, hjust=hjust)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),size=labelsize,family=family,angle=angle.label, hjust=hjust)} 
if(errorbar==TRUE){grafico=grafico+ geom_errorbar(aes(ymin=response-std, ymax=response+std), color="black", width=width.bar)} if(fill=="trat"){grafico=grafico+ geom_point(aes(color=trats),size=5)} else{grafico=grafico+ geom_point(aes(color=trats),fill="gray",pch=21,color="black",size=5)}} if(geom=="box"){ datam1=data.frame(trats=factor(trat,levels = unique(as.character(trat))),response) dadosm2=data.frame(fried$means) dadosm2$trats=factor(rownames(dadosm),levels = unique(trat)) dadosm2$limite=dadosm2$response+dadosm2$std # dadosm2$letra=paste(format(dadosm$response,digits = dec), # dadosm$groups) if(addmean==TRUE){dadosm2$letra=paste(format(dadosm$response,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm2$letra=dadosm$groups} dadosm2=dadosm2[unique(as.character(trat)),] trats=dadosm2$trats limite=dadosm2$limite letra=dadosm2$letra stat_box=ggplot(datam1,aes(x=trats,y=response))+geom_boxplot() superior=ggplot_build(stat_box)$data[[1]]$ymax dadosm2$superior=superior+sup grafico=ggplot(datam1,aes(x=trats, y=response)) if(fill=="trat"){grafico=grafico+ geom_boxplot(aes(fill=trats))} else{grafico=grafico+ geom_boxplot(aes(fill=trats),fill=fill)} grafico=grafico+ geom_text(data=dadosm2, aes(y=superior, label=letra),family=family,size=labelsize,angle=angle.label, hjust=hjust)} grafico=grafico+theme+ ylab(ylab)+ xlab(xlab)+ theme(text = element_text(size=textsize,color="black",family=family), axis.text = element_text(size=textsize,color="black",family=family), axis.title = element_text(size=textsize,color="black",family=family), legend.position = "none") if(angle !=0){grafico=grafico+theme(axis.text.x=element_text(hjust = 1.01, angle = angle))} } if(is.na(ylim[1])==FALSE){ grafico=grafico+scale_y_continuous(breaks = ylim, limits = c(min(ylim),max(ylim)))} if(quali==TRUE){print(grafico)} grafico=list(grafico) }
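# Summary of the `transf` argument of DBC() (see the transformation branches
# near the top of the function): `constant` is added to `response` first, then
#   transf = 1    -> identity (no transformation)
#   transf = 0    -> log(y)
#   transf = 0.5  -> sqrt(y)
#   transf = -0.5 -> 1/sqrt(y)
#   transf = -1   -> 1/y
#   transf = "angular" -> asin(sqrt(y/100)), intended for percentage data;
# any other numeric value p uses the Box-Cox-style form (y^p - 1)/p.
# For example, with y <- c(4, 9, 25), transf = 0.5 analyzes sqrt(y), i.e. 2, 3, 5.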
/scratch/gouwar.j/cran-all/cranData/AgroR/R/dbc_function.R
#' Analysis: Randomized block design evaluated over time
#'
#' @description Function of the AgroR package for the analysis of experiments conducted in a balanced, single-factor, qualitative randomized block design with multiple assessments over time, but without considering time as a factor.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Gonçalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param trat Numerical or complex vector with treatments
#' @param block Numerical or complex vector with blocks
#' @param time Numerical or complex vector with times
#' @param response Numerical vector containing the response of the experiment.
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD ("lsd"), Scott-Knott ("sk"), Duncan ("duncan") and Friedman ("fd"))
#' @param ylab Variable response name (Accepts the \emph{expression}() function)
#' @param xlab Treatments name (Accepts the \emph{expression}() function)
#' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat")
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param error Add error bar (standard deviation)
#' @param sup Number of units above the standard deviation or average bar on the graph
#' @param addmean Plot the average value on the graph (\emph{default} is FALSE)
#' @param textsize Font size of the texts and titles of the axes
#' @param labelsize Font size of the labels
#' @param pointsize Point size
#' @param family Font family
#' @param dec Number of decimal places
#' @param geom Graph type (columns - "bar" or segments - "point")
#' @param legend Legend title
#' @param posi Legend position
#' @param ylim Numerical sequence defining the y scale. You can use a vector or the `seq` command.
#' @param width.bar Width of the error bar
#' @param size.bar Size of the error bar
#' @param xnumeric Declare x as numeric (\emph{default} is FALSE)
#' @param all.letters Adds all label letters regardless of whether they are significant or not.
#' @note The ordering of the graph follows the sequence in which the factor levels are arranged in the data sheet. The bars of the column and segment graphs are the standard deviation.
#' @keywords dbct
#' @keywords Experimental
#' @seealso \link{DBC}, \link{DICT}, \link{DQLT}
#' @references
#'
#' Principles and procedures of statistics: a biometrical approach. Steel, Torrie and Dickey. Third Edition, 1997.
#'
#' Multiple comparisons: theory and methods. Department of Statistics, the Ohio State University, USA, 1996. Jason C. Hsu. Chapman Hall/CRC.
#'
#' Practical Nonparametric Statistics. W.J. Conover, 1999.
#'
#' Ramalho, M.A.P., Ferreira, D.F., Oliveira, A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott, A.J., Knott, M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#' @details The function reports the p-value of the analysis of variance, the Shapiro-Wilk test of error normality, the Bartlett test of homogeneity of variances, the Durbin-Watson test of error independence, and the multiple comparison test (Tukey, Scott-Knott, LSD or Duncan).
#' @export #' @return The function returns the p-value of Anova, the assumptions of normality of errors, homogeneity of variances and independence of errors, multiple comparison test, as well as a line graph #' @examples #' rm(list=ls()) #' data(simulate2) #' attach(simulate2) #' #' #=================================== #' # default #' #=================================== #' DBCT(trat, bloco, tempo, resp) #' DBCT(trat, bloco, tempo, resp,fill="rainbow") #' #' #=================================== #' # segment chart #' #=================================== #' DBCT(trat, bloco, tempo, resp, geom="point") DBCT=function(trat, block, time, response, alpha.f=0.05, alpha.t=0.05, mcomp="tukey", geom="bar", theme=theme_classic(), fill="gray", ylab="Response", xlab="Independent", textsize=12, labelsize=5, pointsize=4.5, error=TRUE, family="sans", sup=0, addmean=FALSE, posi=c(0.1,0.8), legend="Legend", ylim=NA, width.bar=0.2, size.bar=0.8, dec=3, xnumeric=FALSE, all.letters=FALSE){ requireNamespace("crayon") requireNamespace("ggplot2") requireNamespace("nortest") requireNamespace("ggrepel") trat=as.factor(trat) resp=response block=as.factor(block) time=factor(time,unique(time)) dados=data.frame(resp,trat,block,time) if(mcomp=="tukey"){ tukeyg=c() ordem=c() normg=c() homog=c() indepg=c() anovag=c() cv=c() for(i in 1:length(levels(time))){ mod=aov(resp~trat+block, data=dados[dados$time==levels(dados$time)[i],]) anovag[[i]]=anova(mod)$`Pr(>F)`[1] cv[[i]]=sqrt(anova(mod)$`Mean Sq`[3])/mean(mod$model$resp)*100 tukey=TUKEY(mod,"trat",alpha = alpha.t) tukey$groups=tukey$groups[unique(as.character(trat)),2] if(all.letters==FALSE){ if(anova(mod)$`Pr(>F)`[1]>alpha.f){tukey$groups=c("ns",rep(" ",length(unique(trat))-1))}} tukeyg[[i]]=as.character(tukey$groups) ordem[[i]]=rownames(tukey$groups) norm=shapiro.test(mod$residuals) homo=with(dados[dados$time==levels(dados$time)[i],], bartlett.test(mod$residuals~trat)) indep=dwtest(mod) normg[[i]]=norm$p.value homog[[i]]=homo$p.value 
indepg[[i]]=indep$p.value } m=unlist(tukeyg) nor=unlist(normg) hom=unlist(homog) ind=unlist(indepg) an=unlist(anovag) cv=unlist(cv) press=data.frame(an,nor,hom,ind,cv) colnames(press)=c("p-value ANOVA","Shapiro-Wilk","Bartlett","Durbin-Watson","CV (%)")} if(mcomp=="lsd"){ lsdg=c() ordem=c() normg=c() homog=c() indepg=c() anovag=c() cv=c() for(i in 1:length(levels(time))){ mod=aov(resp~trat+block, data=dados[dados$time==levels(dados$time)[i],]) anovag[[i]]=anova(mod)$`Pr(>F)`[1] cv[[i]]=sqrt(anova(mod)$`Mean Sq`[3])/mean(mod$model$resp)*100 lsd=LSD(mod,"trat",alpha = alpha.t) lsd$groups=lsd$groups[unique(as.character(trat)),2] if(all.letters==FALSE){ if(anova(mod)$`Pr(>F)`[1]>alpha.f){lsd$groups=c("ns",rep(" ",length(unique(trat))-1))}} lsdg[[i]]=as.character(lsd$groups) ordem[[i]]=rownames(lsd$groups) norm=shapiro.test(mod$residuals) homo=with(dados[dados$time==levels(dados$time)[i],], bartlett.test(mod$residuals~trat)) indep=dwtest(mod) normg[[i]]=norm$p.value homog[[i]]=homo$p.value indepg[[i]]=indep$p.value } m=unlist(lsdg) nor=unlist(normg) hom=unlist(homog) ind=unlist(indepg) an=unlist(anovag) cv=unlist(cv) press=data.frame(an,nor,hom,ind,cv) colnames(press)=c("p-value ANOVA","Shapiro-Wilk","Bartlett","Durbin-Watson","CV (%)")} if(mcomp=="duncan"){ duncang=c() ordem=c() normg=c() homog=c() indepg=c() anovag=c() cv=c() for(i in 1:length(levels(time))){ mod=aov(resp~trat+block, data=dados[dados$time==levels(dados$time)[i],]) anovag[[i]]=anova(mod)$`Pr(>F)`[1] cv[[i]]=sqrt(anova(mod)$`Mean Sq`[3])/mean(mod$model$resp)*100 duncan=duncan(mod,"trat",alpha = alpha.t) duncan$groups=duncan$groups[unique(as.character(trat)),2] if(all.letters==FALSE){ if(anova(mod)$`Pr(>F)`[1]>alpha.f){duncan$groups=c("ns",rep(" ",length(unique(trat))-1))}} duncang[[i]]=as.character(duncan$groups) ordem[[i]]=rownames(duncan$groups) norm=shapiro.test(mod$residuals) homo=with(dados[dados$time==levels(dados$time)[i],], bartlett.test(mod$residuals~trat)) indep=dwtest(mod) 
normg[[i]]=norm$p.value homog[[i]]=homo$p.value indepg[[i]]=indep$p.value } m=unlist(duncang) nor=unlist(normg) hom=unlist(homog) ind=unlist(indepg) an=unlist(anovag) cv=unlist(cv) press=data.frame(an,nor,hom,ind,cv) colnames(press)=c("p-value ANOVA","Shapiro-Wilk","Bartlett","Durbin-Watson","CV (%)")} if(mcomp=="sk"){ scott=c() normg=c() homog=c() indepg=c() anovag=c() cv=c() for(i in 1:length(levels(time))){ mod=aov(resp~trat+block, data=dados[dados$time==levels(dados$time)[i],]) anovag[[i]]=anova(mod)$`Pr(>F)`[1] cv[[i]]=sqrt(anova(mod)$`Mean Sq`[3])/mean(mod$model$resp)*100 nrep=with(dados[dados$time==levels(dados$time)[i],], table(trat)[1]) ao=anova(mod) medias=with(dados[dados$time==levels(dados$time)[i],], sort(tapply(resp,trat,mean),decreasing = TRUE)) letra=scottknott(means = medias, df1 = ao$Df[3], nrep = nrep, QME = ao$`Mean Sq`[3], alpha = alpha.t) letra1=data.frame(resp=medias,groups=letra) letra1=letra1[unique(as.character(trat)),] data=letra1$groups if(all.letters==FALSE){ if(anova(mod)$`Pr(>F)`[1]>alpha.f){data=c("ns",rep(" ",length(unique(trat))-1))}} scott[[i]]=as.character(data) norm=shapiro.test(mod$residuals) homo=with(dados[dados$time==levels(dados$time)[i],], bartlett.test(mod$residuals~trat)) indep=dwtest(mod) normg[[i]]=norm$p.value homog[[i]]=homo$p.value indepg[[i]]=indep$p.value } m=unlist(scott) nor=unlist(normg) hom=unlist(homog) ind=unlist(indepg) an=unlist(anovag) cv=unlist(cv) press=data.frame(an,nor,hom,ind,cv) colnames(press)=c("p-value ANOVA","Shapiro-Wilk","Bartlett","Durbin-Watson","CV (%)")} if(mcomp=="fd"){ friedman=function(judge,trt,evaluation,alpha=0.05,group=TRUE,main=NULL,console=FALSE){ name.x <- paste(deparse(substitute(judge))) name.y <- paste(deparse(substitute(evaluation))) name.t <- paste(deparse(substitute(trt))) name.j <- paste(deparse(substitute(judge))) if(is.null(main))main<-paste(name.y,"~", name.j,"+",name.t) datos <- data.frame(judge, trt, evaluation) matriz <- by(datos[,3], datos[,1:2], function(x) 
mean(x,na.rm=TRUE)) matriz <-as.data.frame(matriz[,]) name<-as.character(colnames(matriz)) ntr <-length(name) m<-dim(matriz) v<-array(0,m) for (i in 1:m[1]){ v[i,]<-rank(matriz[i,]) } vv<-as.numeric(v) junto <- data.frame(evaluation, trt) medians<-mean_stat(junto[,1],junto[,2],stat="median") for(i in c(1,5,2:4)) { x <- mean_stat(junto[,1],junto[,2],function(x)quantile(x)[i]) medians<-cbind(medians,x[,2])} medians<-medians[,3:7] names(medians)<-c("Min","Max","Q25","Q50","Q75") Means <- mean_stat(junto[,1],junto[,2],stat="mean") sds <- mean_stat(junto[,1],junto[,2],stat="sd") nn <- mean_stat(junto[,1],junto[,2],stat="length") nr<-unique(nn[,2]) s<-array(0,m[2]) for (j in 1:m[2]){ s[j]<-sum(v[,j])} Means<-data.frame(Means,std=sds[,2],r=nn[,2],medians) names(Means)[1:2]<-c(name.t,name.y) means<-Means[,c(1:2,4)] rownames(Means)<-Means[,1] Means<-Means[,-1] means[,2]<-s rs<-array(0,m[2]) rs<-s-m[1]*(m[2]+1)/2 T1<-12*t(rs)%*%rs/(m[1]*m[2]*(m[2]+1)) T2<-(m[1]-1)*T1/(m[1]*(m[2]-1)-T1) if(console){ cat("\nStudy:",main,"\n\n") cat(paste(name.t,",",sep="")," Sum of the ranks\n\n") print(data.frame(row.names = means[,1], means[,-1])) cat("\nFriedman") cat("\n===============")} A1<-0 for (i in 1:m[1]) A1 <- A1 + t(v[i,])%*%v[i,] DFerror <-(m[1]-1)*(m[2]-1) Tprob<-qt(1-alpha/2,DFerror) LSD<-as.numeric(Tprob*sqrt(2*(m[1]*A1-t(s)%*%s)/DFerror)) C1 <-m[1]*m[2]*(m[2]+1)^2/4 T1.aj <-(m[2]-1)*(t(s)%*%s-m[1]*C1)/(A1-C1) T2.aj <-(m[1]-1)*T1.aj/(m[1]*(m[2]-1)-T1.aj) p.value<-1-pchisq(T1.aj,m[2]-1) p.noadj<-1-pchisq(T1,m[2]-1) PF<-1-pf(T2.aj, ntr-1, (ntr-1)*(nr-1) ) if(console){ cat("\nAdjusted for ties") cat("\nCritical Value:",T1.aj) cat("\nP.Value Chisq:",p.value) cat("\nF Value:",T2.aj) cat("\nP.Value F:",PF,"\n") cat("\nPost Hoc Analysis\n") } statistics<-data.frame(Chisq=T1.aj,Df=ntr-1,p.chisq=p.value,F=T2.aj,DFerror=DFerror,p.F=PF,t.value=Tprob,LSD) if ( group & length(nr) == 1 & console){ cat("\nAlpha:",alpha,"; DF Error:",DFerror) cat("\nt-Student:",Tprob) cat("\nLSD:", LSD,"\n") 
} if ( group & length(nr) != 1 & console) cat("\nGroups according to probability of treatment differences and alpha level(",alpha,")\n") if ( length(nr) != 1) statistics<-data.frame(Chisq=T1.aj,Df=ntr-1,p.chisq=p.value,F=T2.aj,DFerror=DFerror,p.F=PF) comb <-utils::combn(ntr,2) nn<-ncol(comb) dif<-rep(0,nn) pvalue<-rep(0,nn) LCL<-dif UCL<-dif sig<-NULL LSD<-rep(0,nn) stat<-rep("ns",nn) for (k in 1:nn) { i<-comb[1,k] j<-comb[2,k] dif[k]<-s[comb[1,k]]-s[comb[2,k]] sdtdif<- sqrt(2*(m[1]*A1-t(s)%*%s)/DFerror) pvalue[k]<- round(2*(1-pt(abs(dif[k])/sdtdif,DFerror)),4) LSD[k]<-round(Tprob*sdtdif,2) LCL[k] <- dif[k] - LSD[k] UCL[k] <- dif[k] + LSD[k] sig[k]<-" " if (pvalue[k] <= 0.001) sig[k]<-"***" else if (pvalue[k] <= 0.01) sig[k]<-"**" else if (pvalue[k] <= 0.05) sig[k]<-"*" else if (pvalue[k] <= 0.1) sig[k]<-"." } if(!group){ tr.i <- means[comb[1, ],1] tr.j <- means[comb[2, ],1] comparison<-data.frame("difference" = dif, pvalue=pvalue,"signif."=sig,LCL,UCL) rownames(comparison)<-paste(tr.i,tr.j,sep=" - ") if(console){cat("\nComparison between treatments\nSum of the ranks\n\n") print(comparison)} groups=NULL } if (group) { Q<-matrix(1,ncol=ntr,nrow=ntr) p<-pvalue k<-0 for(i in 1:(ntr-1)){ for(j in (i+1):ntr){ k<-k+1 Q[i,j]<-p[k] Q[j,i]<-p[k] } } groups <- ordenacao(means[, 1], means[, 2],alpha, Q,console) names(groups)[1]<-"Sum of ranks" if(console) { cat("\nTreatments with the same letter are not significantly different.\n\n") print(groups) } comparison<-NULL } parameters<-data.frame(test="Friedman",name.t=name.t,ntr = ntr,alpha=alpha) rownames(parameters)<-" " rownames(statistics)<-" " Means<-data.frame(rankSum=means[,2],Means) Means<-Means[,c(2,1,3:9)] output<-list(statistics=statistics,parameters=parameters, means=Means,comparison=comparison,groups=groups) class(output)<-"group" invisible(output) } fdg=c() ordem=c() anovag=c() for(i in 1:length(levels(time))){ fd=friedman(block,trat,resp,alpha=alpha.t) anovag[[i]]=mod$statistics$p.chisq 
fd$groups=fd$groups[unique(as.character(trat)),2]
if(all.letters==FALSE){
  if(fd$statistics$p.chisq>alpha.f){fd$groups=c("ns",rep(" ",length(unique(trat))-1))}}
fdg[[i]]=as.character(fd$groups)
ordem[[i]]=rownames(fd$groups)
}
m=unlist(fdg)
an=unlist(anovag)
press=data.frame(an);colnames(press)=c("p-value Chisq Friedman")}
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("ANOVA and assumptions")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
print(press)
dadosm=data.frame(time=as.character(rep(unique(time),e=length(unique(as.character(trat))))),
                  trat=rep(unique(as.character(trat)),length(unique(time))),
                  media=c(tapply(resp,list(trat,time),mean, na.rm=TRUE)[unique(as.character(trat)),]),
                  desvio=c(tapply(resp,list(trat,time),sd, na.rm=TRUE)[unique(as.character(trat)),]),
                  letra=m)
if(xnumeric==TRUE){dadosm$time=as.numeric(as.character(dadosm$time))}
if(xnumeric==FALSE){dadosm$time=factor(dadosm$time,unique(dadosm$time))}
time=dadosm$time
trat=dadosm$trat
media=dadosm$media
desvio=dadosm$desvio
letra=dadosm$letra
if(geom=="point"){
grafico=ggplot(dadosm,aes(y=media, x=time))+
  geom_point(aes(shape=factor(trat, levels=unique(as.character(trat))),
                 group=factor(trat, levels=unique(as.character(trat)))),size=pointsize)+
  geom_line(aes(lty=factor(trat, levels=unique(as.character(trat))),
                group=factor(trat, levels=unique(as.character(trat)))),size=0.8)+
  ylab(ylab)+
  xlab(xlab)+theme+
  theme(text = element_text(size=textsize,color="black", family = family),
        axis.title = element_text(size=textsize,color="black", family = family),
        axis.text = element_text(size=textsize,color="black", family = family),
        legend.position = posi,
        legend.text = element_text(size = textsize))+labs(shape=legend, lty=legend)
if(error==TRUE){grafico=grafico+
  geom_errorbar(aes(ymin=media-desvio,
                    ymax=media+desvio),
                size=size.bar,
                width=width.bar)}
if(addmean==FALSE && error==FALSE){grafico=grafico+
geom_text_repel(aes(y=media+sup,label=letra),family=family,size=labelsize)} if(addmean==TRUE && error==FALSE){grafico=grafico+ geom_text_repel(aes(y=media+sup, label=paste(format(media,digits = dec), letra)),family=family,size=labelsize)} if(addmean==FALSE && error==TRUE){grafico=grafico+ geom_text_repel(aes(y=desvio+media+sup, label=letra),family=family,size=labelsize)} if(addmean==TRUE && error==TRUE){grafico=grafico+ geom_text_repel(aes(y=desvio+media+sup, label=paste(format(media,digits = dec), letra)),family=family,size=labelsize)} if(is.na(ylim[1])==FALSE){grafico=grafico+scale_y_continuous(breaks = ylim)} } if(geom=="bar"){ if(sup==0){sup=0.1*mean(dadosm$media)} grafico=ggplot(dadosm,aes(y=media, x=as.factor(time), fill=factor(trat,levels = unique(trat))))+ geom_col(position = "dodge",color="black")+ ylab(ylab)+ xlab(xlab)+theme+ theme(text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), legend.position = posi, legend.text = element_text(size = textsize))+labs(fill=legend) if(error==TRUE){grafico=grafico+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=width.bar, size=size.bar, position = position_dodge(width=0.9))} if(addmean==FALSE && error==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra), size=labelsize,family=family, position = position_dodge(width=0.9))} if(addmean==TRUE && error==FALSE){grafico=grafico+ geom_text(aes(y=media+sup, label=paste(format(media,digits = dec),letra)), size=labelsize,family=family, position = position_dodge(width=0.9))} if(addmean==FALSE && error==TRUE){grafico=grafico+ geom_text(aes(y=desvio+media+sup,label=letra), size=labelsize,family=family, position = position_dodge(width=0.9))} if(addmean==TRUE && error==TRUE){grafico=grafico+ geom_text(aes(y=desvio+media+sup, label=paste(format(media,digits = dec),letra)), size=labelsize,family=family, 
position = position_dodge(width=0.9))}
}
if(fill=="gray"){grafico=grafico+scale_fill_grey(start = 1, end = 0.1)}
if(is.na(ylim[1])==FALSE){grafico=grafico+scale_y_continuous(breaks = ylim)}
graficos=as.list(grafico)
print(grafico)
}
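# The per-evaluation strategy DBCT() applies above (one RCBD ANOVA fitted for
# each level of `time`) can be sketched on simulated data; all variable names
# below are illustrative, not the package's internals.

```r
# Minimal sketch of DBCT()'s core loop: split by evaluation time, fit an
# RCBD model at each time, and collect the treatment p-value (toy data).
set.seed(1)
dados <- data.frame(
  resp  = rnorm(24, mean = 10),
  trat  = factor(rep(c("A", "B"), 12)),
  block = factor(rep(rep(1:3, each = 2), 4)),
  time  = factor(rep(1:4, each = 6))
)
pvals <- sapply(levels(dados$time), function(tm) {
  mod <- aov(resp ~ trat + block, data = dados[dados$time == tm, ])
  anova(mod)$`Pr(>F)`[1]   # treatment p-value at this evaluation
})
length(pvals)  # one p-value per evaluation time
```

Each sub-model uses only that evaluation's residual, which is why the assumption tests (Shapiro-Wilk, Bartlett, Durbin-Watson) are also reported once per time.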
# R/dbct_function.R
#' Descriptive: Descriptive analysis (Two factors) #' #' @description It performs the descriptive analysis of an experiment with two factors of interest. #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @param f1 Numeric or complex vector with factor 1 levels #' @param f2 Numeric or complex vector with factor 2 levels #' @param response Numerical vector containing the response of the experiment. #' @param theme ggplot2 theme (\emph{default} is theme_classic()) #' @param ylab Variable response name (Accepts the \emph{expression}() function) #' @keywords Descriptive #' @keywords Experimental #' @return The function returns exploratory measures of position and dispersion, such as mean, median, maximum, minimum, coefficient of variation, etc ... #' @export #' @examples #' library(AgroR) #' data(cloro) #' with(cloro, desc2fat(f1,f2,resp)) ###################################################################################### ## Analise descritiva ###################################################################################### desc2fat=function(f1, f2, response, ylab="Response", theme=theme_classic()){ requireNamespace("crayon") requireNamespace("ggplot2") f1=as.factor(f1) f2=as.factor(f2) #=========================== # Geral #=========================== Media = mean(response, na.rm=TRUE) Mediana = median(response, na.rm=TRUE) Minimo = min(response, na.rm=TRUE) Maximo = max(response, na.rm=TRUE) Variancia = var(response, na.rm=TRUE) Desvio = sd(response, na.rm=TRUE) CV = Desvio / Media * 100 juntos=cbind(Media, Mediana, Minimo, Maximo, Variancia, Desvio, CV) Media = tapply(response, list(f1, f2), mean, na.rm=TRUE) Mediana = tapply(response, list(f1, f2), median, na.rm=TRUE) Minimo = tapply(response, list(f1, f2), min, na.rm=TRUE) Maximo = tapply(response, list(f1, f2), max, na.rm=TRUE) Variancia = tapply(response, list(f1, f2), var, na.rm=TRUE) Desvio = 
tapply(response, list(f1, f2), sd, na.rm=TRUE) CV = Desvio / Media * 100 juntos1 = list( "Mean" = Media, "Median" = Mediana, "Min" = Minimo, "Max" = Maximo, "Variance" = Variancia, "SD"=Desvio, "CV(%)"=CV) #=========================== # Fator 1 #=========================== dados=data.frame(f1,response) grafico=ggplot(dados,aes(x=f1,y=response))+ geom_boxplot(aes(fill=f1, group=f1),show.legend = F)+ ylab(ylab)+theme grafico=grafico+ theme(text = element_text(size=12,color="black"), axis.title = element_text(size=12,color="black"), axis.text = element_text(size=12,color="black")) grafico=as.list(grafico) Media = tapply(response, f1, mean, na.rm=TRUE) Mediana = tapply(response, f1, median, na.rm=TRUE) Minimo = tapply(response, f1, min, na.rm=TRUE) Maximo = tapply(response, f1, max, na.rm=TRUE) Variancia = tapply(response, f1, var, na.rm=TRUE) Desvio = tapply(response, f1, sd, na.rm=TRUE) CV = Desvio / Media * 100 juntos2 = cbind(Media, Mediana, Minimo, Maximo, Variancia, Desvio, CV) colnames(juntos2)=c("Mean","Median","Min","Max","Variance","SD","CV(%)") #=========================== # Fator 2 #=========================== dados=data.frame(f2,response) grafico1=ggplot(dados,aes(x=f2,y=response))+ geom_boxplot(aes(fill=f2, group=f2),show.legend = F)+ ylab(ylab)+theme grafico1=grafico1+ theme(text = element_text(size=12,color="black"), axis.title = element_text(size=12,color="black"), axis.text = element_text(size=12,color="black")) grafico1=as.list(grafico1) Media = tapply(response, f2, mean, na.rm=TRUE) Mediana = tapply(response, f2, median, na.rm=TRUE) Minimo = tapply(response, f2, min, na.rm=TRUE) Maximo = tapply(response, f2, max, na.rm=TRUE) Variancia = tapply(response, f2, var, na.rm=TRUE) Desvio = tapply(response, f2, sd, na.rm=TRUE) CV = Desvio / Media * 100 juntos3=cbind(Media, Mediana, Minimo, Maximo, Variancia, Desvio, CV) colnames(juntos3)=c("Mean","Median","Min","Max","Variance","SD","CV(%)") #=========================== # Fator 1 x Fator 2 
#=========================== dados=data.frame(f1,f2,response) grafico2=ggplot(dados,aes(x=f1,y=response, fill=f2))+ geom_boxplot()+ ylab(ylab)+theme grafico2=grafico2+ theme(text = element_text(size=12,color="black"), axis.title = element_text(size=12,color="black"), axis.text = element_text(size=12,color="black")) grafico2=as.list(grafico2) #=========================== # Interacao #=========================== inter1=ggplot(dados,aes(x=f1,y=response, color=f2))+ stat_summary(fun.data = mean_se)+stat_summary(aes(color=f2, group=f2), geom="line", fun.data = mean_se)+ ylab(ylab)+theme inter1=inter1+ theme(text = element_text(size=12,color="black"), axis.title = element_text(size=12,color="black"), axis.text = element_text(size=12,color="black")) inter2=ggplot(dados,aes(x=f2,y=response, color=f1))+ stat_summary(fun.data = mean_se)+stat_summary(aes(color=f1, group=f1), geom="line", fun.data = mean_se)+ ylab(ylab)+theme inter2=inter2+ theme(text = element_text(size=12,color="black"), axis.title = element_text(size=12,color="black"), axis.text = element_text(size=12,color="black")) inter1=as.list(inter1) inter2=as.list(inter2) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(italic("general description"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(juntos) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(italic("Interaction"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(juntos1) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(italic("f1"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(juntos2) cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(italic("f2"))) 
cat(green(bold("\n-----------------------------------------------------------------\n"))) print(juntos3) cat(green(bold("\n-----------------------------------------------------------------\n"))) cowplot::plot_grid(grafico,grafico1,grafico2) cowplot::plot_grid(inter1,inter2,ncol=2) }
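# Every two-way table desc2fat() prints comes from the same tapply() pattern;
# a toy check (data invented for illustration):

```r
# tapply(response, list(f1, f2), FUN) returns a levels(f1) x levels(f2)
# matrix, one cell per factor combination (toy data).
f1   <- factor(rep(c("a", "b"), each = 4))
f2   <- factor(rep(c("x", "y"), 4))
resp <- c(1, 2, 3, 4, 5, 6, 7, 8)
m <- tapply(resp, list(f1, f2), mean)
m["a", "x"]  # mean of resp where f1 == "a" and f2 == "x"
```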
# R/desc2fat_function.R
#' Descriptive: Descriptive analysis (Three factors) #' #' @description Performs the descriptive graphical analysis of an experiment with three factors of interest. #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @param f1 Numeric or complex vector with factor 1 levels #' @param f2 Numeric or complex vector with factor 2 levels #' @param f3 Numeric or complex vector with factor 3 levels #' @param response Numerical vector containing the response of the experiment. #' @param legend.title Legend title #' @param ylab Variable response name (Accepts the \emph{expression}() function) #' @param xlab x name (Accepts the \emph{expression}() function) #' @param theme ggplot theme #' @param plot "interaction" or "box" #' @keywords Descriptive #' @keywords Experimental #' @return The function returns a triple interaction graph. #' @export #' @examples #' library(AgroR) #' data(enxofre) #' with(enxofre, desc3fat(f1, f2, f3, resp)) ###################################################################################### ## Analise descritiva ###################################################################################### desc3fat=function(f1, f2, f3, response, legend.title="Legend", xlab="", ylab="Response", theme=theme_classic(), plot="interaction"){ f1=as.factor(f1) f2=as.factor(f2) f3=as.factor(f3) requireNamespace("ggplot2") #=========================== # Fator 1 x Fator 2 #=========================== dados=data.frame(f1,f2,f3, response) #=========================== # Geral #=========================== Media = mean(response, na.rm=TRUE) Mediana = median(response, na.rm=TRUE) Minimo = min(response, na.rm=TRUE) Maximo = max(response, na.rm=TRUE) Variancia = var(response, na.rm=TRUE) Desvio = sd(response, na.rm=TRUE) CV = Desvio / Media * 100 juntos=cbind(Media, Mediana, Minimo, Maximo, Variancia, Desvio, CV) 
colnames(juntos)=c("Mean","Median","Min","Max","Variance","SD","CV(%)") rownames(juntos)="" cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(italic("General description"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(juntos) #=========================== # Fator 1 #=========================== Media = tapply(response, f1, mean, na.rm=TRUE) Mediana = tapply(response, f1, median, na.rm=TRUE) Minimo = tapply(response, f1, min, na.rm=TRUE) Maximo = tapply(response, f1, max, na.rm=TRUE) Variancia = tapply(response, f1, var, na.rm=TRUE) Desvio = tapply(response, f1, sd, na.rm=TRUE) CV = Desvio / Media * 100 juntos2 = cbind(Media, Mediana, Minimo, Maximo, Variancia, Desvio, CV) colnames(juntos2)=c("Mean","Median","Min","Max","Variance","SD","CV(%)") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(italic("F1"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(juntos2) #=========================== # Fator 2 #=========================== Media = tapply(response, f2, mean, na.rm=TRUE) Mediana = tapply(response, f2, median, na.rm=TRUE) Minimo = tapply(response, f2, min, na.rm=TRUE) Maximo = tapply(response, f2, max, na.rm=TRUE) Variancia = tapply(response, f2, var, na.rm=TRUE) Desvio = tapply(response, f2, sd, na.rm=TRUE) CV = Desvio / Media * 100 juntos3=cbind(Media, Mediana, Minimo, Maximo, Variancia, Desvio, CV) colnames(juntos3)=c("Mean","Median","Min","Max","Variance","SD","CV(%)") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(italic("F2"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(juntos3) #=========================== # Fator 3 #=========================== Media = tapply(response, f3, mean, na.rm=TRUE) Mediana = tapply(response, f3, median, na.rm=TRUE) Minimo = 
tapply(response, f3, min, na.rm=TRUE) Maximo = tapply(response, f3, max, na.rm=TRUE) Variancia = tapply(response, f3, var, na.rm=TRUE) Desvio = tapply(response, f3, sd, na.rm=TRUE) CV = Desvio / Media * 100 juntos3=cbind(Media, Mediana, Minimo, Maximo, Variancia, Desvio, CV) colnames(juntos3)=c("Mean","Median","Min","Max","Variance","SD","CV(%)") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(italic("F3"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(juntos3) #=========================== # inter #=========================== Media = tapply(response, paste(f1,f2,f3), mean, na.rm=TRUE) Mediana = tapply(response, paste(f1,f2,f3), median, na.rm=TRUE) Minimo = tapply(response, paste(f1,f2,f3), min, na.rm=TRUE) Maximo = tapply(response, paste(f1,f2,f3), max, na.rm=TRUE) Variancia = tapply(response, paste(f1,f2,f3), var, na.rm=TRUE) Desvio = tapply(response, paste(f1,f2,f3), sd, na.rm=TRUE) CV = Desvio / Media * 100 juntos4=cbind(Media, Mediana, Minimo, Maximo, Variancia, Desvio, CV) colnames(juntos4)=c("Mean","Median","Min","Max","Variance","SD","CV(%)") cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(italic("Interaction"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(juntos4) if(plot=="box"){ grafico=ggplot(dados,aes(x=f1,y=response, fill=f2))+ stat_boxplot(geom='errorbar', linetype=1, position = position_dodge(width = 0.75),width=0.5)+ geom_boxplot()+xlab(xlab)+labs(fill=legend.title)+ ylab(ylab)+theme+facet_wrap(~f3) grafico=grafico+ theme(text = element_text(size=12,color="black"), axis.title = element_text(size=12,color="black"), axis.text = element_text(size=12,color="black"), strip.text = element_text(size=13)) } #=========================== # Interacao #=========================== if(plot=="interaction"){ grafico=ggplot(dados,aes(x=f1,y=response, color=f2))+ 
stat_summary(fun.data = mean_se)+ stat_summary(aes(color=f2, group=f2), geom="line", fun.data = mean_se)+ ylab(ylab)+xlab(xlab)+labs(fill=legend.title)+theme+facet_wrap(~f3) grafico=grafico+ theme(text = element_text(size=12,color="black"), axis.title = element_text(size=12,color="black"), axis.text = element_text(size=12,color="black"), strip.text = element_text(size=13))} print(grafico) grafico=as.list(grafico) }
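# The CV(%) column reported in all of these summary tables is simply
# sd/mean * 100; a quick sanity check on a vector with known spread:

```r
# Coefficient of variation as used throughout the descriptive functions.
x  <- c(8, 10, 12)   # sd is exactly 2, mean is 10
cv <- sd(x) / mean(x) * 100
cv  # 20
```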
# R/desc3fat_function.R
#' Descriptive: Descriptive analysis #' @description Performs the descriptive analysis of an experiment with a factor of interest. #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @param response Numerical vector containing the response of the experiment. #' @param trat Numerical or complex vector with treatments #' @param ylab Variable response name (Accepts the \emph{expression}() function) #' @param xlab x name (Accepts the \emph{expression}() function) #' @param ylim y-axis scale #' @keywords Descriptive #' @keywords Experimental #' @seealso \link{desc2fat}, \link{tabledesc},\link{dispvar} #' @return The function returns exploratory measures of position and dispersion, such as mean, median, maximum, minimum, coefficient of variation, etc ... #' @export #' @examples #' library(AgroR) #' data("pomegranate") #' with(pomegranate, desc(trat,WL)) ###################################################################################### ## Analise descritiva ###################################################################################### desc=function(trat, response, ylab="Response", xlab="Treatment", ylim=NA){ requireNamespace("crayon") requireNamespace("ggplot2") trat=as.factor(trat) Media=mean(response) Mediana=median(response) Minimo=min(response) Maximo=max(response) Variancia=var(response) Desvio=sd(response) CV=Desvio/Media*100 juntos=cbind(Media,Mediana,Minimo,Maximo,Variancia,Desvio,CV) rownames(juntos)="General" colnames(juntos)=c("Mean","Median","Min","Max","Variance","SD","CV(%)") Media=tapply(response, trat, mean, na.rm=TRUE) Mediana=tapply(response, trat, median, na.rm=TRUE) Minimo=tapply(response, trat, min, na.rm=TRUE) Maximo=tapply(response, trat, max, na.rm=TRUE) Variancia=tapply(response, trat, var, na.rm=TRUE) Desvio=tapply(response, trat, sd, na.rm=TRUE) CV=Desvio/Media*100 juntos1=cbind(Media,Mediana,Minimo,Maximo,Variancia,Desvio,CV) 
colnames(juntos1)=c("Mean","Median","Min","Max","Variance","SD","CV(%)")
dados=data.frame(trat,response)
grafico=ggplot(dados,aes(x=trat,y=response))+
  geom_boxplot(aes(fill=trat, group=trat),show.legend = FALSE)+
  geom_jitter(aes(group=trat),show.legend = FALSE, width=0.1,alpha=0.2)+
  ylab(ylab)+xlab(xlab)+theme_classic()
if(is.na(ylim[1])==TRUE){grafico=grafico}else{grafico=grafico+ylim(ylim)}
grafico=grafico+
  theme(text = element_text(size=14,color="black"),
        axis.text = element_text(size=12,color="black"),
        axis.title = element_text(size=14,color="black"))+
  geom_text(aes(label=rownames(dados)),size=4, nudge_x = 0.1)
print(grafico)
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(italic("General description")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
print(juntos)
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(italic("Treatment")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
print(juntos1)
grafico=as.list(grafico)
}
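# desc() assembles its per-treatment table by cbind()-ing one tapply() call
# per statistic; the same pattern on invented data:

```r
# One tapply() per statistic, column-bound into a summary matrix (toy data).
trat <- factor(rep(c("T1", "T2"), each = 3))
resp <- c(2, 4, 6, 1, 3, 5)
tab  <- cbind(Mean = tapply(resp, trat, mean),
              SD   = tapply(resp, trat, sd))
tab["T1", "Mean"]  # 4
```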
# R/desc_function.R
#' Analysis: Regression analysis by orthogonal polynomials for double factorial scheme with additional control #' #' @description Regression analysis by orthogonal polynomials for double factorial scheme with additional control. Cases in which the additional belongs to the regression curve, being common to the qualitative levels. In these cases, the additional (usually dose 0/control treatment) is not part of the factor arrangement. One option addressed by this function is to analyze a priori as a double factorial scheme with an additional one and correct the information a posteriore using information from the initial analysis, such as the degree of freedom and the sum of squares of the residue. #' #' @param output Output from a FAT2DIC.ad or FAT2DBC.ad function #' @param ad.value Additional treatment quantitative factor level #' @param design Type of experimental project (FAT2DIC.ad or FAT2DBC.ad) #' @param grau Degree of the polynomial (only for the isolated effect of the quantitative factor) #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @export #' @examples #' #' #================================================== #' # Data set #' trat=rep(c("A","B","C"),e=12) #' dose=rep(rep(c(200,400,600,800),e=3),3) #' d0=c(40,45,48) #' respo=c(60,55,56, 60,65,66, 70,75,76, #' 80,85,86, 50,55,56, 70,75,76, #' 60,65,66, 50,45,46, 50,45,46, #' 50,55,66, 70,75,76, 80,85,86) #' repe=rep(c("R1","R2","R3"),12) #' #================================================== #' # Analysis FAT2DIC.ad #' resu=FAT2DIC.ad(trat,dose,repe = repe,respo,responseAd = d0,quali = c(TRUE,FALSE),grau21 = c(1,2,1)) #' #' #================================================== #' # Regression analysis #' desd_fat2_quant_ad(resu,ad.value=0,design="FAT2DIC.ad") #' #' #' # Data set #' trat=rep(c("A","B"),e=12) #' dose=rep(rep(c(200,400,600,800),e=3),2) #' d0=c(40,45,48) #' 
respo=c(60,55,56,60,65,66,70,75,76,80,85,86,50,45,46,50,55,66,70,75,76,80,85,86) #' repe=rep(c("R1","R2","R3"),8) #' #================================================== #' # Analysis FAT2DIC.ad #' resu=FAT2DIC.ad(trat,dose,repe = repe,respo,responseAd = d0,quali = c(TRUE,FALSE)) #' #================================================== #' # Regression analysis #' desd_fat2_quant_ad(resu,ad.value=0,design="FAT2DIC.ad",grau=1) desd_fat2_quant_ad=function(output, ad.value=0, design="FAT2DIC.ad", grau=1){ alpha.f=output[[1]]$plot$alpha.f alpha.t=output[[1]]$plot$alpha.t grau21=output[[1]]$plot$grau21 ylab=output[[1]]$plot$ylab xlab=parse(text=output[[1]]$plot$xlab.factor[2]) posi=output[[1]]$plot$posi theme=output[[1]]$plot$theme textsize=output[[1]]$plot$textsize point=output[[1]]$plot$point family=output[[1]]$plot$family dados=output[[1]]$plot$ordempadronizado ana=output[[1]]$plot$anava nni=length(unique(dados$f1)) respad=output[[1]]$plot$respAd dose0=rep(respad,nni) trat0=rep(unique(dados$f1),e=length(respad)) doses0=rep(ad.value,length(dose0)) f1=c(dados$f1,trat0) f2=c(dados$f2,doses0) resp=c(dados$resp,dose0) if(design=="FAT2DIC.ad"){ if(ana$`Pr(>F)`[3]>alpha.f & ana$`Pr(>F)`[2]<alpha.f){print(polynomial(f2, resp, grau = grau, ylab = ylab, xlab=xlab, posi=posi, theme=theme, textsize=textsize, point=point, family=family, SSq = ana$`Sum Sq`[5], DFres = ana$Df[5])[[1]])} if(ana$`Pr(>F)`[3]<alpha.f){saida=polynomial2_color(f2,resp,f1,grau = grau21,SSq = ana$`Sum Sq`[5],DFres = ana$Df[5])}} if(design=="FAT2DBC.ad"){ if(ana$`Pr(>F)`[4]>alpha.f & ana$`Pr(>F)`[2]<alpha.f){print(polynomial(f2, resp, grau = grau, ylab = ylab, xlab=xlab, posi=posi, theme=theme, textsize=textsize, point=point, family=family, SSq = ana$`Sum Sq`[6], DFres = ana$Df[6])[[1]])} if(ana$`Pr(>F)`[4]<alpha.f){saida=polynomial2_color(f2,resp,f1,grau = grau21,SSq = ana$`Sum Sq`[6],DFres = ana$Df[6])}}}
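# How the additional control is folded back into the quantitative series
# (each control replicate repeated once per qualitative level, at dose
# ad.value) can be sketched as below; all names are illustrative, not the
# package's internals.

```r
# Sketch of appending the additional control rows to the factorial data.
respAd   <- c(40, 45, 48)   # control replicates
f1.lev   <- c("A", "B")     # qualitative factor levels
ad.value <- 0               # dose assigned to the control
dose0  <- rep(respAd, length(f1.lev))          # responses, repeated per level
trat0  <- rep(f1.lev, each = length(respAd))   # level labels for each row
doses0 <- rep(ad.value, length(dose0))         # dose column for the new rows
length(dose0)  # 2 levels x 3 replicates = 6 added rows
```

These rows are concatenated with the factorial observations before the polynomial fit, so the control anchors every qualitative level's regression curve at `ad.value`.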
# R/desd_fat2_quant_ad.R
#' Analysis: Completely randomized design with an additional treatment for quantitative factor
#'
#' @description Statistical analysis of experiments conducted in a completely randomized and balanced design with one factor and an additional treatment, considering the fixed model.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param trat Numerical or complex vector with treatments
#' @param response Numerical vector containing the response of the experiment.
#' @param responsead Numerical vector with additional treatment responses
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param homog Homogeneity test of variances (\emph{default} is Bartlett)
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param grau Degree of polynomial in case of quantitative factor (\emph{default} is 1)
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param ylab Variable response name (Accepts the \emph{expression}() function)
#' @param xlab Treatments name (Accepts the \emph{expression}() function)
#' @param family Font family
#' @param pointsize Point size
#' @param linesize Line size (trend line and error bar)
#' @param width.bar Width of the error bars of a regression graph.
#' @param posi Legend position
#' @param point Defines whether to plot the mean ("mean"), the mean with standard deviation ("mean_sd" - \emph{default}) or the mean with standard error ("mean_se").
#' @note In some experiments, the researcher may study a quantitative factor, such as fertilizer doses, together with a control treated as qualitative, such as a reference fertilizer. In these cases, it makes a difference whether the polynomial is unfolded against the residual of a model that excludes the qualitative treatment or of a model that retains it. In this approach, the residual also accounts for the qualitative treatment, a method similar to the factorial scheme with additional control.
#' @return The table of analysis of variance; the test of normality of errors (Shapiro-Wilk ("sw"), Lilliefors ("li"), Anderson-Darling ("ad"), Cramer-von Mises ("cvm"), Pearson ("pearson") and Shapiro-Francia ("sf")); the test of homogeneity of variances (Bartlett ("bt") or Levene ("levene")); the Durbin-Watson test of independence of errors; and the fit of polynomial regression models up to degree 3. The function also returns a standardized residual plot.
#' @keywords DIC
#' @keywords additional treatment
#' @export
#' @examples
#' datadicad=data.frame(doses = c(rep(c(1:5),e=3)),
#'                      resp = c(3,4,3,5,5,6,7,7,8,4,4,5,2,2,3))
#' with(datadicad,dic.ad(doses, resp, rnorm(3,6,0.1),grau=2))

dic.ad=function(trat,
                response,
                responsead,
                grau = 1,
                norm="sw",
                homog="bt",
                alpha.f=0.05,
                theme=theme_classic(),
                ylab="response",
                xlab="independent",
                family="sans",
                posi="top",
                pointsize=4.5,
                linesize=0.8,
                width.bar=NA,
                point="mean_sd"){
  if(is.na(width.bar)==TRUE){width.bar=0.1*mean(trat)}
  if(is.na(grau)==TRUE){grau=1}
  trat1=as.factor(trat)
  mod=aov(response~trat1)
  an=anova(mod)
  trati=as.factor(c(trat,rep("Controle",length(responsead))))
  mod1=aov(c(response,responsead)~trati)
  an1=anova(mod1)
  anava1=rbind(an[1,],an1[1,],an1[2,])
  anava1$Df[2]=1
  anava1$`Sum Sq`[2]=anava1$`Sum Sq`[2]-sum(anava1$`Sum Sq`[1])
  anava1$`Mean Sq`[2]=anava1$`Sum Sq`[2]/anava1$Df[2]
  anava1$`F value`[1:2]=anava1$`Mean Sq`[1:2]/anava1$`Mean Sq`[3]
  rownames(anava1)[1:2]=c("Factor","Factor vs control")
  for(i in 1:(nrow(anava1)-1)){ # parentheses added: `1:nrow(anava1)-1` evaluated as 0:2
    anava1$`Pr(>F)`[i]=1-pf(anava1$`F value`[i],anava1$Df[i],anava1$Df[3])}
  respad=mod1$residuals/sqrt(anava1$`Mean Sq`[3])
  out=respad[respad>3 | respad<(-3)]
  out=names(out)
  out=if(length(out)==0)("No discrepant point")else{out}
  if(norm=="sw"){norm1 = shapiro.test(mod1$residuals)}
  if(norm=="li"){norm1=nortest::lillie.test(mod1$residuals)}
  if(norm=="ad"){norm1=nortest::ad.test(mod1$residuals)}
  if(norm=="cvm"){norm1=nortest::cvm.test(mod1$residuals)}
  if(norm=="pearson"){norm1=nortest::pearson.test(mod1$residuals)}
  if(norm=="sf"){norm1=nortest::sf.test(mod1$residuals)}
  if(homog=="bt"){
    homog1 = bartlett.test(mod1$residuals ~ trati)
    statistic=homog1$statistic
    phomog=homog1$p.value
    method=paste("Bartlett test","(",names(statistic),")",sep="")}
  if(homog=="levene"){
    homog1 = levenehomog(mod1$residuals~trati)[1,]
    statistic=homog1$`F value`[1]
    phomog=homog1$`Pr(>F)`[1]
    method="Levene's Test (center = median)(F)"
    names(homog1)=c("Df","statistic","p.value")}
  indep = dwtest(mod1)
  Ids=ifelse(respad>3 | respad<(-3), "darkblue","black")
  residplot=ggplot(data=data.frame(respad,Ids),aes(y=respad,x=1:length(respad)))+
    geom_point(shape=21,color="gray",fill="gray",size=3)+
    labs(x="",y="Standardized residuals")+
    geom_text(x=1:length(respad),label=1:length(respad),color=Ids,size=4)+
    scale_x_continuous(breaks=1:length(respad))+
    theme_classic()+
    theme(axis.text.y = element_text(size=12),
          axis.text.x = element_blank())+
    geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1)
  print(residplot)
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(green(bold("Normality of errors")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""),
                    Statistic=norm1$statistic,
                    "p-value"=norm1$p.value)
  rownames(normal)=""
  print(normal)
  cat("\n")
  message(if(norm1$p.value>0.05){
    black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")}
    else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors do not follow a normal distribution"})
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(green(bold("Homogeneity of Variances")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  homoge=data.frame(Method=method,
                    Statistic=statistic,
                    "p-value"=phomog)
  rownames(homoge)=""
  print(homoge)
  cat("\n")
  message(if(homog1$p.value>0.05){
    black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")}
    else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"})
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(green(bold("Independence from errors")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  indepe=data.frame(Method=paste(indep$method,"(",names(indep$statistic),")",sep=""),
                    Statistic=indep$statistic,
                    "p-value"=indep$p.value)
  rownames(indepe)=""
  print(indepe)
  cat("\n")
  message(if(indep$p.value>0.05){
    black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")}
    else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors are not independent"})
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(green(bold("Additional Information")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(paste("\nCV (%) = ",round(sqrt(anava1$`Mean Sq`[3])/mean(response,na.rm=TRUE)*100,2)))
  cat(paste("\nMean = ",round(mean(response,na.rm=TRUE),4)))
  cat(paste("\nMedian = ",round(median(response,na.rm=TRUE),4)))
  cat("\nPossible outliers = ", out)
  cat("\n")
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(green(bold("Analysis of Variance")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  print(anava1)
  a=AgroR::polynomial(trat,response,
                      DFres = anava1$Df[3],
                      SSq = anava1$`Sum Sq`[3],
                      ylab = ylab,
                      xlab = xlab,
                      theme = theme,
                      point = point,
                      grau = grau,
                      posi = posi,
                      family = family,
                      pointsize = pointsize,
                      linesize = linesize,
                      width.bar = width.bar)
  print(a[[1]])}
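# The ANOVA decomposition that dic.ad builds by hand can be reproduced in a few
# lines. A minimal sketch (not part of the package; variable names are
# illustrative) using the data from the example above: the treatment sum of
# squares of the full model (doses plus control) is split into the among-dose
# component and a 1-df "factor vs control" contrast, and both F tests use the
# residual of the model that keeps the control.
#
#   set.seed(1)
#   doses <- rep(1:5, each = 3)
#   resp  <- c(3,4,3,5,5,6,7,7,8,4,4,5,2,2,3)
#   ctrl  <- rnorm(3, 6, 0.1)
#
#   an  <- anova(aov(resp ~ factor(doses)))            # factor-only model
#   all <- factor(c(doses, rep("Controle", length(ctrl))))
#   an1 <- anova(aov(c(resp, ctrl) ~ all))             # model with the control
#
#   ss_factor  <- an[1, "Sum Sq"]               # among-dose variation
#   ss_control <- an1[1, "Sum Sq"] - ss_factor  # 1-df factor vs control contrast
#   ms_res     <- an1[2, "Mean Sq"]             # residual that keeps the control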
/scratch/gouwar.j/cran-all/cranData/AgroR/R/dic_ad.R
#' Analysis: Completely randomized design
#'
#' @description Statistical analysis of experiments conducted in a completely randomized and balanced design with one factor, considering the fixed model. The function offers the option to use a non-parametric method or to transform the dataset.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param trat Numerical or complex vector with treatments
#' @param response Numerical vector containing the response of the experiment.
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param homog Homogeneity test of variances (\emph{default} is Bartlett)
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott and Duncan)
#' @param quali Defines whether the factor is quantitative or qualitative (\emph{default} is qualitative)
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param grau Degree of polynomial in case of quantitative factor (\emph{default} is 1)
#' @param transf Applies data transformation (\emph{default} is 1; for log consider 0, "angular" for angular transformation)
#' @param constant Add a constant for transformation (enter value)
#' @param test "parametric" - Parametric test or "noparametric" - non-parametric test
#' @param mcompNP Multiple comparison test (LSD (\emph{default}) or dunn)
#' @param p.adj Method for adjusting p values for Kruskal-Wallis ("none","holm","hommel","hochberg","bonferroni","BH","BY","fdr")
#' @param geom Graph type (columns, boxes or segments)
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param sup Number of units above the standard deviation or average bar on the graph
#' @param CV Plotting the coefficient of variation and p-value of Anova (\emph{default} is TRUE)
#' @param ylab Variable response name (Accepts the \emph{expression}() function)
#' @param xlab Treatments name (Accepts the \emph{expression}() function)
#' @param textsize Font size
#' @param labelsize Label size
#' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat")
#' @param angle x-axis scale text rotation
#' @param family Font family
#' @param dec Number of decimal places
#' @param width.column Width of the column if geom="bar"
#' @param width.bar Width of the error bar
#' @param addmean Plot the average value on the graph (\emph{default} is TRUE)
#' @param errorbar Plot the standard deviation bar on the graph (in the case of segment and column graphs) - \emph{default} is TRUE
#' @param posi Legend position
#' @param point Defines whether to plot the mean ("mean"), the mean with standard deviation ("mean_sd" - \emph{default}) or the mean with standard error ("mean_se"), for quali=FALSE or quali=TRUE. For the parametric test it is also possible to plot the square root of the residual mean square ("mean_qmres").
#' @param angle.label Label angle
#' @param ylim Define a numerical sequence referring to the y scale. You can use a vector or the `seq` command.
#' @import ggplot2
#' @import stats
#' @import multcompView
#' @import gtools
#' @importFrom crayon green
#' @importFrom crayon bold
#' @importFrom crayon italic
#' @importFrom crayon red
#' @importFrom crayon blue
#' @importFrom crayon black
#' @importFrom nortest lillie.test
#' @importFrom nortest ad.test
#' @importFrom nortest cvm.test
#' @importFrom nortest pearson.test
#' @importFrom nortest sf.test
#' @importFrom utils setTxtProgressBar
#' @importFrom utils txtProgressBar
#' @importFrom graphics abline
#' @importFrom ggrepel geom_label_repel
#' @importFrom ggrepel geom_text_repel
#' @importFrom emmeans emmeans
#' @importFrom multcomp cld
#' @importFrom lme4 lmer
#' @importFrom graphics par
#' @importFrom utils read.table
#' @importFrom cowplot plot_grid
#' @importFrom lmtest dwtest
#' @importFrom stats cor.test
#' @note Enable the ggplot2 package to change the theme argument.
#' @note The ordering of the graph follows the sequence in which the factor levels are arranged in the data sheet. The bars of the column and segment graphs are the standard deviation.
#' @note The post hoc test in the nonparametric case uses the criterion of Fisher's least significant difference (p.adj="holm").
#' @note CV and p-value on the graph indicate the coefficient of variation and the p-value of the F test of the analysis of variance.
#' @note In the final output, when the transformation (transf argument) is different from 1, the columns resp and respO in the mean test indicate the transformed and the non-transformed mean, respectively.
#' @references
#'
#' Principles and procedures of statistics a biometrical approach. Steel, Torrie and Dickey. Third Edition, 1997.
#'
#' Multiple comparisons theory and methods. Department of Statistics, the Ohio State University. USA, 1996. Jason C. Hsu. Chapman Hall/CRC.
#'
#' W.J. Conover, Practical Nonparametric Statistics. 1999.
#'
#' Ramalho M.A.P., Ferreira D.F., Oliveira A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott R.J., Knott M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#'
#' Mendiburu, F., and de Mendiburu, M. F. (2019). Package 'agricolae'. R Package, Version, 1-2.
#'
#' Hothorn, T. et al. Package 'lmtest'. Testing linear regression models. https://cran.r-project.org/web/packages/lmtest/lmtest.pdf. Accessed, v. 6, 2015.
#'
#' @return The table of analysis of variance; the test of normality of errors (Shapiro-Wilk ("sw"), Lilliefors ("li"), Anderson-Darling ("ad"), Cramer-von Mises ("cvm"), Pearson ("pearson") and Shapiro-Francia ("sf")); the test of homogeneity of variances (Bartlett ("bt") or Levene ("levene")); the Durbin-Watson test of independence of errors; the test of multiple comparisons (Tukey ("tukey"), LSD ("lsd"), Scott-Knott ("sk") or Duncan ("duncan")) or, in the case of quantitative treatments, the fit of polynomial regression models up to degree 3. Non-parametric analysis can be performed with the Kruskal-Wallis test. The column, segment or box chart for qualitative treatments is also returned. The function also returns a standardized residual plot.
#' @keywords DIC
#' @keywords Experimental
#' @seealso \link{DBC} \link{DQL}
#' @export
#' @examples
#' library(AgroR)
#' data(pomegranate)
#'
#' with(pomegranate, DIC(trat, WL, ylab = "Weight loss (%)")) # tukey
#' with(pomegranate, DIC(trat, WL, mcomp = "sk", ylab = "Weight loss (%)"))
#' with(pomegranate, DIC(trat, WL, mcomp = "duncan", ylab = "Weight loss (%)"))
#'
#' #=============================
#' # Kruskal-Wallis
#' #=============================
#' with(pomegranate, DIC(trat, WL, test = "noparametric", ylab = "Weight loss (%)"))
#'
#' #=============================
#' # chart type
#' #=============================
#' with(pomegranate, DIC(trat, WL, geom="point", ylab = "Weight loss (%)"))
#' with(pomegranate, DIC(trat, WL, ylab = "Weight loss (%)", xlab="Treatments"))
#'
#' #=============================
#' # quantitative factor
#' #=============================
#' data("phao")
#' with(phao, DIC(dose,comp,quali=FALSE,grau=2,
#'                xlab = expression("Dose"~(g~vase^-1)),
#'                ylab="Leaf length (cm)"))
#'
#' #=============================
#' # data transformation
#' #=============================
#' data("pepper")
#' with(pepper, DIC(Acesso, VitC, transf = 0,ylab="Vitamin C"))

DIC <- function(trat,
                response,
                norm="sw",
                homog="bt",
                alpha.f=0.05,
                alpha.t=0.05,
                quali=TRUE,
                mcomp="tukey",
                grau=1,
                transf=1,
                constant=0,
                test="parametric",
                mcompNP="LSD",
                p.adj="holm",
                geom="bar",
                theme=theme_classic(),
                ylab="Response",
                sup=NA,
                CV=TRUE,
                xlab="",
                fill="lightblue",
                angle=0,
                family="sans",
                textsize=12,
                labelsize=4,
                dec=3,
                width.column=NULL,
                width.bar=0.3,
                addmean=TRUE,
                errorbar=TRUE,
                posi="top",
                point="mean_sd",
                angle.label=0,
                ylim=NA){
  if(is.na(sup)==TRUE){sup=0.1*mean(response)}
  if(angle.label==0){hjust=0.5}else{hjust=0}
  requireNamespace("nortest")
  requireNamespace("crayon")
  requireNamespace("ggplot2")
  if(test=="parametric"){
    if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}}
    # if(transf==1){resp=response+constant}else{resp=((response+constant)^transf-1)/transf}
    if(transf==0){resp=log(response+constant)}
    if(transf==0.5){resp=sqrt(response+constant)}
    if(transf==-0.5){resp=1/sqrt(response+constant)}
    if(transf==-1){resp=1/(response+constant)}
    if(transf=="angular"){resp=asin(sqrt((response+constant)/100))}
    trat1=trat
    trat=as.factor(trat)
    a = anova(aov(resp ~ trat))
    aa = summary(aov(resp ~ trat))
    b = aov(resp ~ trat)
    anava=a
    colnames(anava)=c("GL","SQ","QM","Fcal","p-value")
    respad=b$residuals/sqrt(a$`Mean Sq`[2])
    out=respad[respad>3 | respad<(-3)]
    out=names(out)
    out=if(length(out)==0)("No discrepant point")else{out}
    if(norm=="sw"){norm1 = shapiro.test(b$residuals)}
    if(norm=="li"){norm1=nortest::lillie.test(b$residuals)}
    if(norm=="ad"){norm1=nortest::ad.test(b$residuals)}
    if(norm=="cvm"){norm1=nortest::cvm.test(b$residuals)}
    if(norm=="pearson"){norm1=nortest::pearson.test(b$residuals)}
    if(norm=="sf"){norm1=nortest::sf.test(b$residuals)}
    if(homog=="bt"){
      homog1 = bartlett.test(b$residuals ~ trat)
      statistic=homog1$statistic
      phomog=homog1$p.value
      method=paste("Bartlett test","(",names(statistic),")",sep="")}
    if(homog=="levene"){
      homog1 = levenehomog(b$residuals~trat)[1,]
      statistic=homog1$`F value`[1]
      phomog=homog1$`Pr(>F)`[1]
      method="Levene's Test (center = median)(F)"
      names(homog1)=c("Df","statistic","p.value")}
    indep = dwtest(b)
    resids=b$residuals/sqrt(a$`Mean Sq`[2])
    Ids=ifelse(resids>3 | resids<(-3), "darkblue","black")
    residplot=ggplot(data=data.frame(resids,Ids),aes(y=resids,x=1:length(resids)))+
      geom_point(shape=21,color="gray",fill="gray",size=3)+
      labs(x="",y="Standardized residuals")+
      geom_text(x=1:length(resids),label=1:length(resids),color=Ids,size=4)+
      scale_x_continuous(breaks=1:length(resids))+
      theme_classic()+
      theme(axis.text.y = element_text(size=12),
            axis.text.x = element_blank())+
      geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1)
    print(residplot)
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    cat(green(bold("Normality of errors")))
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""),
                      Statistic=norm1$statistic,
                      "p-value"=norm1$p.value)
    rownames(normal)=""
    print(normal)
    cat("\n")
    message(if(norm1$p.value>0.05){
      black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")}
      else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors do not follow a normal distribution"})
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    cat(green(bold("Homogeneity of Variances")))
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    homoge=data.frame(Method=method,
                      Statistic=statistic,
                      "p-value"=phomog)
    rownames(homoge)=""
    print(homoge)
    cat("\n")
    message(if(homog1$p.value>0.05){
      black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")}
      else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"})
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    cat(green(bold("Independence from errors")))
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    indepe=data.frame(Method=paste(indep$method,"(",names(indep$statistic),")",sep=""),
                      Statistic=indep$statistic,
                      "p-value"=indep$p.value)
    rownames(indepe)=""
    print(indepe)
    cat("\n")
    message(if(indep$p.value>0.05){
      black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")}
      else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors are not independent"})
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    cat(green(bold("Additional Information")))
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    cat(paste("\nCV (%) = ",round(sqrt(a$`Mean Sq`[2])/mean(resp,na.rm=TRUE)*100,2)))
    cat(paste("\nMStrat/MST = ",round(a$`Mean Sq`[1]/(a$`Mean Sq`[2]+a$`Mean Sq`[1]),2)))
    cat(paste("\nMean = ",round(mean(response,na.rm=TRUE),4)))
    cat(paste("\nMedian = ",round(median(response,na.rm=TRUE),4)))
    cat("\nPossible outliers = ", out)
    cat("\n")
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    cat(green(bold("Analysis of Variance")))
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    anava1=as.matrix(data.frame(anava))
    colnames(anava1)=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)")
    rownames(anava1)=c("Treatment","Residuals")
    print(anava1,na.print = "")
    cat("\n\n")
    message(if (a$`Pr(>F)`[1]<alpha.f){
      black("As the calculated p-value is less than the 5% significance level, the hypothesis H0 of equality of means is rejected. Therefore, at least two treatments differ")}
      else {"As the calculated p-value is greater than the 5% significance level, H0 is not rejected"})
    cat(green(bold("\n\n-----------------------------------------------------------------\n")))
    if(quali==TRUE){
      teste=if(mcomp=="tukey"){"Tukey HSD"}else{
        if(mcomp=="sk"){"Scott-Knott"}else{
          if(mcomp=="lsd"){"LSD-Fischer"}else{
            if(mcomp=="duncan"){"Duncan"}}}}
      cat(green(italic(paste("Multiple Comparison Test:",teste))))}else{cat(green(bold("Regression")))}
    cat(green(bold("\n-----------------------------------------------------------------\n")))
    if(quali==TRUE){
      if(mcomp=="tukey"){
        letra <- TUKEY(b, "trat", alpha=alpha.t)
        letra1 <- letra$groups
        colnames(letra1)=c("resp","groups")}
      if(mcomp=="sk"){
        nrep=table(trat)[1]
        medias=sort(tapply(resp,trat,mean),decreasing = TRUE)
        letra=scottknott(means = medias,
                         df1 = a$Df[2],
                         nrep = nrep,
                         QME = a$`Mean Sq`[2],
                         alpha = alpha.t)
        letra1=data.frame(resp=medias,groups=letra)}
      if(mcomp=="duncan"){
        letra <- duncan(b, "trat", alpha=alpha.t)
        letra1 <- letra$groups
        colnames(letra1)=c("resp","groups")}
      if(mcomp=="lsd"){
        letra <- LSD(b, "trat", alpha=alpha.t)
        letra1 <- letra$groups
        colnames(letra1)=c("resp","groups")}
      media = tapply(response, trat, mean, na.rm=TRUE)
      if(transf=="1"){letra1}else{letra1$respO=media[rownames(letra1)]}
      print(if(a$`Pr(>F)`[1]<alpha.f){letra1}else{"H0 is not rejected"})
      cat("\n")
      message(if(transf=="1"){}else{blue("\nNOTE: resp = transformed means; respO = averages without transforming\n")})
      if(transf==1 && norm1$p.value<0.05 | transf==1 && indep$p.value<0.05 | transf==1 && homog1$p.value<0.05){
        message("\nYour analysis is not valid; consider using a non-parametric test or transforming the data")}else{}
      if(transf != 1 && norm1$p.value<0.05 | transf!=1 && indep$p.value<0.05 | transf!=1 && homog1$p.value<0.05){
        cat(red("\nWarning!!! Your analysis is not valid; consider using a non-parametric test"))}else{}
      if(point=="mean_sd"){
        dadosm=data.frame(letra1,
                          media=tapply(response, trat, mean, na.rm=TRUE)[rownames(letra1)],
                          desvio=tapply(response, trat, sd, na.rm=TRUE)[rownames(letra1)])}
      if(point=="mean_se"){
        dadosm=data.frame(letra1,
                          media=tapply(response, trat, mean, na.rm=TRUE)[rownames(letra1)],
                          desvio=(tapply(response, trat, sd, na.rm=TRUE)/sqrt(tapply(response, trat, length)))[rownames(letra1)])}
      if(point=="mean_qmres"){
        dadosm=data.frame(letra1,
                          media=tapply(response, trat, mean, na.rm=TRUE)[rownames(letra1)],
                          desvio=rep(sqrt(a$`Mean Sq`[2]),e=length(levels(trat))))}
      dadosm$trats=factor(rownames(dadosm),levels = unique(trat))
      dadosm$limite=dadosm$media+dadosm$desvio
      dadosm=dadosm[unique(as.character(trat)),]
      if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)}
      if(addmean==FALSE){dadosm$letra=dadosm$groups}
      trats=dadosm$trats
      limite=dadosm$limite
      media=dadosm$media
      desvio=dadosm$desvio
      letra=dadosm$letra
      if(geom=="bar"){
        grafico=ggplot(dadosm,aes(x=trats,y=media))
        if(fill=="trat"){grafico=grafico+
          geom_col(aes(fill=trats),color=1,width=width.column)}else{grafico=grafico+
          geom_col(aes(fill=trats),fill=fill,color=1,width=width.column)}
        if(errorbar==TRUE){grafico=grafico+
          geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},
                        label=letra),family=family,angle=angle.label,size=labelsize,hjust=hjust)}
        if(errorbar==FALSE){grafico=grafico+
          geom_text(aes(y=media+sup,label=letra),family=family,size=labelsize,angle=angle.label,hjust=hjust)}
        if(errorbar==TRUE){grafico=grafico+
          geom_errorbar(data=dadosm,aes(ymin=media-desvio,
                                        ymax=media+desvio,color=1),
                        color="black",width=width.bar)}}
      if(geom=="point"){
        grafico=ggplot(dadosm,aes(x=trats, y=media))
        if(errorbar==TRUE){grafico=grafico+
          geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},
                        label=letra),family=family,angle=angle.label,size=labelsize,hjust=hjust)}
        if(errorbar==FALSE){grafico=grafico+
          geom_text(aes(y=media+sup,
                        label=letra),family=family,angle=angle.label,size=labelsize,hjust=hjust)}
        if(errorbar==TRUE){grafico=grafico+
          geom_errorbar(data=dadosm,
                        aes(ymin=media-desvio,
                            ymax=media+desvio,color=1),
                        color="black",width=width.bar)}
        if(fill=="trat"){grafico=grafico+
          geom_point(aes(color=trats),size=5)}else{grafico=grafico+
          geom_point(aes(color=trats),
                     color="black",
                     fill=fill,shape=21,size=5)}}
      if(geom=="box"){
        datam1=data.frame(trats=factor(trat,levels = unique(as.character(trat))),
                          response)
        dadosm2=data.frame(letra1,
                           superior=tapply(response, trat, mean, na.rm=TRUE)[rownames(letra1)])
        dadosm2$trats=rownames(dadosm2)
        dadosm2=dadosm2[unique(as.character(trat)),]
        dadosm2$limite=dadosm$media+dadosm$desvio
        # dadosm2$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)
        if(addmean==TRUE){dadosm2$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)}
        if(addmean==FALSE){dadosm2$letra=dadosm$groups}
        trats=dadosm2$trats
        limite=dadosm2$limite
        superior=dadosm2$superior
        letra=dadosm2$letra
        stat_box=ggplot(datam1,aes(x=trats,y=response))+geom_boxplot()
        superior=ggplot_build(stat_box)$data[[1]]$ymax
        dadosm2$superior=superior+sup
        grafico=ggplot(datam1,aes(x=trats,y=response))
        if(fill=="trat"){grafico=grafico+geom_boxplot(aes(fill=trats))}else{grafico=grafico+
          geom_boxplot(aes(fill=trats),fill=fill)}
        grafico=grafico+
          geom_text(data=dadosm2,
                    aes(y=superior,
                        label=letra),
                    family = family,size=labelsize,angle=angle.label,hjust=hjust)}
      grafico=grafico+
        theme+
        ylab(ylab)+
        xlab(xlab)+
        theme(text = element_text(size=textsize,color="black", family = family),
              axis.text = element_text(size=textsize,color="black", family = family),
              axis.title = element_text(size=textsize,color="black", family = family),
              legend.position = "none")
      if(angle !=0){grafico=grafico+
        theme(axis.text.x=element_text(hjust = 1.01,angle = angle))}
      if(CV==TRUE){grafico=grafico+
        labs(caption=paste("p-value",
                           if(a$`Pr(>F)`[1]<0.0001){paste("<", 0.0001)}else{paste("=",
                             round(a$`Pr(>F)`[1],4))},"; CV = ",
                           round(abs(sqrt(a$`Mean Sq`[2])/mean(resp))*100,2),"%"))}
      if(is.na(ylim[1])==FALSE){
        grafico=grafico+scale_y_continuous(breaks = ylim,
                                           limits = c(min(ylim),max(ylim)))}
      grafico=as.list(grafico)
    }
    if(quali==FALSE){
      trat=trat1
      # trat=as.numeric(as.character(trat))
      if(grau==1){graph=polynomial(trat,response, grau = 1,textsize=textsize,xlab=xlab,ylab=ylab,
                                   family=family,posi=posi,point=point)}
      if(grau==2){graph=polynomial(trat,response, grau = 2,textsize=textsize,xlab=xlab,ylab=ylab,
                                   family=family,posi=posi,point=point)}
      if(grau==3){graph=polynomial(trat,response, grau = 3,textsize=textsize,xlab=xlab,ylab=ylab,
                                   family=family,posi=posi,point=point)}
      grafico=graph[[1]]
      if(is.na(ylim[1])==FALSE){
        grafico=grafico+scale_y_continuous(breaks = ylim,
                                           limits = c(min(ylim),max(ylim)))}
      print(grafico)
    }}
  if(test=="noparametric"){
    kruskal=function (y, trt, alpha = 0.05,
                      p.adj = c("none", "holm", "hommel", "hochberg", "bonferroni", "BH", "BY", "fdr"),
                      group = TRUE, main = NULL, console=FALSE){
      name.y <- paste(deparse(substitute(y)))
      name.t <- paste(deparse(substitute(trt)))
      if(is.null(main))main<-paste(name.y,"~", name.t)
      p.adj <- match.arg(p.adj)
      junto <- subset(data.frame(y, trt), is.na(y) == FALSE)
      N <- nrow(junto)
      medians<-mean_stat(junto[,1],junto[,2],stat="median")
      for(i in c(1,5,2:4)) {
        x <- mean_stat(junto[,1],junto[,2],function(x)quantile(x)[i])
        medians<-cbind(medians,x[,2])}
      medians<-medians[,3:7]
      names(medians)<-c("Min","Max","Q25","Q50","Q75")
      Means <- mean_stat(junto[,1],junto[,2],stat="mean")
      sds <- mean_stat(junto[,1],junto[,2], stat="sd")
      nn <- mean_stat(junto[,1],junto[,2],stat="length")
      Means<-data.frame(Means,std=sds[,2],r=nn[,2],medians)
      rownames(Means)<-Means[,1]
      Means<-Means[,-1]
      names(Means)[1]<-name.y
      junto[, 1] <- rank(junto[, 1])
      means <- mean_stat(junto[, 1], junto[, 2], stat = "sum")
      sds <- mean_stat(junto[, 1], junto[, 2], stat = "sd")
      nn <- mean_stat(junto[, 1], junto[, 2], stat = "length")
      means <- data.frame(means, r = nn[, 2])
      names(means)[1:2] <- c(name.t, name.y)
      ntr <- nrow(means)
      nk <- choose(ntr, 2)
      DFerror <- N - ntr
      rs <- 0
      U <- 0
      for (i in 1:ntr) {
        rs <- rs + means[i, 2]^2/means[i, 3]
        U <- U + 1/means[i, 3]}
      S <- (sum(junto[, 1]^2) - (N * (N + 1)^2)/4)/(N - 1)
      H <- (rs - (N * (N + 1)^2)/4)/S
      p.chisq <- 1 - pchisq(H, ntr - 1)
      if(console){
        cat("\nStudy:", main)
        cat("\nKruskal-Wallis test's\nTies or no Ties\n")
        cat("\nCritical Value:", H)
        cat("\nDegrees of freedom:", ntr - 1)
        cat("\nPvalue Chisq  :", p.chisq, "\n\n")}
      DFerror <- N - ntr
      Tprob <- qt(1 - alpha/2, DFerror)
      MSerror <- S * ((N - 1 - H)/(N - ntr))
      means[, 2] <- means[, 2]/means[, 3]
      if(console){cat(paste(name.t, ",", sep = ""), " means of the ranks\n\n")
        print(data.frame(row.names = means[, 1], means[, -1]))
        cat("\nPost Hoc Analysis\n")}
      if (p.adj != "none") {
        if(console)cat("\nP value adjustment method:", p.adj)
        a <- 1e-06
        b <- 1
        for (i in 1:100) {
          x <- (b + a)/2
          xr <- rep(x, nk)
          d <- p.adjust(xr, p.adj)[1] - alpha
          ar <- rep(a, nk)
          fa <- p.adjust(ar, p.adj)[1] - alpha
          if (d * fa < 0) b <- x
          if (d * fa > 0) a <- x}
        Tprob <- qt(1 - x/2, DFerror)}
      nr <- unique(means[, 3])
      if (group & console){
        cat("\nt-Student:", Tprob)
        cat("\nAlpha    :", alpha)}
      if (length(nr) == 1) LSD <- Tprob * sqrt(2 * MSerror/nr)
      statistics<-data.frame(Chisq=H,Df=ntr-1,p.chisq=p.chisq)
      if ( group & length(nr) == 1 & console) cat("\nMinimum Significant Difference:",LSD,"\n")
      if ( group & length(nr) != 1 & console) cat("\nGroups according to probability of treatment differences and alpha level.\n")
      if ( length(nr) == 1) statistics<-data.frame(statistics,t.value=Tprob,MSD=LSD)
      comb <- utils::combn(ntr, 2)
      nn <- ncol(comb)
      dif <- rep(0, nn)
      LCL <- dif
      UCL <- dif
      pvalue <- dif
      sdtdif <- dif
      for (k in 1:nn) {
        i <- comb[1, k]
        j <- comb[2, k]
        dif[k] <- means[i, 2] - means[j, 2]
        sdtdif[k] <- sqrt(MSerror * (1/means[i,3] + 1/means[j, 3]))
        pvalue[k] <- 2*(1 - pt(abs(dif[k])/sdtdif[k],DFerror))}
      if (p.adj != "none") pvalue <- p.adjust(pvalue, p.adj)
pvalue <- round(pvalue,4) sig <- rep(" ", nn) for (k in 1:nn) { if (pvalue[k] <= 0.001) sig[k] <- "***" else if (pvalue[k] <= 0.01) sig[k] <- "**" else if (pvalue[k] <= 0.05) sig[k] <- "*" else if (pvalue[k] <= 0.1) sig[k] <- "." } tr.i <- means[comb[1, ], 1] tr.j <- means[comb[2, ], 1] LCL <- dif - Tprob * sdtdif UCL <- dif + Tprob * sdtdif comparison <- data.frame(Difference = dif, pvalue = pvalue, "Signif."=sig, LCL, UCL) if (p.adj !="bonferroni" & p.adj !="none"){ comparison<-comparison[,1:3] statistics<-data.frame(Chisq=H,p.chisq=p.chisq)} rownames(comparison) <- paste(tr.i, tr.j, sep = " - ") if (!group) { groups<-NULL if(console){ cat("\nComparison between treatments mean of the ranks.\n\n") print(comparison) } } if (group) { comparison=NULL Q<-matrix(1,ncol=ntr,nrow=ntr) p<-pvalue k<-0 for(i in 1:(ntr-1)){ for(j in (i+1):ntr){ k<-k+1 Q[i,j]<-p[k] Q[j,i]<-p[k] } } groups <- ordenacao(means[, 1], means[, 2],alpha, Q, console) names(groups)[1]<-name.y if(console) { cat("\nTreatments with the same letter are not significantly different.\n\n") print(groups) } } ranks=means Means<-data.frame(rank=ranks[,2],Means) Means<-Means[,c(2,1,3:9)] parameters<-data.frame(test="Kruskal-Wallis",p.ajusted=p.adj,name.t=name.t,ntr = ntr,alpha=alpha) rownames(parameters)<-" " rownames(statistics)<-" " output<-list(statistics=statistics,parameters=parameters, means=Means,comparison=comparison,groups=groups) class(output)<-"group" invisible(output) } if(mcompNP=="LSD"){krusk=kruskal(response,trat,p.adj = p.adj,alpha=alpha.t)} if(mcompNP=="dunn"){ krusk=kruskal(response,trat,p.adj = p.adj,alpha=alpha.t) krusk1=dunn(trat, response, method = p.adj, alpha=alpha.t) krusk$groups=krusk$groups[unique(trat),] krusk$groups$groups=krusk1$`Post-hoc`$dunn} cat(green(bold("\n\n-----------------------------------------------------------------\n"))) cat(green(italic("Statistics"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(krusk$statistics) 
cat(green(bold("\n\n-----------------------------------------------------------------\n"))) cat(green(italic("Parameters"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(krusk$parameters) cat(green(bold("\n\n-----------------------------------------------------------------\n"))) cat(green(italic(paste("Multiple Comparison Test:","LSD")))) cat(green(bold("\n-----------------------------------------------------------------\n"))) saida=cbind(krusk$means[,c(1,3)],krusk$groups[rownames(krusk$means),]) colnames(saida)=c("Mean","SD","Rank","Groups") print(saida) dadosm=data.frame(krusk$means,krusk$groups[rownames(krusk$means),]) dadosm$trats=factor(rownames(dadosm),levels = unique(trat)) dadosm$media=tapply(response,trat,mean, na.rm=TRUE)[rownames(krusk$means)] if(point=="mean_sd"){dadosm$std=tapply(response, trat, sd, na.rm=TRUE)[rownames(krusk$means)]} if(point=="mean_se"){dadosm$std=tapply(response, trat, sd, na.rm=TRUE)/ sqrt(tapply(response, trat, length))[rownames(krusk$means)]} dadosm$limite=dadosm$response+dadosm$std if(addmean==TRUE){dadosm$letra=paste(format(dadosm$response,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm$letra=dadosm$groups} trats=dadosm$trats limite=dadosm$limite media=dadosm$media std=dadosm$std letra=dadosm$letra if(geom=="bar"){grafico=ggplot(dadosm, aes(x=trats,y=response)) if(fill=="trat"){grafico=grafico+ geom_col(aes(fill=trats),color=1,width=width.column)} else{grafico=grafico+ geom_col(aes(fill=trats),fill=fill,color=1,width=width.column)} if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-std}else{std}, label=letra),family=family,size=labelsize,angle=angle.label, hjust=hjust)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra),size=labelsize,family=family,angle=angle.label, hjust=hjust)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm,aes(ymin=response-std, ymax=response+std, color=1), 
color="black",width=width.bar)}} if(geom=="point"){grafico=ggplot(dadosm, aes(x=trats, y=response)) if(errorbar==TRUE){grafico=grafico+ geom_text(aes(y=media+sup+if(sup<0){-std}else{std}, label=letra), family=family,angle=angle.label,size=labelsize, hjust=hjust)} if(errorbar==FALSE){grafico=grafico+ geom_text(aes(y=media+sup, label=letra), family=family,angle=angle.label, size=labelsize,hjust=hjust)} if(errorbar==TRUE){grafico=grafico+ geom_errorbar(data=dadosm, aes(ymin=response-std, ymax=response+std, color=1), color="black",width=width.bar)} if(fill=="trat"){grafico=grafico+ geom_point(aes(color=trats),size=5)} else{grafico=grafico+ geom_point(aes(color=trats), color="black", fill=fill,shape=21,size=5)}} if(geom=="box"){ datam1=data.frame(trats=factor(trat,levels = unique(as.character(trat))),response) dadosm2=data.frame(krusk$means) dadosm2$trats=rownames(dadosm2) dadosm2$limite=dadosm2$response+dadosm2$std # dadosm2$letra=paste(format(dadosm2$response,digits = dec), # dadosm$groups) if(addmean==TRUE){dadosm2$letra=paste(format(dadosm2$response,digits = dec),dadosm$groups)} if(addmean==FALSE){dadosm2$letra=dadosm$groups} dadosm2=dadosm2[unique(as.character(trat)),] trats=dadosm2$trats limite=dadosm2$limite letra=dadosm2$letra stat_box=ggplot(datam1,aes(x=trats,y=response))+geom_boxplot() superior=ggplot_build(stat_box)$data[[1]]$ymax dadosm2$superior=superior+sup grafico=ggplot(datam1, aes(x=trats, y=response)) if(fill=="trat"){grafico=grafico+ geom_boxplot(aes(fill=1))} else{grafico=grafico+ geom_boxplot(aes(fill=trats),fill=fill)} grafico=grafico+ geom_text(data=dadosm2, aes(y=superior, label=letra), family = family,angle=angle.label, size=labelsize,hjust=hjust)} grafico=grafico+theme+ ylab(ylab)+ xlab(xlab)+ theme(text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), legend.position = "none") 
if(is.na(ylim[1])==FALSE){ grafico=grafico+scale_y_continuous(breaks = ylim, limits = c(min(ylim),max(ylim)))} if(angle !=0){grafico=grafico+theme(axis.text.x=element_text(hjust = 1.01,angle = angle))} } if(quali==TRUE){print(grafico)} graficos=list(grafico)#[[1]] }
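The embedded `kruskal()` helper computes the Kruskal-Wallis statistic from pooled ranks with the tie-corrected denominator S = (sum(r^2) - N(N+1)^2/4)/(N-1), so H = (sum(R_i^2/n_i) - N(N+1)^2/4)/S. A self-contained numeric restatement in Python (hypothetical data; it reduces to the textbook formula when there are no ties):

```python
from collections import defaultdict

def kruskal_H(values, groups):
    """Tie-corrected Kruskal-Wallis H from ranks of the pooled data."""
    n = len(values)
    # average ranks for ties, as R's rank() does
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    rank_sums = defaultdict(float)
    counts = defaultdict(int)
    for r, g in zip(ranks, groups):
        rank_sums[g] += r
        counts[g] += 1
    c = n * (n + 1) ** 2 / 4
    s = (sum(r * r for r in ranks) - c) / (n - 1)  # = N(N+1)/12 with no ties
    return (sum(rank_sums[g] ** 2 / counts[g] for g in counts) - c) / s

vals = [7, 9, 12, 10, 15, 21, 14, 18, 25]
grp  = ["a", "a", "a", "b", "b", "b", "c", "c", "c"]
print(round(kruskal_H(vals, grp), 4))  # 4.6222
```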
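The comparison table built above enumerates every treatment pair with `utils::combn(ntr, 2)` and labels each row "i - j" with the difference of mean ranks. The same enumeration in Python (`itertools.combinations`; treatment names and ranks are made up):

```python
from itertools import combinations

# Hypothetical treatment mean ranks (the package computes these from the data)
mean_rank = {"T1": 4.5, "T2": 9.0, "T3": 13.5}

# One row per pair, labelled like rownames(comparison): "i - j"
rows = {f"{a} - {b}": mean_rank[a] - mean_rank[b]
        for a, b in combinations(mean_rank, 2)}
print(rows)  # {'T1 - T2': -4.5, 'T1 - T3': -9.0, 'T2 - T3': -4.5}
```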
#' Analysis: Completely randomized design evaluated over time
#'
#' @description Function of the AgroR package for the analysis of experiments conducted in a completely randomized design with qualitative treatments and multiple assessments over time, without considering time as a factor.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param trat Numerical or complex vector with treatments
#' @param time Numerical or complex vector with times
#' @param response Numerical vector containing the response of the experiment.
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD ("lsd"), Scott-Knott ("sk"), Duncan ("duncan") and Kruskal-Wallis ("kw"))
#' @param ylab Variable response name (Accepts the \emph{expression}() function)
#' @param xlab Treatments name (Accepts the \emph{expression}() function)
#' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat")
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param error Add error bar
#' @param sup Number of units above the standard deviation or average bar on the graph
#' @param addmean Plot the average value on the graph (\emph{default} is TRUE)
#' @param textsize Font size of the texts and titles of the axes
#' @param labelsize Font size of the labels
#' @param pointsize Point size
#' @param family Font family
#' @param dec Number of decimal places (\emph{default} is 3)
#' @param geom Graph type (columns - "bar" or segments - "point")
#' @param legend Legend title
#' @param posi Legend position
#' @param ylim Define a numerical sequence referring to the y scale. You can use a vector or the `seq` command.
#' @param width.bar Width of the error bar
#' @param size.bar Size of the error bar
#' @param xnumeric Declare x as numeric (\emph{default} is FALSE)
#' @param p.adj Method for adjusting p values for Kruskal-Wallis ("none", "holm", "hommel", "hochberg", "bonferroni", "BH", "BY", "fdr")
#' @param all.letters Adds all label letters regardless of significance.
#' @note The ordering of the graph follows the sequence in which the factor levels are arranged in the data sheet. The bars of the column and segment graphs are standard deviation.
#' @keywords dict
#' @keywords Experimental
#' @seealso \link{DIC}, \link{DBCT}, \link{DQLT}
#' @return The function returns the p-value of the analysis of variance, the assumptions of normality of errors, homogeneity of variances and independence of errors, the multiple comparison test, as well as a line graph
#' @references
#'
#' Principles and Procedures of Statistics: A Biometrical Approach. Steel, Torrie and Dickey. Third Edition, 1997.
#'
#' Multiple Comparisons: Theory and Methods. Department of Statistics, the Ohio State University. USA, 1996. Jason C. Hsu. Chapman Hall/CRC.
#'
#' Practical Nonparametric Statistics. W.J. Conover, 1999.
#'
#' Ramalho M.A.P., Ferreira D.F., Oliveira A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott A.J., Knott M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#' @export #' @examples #' rm(list=ls()) #' data(simulate1) #' attach(simulate1) #' with(simulate1, DICT(trat, tempo, resp)) #' with(simulate1, DICT(trat, tempo, resp, fill="rainbow",family="serif")) #' with(simulate1, DICT(trat, tempo, resp,geom="bar",sup=40)) #' with(simulate1, DICT(trat, tempo, resp,geom="point",sup=40)) DICT=function(trat, time, response, alpha.f=0.05, alpha.t=0.05, mcomp="tukey", theme=theme_classic(), geom="bar", xlab="Independent", ylab="Response", p.adj="holm", dec=3, fill="gray", error=TRUE, textsize=12, labelsize=5, pointsize=4.5, family="sans", sup=0, addmean=FALSE, legend="Legend", ylim=NA, width.bar=0.2, size.bar=0.8, posi=c(0.1,0.8), xnumeric=FALSE, all.letters=FALSE){ requireNamespace("crayon") requireNamespace("ggplot2") requireNamespace("nortest") requireNamespace("ggrepel") resp=response trat=as.factor(trat) time=factor(time,unique(time)) dados=data.frame(resp,trat,time) if(mcomp=="tukey"){ tukeyg=c() ordem=c() normg=c() homog=c() indepg=c() anovag=c() cv=c() for(i in 1:length(levels(time))){ mod=aov(resp~trat, data=dados[dados$time==levels(dados$time)[i],]) anovag[[i]]=anova(mod)$`Pr(>F)`[1] cv[[i]]=sqrt(anova(mod)$`Mean Sq`[2])/mean(mod$model$resp)*100 norm=shapiro.test(mod$residuals) homo=with(dados[dados$time==levels(dados$time)[i],], bartlett.test(mod$residuals~trat)) indep=dwtest(mod) tukey=TUKEY(mod,"trat",alpha = alpha.t) tukey$groups=tukey$groups[unique(as.character(trat)),2] if(all.letters==FALSE){ if(anova(mod)$`Pr(>F)`[1]>alpha.f){tukey$groups=c("ns",rep(" ",length(unique(trat))-1))}} tukeyg[[i]]=as.character(tukey$groups) normg[[i]]=norm$p.value homog[[i]]=homo$p.value indepg[[i]]=indep$p.value ordem[[i]]=rownames(tukey$groups) } m=unlist(tukeyg) nor=unlist(normg) hom=unlist(homog) ind=unlist(indepg) an=unlist(anovag) cv=unlist(cv) press=data.frame(an,nor,hom,ind,cv) colnames(press)=c("p-value ANOVA","Shapiro-Wilk","Bartlett","Durbin-Watson","CV (%)")} if(mcomp=="lsd"){ lsdg=c() ordem=c() normg=c() homog=c() 
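Within each time point the loop stores the coefficient of variation as the square root of the residual mean square divided by the grand mean, times 100. Stated on its own (Python, hypothetical numbers):

```python
import math

def cv_percent(ms_residual, grand_mean):
    """CV (%) = sqrt(residual mean square) / grand mean * 100."""
    return math.sqrt(ms_residual) / grand_mean * 100

# e.g. residual mean square 4.0 around a grand mean of 20.0
print(round(cv_percent(4.0, 20.0), 2))  # 10.0
```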
indepg=c() anovag=c() cv=c() for(i in 1:length(levels(time))){ mod=aov(resp~trat, data=dados[dados$time==levels(dados$time)[i],]) anovag[[i]]=anova(mod)$`Pr(>F)`[1] cv[[i]]=sqrt(anova(mod)$`Mean Sq`[2])/mean(mod$model$resp)*100 lsd=LSD(mod,"trat",alpha = alpha.t) lsd$groups=lsd$groups[unique(as.character(trat)),2] if(all.letters==FALSE){ if(anova(mod)$`Pr(>F)`[1]>alpha.f){lsd$groups=c("ns",rep(" ",length(unique(trat))-1))}} lsdg[[i]]=as.character(lsd$groups) ordem[[i]]=rownames(lsd$groups) norm=shapiro.test(mod$residuals) homo=with(dados[dados$time==levels(dados$time)[i],], bartlett.test(mod$residuals~trat)) indep=dwtest(mod) normg[[i]]=norm$p.value homog[[i]]=homo$p.value indepg[[i]]=indep$p.value } m=unlist(lsdg) nor=unlist(normg) hom=unlist(homog) ind=unlist(indepg) an=unlist(anovag) cv=unlist(cv) press=data.frame(an,nor,hom,ind,cv) colnames(press)=c("p-value ANOVA","Shapiro-Wilk","Bartlett","Durbin-Watson","CV (%)")} if(mcomp=="duncan"){ duncang=c() ordem=c() normg=c() homog=c() indepg=c() anovag=c() cv=c() for(i in 1:length(levels(time))){ mod=aov(resp~trat, data=dados[dados$time==levels(dados$time)[i],]) anovag[[i]]=anova(mod)$`Pr(>F)`[1] cv[[i]]=sqrt(anova(mod)$`Mean Sq`[2])/mean(mod$model$resp)*100 duncan=duncan(mod,"trat",alpha = alpha.t) duncan$groups=duncan$groups[unique(as.character(trat)),2] if(all.letters==FALSE){ if(anova(mod)$`Pr(>F)`[1]>alpha.f){duncan$groups=c("ns",rep(" ",length(unique(trat))-1))}} norm=shapiro.test(mod$residuals) homo=with(dados[dados$time==levels(dados$time)[i],], bartlett.test(mod$residuals~trat)) indep=dwtest(mod) normg[[i]]=norm$p.value homog[[i]]=homo$p.value indepg[[i]]=indep$p.value duncang[[i]]=as.character(duncan$groups) ordem[[i]]=rownames(duncan$groups) } m=unlist(duncang) nor=unlist(normg) hom=unlist(homog) ind=unlist(indepg) an=unlist(anovag) cv=unlist(cv) press=data.frame(an,nor,hom,ind,cv) colnames(press)=c("p-value ANOVA","Shapiro-Wilk","Bartlett","Durbin-Watson","CV (%)")} if(mcomp=="sk"){ scott=c() normg=c() 
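Every per-time model is also screened with `dwtest()` for error independence. The Durbin-Watson statistic behind that test is the ratio of squared successive residual differences to the residual sum of squares; values near 2 suggest no first-order autocorrelation. A bare-bones Python version:

```python
def durbin_watson(residuals):
    """DW = sum((e_t - e_{t-1})^2) / sum(e_t^2); ~2 means no autocorrelation."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

# perfectly alternating residuals push DW toward 4
print(round(durbin_watson([1, -1, 1, -1, 1, -1]), 2))  # 3.33
```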
homog=c() indepg=c() anovag=c() cv=c() for(i in 1:length(levels(time))){ mod=aov(resp~trat, data=dados[dados$time==levels(dados$time)[i],]) anovag[[i]]=anova(mod)$`Pr(>F)`[1] cv[[i]]=sqrt(anova(mod)$`Mean Sq`[2])/mean(mod$model$resp)*100 norm=shapiro.test(mod$residuals) homo=with(dados[dados$time==levels(dados$time)[i],], bartlett.test(mod$residuals~trat)) indep=dwtest(mod) nrep=with(dados[dados$time==levels(dados$time)[i],], table(trat)[1]) ao=anova(mod) medias=with(dados[dados$time==levels(dados$time)[i],], sort(tapply(resp,trat,mean),decreasing = TRUE)) letra=scottknott(means = medias, df1 = ao$Df[2], nrep = nrep, QME = ao$`Mean Sq`[2], alpha = alpha.t) letra1=data.frame(resp=medias,groups=letra) letra1=letra1[unique(as.character(trat)),] data=letra1$groups if(all.letters==FALSE){ if(anova(mod)$`Pr(>F)`[1]>alpha.f){data=c("ns",rep(" ",length(unique(trat))-1))}} data=data scott[[i]]=data normg[[i]]=norm$p.value homog[[i]]=homo$p.value indepg[[i]]=indep$p.value } m=unlist(scott) nor=unlist(normg) hom=unlist(homog) ind=unlist(indepg) an=unlist(anovag) cv=unlist(cv) press=data.frame(an,nor,hom,ind,cv) colnames(press)=c("p-value ANOVA","Shapiro-Wilk","Bartlett","Durbin-Watson","CV (%)")} if(mcomp=="kw"){ kruskal=function (y, trt, alpha = 0.05, p.adj = c("none", "holm", "hommel", "hochberg", "bonferroni", "BH", "BY", "fdr"), group = TRUE, main = NULL,console=FALSE){ name.y <- paste(deparse(substitute(y))) name.t <- paste(deparse(substitute(trt))) if(is.null(main))main<-paste(name.y,"~", name.t) p.adj <- match.arg(p.adj) junto <- subset(data.frame(y, trt), is.na(y) == FALSE) N <- nrow(junto) medians<-mean_stat(junto[,1],junto[,2],stat="median") for(i in c(1,5,2:4)) { x <- mean_stat(junto[,1],junto[,2],function(x)quantile(x)[i]) medians<-cbind(medians,x[,2])} medians<-medians[,3:7] names(medians)<-c("Min","Max","Q25","Q50","Q75") Means <- mean_stat(junto[,1],junto[,2],stat="mean") sds <- mean_stat(junto[,1],junto[,2], stat="sd") nn <- 
mean_stat(junto[,1],junto[,2],stat="length") Means<-data.frame(Means,std=sds[,2],r=nn[,2],medians) rownames(Means)<-Means[,1] Means<-Means[,-1] names(Means)[1]<-name.y junto[, 1] <- rank(junto[, 1]) means <- mean_stat(junto[, 1], junto[, 2], stat = "sum") sds <- mean_stat(junto[, 1], junto[, 2], stat = "sd") nn <- mean_stat(junto[, 1], junto[, 2], stat = "length") means <- data.frame(means, r = nn[, 2]) names(means)[1:2] <- c(name.t, name.y) ntr <- nrow(means) nk <- choose(ntr, 2) DFerror <- N - ntr rs <- 0 U <- 0 for (i in 1:ntr) { rs <- rs + means[i, 2]^2/means[i, 3] U <- U + 1/means[i, 3] } S <- (sum(junto[, 1]^2) - (N * (N + 1)^2)/4)/(N - 1) H <- (rs - (N * (N + 1)^2)/4)/S p.chisq <- 1 - pchisq(H, ntr - 1) if(console){ cat("\nStudy:", main) cat("\nKruskal-Wallis test's\nTies or no Ties\n") cat("\nCritical Value:", H) cat("\nDegrees of freedom:", ntr - 1) cat("\nPvalue Chisq :", p.chisq, "\n\n")} DFerror <- N - ntr Tprob <- qt(1 - alpha/2, DFerror) MSerror <- S * ((N - 1 - H)/(N - ntr)) means[, 2] <- means[, 2]/means[, 3] if(console){cat(paste(name.t, ",", sep = ""), " means of the ranks\n\n") print(data.frame(row.names = means[, 1], means[, -1])) cat("\nPost Hoc Analysis\n")} if (p.adj != "none") { if(console)cat("\nP value adjustment method:", p.adj) a <- 1e-06 b <- 1 for (i in 1:100) { x <- (b + a)/2 xr <- rep(x, nk) d <- p.adjust(xr, p.adj)[1] - alpha ar <- rep(a, nk) fa <- p.adjust(ar, p.adj)[1] - alpha if (d * fa < 0) b <- x if (d * fa > 0) a <- x} Tprob <- qt(1 - x/2, DFerror) } nr <- unique(means[, 3]) if (group & console){ cat("\nt-Student:", Tprob) cat("\nAlpha :", alpha)} if (length(nr) == 1) LSD <- Tprob * sqrt(2 * MSerror/nr) statistics<-data.frame(Chisq=H,Df=ntr-1,p.chisq=p.chisq) if ( group & length(nr) == 1 & console) cat("\nMinimum Significant Difference:",LSD,"\n") if ( group & length(nr) != 1 & console) cat("\nGroups according to probability of treatment differences and alpha level.\n") if ( length(nr) == 1) 
statistics<-data.frame(statistics,t.value=Tprob,MSD=LSD) comb <- utils::combn(ntr, 2) nn <- ncol(comb) dif <- rep(0, nn) LCL <- dif UCL <- dif pvalue <- dif sdtdif <- dif for (k in 1:nn) { i <- comb[1, k] j <- comb[2, k] dif[k] <- means[i, 2] - means[j, 2] sdtdif[k] <- sqrt(MSerror * (1/means[i,3] + 1/means[j, 3])) pvalue[k] <- 2*(1 - pt(abs(dif[k])/sdtdif[k],DFerror)) } if (p.adj != "none") pvalue <- p.adjust(pvalue, p.adj) pvalue <- round(pvalue,4) sig <- rep(" ", nn) for (k in 1:nn) { if (pvalue[k] <= 0.001) sig[k] <- "***" else if (pvalue[k] <= 0.01) sig[k] <- "**" else if (pvalue[k] <= 0.05) sig[k] <- "*" else if (pvalue[k] <= 0.1) sig[k] <- "." } tr.i <- means[comb[1, ], 1] tr.j <- means[comb[2, ], 1] LCL <- dif - Tprob * sdtdif UCL <- dif + Tprob * sdtdif comparison <- data.frame(Difference = dif, pvalue = pvalue, "Signif."=sig, LCL, UCL) if (p.adj !="bonferroni" & p.adj !="none"){ comparison<-comparison[,1:3] statistics<-data.frame(Chisq=H,p.chisq=p.chisq)} rownames(comparison) <- paste(tr.i, tr.j, sep = " - ") if (!group) { groups<-NULL if(console){ cat("\nComparison between treatments mean of the ranks.\n\n") print(comparison) } } if (group) { comparison=NULL Q<-matrix(1,ncol=ntr,nrow=ntr) p<-pvalue k<-0 for(i in 1:(ntr-1)){ for(j in (i+1):ntr){ k<-k+1 Q[i,j]<-p[k] Q[j,i]<-p[k] } } groups <- ordenacao(means[, 1], means[, 2],alpha, Q, console) names(groups)[1]<-name.y if(console) { cat("\nTreatments with the same letter are not significantly different.\n\n") print(groups) } } ranks=means Means<-data.frame(rank=ranks[,2],Means) Means<-Means[,c(2,1,3:9)] parameters<-data.frame(test="Kruskal-Wallis",p.ajusted=p.adj,name.t=name.t,ntr = ntr,alpha=alpha) rownames(parameters)<-" " rownames(statistics)<-" " output<-list(statistics=statistics,parameters=parameters, means=Means,comparison=comparison,groups=groups) class(output)<-"group" invisible(output) } kwg=c() ordem=c() normg=c() homog=c() indepg=c() anovag=c() for(i in 1:length(levels(time))){ 
data=dados[dados$time==levels(dados$time)[i],] mod=with(data,kruskal(resp,trat,p.adj = p.adj,alpha = alpha.t)) anovag[[i]]=mod$statistics$p.chisq norm="" homo="" indep="" kw=mod kw$groups=kw$groups[unique(as.character(trat)),2] if(all.letters==FALSE){ if(mod$statistics$p.chisq>alpha.f){kw$groups=c("ns", rep(" ",length(unique(trat))-1))}} normg[[i]]=norm homog[[i]]=homo indepg[[i]]=indep kwg[[i]]=as.character(kw$groups) ordem[[i]]=rownames(kw$groups) } m=unlist(kwg) nor=unlist(normg) hom=unlist(homog) ind=unlist(indepg) an=unlist(anovag) press=data.frame(an) colnames(press)=c("p-value Kruskal")} cat(green(bold("\n-----------------------------------------------------------------\n"))) cat(green(bold("ANOVA and assumptions"))) cat(green(bold("\n-----------------------------------------------------------------\n"))) print(press) dadosm=data.frame(#time=as.numeric(as.character(rep(unique(time),e=length(unique(as.character(trat)))))), time=as.character(rep(unique(time),e=length(unique(as.character(trat))))), trat=rep(unique(as.character(trat)),length(unique(time))), media=c(tapply(resp,list(trat,time),mean, na.rm=TRUE)[unique(as.character(trat)),]), desvio=c(tapply(resp,list(trat,time),sd, na.rm=TRUE)[unique(as.character(trat)),]), letra=m) if(xnumeric==TRUE){dadosm$time=as.numeric(as.character(dadosm$time))} if(xnumeric==FALSE){dadosm$time=factor(dadosm$time,unique(dadosm$time))} time=dadosm$time trat=dadosm$trat media=dadosm$media desvio=dadosm$desvio letra=dadosm$letra if(geom=="point"){ grafico=ggplot(dadosm,aes(y=media, x=time))+ geom_point(aes(shape=factor(trat, levels=unique(as.character(trat))), group=factor(trat,levels=unique(as.character(trat)))),size=pointsize)+ geom_line(aes(lty=factor(trat,levels=unique(as.character(trat))), group=factor(trat,levels=unique(as.character(trat)))),size=0.8)+ ylab(ylab)+ xlab(xlab)+theme+ theme(text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = 
family), axis.text = element_text(size=textsize,color="black", family = family), legend.position = posi, legend.text = element_text(size = textsize))+ labs(shape=legend, lty=legend) if(error==TRUE){grafico=grafico+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=width.bar,size=size.bar)} if(addmean==FALSE && error==FALSE){grafico=grafico+ geom_text_repel(aes(y=media+sup,label=letra),family=family,size=labelsize)} if(addmean==TRUE && error==FALSE){grafico=grafico+ geom_text_repel(aes(y=media+sup, label=paste(format(media,digits = dec),letra)),family=family,size=labelsize)} if(addmean==FALSE && error==TRUE){grafico=grafico+ geom_text_repel(aes(y=desvio+media+sup, label=letra),family=family,size=labelsize)} if(addmean==TRUE && error==TRUE){grafico=grafico+ geom_text_repel(aes(y=desvio+media+sup, label=paste(format(media,digits = dec),letra)),family=family,size=labelsize)} if(is.na(ylim[1])==FALSE){grafico=grafico+scale_y_continuous(breaks = ylim)} } if(geom=="bar"){ if(sup==0){sup=0.1*mean(dadosm$media)} grafico=ggplot(dadosm,aes(y=media, x=as.factor(time), fill=factor(trat,levels = unique(trat))))+ geom_col(position = "dodge",color="black")+ ylab(ylab)+ xlab(xlab)+theme+ theme(text = element_text(size=textsize,color="black", family = family), axis.title = element_text(size=textsize,color="black", family = family), axis.text = element_text(size=textsize,color="black", family = family), legend.position = posi, legend.text = element_text(size = textsize))+labs(fill=legend) if(error==TRUE){grafico=grafico+ geom_errorbar(aes(ymin=media-desvio, ymax=media+desvio), width=width.bar,size=size.bar, position = position_dodge(width=0.9))} if(addmean==FALSE && error==FALSE){grafico=grafico+ geom_text(aes(y=media+sup,label=letra), position = position_dodge(width=0.9),family=family,size=labelsize)} if(addmean==TRUE && error==FALSE){grafico=grafico+ geom_text(aes(y=media+sup, label=paste(format(media,digits = dec),letra)), position = 
position_dodge(width=0.9),family=family,size=labelsize)} if(addmean==FALSE && error==TRUE){grafico=grafico+ geom_text(aes(y=desvio+media+sup,label=letra), position = position_dodge(width=0.9),family=family,size=labelsize)} if(addmean==TRUE && error==TRUE){grafico=grafico+ geom_text(aes(y=desvio+media+sup, label=paste(format(media,digits = dec),letra)), position = position_dodge(width=0.9),family=family,size=labelsize)} } if(fill=="gray"){grafico=grafico+scale_fill_grey(start = 1, end = 0.1)} if(is.na(ylim[1])==FALSE){grafico=grafico+scale_y_continuous(breaks = ylim)} graficos=as.list(grafico) print(grafico) }
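The error bars DICT draws are the plain standard deviations of each treatment-time cell. If a standard error is wanted instead, it is the same quantity divided by sqrt(n), which is also how the package's `point = "mean_se"` option works elsewhere. A sketch (Python, illustrative helper name):

```python
import math
import statistics

def mean_sd_se(values):
    """Return (mean, sd, se) for one treatment-time cell; se = sd / sqrt(n)."""
    m = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation, like R's sd()
    return m, sd, sd / math.sqrt(len(values))

m, sd, se = mean_sd_se([10, 12, 14, 16])
print(m, round(sd, 3), round(se, 3))
```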
#' Descriptive: Boxplot with standardized data #' #' @description It makes a graph with the variables and/or treatments with the standardized data. #' @author Gabriel Danilo Shimizu, \email{[email protected]} #' @author Leandro Simoes Azeredo Goncalves #' @author Rodrigo Yudi Palhaci Marubayashi #' @param data data.frame containing the response of the experiment. #' @param trat Numerical or complex vector with treatments #' @param theme ggplot2 theme (\emph{default} is theme_bw()) #' @param ylab Variable response name (Accepts the \emph{expression}() function) #' @param xlab Treatments name (Accepts the \emph{expression}() function) #' @param fill Defines chart color #' @param textsize Font size #' @param family Font family #' @keywords Descriptive #' @keywords Experimental #' @export #' @return Returns a chart of boxes with standardized data #' @examples #' library(AgroR) #' data("pomegranate") #' dispvar(pomegranate[,-1]) #' trat=pomegranate$trat #' dispvar(pomegranate[,-1], trat) dispvar=function(data, trat=NULL, theme=theme_bw(), ylab="Standard mean", xlab="Variable", family="serif", textsize=12, fill="lightblue"){ requireNamespace("ggplot2") if(is.null(trat)==TRUE){datap=scale(data) datap=data.frame(datap) resp=unlist(c(datap)) trat1=rep(colnames(datap),e=length(datap[,1])) trat1=factor(trat1,levels = unique(trat1)) dados=data.frame(trat1,resp) grafico=ggplot(dados,aes(x=trat1,y=resp)) if(fill=="trat"){grafico=grafico+geom_boxplot(aes(fill="trat"))} else{grafico=grafico+geom_boxplot(fill=fill)+ geom_jitter(fill=fill, width=0.1,alpha=0.2)+ stat_summary(fill=fill,fun="mean",color="red",geom="point",size=2,shape=8)} grafico=grafico+theme+ ylab(ylab)+ xlab(xlab)+ theme(text = element_text(size=textsize, family = family, colour = "black"), axis.title = element_text(size=textsize, family = family, colour = "black"), axis.text = element_text(size=textsize, family = family, colour = "black")) print(grafico)} if(is.null(trat[1])==FALSE){ datap=scale(data) 
datap=data.frame(datap) resp=unlist(c(datap)) trat1=as.factor(rep(colnames(datap),e=length(datap[,1]))) trat=as.factor(rep(trat, length(colnames(datap)))) trat1=factor(trat1, levels=unique(trat1)) trat=factor(trat, levels=unique(trat)) dados=data.frame(trat1,trat,resp) grafico=ggplot(dados,aes(x=trat,y=resp)) grafico=grafico+geom_boxplot(aes(fill=trat))+ geom_jitter(aes(fill=trat), width=0.1,alpha=0.2)+ stat_summary(aes(fill=trat),fun="mean",color="red",geom="point",size=2,shape=8) grafico=grafico+theme+ ylab(ylab)+ xlab(xlab)+facet_wrap(facets = trat1)+ theme(text = element_text(size=textsize, family = family, colour = "black"), axis.title = element_text(size=textsize, family = family, colour = "black"), axis.text = element_text(size=textsize, family = family, colour = "black")) print(grafico) } grafico=as.list(grafico) }
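dispvar() standardizes every column with `scale()` before plotting, i.e. it centers on the column mean and divides by the column standard deviation so variables with different units share one axis. The same operation by hand (Python):

```python
import statistics

def standardize(column):
    """Center and scale one column: (x - mean) / sd, as R's scale() does."""
    m = statistics.mean(column)
    s = statistics.stdev(column)
    return [(x - m) / s for x in column]

z = standardize([2.0, 4.0, 6.0, 8.0])
print([round(v, 3) for v in z])  # [-1.162, -0.387, 0.387, 1.162]
```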
#' Analysis: Latin square design
#'
#' @description This is a function of the AgroR package for the statistical analysis of experiments conducted in a balanced Latin square design with one factor, considering the fixed model.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param trat Numerical or complex vector with treatments
#' @param line Numerical or complex vector with lines
#' @param column Numerical or complex vector with columns
#' @param response Numerical vector containing the response of the experiment.
#' @param norm Error normality test (\emph{default} is Shapiro-Wilk)
#' @param homog Homogeneity test of variances (\emph{default} is Bartlett)
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott and Duncan)
#' @param quali Defines whether the factor is quantitative or qualitative (\emph{default} is qualitative)
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param grau Degree of polynomial in case of quantitative factor (\emph{default} is 1)
#' @param transf Applies data transformation (default is 1; for log consider 0; `angular` for angular transformation)
#' @param constant Add a constant for transformation (enter value)
#' @param geom Graph type (columns, boxes or segments)
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param sup Number of units above the standard deviation or average bar on the graph
#' @param CV Plotting the coefficient of variation and p-value of Anova (\emph{default} is TRUE)
#' @param ylab Variable response name (Accepts the \emph{expression}() function)
#' @param xlab Treatments name (Accepts the \emph{expression}() function)
#' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat")
#' @param angle x-axis scale text rotation
#' @param family Font family
#' @param textsize Font size
#' @param labelsize Label size
#' @param dec Number of decimal places
#' @param width.column Width of the column if geom="bar"
#' @param width.bar Width of the error bar
#' @param addmean Plot the average value on the graph (\emph{default} is TRUE)
#' @param errorbar Plot the standard deviation bar on the graph (in the case of a segment and column graph) - \emph{default} is TRUE
#' @param posi Legend position
#' @param point Defines whether to plot mean ("mean"), mean with standard deviation ("mean_sd" - \emph{default}) or mean with standard error ("mean_se"). For a parametric test it is possible to plot the square root of QMres (mean_qmres).
#' @param angle.label Label angle
#' @param ylim Define a numerical sequence referring to the y scale. You can use a vector or the `seq` command.
#' @note The ordering of the graph follows the sequence in which the factor levels are arranged in the data sheet. The bars of the column and segment graphs are standard deviation.
#' @note CV and p-value in the graph indicate the coefficient of variation and the p-value of the F test of the analysis of variance.
#' @note In the final output, when the transformation (transf argument) is different from 1, the columns resp and respO in the mean test are returned, indicating the transformed and non-transformed means, respectively.
#' @keywords DQL
#' @keywords Experimental
#' @references
#'
#' Principles and Procedures of Statistics: A Biometrical Approach. Steel, Torrie and Dickey. Third Edition, 1997.
#'
#' Multiple Comparisons: Theory and Methods. Department of Statistics, the Ohio State University. USA, 1996. Jason C. Hsu. Chapman Hall/CRC.
#'
#' Ramalho M.A.P., Ferreira D.F., Oliveira A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott A.J., Knott M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#'
#' Mendiburu, F., and de Mendiburu, M.
F. (2019). Package ‘agricolae’. R Package, Version, 1-2. #' #' @return The table of analysis of variance, the test of normality of errors (Shapiro-Wilk ("sw"), Lilliefors ("li"), Anderson-Darling ("ad"), Cramer-von Mises ("cvm"), Pearson ("pearson") and Shapiro-Francia ("sf")), the test of homogeneity of variances (Bartlett ("bt") or Levene ("levene")), the test of independence of Durbin-Watson errors, the test of multiple comparisons (Tukey ("tukey"), LSD ("lsd"), Scott-Knott ("sk") or Duncan ("duncan")) or adjustment of regression models up to grade 3 polynomial, in the case of quantitative treatments. The column, segment or box chart for qualitative treatments is also returned. The function also returns a standardized residual plot. #' @seealso \link{DIC}, \link{DBC} #' @export #' @examples #' library(AgroR) #' data(porco) #' with(porco, DQL(trat, linhas, colunas, resp, ylab="Weigth (kg)")) ###################################################################################### ## Analise de variancia para experimentos em DQL ###################################################################################### DQL=function(trat, line, column, response, norm="sw", homog="bt", alpha.f=0.05, alpha.t=0.05, quali=TRUE, mcomp="tukey", grau=1, transf=1, constant=0, geom="bar", theme=theme_classic(), sup=NA, CV=TRUE, ylab="Response", xlab="", textsize=12, labelsize=4, fill="lightblue", angle=0, family="sans", dec=3, width.column=NULL, width.bar=0.3, addmean=TRUE, errorbar=TRUE, posi="top", point="mean_sd", angle.label=0, ylim=NA) {if(is.na(sup==TRUE)){sup=0.2*mean(response)} if(angle.label==0){hjust=0.5}else{hjust=0} requireNamespace("crayon") requireNamespace("ggplot2") requireNamespace("nortest") if(transf==1){resp=response+constant}else{if(transf!="angular"){resp=((response+constant)^transf-1)/transf}} # if(transf==1){resp=response+constant}else{resp=((response+constant)^transf-1)/transf} if(transf==0){resp=log(response+constant)} 
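The `transf` branches implement a small power family: identity at 1, natural log at 0, square root at 0.5, reciprocal forms at -0.5 and -1, the general ((y+c)^t - 1)/t otherwise, plus the angular (arcsine-square-root) option for percentage data. Condensed into one illustrative Python helper (the name is made up, not a package function):

```python
import math

def transform(y, t, constant=0.0):
    """Mirror of the transf options: 1 identity, 0 log, 0.5 sqrt,
    -0.5 reciprocal sqrt, -1 reciprocal, 'angular' arcsine-sqrt of a %."""
    y = y + constant
    if t == "angular":
        return math.asin(math.sqrt(y / 100))
    if t == 1:
        return y
    if t == 0:
        return math.log(y)
    if t == 0.5:
        return math.sqrt(y)
    if t == -0.5:
        return 1 / math.sqrt(y)
    if t == -1:
        return 1 / y
    return (y ** t - 1) / t  # general Box-Cox-style form otherwise

print(round(transform(25, "angular"), 4))  # 0.5236, i.e. asin(sqrt(0.25))
```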
if(transf==0.5){resp=sqrt(response+constant)}
if(transf==-0.5){resp=1/sqrt(response+constant)}
if(transf==-1){resp=1/(response+constant)}
if(transf=="angular"){resp=asin(sqrt((response+constant)/100))}
trat1=trat
trat=as.factor(trat)
line=as.factor(line)
column=as.factor(column)
a = anova(aov(resp ~ trat + line + column))
b = aov(resp ~ trat + line + column)
media = tapply(response, trat, mean, na.rm=TRUE)
anava=a
colnames(anava)=c("GL","SQ","QM","Fcal","p-value")
respad=b$residuals/sqrt(a$`Mean Sq`[4])
out=respad[respad>3 | respad<(-3)]
out=names(out)
out=if(length(out)==0)("No discrepant point")else{out}

## Normality of errors
if(norm=="sw"){norm1 = shapiro.test(b$res)}
if(norm=="li"){norm1=lillie.test(b$residuals)}
if(norm=="ad"){norm1=ad.test(b$residuals)}
if(norm=="cvm"){norm1=cvm.test(b$residuals)}
if(norm=="pearson"){norm1=pearson.test(b$residuals)}
if(norm=="sf"){norm1=sf.test(b$residuals)}
if(homog=="bt"){
  homog1 = bartlett.test(b$res ~ trat)
  statistic=homog1$statistic
  phomog=homog1$p.value
  method=paste("Bartlett test","(",names(statistic),")",sep="")
}
if(homog=="levene"){
  homog1 = levenehomog(b$res~trat)[1,]
  statistic=homog1$`F value`[1]
  phomog=homog1$`Pr(>F)`[1]
  method="Levene's Test (center = median)(F)"
  names(homog1)=c("Df", "statistic","p.value")}
indep = dwtest(b)
resids=b$residuals/sqrt(a$`Mean Sq`[4])
Ids=ifelse(resids>3 | resids<(-3), "darkblue","black")
residplot=ggplot(data=data.frame(resids,Ids),aes(y=resids,x=1:length(resids)))+
  geom_point(shape=21,color="gray",fill="gray",size=3)+
  labs(x="",y="Standardized residuals")+
  geom_text(x=1:length(resids),label=1:length(resids),color=Ids,size=4)+
  scale_x_continuous(breaks=1:length(resids))+
  theme_classic()+theme(axis.text.y = element_text(size=12),
                        axis.text.x = element_blank())+
  geom_hline(yintercept = c(0,-3,3),lty=c(1,2,2),color="red",size=1)
print(residplot)
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Normality of errors")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
normal=data.frame(Method=paste(norm1$method,"(",names(norm1$statistic),")",sep=""),
                  Statistic=norm1$statistic,
                  "p-value"=norm1$p.value)
rownames(normal)=""
print(normal)
cat("\n")
message(if(norm1$p.value>0.05){
  black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered normal")}
  else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, errors do not follow a normal distribution"})
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Homogeneity of Variances")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
homoge=data.frame(Method=method,
                  Statistic=statistic,
                  "p-value"=phomog)
rownames(homoge)=""
print(homoge)
cat("\n")
message(if(homog1$p.value>0.05){
  black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, the variances can be considered homogeneous")}
  else {"As the calculated p-value is less than the 5% significance level, H0 is rejected. Therefore, the variances are not homogeneous"})
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Independence from errors")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
indepe=data.frame(Method=paste(indep$method,"(", names(indep$statistic),")",sep=""),
                  Statistic=indep$statistic,
                  "p-value"=indep$p.value)
rownames(indepe)=""
print(indepe)
cat("\n")
message(if(indep$p.value>0.05){
  black("As the calculated p-value is greater than the 5% significance level, hypothesis H0 is not rejected. Therefore, errors can be considered independent")}
  else {"As the calculated p-value is less than the 5% significance level, H0 is rejected.\n Therefore, errors are not independent"})
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Additional Information")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(paste("\nCV (%) = ",round(sqrt(a$`Mean Sq`[4])/mean(resp,na.rm=TRUE)*100,2)))
cat(paste("\nMStrat/MST = ",round(a$`Mean Sq`[1]/(a$`Mean Sq`[4]+a$`Mean Sq`[3]+a$`Mean Sq`[2]+a$`Mean Sq`[1]),2)))
cat(paste("\nMean = ",round(mean(response,na.rm=TRUE),4)))
cat(paste("\nMedian = ",round(median(response,na.rm=TRUE),4)))
cat("\nPossible outliers = ", out)
cat("\n")
cat(green(bold("\n-----------------------------------------------------------------\n")))
cat(green(bold("Analysis of Variance")))
cat(green(bold("\n-----------------------------------------------------------------\n")))
anava1=as.matrix(data.frame(anava))
colnames(anava1)=c("Df","Sum Sq","Mean.Sq","F value","Pr(F)")
rownames(anava1)=c("Treatment","Line","Column","Residuals")
print(anava1,na.print = "")
cat("\n")
message(if (a$`Pr(>F)`[1]<alpha.f){
  black("As the calculated p-value is less than the 5% significance level, the hypothesis H0 of equality of means is rejected. Therefore, at least two treatments differ")}
  else {"As the calculated p-value is greater than the 5% significance level, H0 is not rejected"})
cat(green(bold("\n-----------------------------------------------------------------\n")))
if(quali==TRUE){teste=if(mcomp=="tukey"){"Tukey HSD"}else{
  if(mcomp=="sk"){"Scott-Knott"}else{
    if(mcomp=="lsd"){"LSD-Fisher"}else{
      if(mcomp=="duncan"){"Duncan"}}}}
cat(green(italic(paste("Multiple Comparison Test:",teste))))
}else{cat(green(bold("Regression")))}
cat(green(bold("\n-----------------------------------------------------------------\n")))

# ================================
# Multiple comparison
# ================================
if(quali==TRUE){
  ## Tukey
  if(mcomp=="tukey"){
    letra <- TUKEY(b, "trat", alpha=alpha.t)
    letra1 <- letra$groups; colnames(letra1)=c("resp","groups")}
  ## Scott-Knott
  if(mcomp=="sk"){
    nrep=table(trat)[1]
    medias=sort(tapply(resp,trat,mean),decreasing = TRUE)
    letra=scottknott(means = medias,
                     df1 = a$Df[4],
                     nrep = nrep,
                     QME = a$`Mean Sq`[4],
                     alpha = alpha.t)
    letra1=data.frame(resp=medias,groups=letra)}
  ## Duncan
  if(mcomp=="duncan"){
    letra <- duncan(b, "trat", alpha=alpha.t)
    letra1 <- letra$groups; colnames(letra1)=c("resp","groups")}
  ## LSD
  if(mcomp=="lsd"){
    letra <- LSD(b, "trat", alpha=alpha.t)
    letra1 <- letra$groups; colnames(letra1)=c("resp","groups")}
  media = tapply(response, trat, mean, na.rm=TRUE)
  if(transf=="1"){letra1}else{letra1$respO=media[rownames(letra1)]}
  print(if(a$`Pr(>F)`[1]<alpha.f){letra1}else{"H0 is not rejected"})
  cat("\n")
  message(if(transf=="1"){}else{blue("resp = transformed means; respO = averages without transforming")})
  if(transf==1 && norm1$p.value<0.05 | transf==1 && indep$p.value<0.05 | transf==1 && homog1$p.value<0.05){
    message("\n \nWarning!!! Your analysis is not valid; consider transforming the data")}else{}
  if(point=="mean_sd"){
    dadosm=data.frame(letra1,
                      media=tapply(response, trat, mean, na.rm=TRUE)[rownames(letra1)],
                      desvio=tapply(response, trat, sd, na.rm=TRUE)[rownames(letra1)])}
  if(point=="mean_se"){
    dadosm=data.frame(letra1,
                      media=tapply(response, trat, mean, na.rm=TRUE)[rownames(letra1)],
                      desvio=(tapply(response, trat, sd, na.rm=TRUE)/sqrt(tapply(response, trat, length)))[rownames(letra1)])}
  if(point=="mean_qmres"){
    dadosm=data.frame(letra1,
                      media=tapply(response, trat, mean, na.rm=TRUE)[rownames(letra1)],
                      desvio=rep(sqrt(a$`Mean Sq`[4]),e=length(levels(trat))))}
  dadosm$trats=factor(rownames(dadosm),levels = unique(trat))
  dadosm$limite=dadosm$media+dadosm$desvio
  dadosm=dadosm[unique(as.character(trat)),]
  if(addmean==TRUE){dadosm$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)}
  if(addmean==FALSE){dadosm$letra=dadosm$groups}
  trats=dadosm$trats
  limite=dadosm$limite
  media=dadosm$media
  desvio=dadosm$desvio
  letra=dadosm$letra
  if(geom=="bar"){grafico=ggplot(dadosm, aes(x=trats, y=media))
  if(fill=="trat"){grafico=grafico+
    geom_col(aes(fill=trats),color=1,width = width.column)}
  else{grafico=grafico+
    geom_col(aes(fill=trats),fill=fill,color=1,width = width.column)}
  if(errorbar==TRUE){grafico=grafico+
    geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},
                  label=letra),size=labelsize,family=family,angle=angle.label, hjust=hjust)}
  if(errorbar==FALSE){grafico=grafico+
    geom_text(aes(y=media+sup,label=letra),size=labelsize,family=family,angle=angle.label, hjust=hjust)}
  if(errorbar==TRUE){grafico=grafico+
    geom_errorbar(data=dadosm,
                  aes(ymin=media-desvio,
                      ymax=media+desvio,color=1),
                  color="black", width=width.bar)}}
  if(geom=="point"){grafico=ggplot(dadosm, aes(x=trats, y=media))
  if(errorbar==TRUE){grafico=grafico+
    geom_text(aes(y=media+sup+if(sup<0){-desvio}else{desvio},
                  label=letra),family=family,size=labelsize,angle=angle.label, hjust=hjust)}
  if(errorbar==FALSE){grafico=grafico+
    geom_text(aes(y=media+sup,
                  label=letra),
              family=family,size=labelsize,angle=angle.label, hjust=hjust)}
  if(errorbar==TRUE){grafico=grafico+
    geom_errorbar(data=dadosm,
                  aes(ymin=media-desvio,
                      ymax=media+desvio,color=1),
                  color="black", width=width.bar)}
  if(fill=="trat"){grafico=grafico+
    geom_point(aes(color=trats),size=5)}
  else{grafico=grafico+
    geom_point(aes(color=trats), color=fill, size=5)}}
  if(geom=="box"){
    datam1=data.frame(trats=factor(trat,levels = unique(as.character(trat))),response)
    dadosm2=data.frame(letra1,superior=tapply(response, trat, mean, na.rm=TRUE)[rownames(letra1)])
    dadosm2$trats=rownames(dadosm2)
    dadosm2=dadosm2[unique(as.character(trat)),]
    dadosm2$limite=dadosm$media+dadosm$desvio
    dadosm2$letra=paste(format(dadosm$media,digits = dec),dadosm$groups)
    trats=dadosm2$trats
    limite=dadosm2$limite
    superior=dadosm2$superior
    letra=dadosm2$letra
    stat_box=ggplot(datam1,aes(x=trats,y=response))+geom_boxplot()
    superior=ggplot_build(stat_box)$data[[1]]$ymax
    dadosm2$superior=superior+sup
    grafico=ggplot(datam1, aes(x=trats, y=response))
    if(fill=="trat"){grafico=grafico+geom_boxplot(aes(fill=trats))}
    else{grafico=grafico+geom_boxplot(aes(fill=trats),fill=fill)}
    grafico=grafico+
      geom_text(data=dadosm2,
                aes(y=superior,
                    label=letra),
                family=family,size=labelsize,angle=angle.label, hjust=hjust)}
  grafico=grafico+theme+
    ylab(ylab)+
    xlab(xlab)+
    theme(text = element_text(size=textsize,color="black",family=family),
          axis.text = element_text(size=textsize,color="black",family=family),
          axis.title = element_text(size=textsize,color="black",family=family),
          legend.position = "none")
  if(is.na(ylim[1])==FALSE){
    grafico=grafico+scale_y_continuous(breaks = ylim,
                                       limits = c(min(ylim),max(ylim)))}
  if(angle !=0){grafico=grafico+theme(axis.text.x=element_text(hjust = 1.01,angle = angle))}
  if(CV==TRUE){grafico=grafico+labs(caption=paste("p-value ",
                                                  if(a$`Pr(>F)`[1]<0.0001){paste("<", 0.0001)}
                                                  else{paste("=", round(a$`Pr(>F)`[1],4))},"; CV = ",
                                                  round(abs(sqrt(a$`Mean Sq`[4])/mean(resp))*100,2),"%"))}
}
if(quali==FALSE){
  trat=trat1
  if(grau==1){graph=polynomial(trat,response, grau = 1,xlab=xlab,ylab=ylab,textsize=textsize,
                               family=family,posi=posi,point=point,SSq=a$`Sum Sq`[4],DFres = a$Df[4])}
  if(grau==2){graph=polynomial(trat,response, grau = 2,xlab=xlab,ylab=ylab,textsize=textsize,
                               family=family,posi=posi,point=point,SSq=a$`Sum Sq`[4],DFres = a$Df[4])}
  if(grau==3){graph=polynomial(trat,response, grau = 3,xlab=xlab,ylab=ylab,textsize=textsize,
                               family=family,posi=posi,point=point,SSq=a$`Sum Sq`[4],DFres = a$Df[4])}
  grafico=graph[[1]]
  if(is.na(ylim[1])==FALSE){
    grafico=grafico+scale_y_continuous(breaks = ylim,
                                       limits = c(min(ylim),max(ylim)))}
  print(grafico)
}
if(quali==TRUE){print(grafico)}
graficos=list(grafico)
}
# File: AgroR/R/dql_function.R
#' Analysis: Latin square design evaluated over time
#'
#' @description Function of the AgroR package for the analysis of experiments conducted in a balanced, qualitative single Latin square design with multiple assessments over time, without considering time as a factor.
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @author Leandro Simoes Azeredo Goncalves
#' @author Rodrigo Yudi Palhaci Marubayashi
#' @param trat Numerical or complex vector with treatments
#' @param line Numerical or complex vector with line
#' @param column Numerical or complex vector with column
#' @param time Numerical or complex vector with times
#' @param response Numerical vector containing the response of the experiment.
#' @param alpha.f Level of significance of the F test (\emph{default} is 0.05)
#' @param alpha.t Significance level of the multiple comparison test (\emph{default} is 0.05)
#' @param mcomp Multiple comparison test (Tukey (\emph{default}), LSD, Scott-Knott and Duncan)
#' @param ylab Variable response name (Accepts the \emph{expression}() function)
#' @param xlab Treatments name (Accepts the \emph{expression}() function)
#' @param fill Defines chart color (to generate different colors for different treatments, define fill = "trat")
#' @param theme ggplot2 theme (\emph{default} is theme_classic())
#' @param error Add error bar (SD)
#' @param sup Number of units above the standard deviation or average bar on the graph
#' @param addmean Plot the average value on the graph (\emph{default} is FALSE)
#' @param textsize Font size of the texts and titles of the axes
#' @param labelsize Font size of the labels
#' @param pointsize Point size
#' @param family Font family
#' @param dec Number of decimal places of the means displayed on the graph
#' @param geom Graph type (columns - "bar" or segments "point")
#' @param legend Legend title
#' @param posi Legend position
#' @param ylim Define a numerical sequence referring to the y scale. You can use a vector or the `seq` command.
#' @param width.bar Width of the error bar
#' @param size.bar Size of the error bar
#' @param xnumeric Declare x as numeric (\emph{default} is FALSE)
#' @param all.letters Adds all label letters regardless of whether they are significant or not.
#' @note The ordering of the graph follows the sequence in which the factor levels are arranged in the data sheet. The bars of the column and segment graphs are the standard deviation.
#' @keywords dqlt
#' @keywords Experimental
#' @return The function returns the p-value of the ANOVA, the assumptions of normality of errors, homogeneity of variances and independence of errors, the multiple comparison test, as well as a line graph
#' @seealso \link{DQL}, \link{DICT}, \link{DBCT}
#' @details The function reports the p-value of the analysis of variance, the Shapiro-Wilk normality test for errors, Bartlett's test for homogeneity of variances, the Durbin-Watson test for independence of errors, and the multiple comparison test (Tukey, Scott-Knott, LSD or Duncan).
#' @references
#'
#' Principles and procedures of statistics a biometrical approach Steel, Torrie and Dickey. Third Edition 1997
#'
#' Multiple comparisons theory and methods. Department of Statistics, The Ohio State University. USA, 1996. Jason C. Hsu. Chapman Hall/CRC.
#'
#' Practical Nonparametrics Statistics. W.J. Conover, 1999
#'
#' Ramalho M.A.P., Ferreira D.F., Oliveira A.C. 2000. Experimentacao em Genetica e Melhoramento de Plantas. Editora UFLA.
#'
#' Scott R.J., Knott M. 1974. A cluster analysis method for grouping means in the analysis of variance. Biometrics, 30, 507-512.
#' @export
#' @examples
#' rm(list=ls())
#' data(simulate3)
#' attach(simulate3)
#' DQLT(trat, linhas, colunas, tempo, resp)
DQLT=function(trat,
              line,
              column,
              time,
              response,
              alpha.f=0.05,
              alpha.t=0.05,
              mcomp="tukey",
              error=TRUE,
              xlab="Independent",
              ylab="Response",
              textsize=12,
              labelsize=5,
              pointsize=4.5,
              family="sans",
              sup=0,
              addmean=FALSE,
              posi=c(0.1,0.8),
              geom="bar",
              fill="gray",
              legend="Legend",
              ylim=NA,
              width.bar=0.2,
              size.bar=0.8,
              dec=3,
              theme=theme_classic(),
              xnumeric=FALSE,
              all.letters=FALSE){
  requireNamespace("crayon")
  requireNamespace("ggplot2")
  requireNamespace("nortest")
  requireNamespace("ggrepel")
  trat=as.factor(trat)
  resp=response
  line=as.factor(line)
  column=as.factor(column)
  tempo=factor(time,unique(time))
  dados=data.frame(resp,trat,column, line, tempo)
  if(mcomp=="tukey"){
    tukeyg=c()
    ordem=c()
    normg=c()
    homog=c()
    indepg=c()
    anovag=c()
    cv=c()
    for(i in 1:length(levels(tempo))){
      mod=aov(resp~trat+line+column,
              data=dados[dados$tempo==levels(dados$tempo)[i],])
      anovag[[i]]=anova(mod)$`Pr(>F)`[1]
      cv[[i]]=sqrt(anova(mod)$`Mean Sq`[4])/mean(mod$model$resp)*100
      tukey=TUKEY(mod,"trat",alpha = alpha.t)
      tukey$groups=tukey$groups[unique(as.character(trat)),2]
      if(all.letters==FALSE){
        if(anova(mod)$`Pr(>F)`[1]>alpha.f){tukey$groups=c("ns",rep(" ",length(unique(trat))-1))}}
      tukeyg[[i]]=as.character(tukey$groups)
      ordem[[i]]=rownames(tukey$groups)
      norm=shapiro.test(mod$residuals)
      homo=with(dados[dados$tempo==levels(dados$tempo)[i],],
                bartlett.test(mod$residuals~trat))
      indep=dwtest(mod)
      normg[[i]]=norm$p.value
      homog[[i]]=homo$p.value
      indepg[[i]]=indep$p.value
    }
    m=unlist(tukeyg)
    nor=unlist(normg)
    hom=unlist(homog)
    ind=unlist(indepg)
    an=unlist(anovag)
    cv=unlist(cv)
    press=data.frame(an,nor,hom,ind,cv)
    colnames(press)=c("p-value ANOVA","Shapiro-Wilk","Bartlett","Durbin-Watson","CV (%)")}
  if(mcomp=="lsd"){
    lsdg=c()
    ordem=c()
    normg=c()
    homog=c()
    indepg=c()
    anovag=c()
    cv=c()
    for(i in 1:length(levels(tempo))){
      mod=aov(resp~trat+line+column,
              data=dados[dados$tempo==levels(dados$tempo)[i],])
      anovag[[i]]=anova(mod)$`Pr(>F)`[1]
      cv[[i]]=sqrt(anova(mod)$`Mean Sq`[4])/mean(mod$model$resp)*100
      lsd=LSD(mod,"trat",alpha = alpha.t)
      lsd$groups=lsd$groups[unique(as.character(trat)),2]
      if(all.letters==FALSE){
        if(anova(mod)$`Pr(>F)`[1]>alpha.f){lsd$groups=c("ns",rep(" ",length(unique(trat))-1))}}
      lsdg[[i]]=as.character(lsd$groups)
      ordem[[i]]=rownames(lsd$groups)
      norm=shapiro.test(mod$residuals)
      homo=with(dados[dados$tempo==levels(dados$tempo)[i],],
                bartlett.test(mod$residuals~trat))
      indep=dwtest(mod)
      normg[[i]]=norm$p.value
      homog[[i]]=homo$p.value
      indepg[[i]]=indep$p.value
    }
    m=unlist(lsdg)
    nor=unlist(normg)
    hom=unlist(homog)
    ind=unlist(indepg)
    an=unlist(anovag)
    cv=unlist(cv)
    press=data.frame(an,nor,hom,ind,cv)
    colnames(press)=c("p-value ANOVA","Shapiro-Wilk","Bartlett","Durbin-Watson","CV (%)")}
  if(mcomp=="duncan"){
    duncang=c()
    ordem=c()
    normg=c()
    homog=c()
    indepg=c()
    anovag=c()
    cv=c()
    for(i in 1:length(levels(tempo))){
      mod=aov(resp~trat+line+column,
              data=dados[dados$tempo==levels(dados$tempo)[i],])
      anovag[[i]]=anova(mod)$`Pr(>F)`[1]
      cv[[i]]=sqrt(anova(mod)$`Mean Sq`[4])/mean(mod$model$resp)*100
      duncan=duncan(mod,"trat",alpha = alpha.t)
      duncan$groups=duncan$groups[unique(as.character(trat)),2]
      if(all.letters==FALSE){
        if(anova(mod)$`Pr(>F)`[1]>alpha.f){duncan$groups=c("ns",rep(" ",length(unique(trat))-1))}}
      duncang[[i]]=as.character(duncan$groups)
      ordem[[i]]=rownames(duncan$groups)
      norm=shapiro.test(mod$residuals)
      homo=with(dados[dados$tempo==levels(dados$tempo)[i],],
                bartlett.test(mod$residuals~trat))
      indep=dwtest(mod)
      normg[[i]]=norm$p.value
      homog[[i]]=homo$p.value
      indepg[[i]]=indep$p.value
    }
    m=unlist(duncang)
    nor=unlist(normg)
    hom=unlist(homog)
    ind=unlist(indepg)
    an=unlist(anovag)
    cv=unlist(cv)
    press=data.frame(an,nor,hom,ind,cv)
    colnames(press)=c("p-value ANOVA","Shapiro-Wilk","Bartlett","Durbin-Watson","CV (%)")}
  if(mcomp=="sk"){
    scott=c()
    normg=c()
    homog=c()
    indepg=c()
    anovag=c()
    cv=c()
    for(i in 1:length(levels(tempo))){
      mod=aov(resp~trat+line+column,
              data=dados[dados$tempo==levels(dados$tempo)[i],])
      anovag[[i]]=anova(mod)$`Pr(>F)`[1]
      cv[[i]]=sqrt(anova(mod)$`Mean Sq`[4])/mean(mod$model$resp)*100
      nrep=with(dados[dados$tempo==levels(dados$tempo)[i],],
                table(trat)[1])
      ao=anova(mod)
      medias=with(dados[dados$tempo==levels(dados$tempo)[i],],
                  sort(tapply(resp,trat,mean),decreasing = TRUE))
      letra=scottknott(means = medias,
                       df1 = ao$Df[4],
                       nrep = nrep,
                       QME = ao$`Mean Sq`[4],
                       alpha = alpha.t)
      letra1=data.frame(resp=medias,groups=letra)
      letra1=letra1[unique(as.character(trat)),]
      data=letra1$groups
      if(all.letters==FALSE){
        if(anova(mod)$`Pr(>F)`[1]>alpha.f){data=c("ns",rep(" ",length(unique(trat))-1))}}
      scott[[i]]=data
      norm=shapiro.test(mod$residuals)
      homo=with(dados[dados$tempo==levels(dados$tempo)[i],],
                bartlett.test(mod$residuals~trat))
      indep=dwtest(mod)
      normg[[i]]=norm$p.value
      homog[[i]]=homo$p.value
      indepg[[i]]=indep$p.value
    }
    m=unlist(scott)
    nor=unlist(normg)
    hom=unlist(homog)
    ind=unlist(indepg)
    an=unlist(anovag)
    cv=unlist(cv)
    press=data.frame(an,nor,hom,ind,cv)
    colnames(press)=c("p-value ANOVA","Shapiro-Wilk","Bartlett","Durbin-Watson","CV (%)")}
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  cat(green(bold("ANOVA and assumptions")))
  cat(green(bold("\n-----------------------------------------------------------------\n")))
  print(press)
  dadosm=data.frame(tempo=as.character(rep(levels(tempo),e=length(levels(trat)))),
                    trat=rep(levels(trat),length(levels(tempo))),
                    media=c(tapply(resp,list(trat,tempo),mean, na.rm=TRUE)),
                    desvio=c(tapply(resp,list(trat,tempo),sd, na.rm=TRUE)),
                    letra=m)
  if(xnumeric==TRUE){dadosm$tempo=as.numeric(as.character(dadosm$tempo))}
  if(xnumeric==FALSE){dadosm$tempo=factor(dadosm$tempo,unique(dadosm$tempo))}
  time=dadosm$tempo
  trat=dadosm$trat
  media=dadosm$media
  desvio=dadosm$desvio
  letra=dadosm$letra
  if(geom=="point"){
    grafico=ggplot(dadosm,aes(y=media, x=tempo))+
      geom_point(aes(shape=factor(trat,levels=unique(as.character(trat))),
                     group=factor(trat, levels=unique(as.character(trat)))),size=pointsize)+
      geom_line(aes(lty=factor(trat, levels=unique(as.character(trat))),
                    group=factor(trat, levels=unique(as.character(trat)))),size=0.8)+
      ylab(ylab)+
      xlab(xlab)+
      theme+
      theme(text = element_text(size=textsize,color="black", family = family),
            axis.title = element_text(size=textsize,color="black", family = family),
            axis.text = element_text(size=textsize,color="black", family = family),
            legend.position = posi,
            legend.text = element_text(size = textsize))+labs(shape=legend, lty=legend)
    if(error==TRUE){grafico=grafico+
      geom_errorbar(aes(ymin=media-desvio,
                        ymax=media+desvio),
                    width=width.bar,size=size.bar)}
    if(addmean==FALSE && error==FALSE){grafico=grafico+
      geom_text(aes(y=media+sup,label=letra),size=labelsize,family=family)}
    if(addmean==TRUE && error==FALSE){grafico=grafico+
      geom_text(aes(y=media+sup,
                    label=paste(format(media,digits = dec),
                                letra)),size=labelsize,family=family)}
    if(addmean==FALSE && error==TRUE){grafico=grafico+
      geom_text(aes(y=desvio+media+sup,
                    label=letra),size=labelsize,family=family)}
    if(addmean==TRUE && error==TRUE){grafico=grafico+
      geom_text(aes(y=desvio+media+sup,
                    label=paste(format(media,digits = dec),
                                letra)),size=labelsize,family=family)}
    if(is.na(ylim[1])==FALSE){grafico=grafico+scale_y_continuous(breaks = ylim)}
  }
  if(geom=="bar"){
    if(sup==0){sup=0.1*mean(dadosm$media)}
    grafico=ggplot(dadosm,
                   aes(y=media,
                       x=as.factor(tempo),
                       fill=factor(trat,levels = unique(trat))))+
      geom_col(position = "dodge",color="black")+
      ylab(ylab)+
      xlab(xlab)+theme+
      theme(text = element_text(size=textsize,color="black", family = family),
            axis.title = element_text(size=textsize,color="black", family = family),
            axis.text = element_text(size=textsize,color="black", family = family),
            legend.position = posi,
            legend.text = element_text(size = textsize))+labs(fill=legend)
    if(error==TRUE){grafico=grafico+
      geom_errorbar(aes(ymin=media-desvio,
                        ymax=media+desvio),
                    width=width.bar,
                    size=size.bar,
                    position = position_dodge(width=0.9))}
    if(addmean==FALSE && error==FALSE){grafico=grafico+
      geom_text(aes(y=media+sup,label=letra),size=labelsize,
                position = position_dodge(width=0.9),family=family)}
    if(addmean==TRUE && error==FALSE){grafico=grafico+
      geom_text(aes(y=media+sup,
                    label=paste(format(media,digits = dec),
                                letra)),size=labelsize,family=family,
                position = position_dodge(width=0.9))}
    if(addmean==FALSE && error==TRUE){grafico=grafico+
      geom_text(aes(y=desvio+media+sup,label=letra),size=labelsize,family=family,
                position = position_dodge(width=0.9))}
    if(addmean==TRUE && error==TRUE){grafico=grafico+
      geom_text(aes(y=desvio+media+sup,
                    label=paste(format(media,digits = dec),letra)),
                size=labelsize,family=family,
                position = position_dodge(width=0.9))}
  }
  if(fill=="gray"){grafico=grafico+scale_fill_grey(start = 1, end = 0.1)}
  if(is.na(ylim[1])==FALSE){grafico=grafico+scale_y_continuous(breaks = ylim)}
  graficos=as.list(grafico)
  print(grafico)
}
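For orientation, the core of the function above is a per-time loop that refits the Latin square ANOVA on each assessment date. The sketch below reproduces that idea in plain base R with a made-up 3 x 3 square measured at two times; all data and object names here are hypothetical illustrations, not part of the package's API.

```r
# Toy 3 x 3 Latin square measured at two times; values are simulated,
# purely to illustrate the per-time ANOVA loop.
set.seed(1)
d <- expand.grid(line = factor(1:3), column = factor(1:3))
d$trat <- factor(c("A","B","C", "B","C","A", "C","A","B"))  # a valid Latin square
dados <- rbind(cbind(d, tempo = "t1", resp = rnorm(9, 10)),
               cbind(d, tempo = "t2", resp = rnorm(9, 12)))

# One ANOVA per assessment time, keeping the treatment p-value,
# which is what the loop extracts before attaching comparison letters
pvals <- sapply(split(dados, dados$tempo), function(sub) {
  anova(aov(resp ~ trat + line + column, data = sub))$`Pr(>F)`[1]
})
```

Each p-value is then compared with `alpha.f` to decide whether letters or `ns` are shown for that time.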
# File: AgroR/R/dqlt_function.R
#' Analysis: Post-hoc Dunn
#'
#' @description Perform the Kruskal-Wallis test and Dunn's post-hoc test
#' @param resp Vector with response
#' @param trat Numerical or complex vector with treatments
#' @param method the p-value adjustment method for multiple comparisons ("none", "bonferroni", "sidak", "holm", "hs", "hochberg", "bh", "by"). The default is no adjustment for multiple comparisons
#' @param alpha Significance level of the post-hoc (\emph{default} is 0.05)
#' @param decreasing Should the order of the letters be increasing or decreasing.
#' @return Returns the Kruskal-Wallis test and Dunn's post-hoc test
#' @importFrom utils capture.output
#' @importFrom dunn.test dunn.test
#' @export
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @examples
#' library(AgroR)
#' data(pomegranate)
#'
#' with(pomegranate, dunn(trat, WL))
dunn=function(trat,
              resp,
              method="holm",
              alpha=0.05,
              decreasing=TRUE){
  requireNamespace("dunn.test")
  dtres <- capture.output(res <- dunn.test::dunn.test(resp, trat, method, kw = TRUE, altp = TRUE))
  res <- data.frame(res[-which(names(res) == "chi2")])[,c(4, 1, 2, 3)]
  names(res) <- c("Comparison", "Z", "P.unadj", "P.adj")
  vec2mat2=function (x, sep = "-"){
    splits <- strsplit(x, sep)
    n.spl <- sapply(splits, length)
    if (any(n.spl != 2))
      stop("Names must contain exactly one '", sep, "' each; instead got ",
           paste(x, collapse = ", "))
    x2 <- t(as.matrix(as.data.frame(splits)))
    dimnames(x2) <- list(x, NULL)
    x2}
  multcompLetters=function (x, compare = "<", threshold = alpha,
                            Letters = c(letters, LETTERS, "."),
                            reversed = decreasing) {
    x.is <- deparse(substitute(x))
    if (any(class(x) == "dist"))
      x <- as.matrix(x)
    if (!is.logical(x))
      x <- do.call(compare, list(x, threshold))
    dimx <- dim(x)
    {
      if ((length(dimx) == 2) && (dimx[1] == dimx[2])) {
        Lvls <- dimnames(x)[[1]]
        if (length(Lvls) != dimx[1])
          stop("Names required for ", x.is)
        else {
          x2. <- t(outer(Lvls, Lvls, paste, sep = ""))
          x2.n <- outer(Lvls, Lvls, function(x1, x2) nchar(x2))
          x2.2 <- x2.[lower.tri(x2.)]
          x2.2n <- x2.n[lower.tri(x2.n)]
          x2a <- substring(x2.2, 1, x2.2n)
          x2b <- substring(x2.2, x2.2n + 1)
          x2 <- cbind(x2a, x2b)
          x <- x[lower.tri(x)]
        }
      }
      else {
        namx <- names(x)
        if (length(namx) != length(x))
          stop("Names required for ", x.is)
        x2 <- vec2mat2(namx)
        Lvls <- unique(as.vector(x2))}}
    n <- length(Lvls)
    LetMat <- array(TRUE, dim = c(n, 1), dimnames = list(Lvls, NULL))
    k2 <- sum(x)
    if (k2 == 0) {
      Ltrs <- rep(Letters[1], n)
      names(Ltrs) <- Lvls
      dimnames(LetMat)[[2]] <- Letters[1]
      return(list(Letters = Ltrs, LetterMatrix = LetMat))}
    distinct.pairs <- x2[x, , drop = FALSE]
    absorb <- function(A.) {
      k. <- dim(A.)[2]
      if (k. > 1) {
        for (i. in 1:(k. - 1)) for (j. in (i. + 1):k.) {
          if (all(A.[A.[, j.], i.])) {
            A. <- A.[, -j., drop = FALSE]
            return(absorb(A.))}
          else {
            if (all(A.[A.[, i.], j.])) {
              A. <- A.[, -i., drop = FALSE]
              return(absorb(A.))
            }
          }
        }
      }
      A.
    }
    for (i in 1:k2) {
      dpi <- distinct.pairs[i, ]
      ijCols <- (LetMat[dpi[1], ] & LetMat[dpi[2], ])
      if (any(ijCols)) {
        A1 <- LetMat[, ijCols, drop = FALSE]
        A1[dpi[1], ] <- FALSE
        LetMat[dpi[2], ijCols] <- FALSE
        LetMat <- cbind(LetMat, A1)
        LetMat <- absorb(LetMat)
      }
    }
    sortCols <- function(B) {
      firstRow <- apply(B, 2, function(x) which(x)[1])
      B <- B[, order(firstRow)]
      firstRow <- apply(B, 2, function(x) which(x)[1])
      reps <- (diff(firstRow) == 0)
      if (any(reps)) {
        nrep <- table(which(reps))
        irep <- as.numeric(names(nrep))
        k <- dim(B)[1]
        for (i in irep) {
          i. <- i:(i + nrep[as.character(i)])
          j. <- (firstRow[i] + 1):k
          B[j., i.] <- sortCols(B[j., i., drop = FALSE])
        }
      }
      B
    }
    LetMat. <- sortCols(LetMat)
    if (reversed)
      LetMat. <- LetMat.[, rev(1:ncol(LetMat.))]
    k.ltrs <- dim(LetMat.)[2]
    makeLtrs <- function(kl, ltrs = Letters) {
      kL <- length(ltrs)
      if (kl < kL)
        return(ltrs[1:kl])
      ltrecurse <- c(paste(ltrs[kL], ltrs[-kL], sep = ""),
                     ltrs[kL])
      c(ltrs[-kL], makeLtrs(kl - kL + 1, ltrecurse))
    }
    Ltrs <- makeLtrs(k.ltrs, Letters)
    dimnames(LetMat.)[[2]] <- Ltrs
    LetVec <- rep(NA, n)
    names(LetVec) <- Lvls
    for (i in 1:n) LetVec[i] <- paste(Ltrs[LetMat.[i, ]], collapse = "")
    nch.L <- nchar(Ltrs)
    blk.L <- rep(NA, k.ltrs)
    for (i in 1:k.ltrs) blk.L[i] <- paste(rep(" ", nch.L[i]), collapse = "")
    monoVec <- rep(NA, n)
    names(monoVec) <- Lvls
    for (j in 1:n) {
      ch2 <- blk.L
      if (any(LetMat.[j, ]))
        ch2[LetMat.[j, ]] <- Ltrs[LetMat.[j, ]]
      monoVec[j] <- paste(ch2, collapse = "")
    }
    InsertAbsorb <- list(Letters = LetVec, monospacedLetters = monoVec,
                         LetterMatrix = LetMat.)
    class(InsertAbsorb) <- "multcompLetters"
    InsertAbsorb}
  cldList=function (formula = NULL, data = NULL, comparison = NULL, p.value = NULL,
                    threshold = alpha, print.comp = FALSE, remove.space = TRUE,
                    remove.equal = TRUE, remove.zero = TRUE, swap.colon = TRUE,
                    swap.vs = FALSE){
    if (!is.null(formula)) {
      p.value = eval(parse(text = paste0("data", "$", all.vars(formula[[2]])[1])))
      comparison = eval(parse(text = paste0("data", "$", all.vars(formula[[3]])[1])))}
    Comparison = (as.numeric(p.value) <= threshold)
    if (sum(Comparison) == 0) {stop("No significant differences.", call. = FALSE)}
    if (remove.space == TRUE) {comparison = gsub(" ", "", comparison)}
    if (remove.equal == TRUE) {comparison = gsub("=", "", comparison)}
    if (remove.zero == TRUE) {comparison = gsub("0", "", comparison)}
    if (swap.colon == TRUE) {comparison = gsub(":", "-", comparison)}
    if (swap.vs == TRUE) {comparison = gsub("vs", "-", comparison)}
    names(Comparison) = comparison
    if (print.comp == TRUE) {
      Y = data.frame(Comparisons = names(Comparison), p.value = p.value,
                     Value = Comparison, Threshold = threshold)
      cat("\n", "\n")
      print(Y)
      cat("\n", "\n")}
    MCL = multcompLetters(Comparison)
    Group = names(MCL$Letters)
    Letter = as.character(MCL$Letters)
    MonoLetter = as.character(MCL$monospacedLetters)
    Z = data.frame(Group, Letter, MonoLetter)
    return(Z)}
  resp1=resp
  names(resp1)=trat
  postos=rank(resp1)
  somaposto=tapply(postos,names(postos),sum)
  N=tapply(postos,names(postos), length)
  postosmedios=somaposto/N
  media=tapply(resp1,trat,mean)
  mediana=tapply(resp1,trat,median)
  dunns=cldList(P.adj~Comparison, data=res)
  tabela=data.frame("group"=dunns$Group,
                    "Sum Rank"=somaposto,
                    "Mean Rank"=postosmedios,
                    "Mean"=media,
                    "Median"=mediana,
                    "dunn"=dunns$Letter)
  krusk=kruskal.test(resp,trat)
  chi=krusk$statistic
  pvalor=krusk$p.value
  list("Statistic"=chi,
       "p-value"=pvalor,
       "Post-hoc"=tabela)}
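The rank bookkeeping that feeds the returned table can be checked by hand. A minimal base-R sketch with toy numbers (all values hypothetical) mirrors what the function computes before delegating the pairwise comparisons to `dunn.test`:

```r
# Toy data: three groups with clearly separated values
resp <- c(10, 12, 11, 20, 22, 21, 30, 29, 31)
trat <- rep(c("A", "B", "C"), each = 3)

postos <- rank(resp)                    # global ranks 1..9 across all groups
somaposto <- tapply(postos, trat, sum)  # rank sum per treatment
postosmedios <- somaposto / tapply(postos, trat, length)  # mean rank

# Kruskal-Wallis statistic and p-value, as reported in the returned list
krusk <- kruskal.test(resp, as.factor(trat))
```

With these separated groups the rank sums are 6, 15 and 24, the mean ranks are 2, 5 and 8, and the Kruskal-Wallis test is significant at the 5% level.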
# File: AgroR/R/dunn_function.R
#' Analysis: Dunnett test
#' @export
#' @description The function performs the Dunnett test
#' @param trat Numerical or complex vector with treatments
#' @param resp Numerical vector containing the response of the experiment.
#' @param control Treatment considered control (write identical to the name in the vector)
#' @param model Experimental design (DIC, DBC or DQL)
#' @param block Numerical or complex vector with blocks
#' @param line Numerical or complex vector with lines
#' @param column Numerical or complex vector with columns
#' @param alpha.t Significance level (\emph{default} is 0.05)
#' @param label Variable label
#' @param pointsize Point size
#' @param pointshape Shape
#' @param textsize Font size
#' @param linesize Line size
#' @param labelsize Label size
#' @param errorsize Errorbar size
#' @param widthsize Width of the errorbar
#' @param fontfamily Font family
#' @note Do not use the "-" symbol or space in treatment names
#' @return Returns the Dunnett test for experiments in a completely randomized design, randomized blocks or Latin square.
#' @importFrom multcomp glht
#' @importFrom multcomp mcp
#' @examples
#'
#' #====================================================
#' # complete randomized design
#' #====================================================
#' data("pomegranate")
#' with(pomegranate,dunnett(trat=trat,resp=WL,control="T1"))
#'
#' #====================================================
#' # randomized block design in factorial double
#' #====================================================
#' library(AgroR)
#' data(cloro)
#' attach(cloro)
#' respAd=c(268, 322, 275, 350, 320)
#' a=FAT2DBC.ad(f1, f2, bloco, resp, respAd,
#'              ylab="Number of nodules",
#'              legend = "Stages",mcomp="sk")
#' data=rbind(data.frame(trat=paste(f1,f2,sep = ""),bloco=bloco,resp=resp),
#'            data.frame(trat=c("Test","Test","Test","Test","Test"),
#'                       bloco=unique(bloco),resp=respAd))
#' with(data,dunnett(trat = trat,
#'                   resp = resp,
#'                   control = "Test",
#'                   block=bloco,model = "DBC"))
dunnett=function(trat,
                 resp,
                 control,
                 model="DIC",
                 block=NA,
                 column=NA,
                 line=NA,
                 alpha.t=0.05,
                 pointsize=5,
                 pointshape=21,
                 linesize=1,
                 labelsize=4,
                 textsize=12,
                 errorsize=1,
                 widthsize=0.2,
                 label="Response",
                 fontfamily="sans"){
  trat1=as.factor(trat)
  trat=as.factor(trat)
  levels(trat1)=paste("T",1:length(levels(trat1)),sep = "")
  controle=as.character(trat1[trat==control][1])
  if(model=="DIC"){mod=aov(resp~trat1)}
  if(model=="DBC"){
    block=as.factor(block)
    mod=aov(resp~trat1+block)}
  if(model=="DQL"){
    column=as.factor(column)
    line=as.factor(line)
    mod=aov(resp~trat1+column+line)}
  requireNamespace("multcomp")
  dados=data.frame(trat1,resp)
  contras=unique(trat1)[!unique(trat1)==controle]
  a=confint(glht(mod, linfct = mcp(trat1=paste(contras,"-", controle, "==0",sep=""))),
            level = 1-alpha.t)
  a=summary(a)
  teste=cbind(a$confint,
              round(a$test$tstat,4),
              round(a$test$pvalues,4))
  nomes=rownames(teste)
  nomes1=as.factor(t(matrix(unlist(strsplit(nomes," - ")),nrow=2))[,1])
  levels(nomes1)=levels(trat)[!levels(trat)==control]
  rownames(teste)=paste(control," - ",nomes1)
  teste=data.frame(teste)
  colnames(teste)=c("Estimate","IC-lwr","IC-upr","t value","p-value")
  teste$sig=ifelse(teste$`p-value`>alpha.t,"ns",
                   ifelse(teste$`p-value`<alpha.t,"*",""))
  print(teste)
  data=data.frame(teste)
  `IC-lwr`=data$IC.lwr
  `IC-upr`=data$IC.upr
  sig=data$sig
  Estimate=data$Estimate
  graph=ggplot(data,aes(y=rownames(data),x=Estimate))+
    geom_errorbar(aes(xmin=`IC-lwr`,xmax=`IC-upr`),width=widthsize,size=errorsize)+
    geom_point(shape=pointshape,size=pointsize,color="black",fill="gray")+
    theme_classic()+
    labs(y="")+
    geom_vline(xintercept = 0,lty=2,size=linesize)+
    geom_label(aes(label=paste(round(Estimate,3),
                               sig)),fill="lightyellow",size=labelsize,
               vjust=-0.5,family=fontfamily)+
    theme(axis.text = element_text(size=textsize,family = fontfamily),
          axis.title = element_text(size=textsize,family = fontfamily))
  plot(graph)
}
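The `Estimate` column printed by the function is simply each treatment mean minus the control mean, with a standard error built from the model's pooled residual variance. A hand-rolled base-R sketch of that arithmetic follows; the toy data and names are hypothetical, and it omits the multiplicity-adjusted Dunnett quantiles that `multcomp::glht` supplies.

```r
# Toy completely randomized design (as model = "DIC"); values simulated
set.seed(1)
trat <- factor(rep(c("Ctrl", "T2", "T3"), each = 4))
resp <- c(rnorm(4, 10), rnorm(4, 12), rnorm(4, 15))

mod <- aov(resp ~ trat)
mse <- deviance(mod) / df.residual(mod)  # pooled residual mean square
m   <- tapply(resp, trat, mean)

est  <- m[c("T2", "T3")] - m["Ctrl"]     # the "Estimate" column
se   <- sqrt(2 * mse / 4)                # equal n = 4 per group
tval <- est / se                         # compared against Dunnett critical values
```

`glht` additionally adjusts the confidence limits and p-values jointly over all treatment-vs-control contrasts, which this sketch does not attempt.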
# File: AgroR/R/dunnet_function.R
#' Dataset: Emergence of passion fruit seeds over time
#'
#' The data come from an experiment conducted at the State University
#' of Londrina, aiming to study the emergence of yellow passion fruit
#' seeds over time. Data are partial from one of the treatments studied.
#' Four replicates with eight seeds each were used.
#'
#' @docType data
#'
#' @usage data("emerg")
#'
#' @format data.frame containing data set
#' \describe{
#'   \item{\code{time}}{Numeric vector with time}
#'   \item{\code{resp}}{Numeric vector with emergence}
#'   }
#' @keywords datasets
#' @seealso \link{aristolochia}, \link{cloro}, \link{laranja}, \link{enxofre}, \link{mirtilo}, \link{passiflora}, \link{phao}, \link{porco}, \link{pomegranate}, \link{simulate1}, \link{simulate2}, \link{simulate3}, \link{tomate}, \link{weather}
#' @examples
#' data(emerg)
"emerg"
# File: AgroR/R/emerg.R
#' Dataset: Sulfur data
#'
#' The experiment was carried out in a randomized block design in a
#' 3 x 3 x 3 triple factorial scheme: syrup volume (75, 225 and 675 L),
#' sulfur doses (150, 450, 1350) and time of application (vegetative,
#' complete cycle and reproductive system) with four repetitions.
#' Yield in kg/ha of soybean was evaluated.
#'
#' @docType data
#'
#' @usage data(enxofre)
#'
#' @format data.frame containing data set
#' \describe{
#'   \item{\code{f1}}{Categorical vector with factor 1}
#'   \item{\code{f2}}{Categorical vector with factor 2}
#'   \item{\code{f3}}{Categorical vector with factor 3}
#'   \item{\code{bloco}}{Categorical vector with block}
#'   \item{\code{resp}}{Numeric vector}
#'   }
#' @keywords datasets
#' @seealso \link{cloro}, \link{laranja}, \link{mirtilo}, \link{pomegranate}, \link{porco}, \link{sensorial}, \link{simulate1}, \link{simulate2}, \link{simulate3}, \link{tomate}, \link{weather}, \link{phao}, \link{passiflora}, \link{aristolochia}
#' @examples
#' data(enxofre)
"enxofre"
# File: /scratch/gouwar.j/cran-all/cranData/AgroR/R/enxofre.R
#' Dataset: Eucalyptus grandis Barbin (2013)
#'
#' The data refer to the height in meters of *Eucalyptus grandis* plants,
#' at 7 years of age, from three trials (Araraquara - Exp 1; Bento
#' Quintino - Exp 2; Mogi-Guacu - Exp 3) in randomized blocks, under
#' 6 progenies. The data were taken from the book by Decio Barbin
#' (2013) and are from the Instituto Florestal de Tupi/SP.
#'
#' @docType data
#'
#' @usage data("eucalyptus")
#'
#' @references Planejamento e Analise Estatistica de Experimentos Agronomicos (2013) - Decio Barbin - pg. 177
#'
#' @format data.frame containing data set
#' \describe{
#'   \item{\code{trati}}{Categorical vector with treatments}
#'   \item{\code{bloc}}{Categorical vector with block}
#'   \item{\code{exp}}{Categorical vector with experiment}
#'   \item{\code{resp}}{Numeric vector}
#'   }
#'
#' @keywords datasets
#'
#' @seealso \link{cloro}, \link{enxofre}, \link{laranja}, \link{pomegranate}, \link{porco}, \link{sensorial}, \link{simulate1}, \link{simulate2}, \link{simulate3}, \link{tomate}, \link{weather}
#'
#' @examples
#' data(eucalyptus)
"eucalyptus"
# File: /scratch/gouwar.j/cran-all/cranData/AgroR/R/eucalyptus_data.R
#' Graph: Invert letters for two-factor chart
#' @description Inverts the uppercase and lowercase letters in the graph for a factorial scheme or a split-plot design with significant interaction
#'
#' @param analysis FAT2DIC, FAT2DBC, PSUBDIC or PSUBDBC object
#' @export
#' @return Returns a column chart for two factors
#' @examples
#' data(covercrops)
#' attach(covercrops)
#' a=FAT2DBC(A, B, Bloco, Resp, ylab=expression("Yield"~(Kg~"100 m"^2)),
#'           legend = "Cover crops",alpha.f = 0.3,family = "serif")
#' ibarplot.double(a)

ibarplot.double=function(analysis){
  lista=analysis[[1]]$plot
  data=analysis[[1]]$plot$graph
  media=data$media
  desvio=data$desvio
  f1=data$f1
  f2=data$f2
  letra=data$letra
  letra1=data$letra1
  requireNamespace("ggplot2")
  ggplot(data,aes(x=f1,y=media,fill=f2))+
    lista$theme+
    ylab(lista$ylab)+
    geom_col(position = position_dodge(width = 0.9),color="black")+
    geom_errorbar(aes(ymin=media-desvio,ymax=media+desvio),
                  position = position_dodge(width = 0.9),width=0.3)+
    # swap the case of the two letter sets relative to the original plot
    geom_text(aes(y=media+desvio+lista$sup,
                  label=paste(round(media,lista$dec),
                              toupper(letra),
                              tolower(letra1),sep = "")),
              position = position_dodge(width = 0.9),family=lista$family)+
    theme(axis.text = element_text(size = lista$textsize,family = lista$family,color="black"),
          axis.title = element_text(size = lista$textsize,family = lista$family,color="black"),
          legend.text = element_text(size = lista$textsize,family = lista$family,color="black"),
          legend.title = element_text(size = lista$textsize,family = lista$family,color="black"))
}
# File: /scratch/gouwar.j/cran-all/cranData/AgroR/R/ibarplot_double.R
#' Analysis: Method to evaluate similarity of experiments based on QMres
#'
#' @description This function presents a method to evaluate the similarity of experiments based on a matrix of QMres ratios of all against all. This is used as a measure of similarity and applied in clustering.
#' @param qmres Vector containing mean squares of residuals, or a list of outputs from the DIC or DBC function
#' @param information Option to choose the return type: `matrix`, `bar` or `cluster`
#' @param method.cluster Grouping method
#' @return Returns a residual mean square ratio matrix, a bar graph with ratios sorted in ascending order, or a cluster analysis.
#' @keywords Joint analysis
#' @author Gabriel Danilo Shimizu, \email{[email protected]}
#' @export
#' @examples
#' qmres=c(0.344429, 0.300542, 0.124833, 0.04531, 0.039571, 0.011812, 0.00519)
#' jointcluster(qmres,information = "cluster")
#' jointcluster(qmres,information = "matrix")
#' jointcluster(qmres,information = "bar")
#'
#' data(mirtilo)
#' m=lapply(unique(mirtilo$exp),function(x){
#'   m=with(mirtilo[mirtilo$exp==x,],DBC(trat,bloco,resp))})
#' jointcluster(m)

jointcluster=function(qmres,
                      information="matrix",
                      method.cluster="ward.D"){
  # When a list of DIC/DBC outputs is supplied, extract the residual mean squares
  if(is.list(qmres)==TRUE){
    if(nrow(qmres[[1]][[1]]$plot$anava)==2){
      qmres1=numeric(0)
      for(i in 1:length(qmres)){
        qmres1[i]=qmres[[i]][[1]]$plot$anava$QM[2]}}
    if(nrow(qmres[[1]][[1]]$plot$anava)==3){
      qmres1=numeric(0)
      for(i in 1:length(qmres)){
        qmres1[i]=qmres[[i]][[1]]$plot$anava$QM[3]}}
    qmres=qmres1}
  resp=qmres
  requireNamespace("ggplot2")

  # Heatmap of all-against-all QMres ratios
  matriztodos=function(resp){
    expe=paste("Exp",1:length(resp))
    dados=expand.grid(expe,expe)
    Var1=dados$Var1
    Var2=dados$Var2
    dados$resp1=rep(resp,length(resp))
    dados$resp2=rep(resp,e=length(resp))
    dados$resp=dados$resp1/dados$resp2
    graph=ggplot(dados,aes(x=Var1,y=Var2,fill=resp))+
      geom_tile(color = "gray50", linewidth = 1) +
      scale_x_discrete(position = "top") +
      scale_fill_distiller(palette = "RdBu", direction = -1) +
      ylab("Numerator") +
      xlab("Denominator") +
      geom_label(aes(label = format(resp, digits = 2)), fill = "white") +
      labs(fill = "ratio") +
      theme(axis.text = element_text(size = 12, color = "black"),
            legend.text = element_text(size = 12),
            axis.ticks = element_blank(),
            panel.background = element_blank(),
            panel.grid.major = element_blank(),
            panel.grid.minor = element_blank())
    graph}

  # Bar chart of the larger ratio of each pair, sorted in ascending order
  matrizmaiores=function(resp){
    expe=paste("Exp",1:length(resp))
    dados=expand.grid(expe,expe)
    dados$resp1=rep(resp,length(resp))
    dados$resp2=rep(resp,e=length(resp))
    dados$respAB=dados$resp1/dados$resp2
    dados$respBA=dados$resp2/dados$resp1
    # keep only one combination of each pair (upper triangle indices)
    n=c()
    for(i in 2:length(resp)){
      n[[i-1]]=rep(i:length(resp))+length(resp)*(i-2)}
    dados=dados[unlist(n),]
    dados$maior=pmax(dados$respAB,dados$respBA)
    dados$combinacao=paste(dados$Var1,dados$Var2)
    combinacao=dados$combinacao
    maior=dados$maior
    datam=dados[order(dados$maior),]
    datam$combinacao=factor(datam$combinacao,datam$combinacao)
    graph=ggplot(datam, aes(y=combinacao,x=maior))+
      geom_col(fill="lightblue",color="black")+
      geom_vline(xintercept = 7,lty=1)+
      labs(y="Combination",x="Ratio MSr/MSr")+
      theme(axis.text = element_text(size = 12, color = "black"),
            legend.text = element_text(size = 12),
            axis.ticks = element_blank(),
            panel.background = element_blank(),
            panel.grid.major = element_blank(),
            panel.grid.minor = element_blank())
    graph}

  # Hierarchical clustering of experiments using the ratio matrix as distance
  similar=function(resp,k=2,method){
    expe=paste("Exp",1:length(resp))
    dados=expand.grid(expe,expe)
    dados$resp1=rep(resp,length(resp))
    dados$resp2=rep(resp,e=length(resp))
    dados$respAB=dados$resp1/dados$resp2
    dados$respBA=dados$resp2/dados$resp1
    dados$maior=pmax(dados$respAB,dados$respBA)
    matriz=matrix(dados$maior,ncol=length(resp))
    rownames(matriz)=paste("Exp",1:length(resp))
    colnames(matriz)=paste("Exp",1:length(resp))
    matriz=hclust(d = as.dist(matriz),method = method)
    plot(matriz,ylab="Ratio MSr/MSr",main="Cluster experiment",xlab="")}

  if(information=="matrix"){print(matriztodos(qmres))}
  if(information=="bar"){print(matrizmaiores(qmres))}
  if(information=="cluster"){similar(qmres,method = method.cluster)}
}
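# A minimal standalone sketch (not part of the package) of the dissimilarity
# measure jointcluster() builds: for each pair of experiments it takes the
# larger of QMres_i/QMres_j and QMres_j/QMres_i, then clusters on that matrix.
# The QMres values below are hypothetical.

```r
qmres <- c(0.30, 0.12, 0.045)              # hypothetical residual mean squares
ratio <- outer(qmres, qmres, "/")          # all-against-all ratios
ratio <- pmax(ratio, t(ratio))             # keep the larger of A/B and B/A
dimnames(ratio) <- list(paste("Exp", 1:3), paste("Exp", 1:3))
# Pairs with a ratio below 7 (the reference line drawn by the bar chart) are
# usually considered homogeneous enough for a joint analysis.
cl <- hclust(as.dist(ratio), method = "ward.D")
```

Using `pmax(ratio, t(ratio))` makes the matrix symmetric with a unit diagonal, which is what `as.dist()` expects.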
# File: /scratch/gouwar.j/cran-all/cranData/AgroR/R/jointcluster.R
#' Dataset: Orange plants under different rootstocks
#'
#' An experiment was conducted with the objective of studying the behavior
#' of nine rootstocks for the Valencia orange tree. The data set refers to
#' the 1973 evaluation (12 years old). The rootstocks are: T1: Tangerine Sunki;
#' T2: National rough lemon; T3: Florida rough lemon; T4: Cleopatra tangerine;
#' T5: Citranger-troyer; T6: Trifoliata; T7: Clove Tangerine; T8: Country orange;
#' T9: Clove Lemon. The number of fruits per plant was evaluated.
#'
#' @docType data
#'
#' @usage data(laranja)
#'
#' @format data.frame containing data set
#' \describe{
#'   \item{\code{f1}}{Categorical vector with treatments}
#'   \item{\code{bloco}}{Categorical vector with block}
#'   \item{\code{resp}}{Numeric vector with number of fruits per plant}
#'   }
#'
#' @keywords datasets
#'
#' @seealso \link{cloro}, \link{enxofre}, \link{mirtilo}, \link{pomegranate}, \link{porco}, \link{sensorial}, \link{simulate1}, \link{simulate2}, \link{simulate3}, \link{tomate}, \link{weather}, \link{phao}, \link{passiflora}, \link{aristolochia}
#'
#' @references Planejamento e Analise Estatistica de Experimentos Agronomicos (2013) - Decio Barbin - pg. 72
#'
#' @examples
#' data(laranja)
"laranja"
# File: /scratch/gouwar.j/cran-all/cranData/AgroR/R/laranja.R
#' Analysis: Logistic regression
#'
#' @description Logistic regression is a very popular analysis in agrarian sciences, such as in fruit growth curves, seed germination, etc. The logistic function performs the analysis using the 3- or 4-parameter logistic model, imported from the LL.3 or LL.4 function of the drc package (Ritz et al., 2016).
#' @param trat Numerical or complex vector with treatments
#' @param resp Numerical vector containing the response of the experiment.
#' @param npar Number of model parameters
#' @param error Error bar (It can be SE - \emph{default}, SD or FALSE)
#' @param ylab Variable response name (Accepts the \emph{expression}() function)
#' @param xlab Treatments name (Accepts the \emph{expression}() function)
#' @param theme ggplot2 theme (\emph{default} is theme_bw())
#' @param legend.position Legend position (\emph{default} is c(0.3,0.8))
#' @param r2 Coefficient of determination of the mean or all values (\emph{default} is all)
#' @param scale Sets x scale (\emph{default} is none, can be "log")
#' @param width.bar Bar width
#' @param textsize Font size
#' @param font.family Font family (\emph{default} is sans)
#' @return The function allows the automatic graph and equation construction of the logistic model and provides important statistics, such as the Akaike (AIC) and Bayesian (BIC) information criteria, coefficient of determination (r2), and root mean square error (RMSE).
#' @details The three-parameter log-logistic function with lower limit 0 is
#' \deqn{f(x) = 0 + \frac{d}{1+\exp(b(\log(x)-\log(e)))}}
#' The four-parameter log-logistic function is given by the expression
#' \deqn{f(x) = c + \frac{d-c}{1+\exp(b(\log(x)-\log(e)))}}
#' The function is symmetric about the inflection point (e).
#' @author Model imported from the drc package (Ritz et al., 2016)
#' @author Gabriel Danilo Shimizu
#' @author Leandro Simoes Azeredo Goncalves
#' @references Seber, G. A. F. and Wild, C. J. (1989) Nonlinear Regression, New York: Wiley and Sons (p. 330).
#' @references Ritz, C.; Strebig, J.C.; Ritz, M.C. Package 'drc'. Creative Commons: Mountain View, CA, USA, 2016.
#' @importFrom drc LL.3
#' @importFrom drc LL.4
#' @importFrom drc drm
#' @import gtools
#' @export
#'
#' @examples
#' data("emerg")
#' with(emerg, logistic(time, resp,xlab="Time (days)",ylab="Emergence (%)"))
#' with(emerg, logistic(time, resp,npar="LL.4",xlab="Time (days)",ylab="Emergence (%)"))

logistic=function(trat,
                  resp,
                  npar="LL.3",
                  error="SE",
                  ylab="Dependent",
                  xlab=expression("Independent"),
                  theme=theme_classic(),
                  legend.position="top",
                  r2="all",
                  width.bar=NA,
                  scale="none",
                  textsize=12,
                  font.family="sans"){
  requireNamespace("drc")
  requireNamespace("crayon")
  requireNamespace("ggplot2")
  requireNamespace("gtools")
  ymean=tapply(resp,trat,mean)
  if(is.na(width.bar)==TRUE){width.bar=0.01*mean(trat)}
  if(error=="SE"){ysd=tapply(resp,trat,sd)/sqrt(tapply(resp,trat,length))}
  if(error=="SD"){ysd=tapply(resp,trat,sd)}
  if(error=="FALSE"){ysd=0}
  desvio=ysd
  xmean=tapply(trat,trat,mean)
  if(npar=="LL.3"){
    mod=drm(resp~trat,fct=LL.3())
    coef=summary(mod)
    b=coef$coefficients[,1][1]
    d=coef$coefficients[,1][2]
    e=coef$coefficients[,1][3]
    if(r2=="all"){r2=cor(resp, fitted(mod))^2}
    if(r2=="mean"){r2=cor(ymean, predict(mod,newdata=data.frame(trat=unique(trat))))^2}
    r2=floor(r2*100)/100
    equation=sprintf("~~~y==frac(%0.3e, 1+e^(%0.3e*(log(x)-log(%0.3e)))) ~~~~~ italic(R^2) == %0.2f",
                     d,b,e,r2)
    xp=seq(min(trat),max(trat),length.out = 1000)
    preditos=data.frame(x=xp,
                        y=predict(mod,newdata = data.frame(trat=xp)))}
  if(npar=="LL.4"){
    mod=drm(resp~trat,fct=LL.4())
    coef=summary(mod)
    b=coef$coefficients[,1][1]
    c=coef$coefficients[,1][2]
    d=coef$coefficients[,1][3]
    e=coef$coefficients[,1][4]
    if(r2=="all"){r2=cor(resp, fitted(mod))^2}
    if(r2=="mean"){r2=cor(ymean, predict(mod,newdata=data.frame(trat=unique(trat))))^2}
    r2=floor(r2*100)/100
    # the numerator of the LL.4 expression is d - c, so the displayed sign of c
    # must be inverted relative to its own sign
    equation=sprintf("~~~y == %0.3e + frac(%0.3e %s %0.3e, 1+e^(%0.3e*(log(x)-log(%0.3e)))) ~~~~~ italic(R^2) == %0.2f",
                     c, d, ifelse(c >= 0, "-", "+"), abs(c), b, e, r2)
    xp=seq(min(trat),max(trat),length.out = 1000)
    preditos=data.frame(x=xp,
                        y=predict(mod,newdata = data.frame(trat=xp)))}
  predesp=predict(mod)
  predobs=resp
  rmse=sqrt(mean((predesp-predobs)^2))
  x=preditos$x
  y=preditos$y
  s=equation
  data=data.frame(xmean,ymean)
  data1=data.frame(trat=xmean,resp=ymean)
  graph=ggplot(data,aes(x=xmean,y=ymean))
  if(error!="FALSE"){graph=graph+
    geom_errorbar(aes(ymin=ymean-ysd,ymax=ymean+ysd),
                  width=width.bar,size=0.8)}
  graph=graph+
    geom_point(aes(color="black"),size=4.5,shape=21,fill="gray")+
    theme+
    geom_line(data=preditos,aes(x=x, y=y,color="black"),size=0.8)+
    scale_color_manual(name="",values=1,label=parse(text = equation))+
    theme(axis.text = element_text(size=textsize,color="black",family = font.family),
          axis.title = element_text(family = font.family),
          legend.position = legend.position,
          legend.text = element_text(size=textsize,family = font.family),
          legend.direction = "vertical",
          legend.text.align = 0,
          legend.justification = 0)+
    ylab(ylab)+xlab(xlab)
  if(scale=="log"){graph=graph+scale_x_log10()}
  aic=AIC(mod)
  bic=BIC(mod)
  graphs=data.frame("Parameter"=c("AIC", "BIC", "r-squared", "RMSE"),
                    "values"=c(aic, bic, r2, rmse))
  models=data.frame(coef$coefficients)
  models$Sig=ifelse(models$p.value>0.05,"ns",ifelse(models$p.value<0.01,"**","*"))
  colnames(models)=c("Estimate","Std Error","t value","P-value","")
  graficos=list("Coefficients"=models,
                "values"=graphs,
                graph=graph)
  print(graficos)
}
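# A minimal standalone check (not part of the package) of the three-parameter
# log-logistic given in @details: f(x) = d / (1 + exp(b*(log(x) - log(e)))).
# At x = e the exponent vanishes, so f(e) = d/2 regardless of b, i.e. the
# inflection point sits at half the upper limit d. Parameter values are
# hypothetical.

```r
ll3 <- function(x, b, d, e) d / (1 + exp(b * (log(x) - log(e))))
ll3(5, b = -2, d = 100, e = 5)  # 50, half of d
# with b < 0 the curve increases with x, as in a typical emergence curve
ll3(10, b = -2, d = 100, e = 5) > ll3(1, b = -2, d = 100, e = 5)  # TRUE
```

This mirrors the parameterization used by drc's LL.3() with lower limit 0.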
# File: /scratch/gouwar.j/cran-all/cranData/AgroR/R/logistic_function.R
#' Dataset: Cutting blueberry data
#'
#' An experiment was carried out in order to evaluate the rooting
#' (resp) of blueberry cuttings as a function of the cutting size
#' (Treatment column). This experiment was repeated three times
#' (Location column) and a randomized block design with four
#' replications was adopted.
#'
#' @docType data
#'
#' @usage data(mirtilo)
#'
#' @format data.frame containing data set
#' \describe{
#'   \item{\code{trat}}{Categorical vector with treatments}
#'   \item{\code{exp}}{Categorical vector with experiment}
#'   \item{\code{bloco}}{Categorical vector with block}
#'   \item{\code{resp}}{Numeric vector}
#'   }
#' @keywords datasets
#' @seealso \link{cloro}, \link{enxofre}, \link{laranja}, \link{pomegranate}, \link{porco}, \link{sensorial}, \link{simulate1}, \link{simulate2}, \link{simulate3}, \link{tomate}, \link{weather}
#' @examples
#' data(mirtilo)
#' attach(mirtilo)
"mirtilo"
# File: /scratch/gouwar.j/cran-all/cranData/AgroR/R/mirtilo.R