https://bayesball.github.io/EDA/plotting-ii.html
14 Plotting II
==============

As you know, the United States takes a census every 10 years. The dataset `us.pop` in the `LearnEDAfunctions` package displays the population of the U.S. at each census from 1790 to 2000.

```
library(LearnEDAfunctions)
library(tidyverse)
head(us.pop)
```

```
##   YEAR  POP
## 1 1790  3.9
## 2 1800  5.3
## 3 1810  7.2
## 4 1820  9.6
## 5 1830 12.9
## 6 1840 17.1
```

First we plot population against year. What do we see in this plot?

```
ggplot(us.pop, aes(YEAR, POP)) +
  geom_point() + ylab("Population")
```

Obviously the population of the U.S. has been increasing over time, and that's the main message of this graph. But we already knew about the increasing population. We want to learn more. Specifically ...

* Can we describe how the population has increased over time? In other words, what is the pattern of growth of the U.S. population?
* Once we have a handle on the rate of population growth, are there years where the population departed from this general pattern of growth?

14.1 Fitting a line to the population in the later years
---------------------------------------------------------

The simplest model that we can fit to these data is a line. Can we fit a line effectively to this graph? No – one line won't fit these data well. There is strong curvature in the plot for the early years of U.S. history. However, the graph for the later years looks pretty linear, and it makes sense to fit a line to the right half of the plot. On the figure below, I've drawn a line by eye through the last nine points.

```
ggplot(us.pop, aes(YEAR, POP)) +
  geom_point() + ylab("Population") +
  geom_abline(slope = 2.3404, intercept = -4403.7)
```

The equation of this line is
\[
FIT = 2.3404 \, YEAR - 4403.7
\]
The slope is about 2.3, which means that, for the later years (1930 – 2000), the population of the United States has been increasing by about 2.3 million each year.

After we have described the fit, we look at the residuals. For each year, we compute
\[
RESIDUAL = POPULATION - FIT
\]
and we graph the residuals against the year.

```
us.pop %>%
  mutate(Fit = -4403.7 + 2.3404 * YEAR,
         Residual = POP - Fit) -> us.pop
ggplot(us.pop, aes(YEAR, Residual)) +
  geom_point() +
  geom_hline(yintercept = 0)
```

We notice the strong pattern in the residuals for the early years. This is expected, since the linear fit is only suitable for the population in the recent years. Actually, we don't care about the large residuals for the early years – let's look at the residual graph only for the later years.

```
ggplot(us.pop, aes(YEAR, Residual)) +
  geom_point() +
  geom_hline(yintercept = 0) +
  xlim(1930, 2000) + ylim(-10, 10)
```

The residuals in this graph look relatively constant, with a few notable exceptions:

* There is a large residual in 1930, which means that the population was higher than what is predicted by the linear model.
* The residual in 1950 is smaller than the residuals in the surrounding years.
* The residual in 2000 is somewhat large.

Can we provide any explanation for these small or large residuals? The major event in the 1940's was World War II. Many Americans participated in the war, and I think this would account for some population decline – maybe this explains the small residual in 1950. The large residuals in 1930 and 2000 mean that the rate of population growth in these years was a bit higher than 2.3 million per year. To me the large residual in 2000 is the most interesting – what is accounting for the extra population growth in the last ten years?
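As a quick check on the eyeballed line (a minimal sketch, not part of the original resistant-fitting workflow; it assumes the `us.pop` data frame loaded above), we can compare it with an ordinary least-squares fit restricted to the 1930–2000 censuses:

```
# Hedged check: least-squares fit to the later censuses only.
# The slope should land near the eyeballed 2.3 million people per year.
later <- subset(us.pop, YEAR >= 1930)
coef(lm(POP ~ YEAR, data = later))
```

If the least-squares slope were far from 2.3, that would be a signal to revisit the line drawn by eye.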
14.2 Fitting a line to the log population in the early years
-------------------------------------------------------------

Remember the curvature in the graph of population versus year that we saw in the early years? The interpretation of this curvature is that the U.S. population growth in the early years was not linear but exponential. This means that the U.S. population was increasing at a roughly constant percentage rate during these formative years.

How can we summarize this exponential growth? First, we reexpress population by using a log – the figure below plots the log population against year.

```
us.pop %>%
  mutate(log.POP = log10(POP)) -> us.pop
ggplot(us.pop, aes(YEAR, log.POP)) +
  geom_point()
```

Looking at this plot, it looks pretty much like a straight-line relationship for the early years. So we fit a line by eye:

```
ggplot(us.pop, aes(YEAR, log.POP)) +
  geom_point() +
  geom_abline(slope = 0.0129, intercept = -22.455)
```

This line has slope 0.0129 and intercept \(-22.455\), so our fit for log population for the early years is
\[
FIT ({\rm log \, population}) = 0.0129 \, YEAR - 22.455.
\]
If we raise 10 to the power of each side, then we get the relationship
\[
FIT ({\rm population}) = 10^{0.0129 \, YEAR - 22.455}.
\]
Here \(10^{0.0129} = 1.0301\), so the population of the U.S. was increasing at a rate of about 3 percent per year in the early years.

Looking further, we can compute residuals from our line fit to the (year, log population) data. Here the residual would be
\[
RESIDUAL = {\rm log \, population} - FIT ({\rm log \, population}).
\]
Here is the corresponding residual plot.

```
us.pop %>%
  mutate(LogFit = -22.4553 + 0.0129 * YEAR,
         Residual = log.POP - LogFit) -> us.pop
ggplot(us.pop, aes(YEAR, Residual)) +
  geom_point() +
  geom_hline(yintercept = 0)
```

We focus our attention on the early years where the fit was linear.

```
ggplot(us.pop, aes(YEAR, Residual)) +
  geom_point() +
  geom_hline(yintercept = 0) +
  xlim(1790, 1900) + ylim(-0.2, 0.2)
```

I don't see any pattern in the residuals for the years 1790-1860, indicating that the line fit to the log population was pretty good in this time interval.
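To connect the log-scale slope to growth rates, and to double-check the eyeballed fit, here is a minimal sketch (not from the original text; it assumes the `log.POP` column created above):

```
# A slope of 0.0129 on the log10 scale corresponds to a multiplicative
# growth factor of 10^0.0129 per year and 10^(10 * 0.0129) per decade.
slope <- 0.0129
10 ^ slope           # about 1.030, i.e. roughly 3 percent growth per year
10 ^ (10 * slope)    # about 1.35, i.e. roughly 35 percent growth per census decade

# Least-squares check of the eyeballed line on the early censuses;
# the fitted slope should be close to the eyeballed 0.0129.
early <- subset(us.pop, YEAR <= 1860)
coef(lm(log.POP ~ YEAR, data = early))
```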
https://bayesball.github.io/EDA/straightening.html
15 Straightening
================

We have talked about methods for working with \((x, y)\) data. First we graph the data to explore the relationship between \(x\) and \(y\), next we fit a simple line model to describe the relationship, and then we look at the residuals to look for finer structure in the data. In our U.S. population example, we saw that there was a non-linear relationship in the plot, and we transformed the \(y\) variable (population) to first straighten the plot before we fit a line. In this lecture, we further discuss the situation where a non-linear pattern exists in the scatterplot, and discuss how we can reexpress the \(x\) and/or the \(y\) variables to make the pattern straight.

15.1 Meet the data
-------------------

We work with a "golden oldie" dataset – one that has been used in the statistics literature to illustrate transforming data. These data are a nice illustration of a nonlinear relationship, and the transformation of the \(x\) and \(y\) variables in this example suggests that the data may not have been measured in the most convenient scale.

A number of measurements were made on 38 1978-79 model automobiles. (The data were supplied by Consumer Reports and were used in the article "Building Regression Models Interactively" by H. V. Henderson and P. F. Velleman (1981), Biometrics, 37, 391-411.) For each car, we measure (1) the nationality of the manufacturer, (2) the name, (3) the mileage, measured in miles per gallon (MPG), (4) the weight, (5) the drive ratio, (6) the horsepower, (7) the displacement of the car (in cubic inches), and (8) the number of cylinders. The data follow:

```
library(LearnEDAfunctions)
library(tidyverse)
head(car.measurements)
```

```
##   Country                       Car  MPG Weight Drive.Ratio Horsepower
## 1    U.S.        Buick_Estate_Wagon 16.9  4.360        2.73        155
## 2    U.S. Ford_Country_Squire_Wagon 15.5  4.054        2.26        142
## 3    U.S.        Chevy_Malibu_Wagon 19.2  3.605        2.56        125
## 4    U.S.    Chrysler_LeBaron_Wagon 18.5  3.940        2.45        150
## 5    U.S.                  Chevette 30.0  2.155        3.70         68
## 6   Japan             Toyota_Corona 27.5  2.560        3.05         95
##   Displacement Cylinders
## 1          350         8
## 2          351         8
## 3          267         8
## 4          360         8
## 5           98         4
## 6          134         4
```

15.2 A nonlinear relationship
------------------------------

Here we explore the relationship between a car's mileage and its displacement. Most of us are aware of a relationship – smaller cars generally get better gas mileage. Our goal here is to describe the relationship between size and mileage and to look for cars that deviate from this general pattern. We plot mileage against displacement below.

```
ggplot(car.measurements, aes(Displacement, MPG)) +
  geom_point()
```

As expected, we see a pretty strong negative relationship. But it is also evident that this is not a straight-line relationship. Rather it looks curved – a curve drawn through the points makes this clear.

```
ggplot(car.measurements, aes(Displacement, MPG)) +
  geom_point() +
  geom_smooth(method = "loess", se = FALSE)
```

Recall what we did in our U.S. population study when we detected a curved relationship between population and year. We reexpressed the population using a log transformation and fit a line to the (year, log pop) data. We follow a similar strategy here. The difference is that we may reexpress the \(x\) and/or the \(y\) variables, and we use the family of power transformations (using different choices for the power \(p\)) in our reexpression.
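To make the family of power transformations concrete before we use it, here is a minimal sketch (not part of the original text) of the matched form \((x^p - 1)/p\) that appears later in this chapter; as \(p\) approaches 0 it approaches the natural log, which is why a tiny power such as \(p = 0.001\) is used below as a stand-in for the log.

```
# Hedged sketch of the matched power transformation (x^p - 1) / p.
matched_power <- function(x, p) (x ^ p - 1) / p
disp <- c(98, 148.5, 302)      # displacement values that reappear as summary points below
matched_power(disp, 0.5)       # matched square root
matched_power(disp, 0.001)     # nearly identical to log(disp)
log(disp)
```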
15.3 Three summary points and measuring curvature
--------------------------------------------------

Recall that our method of fitting a line, the resistant line, is based on working with three summary points. Likewise, we use three summary points to measure the curvature in the plot and to help us choose an appropriate reexpression to straighten it. As before, we find the three summary points by breaking the \(x\)-values (here \(x\) is displacement) into three groups of approximately equal size (low, middle, and high), and finding the median \(x\)-value and the median \(y\)-value in each group. Here the three summary points are
\[
(x_L, y_L) = (98, 31), \quad (x_M, y_M) = (148.5, 24.25), \quad (x_R, y_R) = (302, 18.2).
\]
These points are plotted as red dots in the figure below.

```
summary.points <- data.frame(x = c(98, 148.5, 302),
                             y = c(31, 24.25, 18.2))
ggplot(car.measurements, aes(Displacement, MPG)) +
  geom_point() +
  geom_point(data = summary.points, aes(x, y),
             color = "red", size = 3)
```

We can measure the curvature in this plot by computing the "left slope" (the slope of the segment connecting the left and middle summary points) and the "right slope" (the slope of the segment connecting the middle and right summary points). The ratio of the right slope to the left slope, the so-called half-slope ratio, is a measure of the curvature of the plot. The figure below illustrates the computation of the half-slope ratio. The slope of the segment connecting the two left points is
\[
m_L = (31 - 24.25)/(98 - 148.5) = -.1337
\]
and the slope of the segment connecting the two right points is
\[
m_R = (24.25 - 18.2)/(148.5 - 302) = -.0394.
\]
So the half-slope ratio, denoted \(b_{HS}\), is
\[
b_{HS} = \frac{m_R}{m_L} = \frac{-.0394}{-.1337} = .295.
\]

```
ggplot() +
  geom_point(data = car.measurements, aes(Displacement, MPG)) +
  geom_point(data = summary.points, aes(x, y),
             color = "red", size = 3) +
  geom_smooth(data = filter(summary.points, x < 150),
              aes(x, y), method = "lm", color = "blue") +
  geom_smooth(data = filter(summary.points, x > 145),
              aes(x, y), method = "lm", color = "blue")
```

A half-slope ratio close to the value 1 indicates straightness in the plot. Here \(b_{HS} = .295\), which indicates curvature that is concave up.

15.4 Reexpressing to reduce curvature
--------------------------------------

In many cases, we can reduce the curvature in the plot by applying a suitable power transformation to the \(x\) and/or the \(y\) variable. Recall the ladder of power transformations: we start with the raw data, which corresponds to \(p = 1\), and either go up or down the ladder of transformations to change the shape of the data. How do we decide in which direction on the ladder to transform the \(x\) and \(y\) variables? There is a simple rule based on the type of curvature in the plot. Suppose that we observe a graph with the following curvature. We look at the direction of the bulge in the curvature, which we indicate by an arrow. Note that the curvature bulges towards small values of \(x\) and small values of \(y\). If the direction of the bulge for a variable goes towards small values, we go down the ladder of transformations; if the bulge is towards large values, we go up the ladder. Since this graph bulges towards small values for both variables, we can reexpress \(x\) by the square root of \(x\), or log \(x\), or we could reexpress \(y\) by a root or a log.
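Here is a small numeric illustration of the bulging rule (an artificial example, not from the original text), using three points that follow \(y = x^2\). That curve bulges toward large \(x\) and small \(y\), so the rule suggests moving \(x\) up the ladder or \(y\) down the ladder; either choice brings the half-slope ratio to 1.

```
# Artificial illustration of the bulging rule with points on y = x^2.
half_slope <- function(x, y) {
  m_left  <- (y[2] - y[1]) / (x[2] - x[1])
  m_right <- (y[3] - y[2]) / (x[3] - x[2])
  m_right / m_left
}
xs <- c(1, 5, 9)
ys <- xs ^ 2
half_slope(xs, ys)           # far from 1: the raw plot is curved
half_slope(xs, sqrt(ys))     # equal to 1: a root of y straightens the plot
half_slope(xs ^ 2, ys)       # equal to 1: squaring x also straightens it
```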
Actually, there are four types of curvature that can be improved by a power transformation, shown below. Once you see the direction of the bulge, the direction in which to transform the variables is clear. For example, look at the bottom right graph. The bulge in this curvature is towards small \(x\) and large \(y\). To straighten this plot, we could either

* reexpress \(x\) by a root (moving down the ladder), or
* reexpress \(y\) by a square (moving up the ladder).

15.5 Straightening in the example
----------------------------------

Let's illustrate this procedure for our data. We do our calculations with a short function, where it is easy to change the power transformations for \(x\) and \(y\). For ease of comparison of different transformations, we use the matched transformations
\[
\frac{x^p - 1}{p}, \, \, \frac{y^p - 1}{p}.
\]
Below we write this short function `straightening.work` to illustrate the computations. We start with the original data – the following table shows the left, center, and right summary points, the transformed summary points (tx, ty), the left and right slopes, and the half-slope ratio. This is old news – the ratio is .295, which indicates curvature in the plot.

```
straightening.work <- function(sp, px, py){
  sp$tx <- with(sp, (x ^ px - 1) / px)
  sp$ty <- with(sp, (y ^ py - 1) / py)
  sp$slope[1] <- with(sp, diff(ty[1:2]) / diff(tx[1:2]))
  sp$slope[2] <- with(sp, diff(ty[2:3]) / diff(tx[2:3]))
  sp$half.slope.ratio <- with(sp, slope[2] / slope[1])
  sp$slope[3] <- NA
  sp$half.slope.ratio[2:3] <- NA
  row.names(sp) <- c("Left", "Center", "Right")
  sp}
straightening.work(summary.points, 1, 1)
```

```
##            x     y     tx    ty       slope half.slope.ratio
## Left    98.0 31.00  97.0 30.00 -0.13366337        0.2948727
## Center 148.5 24.25 147.5 23.25 -0.03941368               NA
## Right  302.0 18.20 301.0 17.20          NA               NA
```

Remember the bulge is towards small values of \(x\) (displacement), so we'll try taking a square root of \(x\):

```
straightening.work(summary.points, 0.5, 1)
```

```
##            x     y       tx    ty      slope half.slope.ratio
## Left    98.0 31.00 17.79899 30.00 -1.4760147        0.3947231
## Center 148.5 24.25 22.37212 23.25 -0.5826171               NA
## Right  302.0 18.20 32.75629 17.20         NA               NA
```

This is better – the half-slope ratio has increased from .295 to .395, but it's still not close to the goal value of 1. Let's go further down the ladder of transformations for \(x\) – we'll try a log (\(p = 0\)); we actually use \(p = 0.001\), which approximates a log.

```
straightening.work(summary.points, 0.001, 1)
```

```
##            x     y       tx    ty      slope half.slope.ratio
## Left    98.0 31.00 4.595495 30.00 -16.163243        0.5244925
## Center 148.5 24.25 5.013109 23.25  -8.477499               NA
## Right  302.0 18.20 5.726763 17.20         NA               NA
```

We're doing better – the half-slope ratio is now .524. Let's now try to move towards straightness by reexpressing \(y\). The bulge goes towards small \(y\), so we'll try a log (\(p = 0\)) for \(y\) (mileage):

```
straightening.work(summary.points, 0.001, 0.001)
```

```
##            x     y       tx       ty      slope half.slope.ratio
## Left    98.0 31.00 4.595495 3.439890 -0.5899825         0.683707
## Center 148.5 24.25 5.013109 3.193505 -0.4033752               NA
## Right  302.0 18.20 5.726763 2.905635         NA               NA
```

The ratio is now \(b_{HS} = .684\) (the power for \(x\) is \(p = 0\) and for \(y\) is \(p = 0\)). Let's keep going down the ladder for \(y\) and try \(p = -1\), which corresponds to reciprocals.
```
straightening.work(summary.points, 0.001, -1)
```

```
##            x     y       tx        ty       slope half.slope.ratio
## Left    98.0 31.00 4.595495 0.9677419 -0.02150082        0.8933663
## Center 148.5 24.25 5.013109 0.9587629 -0.01920811               NA
## Right  302.0 18.20 5.726763 0.9450549          NA               NA
```

Now we're close – \(b_{HS} = .893\). Last, we try moving \(x\) down the ladder to \(p = -.33\) (we'll explain this choice soon).

```
straightening.work(summary.points, -0.33, -1)
```

```
##            x     y       tx        ty      slope half.slope.ratio
## Left    98.0 31.00 2.362910 0.9677419 -0.1049744         1.074659
## Center 148.5 24.25 2.448446 0.9587629 -0.1128117               NA
## Right  302.0 18.20 2.569958 0.9450549         NA               NA
```

Bingo – the half-slope ratio is \(b_{HS} = 1.07\), so it appears that we've straightened the plot by transforming \(x\) to the \(-1/3\) power and \(y\) to the \(-1\) power.

15.6 Making sense of the reexpression
--------------------------------------

We started by plotting mileage against displacement of a car, where mileage was measured in miles per gallon and displacement is measured in cubic inches. What are the units of the transformed variables? If we change mileage to mileage\(^{-1}\), then the units are gallons per mile; if we change displacement to displacement\(^{-1/3}\), the new units are 1/inches. So we have found a linear relationship between mileage (in gallons per mile) and displacement (measured as 1/inches). This suggests that gallons per mile may be a more suitable scale for measuring mileage.

15.7 Straightening, fitting, and flattening
--------------------------------------------

We found a suitable transformation by working only with three summary points. We have to check our proposed reexpression to see if we have indeed straightened the plot. In the plot below, we graph \(-1/MPG\) against \(Displacement^{-0.33}\).

```
car.measurements <- mutate(car.measurements,
                           new.x = Displacement ^ (- 0.33),
                           new.y = - 1 / MPG)
ggplot(car.measurements, aes(new.x, new.y)) +
  geom_point()
```

This looks pretty straight. To detect any remaining curvature, we will fit a line and inspect the residuals.

```
fit <- rline(new.y ~ new.x, car.measurements, iter = 5)
c(fit$a, fit$b, fit$xC)
```

```
## [1] -0.04249751  0.34584665  0.19202478
```

The fitted line has intercept \(-0.0425\), slope \(0.3458\), and \(x_C = 0.1920\), so our line fit is
\[
\frac{-1}{MPG} = -0.0425 + 0.3458 \left(\frac{1}{Displacement^{0.33}} - 0.1920\right)
\]
The residuals from this line fit are shown below.

```
car.measurements <- mutate(car.measurements,
                           Residual = fit$residual)
ggplot(car.measurements, aes(new.x, Residual)) +
  geom_point() +
  geom_hline(yintercept = 0)
```

What do we see in this residual plot?

* First, I don't see any obvious curvature in the plot. This means that we were successful in straightening the association pattern in the graph.
* Two unusually small residuals stand out. These correspond to cars that had unusually low mileage given their size. I might avoid buying these cars if there were similar cars available with better mileage.
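As a follow-up, we can ask which cars those two low residuals belong to. This is a minimal sketch, assuming the `Residual` column created in the chunk above:

```
# List the cars with the most negative residuals, i.e. the cars whose
# mileage is unusually low for their displacement.
car.measurements %>%
  arrange(Residual) %>%
  select(Car, Country, MPG, Displacement, Residual) %>%
  head(2)
```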
(Data was supplied by Consumer Reports and was used in the article “Building Regression Models Interactively.” by H. V. Henderson and P. F. Velleman (1981\), Biometrics, 37, 391\-411\.) For each car, we measure (1\) the nationality of the manufacturer, (2\) the name, (3\) the mileage, measured in miles per gallon (MPG), (4\) the weight, (5\) the drive ratio, (6\) the horsepower, (7\) the displacement of the car (in cubic inches), and (8\) the number of cylinders. The data follow: ``` library(LearnEDAfunctions) library(tidyverse) head(car.measurements) ``` ``` ## Country Car MPG Weight Drive.Ratio Horsepower ## 1 U.S. Buick_Estate_Wagon 16.9 4.360 2.73 155 ## 2 U.S. Ford_Country_Squire_Wagon 15.5 4.054 2.26 142 ## 3 U.S. Chevy_Malibu_Wagon 19.2 3.605 2.56 125 ## 4 U.S. Chrysler_LeBaron_Wagon 18.5 3.940 2.45 150 ## 5 U.S. Chevette 30.0 2.155 3.70 68 ## 6 Japan Toyota_Corona 27.5 2.560 3.05 95 ## Displacement Cylinders ## 1 350 8 ## 2 351 8 ## 3 267 8 ## 4 360 8 ## 5 98 4 ## 6 134 4 ``` 15\.2 A nonlinear relationship ------------------------------ Here we explore the relationship between a car’s mileage and it’s displacement. Now most of us are aware of a relationship – smaller cars generally get better gas mileage. Our goal here is to describe the relationship between size and mileage and look for cars that deviate from this general pattern. We plot mileage against displacement below. ``` ggplot(car.measurements, aes(Displacement, MPG)) + geom_point() ``` As expected, we see a pretty strong negative relationship. But what is evident is that this is not a straight\-line relationship. Rather it looks curved – a curve drawn through the points makes this clear. ``` ggplot(car.measurements, aes(Displacement, MPG)) + geom_point() + geom_smooth(method = "loess", se = FALSE) ``` Recall what we did in our U.S. population study when we detected a curved relationship between population and year. We reexpressed the population using a log transformation and fit a line to the (year, log pop) data. We follow a similar strategy here. The difference is that we may reexpress the \\(x\\) and/or the \\(y\\) variables, and we use the family of power transformations (using different choices for the power \\(p\\)) in our reexpression. 15\.3 Three summary points and measuring curvature -------------------------------------------------- Recall that our method of fitting a line, the resistant line, is based on working with three summary points. Likewise, we use three summary points to measure the curvature in the plot and help us choose an appropriate reexpression to straighten. As before, we find three summary points by breaking the \\(x\\)\-values (here \\(x\\) is displacement) into three groups of approximate equal size (low, middle, and high), and finding the median \\(x\\)\-value and the median \\(y\\)\-value in each group. Here the three summary points are \\\[ (x\_L, y\_L) \= (98, 31\), (x\_M, y\_M) \= (148\.5, 24\.25\), (x\_R, y\_R) \= (302, 18\.2\). \\] These points are plotted as red dots in the figure below. ``` summary.points <- data.frame(x = c(98, 148.5, 302), y = c(31, 24.25, 18.2)) ggplot(car.measurements, aes(Displacement, MPG)) + geom_point() + geom_point(data = summary.points, aes(x, y), color="red", size = 3) ``` We can measure the curvature in this plot by first computing the “left slope” (the slope of the line connecting the left and middle summary points), the “right slope” (the slope of the segment connecting the middle and right summary points). 
The ratio of the right slope to the left slope, the so\-called half\-slope ratio, is a measure of the curvature of the plot. The figure below illustrates the computation of the half\-slope ratio. The left slope of the segment connecting the left two points is \\\[ m\_L \= (31\-24\.25\)/(98\-148\.5\) \= \-.144 \\] and the slope of the segment connecting the two right points is \\\[ m\_R \= (24\.25\-18\.2\)/(148\.5\-302\) \= \-.039\. \\] So the half\-slope ratio, denoted \\(b\_{HS}\\), is \\\[ b\_{HS} \= \\frac{m\_R}{m\_L} \= \\frac{\-.039}{\-.144} \= .27\. \\] ``` ggplot() + geom_point(data = car.measurements, aes(Displacement, MPG)) + geom_point(data = summary.points, aes(x, y), color="red", size = 3) + geom_smooth(data = filter(summary.points, x < 150), aes(x, y), method = "lm", color="blue") + geom_smooth(data = filter(summary.points, x > 145), aes(x, y), method = "lm", color="blue") ``` A half\-slope ratio close to the value indicates straightness in the plot. Here the value \\(b\_{HS}\\) \= .27 which indicates curvature that is concave up. 15\.4 Reexpressing to reduce curvature -------------------------------------- In many cases, we can reduce the curvature in the plot by applying a suitable power transformation to either the \\(x\\) or \\(y\\) variables. Recall the ladder of power transformations: We start with the raw data which corresponds to \\(p \= 1\\) and either go up or down the ladder of transformations to change the shape of the data. How do we decide which direction on the ladder to transform the x and y variables? There is a simple rule based on the type of curvature in the plot. Suppose that we observe a graph with the following curvature. We look at the direction of the bulge in the curvature that we indicate by an arrow. Note that the curvature bulges towards small values of \\(x\\) and small values of \\(y\\). If the direction of the bulge for one variable goes towards small values, we go down the ladder of transformations; if the bulge is towards large values, we go up the ladder of transformations. Since this graph bulges towards small values for both variable, we can reexpress \\(x\\) by square root of \\(x\\), or log \\(x\\), or we could reexpress \\(y\\) by a root or a log. Actually, there are four types of curvature that can be improved by a power transformation, shown below. Once you see the direction of the bulge, the direction to transform the variables is clear. For example, look at the bottom right graph. The bulge in this curvature is towards small \\(x\\) and large \\(y\\). To straighten this plot, we could either * reexpress \\(x\\) by a root (moving down the ladder) or * reexpress \\(y\\) by a square (moving up the ladder). 15\.5 Straightening in the example ---------------------------------- Let’s illustrate this procedure for our data. We do our calculations in a spreadsheet where it is easy to change power transformations for \\(x\\) and \\(y\\). For ease of comparison of different transformations, we use the matched transformations \\\[ \\frac{x^p \- 1}{p}, \\, \\, \\frac{y^p \- 1}{p}. \\] Below we write a short function `straightening.work` to illustrate the computations. We start with the original data – the following table shows the left, center, and right summary points, the transformed summary points (tx, ty), the left and right slopes and the half\-slope ratio. This is old news – the ratio is .275 which indicates curvature in the plot. 
``` straightening.work <- function(sp, px, py){ sp$tx <- with(sp, (x ^ px - 1) / px) sp$ty <- with(sp, (y ^ py - 1) / py) sp$slope[1] <- with(sp, diff(ty[1:2]) / diff(tx[1:2])) sp$slope[2] <- with(sp, diff(ty[2:3]) / diff(tx[2:3])) sp$half.slope.ratio <- with(sp, slope[2] / slope[1]) sp$slope[3] <- NA sp$half.slope.ratio[2:3] <- NA row.names(sp) <- c("Left", "Center", "Right") sp} straightening.work(summary.points, 1, 1) ``` ``` ## x y tx ty slope half.slope.ratio ## Left 98.0 31.00 97.0 30.00 -0.13366337 0.2948727 ## Center 148.5 24.25 147.5 23.25 -0.03941368 NA ## Right 302.0 18.20 301.0 17.20 NA NA ``` Remember the bulge is towards small values of \\(x\\) (displacement), so we’ll try taking a square root of \\(x\\): ``` straightening.work(summary.points, 0.5, 1) ``` ``` ## x y tx ty slope half.slope.ratio ## Left 98.0 31.00 17.79899 30.00 -1.4760147 0.3947231 ## Center 148.5 24.25 22.37212 23.25 -0.5826171 NA ## Right 302.0 18.20 32.75629 17.20 NA NA ``` This is better – the half\-slope ratio has increased from .275 to .368, but it’s still not close to the goal value of 1\. Let’s go further down the ladder of transformations for \\(x\\) – we’ll try a log (\\(p\\) \= 0\) – we actually use \\(p\\) \= .001 in the spreadsheet that approximates a log. ``` straightening.work(summary.points, 0.001, 1) ``` ``` ## x y tx ty slope half.slope.ratio ## Left 98.0 31.00 4.595495 30.00 -16.163243 0.5244925 ## Center 148.5 24.25 5.013109 23.25 -8.477499 NA ## Right 302.0 18.20 5.726763 17.20 NA NA ``` We’re doing better – the half\-slope ratio is now .488\. Let’s now try to move towards straightness by reexpressing \\(y\\). The bulge goes towards small \\(y\\), so we’ll try a log (\\(p\\) \= 0\) for \\(y\\) (mileage): ``` straightening.work(summary.points, 0.001, 0.001) ``` ``` ## x y tx ty slope half.slope.ratio ## Left 98.0 31.00 4.595495 3.439890 -0.5899825 0.683707 ## Center 148.5 24.25 5.013109 3.193505 -0.4033752 NA ## Right 302.0 18.20 5.726763 2.905635 NA NA ``` The ratio is now \\(b\_{HS}\\) \= .642 (the power for \\(x\\) is \\(p\\) \= 0, for \\(y\\) is \\(p\\) \=0\). Let’s keep going down the ladder for \\(y\\) and try \\(p\\) \= \-1 that corresponds to reciprocals. ``` straightening.work(summary.points, 0.001, -1) ``` ``` ## x y tx ty slope half.slope.ratio ## Left 98.0 31.00 4.595495 0.9677419 -0.02150082 0.8933663 ## Center 148.5 24.25 5.013109 0.9587629 -0.01920811 NA ## Right 302.0 18.20 5.726763 0.9450549 NA NA ``` Now we’re close – \\(b\_{HS} \= .845\\). Last, we try moving \\(x\\) down the ladder to \\(p \= \-.33\\) (we’ll explain this choice soon). ``` straightening.work(summary.points, -0.33, -1) ``` ``` ## x y tx ty slope half.slope.ratio ## Left 98.0 31.00 2.362910 0.9677419 -0.1049744 1.074659 ## Center 148.5 24.25 2.448446 0.9587629 -0.1128117 NA ## Right 302.0 18.20 2.569958 0.9450549 NA NA ``` Bingo – the half\-slope ratio is \\(b\_{HS}\\) \= 1\.017, so it appears that we’ve straightened the plot by transforming \\(x\\) to the \-1/3 power and \\(y\\) to the \-1 power. 15\.6 Making sense of the reexpression -------------------------------------- We started by plotting mileage against displacement of a car, where mileage was measured in miles per gallon and displacement is measured in cubic inches. What are the units of the transformed variables? If we change mileage to mileage\\(^{\-1}\\), then the units are gallons per mile; if we change displacement to displacement\\(^{\-1/3}\\), the new units are 1/inches. 
So we have found a linear relationship between mileage (in gallons per mile) and displacement (measured as 1/inches). This suggests that gallons per mile may be a more suitable scale for measuring mileage. 15\.7 Straightening, fitting, and flattening -------------------------------------------- We found a suitable transformation by working only on three summary points. We have to check out our proposed reexpression to see if we have indeed straightened the plot In the plot below, we graph \\(\-1/y\\) against \\(\-1/x^{.33}\\). ``` car.measurements <- mutate(car.measurements, new.x = Displacement ^ (- 0.33), new.y = - 1 / MPG) ggplot(car.measurements, aes(new.x, new.y)) + geom_point() ``` This looks pretty straight. To detect any remaining curvature, we will fit a line and inspect the residuals. ``` fit <- rline(new.y ~ new.x, car.measurements, iter=5) c(fit$a, fit$b, fit$xC) ``` ``` ## [1] -0.04249751 0.34584665 0.19202478 ``` We fit a line shown below – it has intercept \\(\-0\.0424\\), slope \\(\-0\.3458\\), and \\(x\_C \= \-0\.1920\\), so our line fit is \\\[ Mileage \= \-0\.0424 \- 0\.3458 \\left(\\frac{\-1}{displacement^{.33}} \+ 0\.1920\\right) \\] The residuals from this line fit are shown below. ``` car.measurements <- mutate(car.measurements, Residual = fit$residual) ggplot(car.measurements, aes(new.x, Residual)) + geom_point() + geom_hline(yintercept = 0) ``` What do we see in this residual plot? * First, I don’t see any obvious curvature in the plot. This means that we were successful in straightening the association pattern in the graph. * Two unusual small residuals stand out. These correspond to cars that had unusually small mileages given their size. I might avoid buying these cars if there were similar cars available with better mileages.
Data Science
bayesball.github.io
https://bayesball.github.io/EDA/straightening.html
15 Straightening ================ We have talked about methods for working with \\((x, y)\\) data. First we graph the data to explore the relationship between \\(x\\) and \\(y\\), next we fit a simple line model to describe the relationship, and then we look at the residuals to look for finer structure in the data. In our U.S. population example, we saw that there may be a non\-linear relationship in the plot, and we transformed the \\(y\\) variable (population) to first straighten the plot before we fit a line. In this lecture, we further discuss the situation where a non\-linear pattern exists in the scatterplot, and discuss how we can reexpress the \\(x\\) and/or the \\(y\\) variables to make the pattern straight. 15\.1 Meet the data ------------------- We work with a “golden oldie” dataset – this is a dataset that has been used in the statistics literature to illustrate transforming data. This data is a nice illustration of a nonlinear relationship. The transformation of the \\(x\\) and \\(y\\) variables in this example suggests that the data may not have been measured in the most convenient scale. A number of measurements were made on 38 1978\-79 model automobiles. (Data was supplied by Consumer Reports and was used in the article “Building Regression Models Interactively.” by H. V. Henderson and P. F. Velleman (1981\), Biometrics, 37, 391\-411\.) For each car, we measure (1\) the nationality of the manufacturer, (2\) the name, (3\) the mileage, measured in miles per gallon (MPG), (4\) the weight, (5\) the drive ratio, (6\) the horsepower, (7\) the displacement of the car (in cubic inches), and (8\) the number of cylinders. The data follow: ``` library(LearnEDAfunctions) library(tidyverse) head(car.measurements) ``` ``` ## Country Car MPG Weight Drive.Ratio Horsepower ## 1 U.S. Buick_Estate_Wagon 16.9 4.360 2.73 155 ## 2 U.S. Ford_Country_Squire_Wagon 15.5 4.054 2.26 142 ## 3 U.S. Chevy_Malibu_Wagon 19.2 3.605 2.56 125 ## 4 U.S. Chrysler_LeBaron_Wagon 18.5 3.940 2.45 150 ## 5 U.S. Chevette 30.0 2.155 3.70 68 ## 6 Japan Toyota_Corona 27.5 2.560 3.05 95 ## Displacement Cylinders ## 1 350 8 ## 2 351 8 ## 3 267 8 ## 4 360 8 ## 5 98 4 ## 6 134 4 ``` 15\.2 A nonlinear relationship ------------------------------ Here we explore the relationship between a car’s mileage and it’s displacement. Now most of us are aware of a relationship – smaller cars generally get better gas mileage. Our goal here is to describe the relationship between size and mileage and look for cars that deviate from this general pattern. We plot mileage against displacement below. ``` ggplot(car.measurements, aes(Displacement, MPG)) + geom_point() ``` As expected, we see a pretty strong negative relationship. But what is evident is that this is not a straight\-line relationship. Rather it looks curved – a curve drawn through the points makes this clear. ``` ggplot(car.measurements, aes(Displacement, MPG)) + geom_point() + geom_smooth(method = "loess", se = FALSE) ``` Recall what we did in our U.S. population study when we detected a curved relationship between population and year. We reexpressed the population using a log transformation and fit a line to the (year, log pop) data. We follow a similar strategy here. The difference is that we may reexpress the \\(x\\) and/or the \\(y\\) variables, and we use the family of power transformations (using different choices for the power \\(p\\)) in our reexpression. 
15\.3 Three summary points and measuring curvature -------------------------------------------------- Recall that our method of fitting a line, the resistant line, is based on working with three summary points. Likewise, we use three summary points to measure the curvature in the plot and help us choose an appropriate reexpression to straighten. As before, we find three summary points by breaking the \\(x\\)\-values (here \\(x\\) is displacement) into three groups of approximate equal size (low, middle, and high), and finding the median \\(x\\)\-value and the median \\(y\\)\-value in each group. Here the three summary points are \\\[ (x\_L, y\_L) \= (98, 31\), (x\_M, y\_M) \= (148\.5, 24\.25\), (x\_R, y\_R) \= (302, 18\.2\). \\] These points are plotted as red dots in the figure below. ``` summary.points <- data.frame(x = c(98, 148.5, 302), y = c(31, 24.25, 18.2)) ggplot(car.measurements, aes(Displacement, MPG)) + geom_point() + geom_point(data = summary.points, aes(x, y), color="red", size = 3) ``` We can measure the curvature in this plot by first computing the “left slope” (the slope of the line connecting the left and middle summary points), the “right slope” (the slope of the segment connecting the middle and right summary points). The ratio of the right slope to the left slope, the so\-called half\-slope ratio, is a measure of the curvature of the plot. The figure below illustrates the computation of the half\-slope ratio. The left slope of the segment connecting the left two points is \\\[ m\_L \= (31\-24\.25\)/(98\-148\.5\) \= \-.144 \\] and the slope of the segment connecting the two right points is \\\[ m\_R \= (24\.25\-18\.2\)/(148\.5\-302\) \= \-.039\. \\] So the half\-slope ratio, denoted \\(b\_{HS}\\), is \\\[ b\_{HS} \= \\frac{m\_R}{m\_L} \= \\frac{\-.039}{\-.144} \= .27\. \\] ``` ggplot() + geom_point(data = car.measurements, aes(Displacement, MPG)) + geom_point(data = summary.points, aes(x, y), color="red", size = 3) + geom_smooth(data = filter(summary.points, x < 150), aes(x, y), method = "lm", color="blue") + geom_smooth(data = filter(summary.points, x > 145), aes(x, y), method = "lm", color="blue") ``` A half\-slope ratio close to the value indicates straightness in the plot. Here the value \\(b\_{HS}\\) \= .27 which indicates curvature that is concave up. 15\.4 Reexpressing to reduce curvature -------------------------------------- In many cases, we can reduce the curvature in the plot by applying a suitable power transformation to either the \\(x\\) or \\(y\\) variables. Recall the ladder of power transformations: We start with the raw data which corresponds to \\(p \= 1\\) and either go up or down the ladder of transformations to change the shape of the data. How do we decide which direction on the ladder to transform the x and y variables? There is a simple rule based on the type of curvature in the plot. Suppose that we observe a graph with the following curvature. We look at the direction of the bulge in the curvature that we indicate by an arrow. Note that the curvature bulges towards small values of \\(x\\) and small values of \\(y\\). If the direction of the bulge for one variable goes towards small values, we go down the ladder of transformations; if the bulge is towards large values, we go up the ladder of transformations. Since this graph bulges towards small values for both variable, we can reexpress \\(x\\) by square root of \\(x\\), or log \\(x\\), or we could reexpress \\(y\\) by a root or a log. 
Actually, there are four types of curvature that can be improved by a power transformation, shown below. Once you see the direction of the bulge, the direction to transform the variables is clear. For example, look at the bottom right graph. The bulge in this curvature is towards small \\(x\\) and large \\(y\\). To straighten this plot, we could either * reexpress \\(x\\) by a root (moving down the ladder) or * reexpress \\(y\\) by a square (moving up the ladder). 15\.5 Straightening in the example ---------------------------------- Let’s illustrate this procedure for our data. We do our calculations in a spreadsheet where it is easy to change power transformations for \\(x\\) and \\(y\\). For ease of comparison of different transformations, we use the matched transformations \\\[ \\frac{x^p \- 1}{p}, \\, \\, \\frac{y^p \- 1}{p}. \\] Below we write a short function `straightening.work` to illustrate the computations. We start with the original data – the following table shows the left, center, and right summary points, the transformed summary points (tx, ty), the left and right slopes and the half\-slope ratio. This is old news – the ratio is .275 which indicates curvature in the plot. ``` straightening.work <- function(sp, px, py){ sp$tx <- with(sp, (x ^ px - 1) / px) sp$ty <- with(sp, (y ^ py - 1) / py) sp$slope[1] <- with(sp, diff(ty[1:2]) / diff(tx[1:2])) sp$slope[2] <- with(sp, diff(ty[2:3]) / diff(tx[2:3])) sp$half.slope.ratio <- with(sp, slope[2] / slope[1]) sp$slope[3] <- NA sp$half.slope.ratio[2:3] <- NA row.names(sp) <- c("Left", "Center", "Right") sp} straightening.work(summary.points, 1, 1) ``` ``` ## x y tx ty slope half.slope.ratio ## Left 98.0 31.00 97.0 30.00 -0.13366337 0.2948727 ## Center 148.5 24.25 147.5 23.25 -0.03941368 NA ## Right 302.0 18.20 301.0 17.20 NA NA ``` Remember the bulge is towards small values of \\(x\\) (displacement), so we’ll try taking a square root of \\(x\\): ``` straightening.work(summary.points, 0.5, 1) ``` ``` ## x y tx ty slope half.slope.ratio ## Left 98.0 31.00 17.79899 30.00 -1.4760147 0.3947231 ## Center 148.5 24.25 22.37212 23.25 -0.5826171 NA ## Right 302.0 18.20 32.75629 17.20 NA NA ``` This is better – the half\-slope ratio has increased from .275 to .368, but it’s still not close to the goal value of 1\. Let’s go further down the ladder of transformations for \\(x\\) – we’ll try a log (\\(p\\) \= 0\) – we actually use \\(p\\) \= .001 in the spreadsheet that approximates a log. ``` straightening.work(summary.points, 0.001, 1) ``` ``` ## x y tx ty slope half.slope.ratio ## Left 98.0 31.00 4.595495 30.00 -16.163243 0.5244925 ## Center 148.5 24.25 5.013109 23.25 -8.477499 NA ## Right 302.0 18.20 5.726763 17.20 NA NA ``` We’re doing better – the half\-slope ratio is now .488\. Let’s now try to move towards straightness by reexpressing \\(y\\). The bulge goes towards small \\(y\\), so we’ll try a log (\\(p\\) \= 0\) for \\(y\\) (mileage): ``` straightening.work(summary.points, 0.001, 0.001) ``` ``` ## x y tx ty slope half.slope.ratio ## Left 98.0 31.00 4.595495 3.439890 -0.5899825 0.683707 ## Center 148.5 24.25 5.013109 3.193505 -0.4033752 NA ## Right 302.0 18.20 5.726763 2.905635 NA NA ``` The ratio is now \\(b\_{HS}\\) \= .642 (the power for \\(x\\) is \\(p\\) \= 0, for \\(y\\) is \\(p\\) \=0\). Let’s keep going down the ladder for \\(y\\) and try \\(p\\) \= \-1 that corresponds to reciprocals. 
Let's keep going down the ladder for \\(y\\) and try \\(p\\) \= \-1, which corresponds to reciprocals.

```
straightening.work(summary.points, 0.001, -1)
```

```
##            x     y       tx        ty       slope half.slope.ratio
## Left    98.0 31.00 4.595495 0.9677419 -0.02150082        0.8933663
## Center 148.5 24.25 5.013109 0.9587629 -0.01920811               NA
## Right  302.0 18.20 5.726763 0.9450549          NA               NA
```

Now we're close – \\(b\_{HS} \= .893\\). Last, we try moving \\(x\\) down the ladder to \\(p \= \-.33\\) (we'll explain this choice soon).

```
straightening.work(summary.points, -0.33, -1)
```

```
##            x     y       tx        ty      slope half.slope.ratio
## Left    98.0 31.00 2.362910 0.9677419 -0.1049744         1.074659
## Center 148.5 24.25 2.448446 0.9587629 -0.1128117               NA
## Right  302.0 18.20 2.569958 0.9450549          NA               NA
```

Bingo – the half\-slope ratio is \\(b\_{HS}\\) \= 1\.075, so it appears that we've straightened the plot by transforming \\(x\\) to the \-1/3 power and \\(y\\) to the \-1 power.

15\.6 Making sense of the reexpression
--------------------------------------

We started by plotting the mileage against the displacement of a car, where mileage is measured in miles per gallon and displacement in cubic inches. What are the units of the transformed variables? If we change mileage to mileage\\(^{\-1}\\), then the units are gallons per mile; if we change displacement to displacement\\(^{\-1/3}\\), the new units are 1/inches. So we have found a linear relationship between mileage (in gallons per mile) and displacement (measured as 1/inches). This suggests that gallons per mile may be a more suitable scale for measuring mileage.

15\.7 Straightening, fitting, and flattening
--------------------------------------------

We found a suitable transformation by working with only three summary points, so we have to check our proposed reexpression to see if we have indeed straightened the plot. In the plot below, we graph \\(\-1/y\\) against \\(x^{\-.33}\\).

```
car.measurements <- mutate(car.measurements,
                           new.x = Displacement ^ (- 0.33),
                           new.y = - 1 / MPG)
ggplot(car.measurements, aes(new.x, new.y)) +
  geom_point()
```

This looks pretty straight. To detect any remaining curvature, we will fit a line and inspect the residuals.

```
fit <- rline(new.y ~ new.x, car.measurements, iter=5)
c(fit$a, fit$b, fit$xC)
```

```
## [1] -0.04249751  0.34584665  0.19202478
```

The fitted resistant line has intercept \\(\-0\.0425\\), slope \\(0\.3458\\), and center \\(x\_C \= 0\.1920\\), so our line fit is \\\[ \\frac{\-1}{Mileage} \= \-0\.0425 \+ 0\.3458 \\left(\\frac{1}{Displacement^{.33}} \- 0\.1920\\right) \\\] The residuals from this line fit are shown below.

```
car.measurements <- mutate(car.measurements,
                           Residual = fit$residual)
ggplot(car.measurements, aes(new.x, Residual)) +
  geom_point() +
  geom_hline(yintercept = 0)
```

What do we see in this residual plot?

* First, I don't see any obvious curvature in the plot. This means that we were successful in straightening the association pattern in the graph.
* Two unusually small residuals stand out. These correspond to cars that had unusually small mileages given their size. I might avoid buying these cars if there were similar cars available with better mileages.
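As a follow\-up to the last point (not part of the original text), here is a small sketch that lists the two cars with the smallest residuals from this fit, using the `Residual` variable created above:

```
# Identify the two cars whose mileage falls furthest below what the
# straightened fit predicts for their displacement.
library(dplyr)
car.measurements %>%
  arrange(Residual) %>%
  select(Car, Displacement, MPG, Residual) %>%
  head(2)
```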
16 Smoothing
============

16\.1 Meet the data
-------------------

The dataset `braves.attendance` in the `LearnEDAfunctions` package contains the home attendance (in units of 100 people), in chronological order, for every game the Atlanta Braves baseball team played in the 1995 season. The objective here is to gain some understanding of the general pattern of ballpark attendance during the year. We are also interested in looking for unusually small or large attendances that deviate from the general pattern.

```
library(LearnEDAfunctions)
library(tidyverse)
head(braves.attendance)
```

```
##   Game Attendance
## 1    1        320
## 2    2        261
## 3    3        332
## 4    4        378
## 5    5        341
## 6    6        277
```

16\.2 We need to smooth
-----------------------

To start, we plot the attendance numbers against the game number.

```
ggplot(braves.attendance, aes(Game, Attendance)) +
  geom_point()
```

Looking at this graph, it's hard to pick up any pattern in the attendance numbers. There is a lot of up\-and\-down variation in the counts – this large variation hides any general pattern in the attendance. There is definitely a need for a method that can help pick up a general pattern. In this lecture, we describe one method of smoothing this sequence of attendance counts. This method has two important characteristics:

1. It is a hand smoother, which means that it is a method one could carry out by hand. The advantages of using a hand smoother, rather than a computer smoother, were probably more relevant 30 years ago when Tukey wrote his EDA text. However, hand smoothing has the advantage of being relatively easy to explain, and we can see the rationale behind the design of the method.
2. It is resistant, in that it is insensitive to unusual or extreme values.

We make the basic assumption that we observe a sequence of data at regularly spaced time points. If this assumption is met, we can ignore the time indicator and focus on the sequence of data values: \\\[ 320, 261, 332, 378, 341, 277, 331, 360, ... \\\]

16\.3 Running medians
---------------------

Our basic method for smoothing a sequence of data takes medians of three, which we abbreviate by "3". Look at the first 10 attendance numbers shown in the table below.

```
slice(braves.attendance, 1:10)
```

```
##    Game Attendance
## 1     1        320
## 2     2        261
## 3     3        332
## 4     4        378
## 5     5        341
## 6     6        277
## 7     7        331
## 8     8        360
## 9     9        288
## 10   10        270
```

The median of the first three numbers is median {320, 261, 332} \= 320 – this is the smoothed value for the 2nd game. The median of the 2nd, 3rd, and 4th numbers is 332 – this is the smoothed value for the 3rd game. The median of the 3rd, 4th, and 5th numbers is 341\. If we continue this procedure, called running medians, for the 10 games, we get the following "3 smooth":

```
Smooth3 <- c(NA, 320, 332, 341, 341, 331, 331, 331, 288, NA)
cbind(braves.attendance[1:10, ], Smooth3)
```

```
##    Game Attendance Smooth3
## 1     1        320      NA
## 2     2        261     320
## 3     3        332     332
## 4     4        378     341
## 5     5        341     341
## 6     6        277     331
## 7     7        331     331
## 8     8        360     331
## 9     9        288     288
## 10   10        270      NA
```
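As a supplement (not in the original text), the "3 smooth" above can also be computed directly rather than typed in by hand; a minimal sketch in base R:

```
# Running medians of three: the smoothed value at position i is the median
# of the observations at positions i - 1, i, and i + 1. The two end values
# are left as NA here; they are handled by end-value smoothing below.
x <- braves.attendance$Attendance[1:10]
n <- length(x)
Smooth3 <- c(NA,
             sapply(2:(n - 1), function(i) median(x[(i - 1):(i + 1)])),
             NA)
Smooth3
```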
**Here is Tukey’s definition of EVS:** “The change from the end smoothed value to the next\-to\-end smoothed value is between 0 to 2 times the change from the next\-to\-end smoothed value to the next\-to\-end\-but\-one smoothed value. Subject to this being true, the end smoothed value is as close to the end value as possible.” Let’s use several fake examples to illustrate this rule: **Example 1:** | – | End | Next\-to\-end | Next\-to\-end\-but\-one | | --- | --- | --- | --- | | Data | 10 | | | | Smooth | ??? | 40 | 50 | Here we see that \- the change from the next\-to\-end smoothed value to the next\-to\-end\-but\-one smoothed value is 50\-40 \= 10 \- so we want the change from the end smoothed value to the next\-to\-end smoothed value to be between 0 and 2 times 10, or between 0 and 20\. This means that the end\-smoothed value could be any value between 20 and 60\. Subject to this being true, we want the end\-smoothed value to be as close to the end value, 10, as possible. So the end smoothed value would be 20 (the value between 20 and 60 which is closest to 20\). **Example 2:** | – | End | Next\-to\-end | Next\-to\-end\-but\-one | | --- | --- | --- | --- | | Data | 50 | | | | Smooth | ??? | 40 | 60 | In this case … \- the difference between the two smoothed values next to the end is 60 \- 40 \= 20 \- the difference between the first and 2nd smoothed values can be anywhere between 0 and 2 x 20 or 0 and 40\. So the end smoothed value could be any value between 0 and 80\. What is the value between 0 and 80 that’s closest to the end value 50? Clearly it is 50, so the end smoothed value is 50\. We are just copying\-on the end data value to be the smoothed value. **Example 3:** | – | End | Next\-to\-end | Next\-to\-end\-but\-one | | --- | --- | --- | --- | | Data | 50 | | | | Smooth | ??? | 80 | 80 | In this last example, \- the difference between the two smoothed values next to the end is 80 \- 80 \= 0 \- the difference between the first and 2nd smoothed values can be anywhere between 0 and 2 x 0, or 0 and 0\. So here we have no choice – the end\-smoothed value has to be 80\. When we do repeated medians (3\), we will use this EVS technique to smooth the end values. Suppose we apply this repeated medians procedure again to this sequence – we call this smooth “33”. If we keep on applying running medians until there is no change in the sequence, we call the procedure “3R” (R stands for repeated application of the “3” procedure). In the “3R” column of the following table, we apply this procedure applied to the ballpark attendance data; a graph of the data follows. ``` braves.attendance <- mutate(braves.attendance, smooth.3R = as.vector(smooth(Attendance, kind="3R"))) ggplot(braves.attendance, aes(Game, Attendance)) + geom_point() + geom_line(aes(Game, smooth.3R), color="red") ``` 16\.5 Splitting --------------- Looking at the graph, the 3R procedure does a pretty good job of smoothing the sequence. We see interesting patterns in the smoothed data. But the 3R smooth looks a bit artificial. One problem is that this smooth will result in odd\-looking peaks and valleys. The idea of the splitting technique is to break up these 2\-high peaks or 2\-low valleys. This is how we try to split: * We identify all of the 2\-high peaks or 2\-low valleys. * For each 2\-high peak or 2\-low valley * We divide the sequence at the peak (or valley) to form two subsequences. * We perform end\-value\-smoothing at the end of each subsequence. * We combine the two subsequences and perform 3R to smooth over what we did. 
16\.5 Splitting
---------------

Looking at the graph, the 3R procedure does a pretty good job of smoothing the sequence, and we see interesting patterns in the smoothed data. But the 3R smooth looks a bit artificial. One problem is that this smooth will result in odd\-looking two\-wide peaks and valleys. The idea of the splitting technique is to break up these 2\-high peaks and 2\-low valleys. This is how we try to split:

* We identify all of the 2\-high peaks and 2\-low valleys.
* For each 2\-high peak or 2\-low valley:
  * We divide the sequence at the peak (or valley) to form two subsequences.
  * We perform end\-value smoothing at the end of each subsequence.
  * We combine the two subsequences and perform 3R to smooth over what we did.

Here is a simple illustration of splitting:

1. We identify a 2\-low valley at the smoothed values 329, 329\.
2. We split between the two 329's to form two subsequences and pretend each 329 is an end value.
3. Now we perform EVS twice – once to smooth the top sequence, and once to smooth the bottom sequence, in each case treating 329 as the end value. The results are shown in the EVS column below.
4. We complete the splitting operation by doing a 3R smooth at the end.

| Smooth | EVS | 3R |
| --- | --- | --- |
| 380 | 380 | |
| 341 | 341 | 341 |
| 329 | 329 | 341 |
| 329 | 360 | 360 |
| 366 | 366 | 366 |
| 369 | 369 | |

If we perform this splitting operation for all two\-high peaks and two\-low valleys, we call it an "S". Finally, we perform this splitting operation twice – the resulting smooth is called "3RSS". Here is a graph of this smooth.

```
braves.attendance <- mutate(braves.attendance,
    smooth.3RSS = as.vector(smooth(Attendance, kind="3RSS")))
ggplot(braves.attendance, aes(Game, Attendance)) +
  geom_point() +
  geom_line(aes(Game, smooth.3R), color="red") +
  geom_line(aes(Game, smooth.3RSS), color="blue")
```

If you compare the 3R (red) and 3RSS (blue) smooths, you'll see that we are somewhat successful in removing these two\-wide peaks and valleys.

16\.6 Hanning
-------------

This 3RSS smooth still looks a bit rough – we believe that there is a smoother pattern of attendance change across games. The median and splitting operations that we have performed will have no effect on monotone sequences like \\\[ 295, 324, 332, 332, 380 \\\] so we need another operation that will help in smoothing these monotone stretches. A well\-known smoothing technique for time\-series data is called hanning – in this operation we take a weighted average of the current smoothed value and the two neighboring smoothed values. By hand, we can accomplish a hanning operation in two steps:

1. For each value, we compute a skip mean – this is the mean of the observations directly before and after the current observation.
2. We take the average of the current smoothed value and the skip mean.

We illustrate hanning for a subset of our ballpark attendance data set. Let's focus on the bold value **320** in the third row of the table below, which represents the current smoothed value. In the skip mean column, indicated by "\>", we find the mean of its two neighbors, 320 and 331\. Then in the hanning column, indicated by "H", we find the mean of the smoothed value 320 and the skip mean 325\.5 – the hanning value (rounded to the nearest integer) is 323\.

| 3R | \\(\>\\) | H |
| --- | --- | --- |
| 320 | 320 | 320 |
| 320 | 320 | 320 |
| **320** | 325\.5 | 323 |
| 331 | 325\.5 | 328 |
| 331 | 331 | 331 |
| 331 | 331 | 331 |

How do we hann an end value such as 320? We simply copy on the end value in the hanning column (H). If this hanning operation is performed on the previous smooth, we get the 3RSSH smooth shown below.

```
braves.attendance <- mutate(braves.attendance,
    smooth.3RSSH = han(as.vector(smooth(Attendance, kind="3RSS"))))
ggplot(braves.attendance, aes(Game, Attendance)) +
  geom_point() +
  geom_line(aes(Game, smooth.3RSSH), color="red")
```

Note that this hanning operation does a good job of smoothing the bumpy monotone stretches that we saw in the 3RSS smooth.
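As a supplement (not in the original text), here is a minimal base\-R sketch of the two\-step hanning computation described above. The function `han_by_hand` is a hypothetical helper written for illustration – it is not the `han()` function from `LearnEDAfunctions` – but applied to the six values from the table it reproduces the H column up to rounding.

```
# Hanning by hand: average each smoothed value with the skip mean of its
# two neighbors, and copy on the two end values unchanged.
han_by_hand <- function(z) {
  n <- length(z)
  skip_mean <- c(NA, (z[1:(n - 2)] + z[3:n]) / 2, NA)   # mean of neighbors
  h <- (z + skip_mean) / 2
  h[c(1, n)] <- z[c(1, n)]                              # copy on end values
  h
}
han_by_hand(c(320, 320, 320, 331, 331, 331))
```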
16\.7 The smooth and the rough
------------------------------

In the table below, we display the raw attendance data along with the smooths computed so far. We represent the data as \\\[ DATA \= SMOOTH \+ ROUGH, \\\] where "SMOOTH" refers to the smoothed fit and "ROUGH" (the residual) is the difference between the data and the smooth. In the code below we compute the rough from the 3RSS smooth, store it in the variable `Rough`, and display some of the values.

```
braves.attendance <- mutate(braves.attendance,
                            Rough = Attendance - smooth.3RSS)
slice(braves.attendance, 1:10)
```

```
##    Game Attendance smooth.3R smooth.3RSS smooth.3RSSH Rough
## 1     1        320       320         320       320.00     0
## 2     2        261       320         320       323.00   -59
## 3     3        332       332         332       331.25     0
## 4     4        378       341         341       336.25    37
## 5     5        341       341         331       333.50    10
## 6     6        277       331         331       331.00   -54
## 7     7        331       331         331       331.00     0
## 8     8        360       331         331       320.25    29
## 9     9        288       288         288       294.25     0
## 10   10        270       270         270       274.50     0
```

16\.8 Reroughing
----------------

In practice, it may seem that the 3RSSH smooth is a bit too heavy – it seems to remove too much structure from the data. So it is desirable to add a little more variation to the smooth to make it more similar to the original data. There is a procedure called reroughing that is designed to add a bit more variation to our smooth. Essentially what we do is smooth the values of our rough (using the 3RSSH technique) and add this "smoothed rough" to the initial smooth to get a final smooth. Remember we write our data as \\\[ DATA \= SMOOTH \+ ROUGH . \\\] If we smooth our rough, we can write the rough as \\\[ ROUGH \= (SMOOTHED \\, ROUGH) \+ (ROUGH \\, ROUGH) . \\\] Substituting, we get \\\[ DATA \= SMOOTH \+ (SMOOTHED \\, ROUGH) \+ (ROUGH \\, ROUGH) \\\] \\\[ \= (FINAL \\, SMOOTH) \+ (ROUGH \\, ROUGH) . \\\] We won't go through the details of this reroughing for our example since it is a bit tedious and best left for a computer. If we wanted to do this by hand, we would first smooth the residuals

```
options(width=60)
braves.attendance$Rough
```

```
##  [1]    0  -59    0   37   10  -54    0   29    0    0  -15
## [12]    0   -8    0   51   31  -23    0  119    0  -20   29
## [23] -117    0  117  -55    0   81    0  -16  -31    0  122
## [34]  -15    0   10   96    0  -18    0  -41   41  103  -14
## [45]    0   10   -6    0    0    0    0   13    0    0  -34
## [56]   43    0   -1    0    0   62  130    0  -28    2    0
## [67]   -8    0    0    0   35    0
```

using the 3RSSH smooth that we described above. Then, when we are done, we add the smooth of this rough back to the original smooth to get our final smooth. This operation of reroughing using a 3RSSH smooth is called \\\[ 3RSSH, TWICE, \\\] where "TWICE" refers to the twicing operation. In R we carry this out with `smooth()`, using `kind = "3RS3R"` and `twiceit = TRUE`; since R's `smooth()` does not include the hanning step, this is a close relative of the 3RSSH, twice smooth rather than an exact match.

```
braves.attendance <- mutate(braves.attendance,
    smooth.3RS3R.twice = as.vector(smooth(Attendance,
                 kind="3RS3R", twiceit=TRUE)))
```
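As an aside (not in the original text), the `twiceit` argument performs this twicing step: it smooths the rough and adds it back to the smooth. A small sketch to check the identity, assuming `braves.attendance` is loaded as above:

```
# Twicing: the twiced smooth should equal the 3RS3R smooth plus the
# 3RS3R smooth of its rough.
y <- braves.attendance$Attendance
s <- as.vector(smooth(y, kind = "3RS3R"))
twiced_by_hand <- s + as.vector(smooth(y - s, kind = "3RS3R"))
all.equal(twiced_by_hand,
          as.vector(smooth(y, kind = "3RS3R", twiceit = TRUE)))
# should return TRUE if twiceit implements S(y) + S(y - S(y))
```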
16\.9 Interpreting the final smooth and rough
---------------------------------------------

Whew! – we're done describing our method of smoothing. Let's now interpret what we have when we smooth our ballpark attendance data. The first figure plots the smoothed attendance data.

```
ggplot(braves.attendance, aes(Game, Attendance)) +
  geom_point() +
  geom_line(aes(Game, smooth.3RS3R.twice), col="red")
```

What do we see in this smooth?

* The Atlanta Braves' attendance had an immediate drop in the early games after the excitement of the beginning of the season wore off.
* After this early drop, the attendance showed a general increase from about game 10 to about game 50\.
* Attendance reached a peak about game 50, then dropped off suddenly, and there was a valley about game 60\. Maybe the Braves clinched a spot in the baseball playoffs about game 50, so the remaining games were not very important and relatively few people attended.
* There was a sudden rise in attendance at the end of the season – at this point, maybe the fans were getting excited about the playoff games that were about to begin.

Next, we plot the rough to see the deviations from the general pattern that we see in the smooth.

```
braves.attendance <- mutate(braves.attendance,
        FinalRough = Attendance - smooth.3RS3R.twice)
ggplot(braves.attendance, aes(Game, FinalRough)) +
  geom_point() +
  geom_hline(yintercept = 0, color = "blue")
```

What do we see in the rough (residuals)?

* There are a number of dates (9, to be exact) where the rough is around \+100 – for these particular games, the attendance was about 10,000 more than the general pattern. Can you think of any reason why ballpark attendance would be unusually high on some days? I can think of a simple explanation: there are typically more fans attending baseball games during the weekend, and I suspect that these extreme high rough values correspond to games played on a Saturday or Sunday.
* There is one date (it looks like game 22\) where the rough was about \-120 – this means that the attendance that day was about 12,000 smaller than the general pattern in the smooth. Are there explanations for poor baseball attendance? I suspect that it may be weather related – maybe it was unusually cold that day.
* I don't see any general pattern in the rough when plotted as a function of time. Most of the rough values are between \-50 and \+50, which translates to attendances within about 5,000 of the general pattern.
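As a final supplement (not in the original text), here is a sketch that lists the games whose attendance deviates most from the final smooth; one could match these against the 1995 schedule to check the weekend explanation.

```
# Games with the largest rough values (in absolute size).
library(dplyr)
braves.attendance %>%
  arrange(desc(abs(FinalRough))) %>%
  select(Game, Attendance, smooth.3RS3R.twice, FinalRough) %>%
  head(10)
```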
Data Science
bayesball.github.io
https://bayesball.github.io/EDA/smoothing.html
16 Smoothing ============ 16\.1 Meet the data ------------------- The dataset `braves.attendance` in the `LearnEDAfunctions` package contains the home attendance (the unit is 100 people) for the Atlanta Braves baseball team (shown in chronological order) for every game they played in the 1995 season. The objective here is to gain some understanding about the general pattern of ballpark attendance during the year. Also we are interested in looking for unusual small or large attendances that deviate from the general pattern. ``` library(LearnEDAfunctions) library(tidyverse) head(braves.attendance) ``` ``` ## Game Attendance ## 1 1 320 ## 2 2 261 ## 3 3 332 ## 4 4 378 ## 5 5 341 ## 6 6 277 ``` 16\.2 We need to smooth ----------------------- To start, we plot the attendance numbers against the game number. ``` ggplot(braves.attendance, aes(Game, Attendance)) + geom_point() ``` Looking at this graph, it’s hard to pick up any pattern in the attendance numbers. There is a lot of up\-and\-down variation in the counts – this large variation hides any general pattern in the attendance that might be visible. There is definitely a need for a method that can help pick up a general pattern. In this lecture, we describe one method of smoothing this sequence of attendance counts. This method has several characteristics: 1. This method is a hand smoother, which means that it is a method that one could do by hand. The advantages of using a hand smoother, rather than a computer smoother, were probably more relevant 30 years ago when Tukey wrote his EDA text. However, the hand smoothing has the advantage of being relatively easy to explain and we can see the rationale behind the design of this method. 2. This method is resistant, in that it will be insensitive to unusual or extreme values. We make the basic assumption that we observe a sequence of data at regularly spaced time points. If this assumption is met, we can ignore the time indicator and focus on the sequence of data values: \\\[ 320, 261, 332, 378, 341, 277, 331, 360, ... \\] 16\.3 Running medians --------------------- Our basic method for smoothing a sequence of data takes medians of three, which we abbreviate by “3”. Look at the first 10 attendance numbers shown in the table below. ``` slice(braves.attendance, 1:10) ``` ``` ## Game Attendance ## 1 1 320 ## 2 2 261 ## 3 3 332 ## 4 4 378 ## 5 5 341 ## 6 6 277 ## 7 7 331 ## 8 8 360 ## 9 9 288 ## 10 10 270 ``` The median of the first three numbers is median {320, 261, 332} \= 320 — this is the smoothed value for the 2nd game. The median of the 2nd, 3rd, 4th numbers is 332 – this is the smoothed value for the 3rd game. The median of the 3rd, 4th, 5th numbers is 341\. If we continue this procedure, called running medians, for the 10 games, we get the following “3 smooth”: ``` Smooth3 <- c(NA, 320, 332, 341, 341, 331, 331, 331, 288, NA) cbind(braves.attendance[1:10, ], Smooth3) ``` ``` ## Game Attendance Smooth3 ## 1 1 320 NA ## 2 2 261 320 ## 3 3 332 332 ## 4 4 378 341 ## 5 5 341 341 ## 6 6 277 331 ## 7 7 331 331 ## 8 8 360 331 ## 9 9 288 288 ## 10 10 270 NA ``` 16\.4 End\-value smoothing -------------------------- How do we handle the end values? We do a special procedure, called end\-value smoothing (EVS), to handle the end values. This isn’t that hard to do, but it is hard to explain – we’ll give Tukey’s definition of it and then illustrate for several examples. 
**Here is Tukey’s definition of EVS:** “The change from the end smoothed value to the next\-to\-end smoothed value is between 0 to 2 times the change from the next\-to\-end smoothed value to the next\-to\-end\-but\-one smoothed value. Subject to this being true, the end smoothed value is as close to the end value as possible.” Let’s use several fake examples to illustrate this rule: **Example 1:** | – | End | Next\-to\-end | Next\-to\-end\-but\-one | | --- | --- | --- | --- | | Data | 10 | | | | Smooth | ??? | 40 | 50 | Here we see that \- the change from the next\-to\-end smoothed value to the next\-to\-end\-but\-one smoothed value is 50\-40 \= 10 \- so we want the change from the end smoothed value to the next\-to\-end smoothed value to be between 0 and 2 times 10, or between 0 and 20\. This means that the end\-smoothed value could be any value between 20 and 60\. Subject to this being true, we want the end\-smoothed value to be as close to the end value, 10, as possible. So the end smoothed value would be 20 (the value between 20 and 60 which is closest to 20\). **Example 2:** | – | End | Next\-to\-end | Next\-to\-end\-but\-one | | --- | --- | --- | --- | | Data | 50 | | | | Smooth | ??? | 40 | 60 | In this case … \- the difference between the two smoothed values next to the end is 60 \- 40 \= 20 \- the difference between the first and 2nd smoothed values can be anywhere between 0 and 2 x 20 or 0 and 40\. So the end smoothed value could be any value between 0 and 80\. What is the value between 0 and 80 that’s closest to the end value 50? Clearly it is 50, so the end smoothed value is 50\. We are just copying\-on the end data value to be the smoothed value. **Example 3:** | – | End | Next\-to\-end | Next\-to\-end\-but\-one | | --- | --- | --- | --- | | Data | 50 | | | | Smooth | ??? | 80 | 80 | In this last example, \- the difference between the two smoothed values next to the end is 80 \- 80 \= 0 \- the difference between the first and 2nd smoothed values can be anywhere between 0 and 2 x 0, or 0 and 0\. So here we have no choice – the end\-smoothed value has to be 80\. When we do repeated medians (3\), we will use this EVS technique to smooth the end values. Suppose we apply this repeated medians procedure again to this sequence – we call this smooth “33”. If we keep on applying running medians until there is no change in the sequence, we call the procedure “3R” (R stands for repeated application of the “3” procedure). In the “3R” column of the following table, we apply this procedure applied to the ballpark attendance data; a graph of the data follows. ``` braves.attendance <- mutate(braves.attendance, smooth.3R = as.vector(smooth(Attendance, kind="3R"))) ggplot(braves.attendance, aes(Game, Attendance)) + geom_point() + geom_line(aes(Game, smooth.3R), color="red") ``` 16\.5 Splitting --------------- Looking at the graph, the 3R procedure does a pretty good job of smoothing the sequence. We see interesting patterns in the smoothed data. But the 3R smooth looks a bit artificial. One problem is that this smooth will result in odd\-looking peaks and valleys. The idea of the splitting technique is to break up these 2\-high peaks or 2\-low valleys. This is how we try to split: * We identify all of the 2\-high peaks or 2\-low valleys. * For each 2\-high peak or 2\-low valley * We divide the sequence at the peak (or valley) to form two subsequences. * We perform end\-value\-smoothing at the end of each subsequence. * We combine the two subsequences and perform 3R to smooth over what we did. 
Here is a simple illustration of splitting 1. We identify a 2\-low valley at the smoothed values 329, 329 2. We split between the two 329’s to form two subsequences and pretend each 329 is an end value. 3. Now perform EVS twice – once to smooth the top sequence, and one to smooth the bottom sequence assuming 329 is the end value. The results are shown in the EVS column below. 4. We complete the splitting operation by doing a 3R smooth at the end. | Smooth | EVS | 3R | | --- | --- | --- | | 380 | 380 | | | 341 | 341 | 341 | | 329 | 329 | 341 | | 329 | 360 | 360 | | 366 | 366 | 366 | | 369 | 369 | | If we perform this splitting operation for all two\-high peaks and two\-low valleys, we call it an “S”. Finally, we repeat this splitting twice – the resulting smooth is called “3RSS”. Here is a graph of this smooth. ``` smooth.3RSS <- smooth(braves.attendance$Attendance, kind="3RSS") braves.attendance <- mutate(braves.attendance, smooth.3RSS = as.vector(smooth(Attendance, kind="3RSS"))) ggplot(braves.attendance, aes(Game, Attendance)) + geom_point() + geom_line(aes(Game, smooth.3R), color="red") + geom_line(aes(Game, smooth.3RSS), color="blue") ``` If you compare the 3R (red) and 3RSS (blue) smooths, you’ll see that we are somewhat successful in removing these two\-wide peaks and valleys. 16\.6 Hanning ------------- This 3RSS smooth still looks a bit rough – we believe that there is a smoother pattern of attendance change across games. The median and splitting operations that we have performed will have no effect on monotone sequences like \\\[ 295, 324, 332, 332, 380 \\] We need to do another operation that will help in smoothing these monotone sequences. A well\-known smoothing technique for time\-series data is called hanning – in this operation you take a weighted average of the current smoothed value and the two neighboring smoothed values. By hand, we can accomplish a hanning operation in two steps: 1. for each value, we compute a skip mean – this is the mean of the observations directly before and after the current observation 2. we take the average of the current smoothed value and the skip mean value We illustrate hanning for a subset of our ballpark attendance data set. Let’s focus on the bold value “320” in the table which represents the current smoothed value. In the skip mean column, indicated by “\>”, we find the mean of the two neighbors 320 and 331\. Then in the hanning column, indicated by “H”, we find the mean of the smoothed value 320 and the skip mean 325\.5 – the hanning value (rounded to the nearest integer) is 323\. | 3R | \\(\>\\) | H | | --- | --- | --- | | 320 | 320 | 320 | | 320 | 320 | 320 | | 320 | 325\.5 | 323 | | 331 | 325\.5 | 328 | | 331 | 331 | 331 | | 331 | 331 | 331 | How do we hann an end\-value such as 320? We simply copy\-on the end value in the hanning column (H). If this hanning operation is performed on the previous smooth, we get a 3RSSH smooth shown below. ``` braves.attendance <- mutate(braves.attendance, smooth.3RSSH = han(as.vector(smooth(Attendance, kind="3RSS")))) ggplot(braves.attendance, aes(Game, Attendance)) + geom_point() + geom_line(aes(Game, smooth.3RSSH), color="red") ``` Note that this hanning operation is good in smoothing the bumpy monotone sequences that we saw in the 3RSS smooth. 16\.7 The smooth and the rough ------------------------------ In the table below, we have displayed the raw attendance data and the 3RSSH smooth. 
16\.7 The smooth and the rough
------------------------------

In the table below, we have displayed the raw attendance data, the smooths we have computed, and the rough. We represent the data as

\\\[ DATA \= SMOOTH \+ ROUGH, \\\]

where “SMOOTH” refers to the smoothed fit and “ROUGH” (the analogue of a residual) is the difference between the data and the smooth. In the code below we compute the rough relative to the 3RSS smooth, store it in the variable `Rough`, and display some of the values.

```
braves.attendance <- mutate(braves.attendance,
     Rough = Attendance - smooth.3RSS)
slice(braves.attendance, 1:10)
```

```
##    Game Attendance smooth.3R smooth.3RSS smooth.3RSSH Rough
## 1     1        320       320         320       320.00     0
## 2     2        261       320         320       323.00   -59
## 3     3        332       332         332       331.25     0
## 4     4        378       341         341       336.25    37
## 5     5        341       341         331       333.50    10
## 6     6        277       331         331       331.00   -54
## 7     7        331       331         331       331.00     0
## 8     8        360       331         331       320.25    29
## 9     9        288       288         288       294.25     0
## 10   10        270       270         270       274.50     0
```

16\.8 Reroughing
----------------

In practice, it may seem that the 3RSSH smooth is a bit too heavy – it seems to remove too much structure from the data. So it is desirable to add a little more variation to the smooth to make it more similar to the original data. There is a procedure called reroughing that is designed to add a bit more variation to our smooth. Essentially what we do is smooth the values of our rough (using the 3RSSH technique) and add this “smoothed rough” to the initial smooth to get a final smooth. Remember we write our data as

\\\[ DATA \= SMOOTH \+ ROUGH . \\\]

If we smooth our rough, we can write the rough as

\\\[ ROUGH \= (SMOOTHED \\, ROUGH) \+ (ROUGH \\, ROUGH) . \\\]

Substituting, we get

\\\[ DATA \= SMOOTH \+ (SMOOTHED \\, ROUGH) \+ (ROUGH \\, ROUGH) \\\]
\\\[ \= (FINAL \\, SMOOTH) \+ (ROUGH \\, ROUGH) . \\\]

We won’t go through the details of this reroughing for our example since it is a bit tedious and best left for a computer. If we wanted to do this by hand, we would first smooth the residuals

```
options(width=60)
braves.attendance$Rough
```

```
##  [1]    0  -59    0   37   10  -54    0   29    0    0  -15
## [12]    0   -8    0   51   31  -23    0  119    0  -20   29
## [23] -117    0  117  -55    0   81    0  -16  -31    0  122
## [34]  -15    0   10   96    0  -18    0  -41   41  103  -14
## [45]    0   10   -6    0    0    0    0   13    0    0  -34
## [56]   43    0   -1    0    0   62  130    0  -28    2    0
## [67]   -8    0    0    0   35    0
```

using the 3RSSH smooth that we described above. Then, when we are done, we add the smooth of this rough back to the original smooth to get our final smooth. This operation of reroughing using a 3RSSH smooth is called

\\\[ 3RSSH, TWICE, \\\]

where “TWICE” refers to the twicing operation. R’s `smooth()` function does not offer the 3RSSH smoother directly, so in the code below we use the related compound smoother 3RS3R together with the `twiceit` argument, which carries out the twicing for us.

```
braves.attendance <- mutate(braves.attendance,
    smooth.3RS3R.twice = as.vector(smooth(Attendance,
                kind="3RS3R", twiceit=TRUE)))
```
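To see what twicing is doing, here is a small sketch that builds the final smooth directly from the two pieces, using the 3RS3R smoother available in base R as a stand\-in for the hand 3RSSH smoother. The last line is simply a check that this agrees with the `twiceit = TRUE` shortcut used above.

```
# Twicing by hand: smooth the data, smooth the rough, add the two smooths.
y <- braves.attendance$Attendance
first.smooth <- as.vector(smooth(y, kind = "3RS3R"))
rough <- y - first.smooth
final.smooth <- first.smooth + as.vector(smooth(rough, kind = "3RS3R"))
# should agree with the twiceit shortcut
all.equal(final.smooth, as.vector(smooth(y, kind = "3RS3R", twiceit = TRUE)))
```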
16\.9 Interpreting the final smooth and rough
---------------------------------------------

Whew! – we’re done describing our method of smoothing. Let’s now interpret what we have when we smooth our ballpark attendance data. The first figure plots the smoothed attendance data.

```
ggplot(braves.attendance, aes(Game, Attendance)) +
  geom_point() +
  geom_line(aes(Game, smooth.3RS3R.twice), col="red")
```

What do we see in this smooth?

* The Atlanta Braves’ attendance had an immediate drop in the early games after the excitement of the beginning of the season wore off.
* After this early drop, the attendance showed a general increase from about game 10 to about game 50\.
* Attendance reached a peak about game 50, then the attendance dropped off suddenly and there was a valley about game 60\. Maybe the Braves clinched a spot in the baseball playoffs about game 50 and the remaining games were not very important and relatively few people attended.
* There was a sudden rise in attendance at the end of the season – at this point, maybe the fans were getting excited about the playoff games that were about to begin.

Next, we plot the rough to see the deviations from the general pattern that we see in the smooth.

```
braves.attendance <- mutate(braves.attendance,
          FinalRough = Attendance - smooth.3RS3R.twice)
ggplot(braves.attendance, aes(Game, FinalRough)) +
  geom_point() +
  geom_hline(yintercept = 0, color = "blue")
```

What do we see in the rough (residuals)?

* There are a number of games (9 to be exact) where the rough is around \+100 – for these particular games, the attendance was about 10,000 more than the general pattern. Can you think of any reason why ballpark attendance would be unusually high for some days? I can think of a simple explanation. There are typically more fans attending baseball games during the weekend. I suspect that these extreme high rough values correspond to games played Saturday or Sunday.
* There is one game (looks like game 22\) where the rough was about \-120 – this means that the attendance this day was about 12,000 smaller than the general pattern in the smooth. Are there explanations for poor baseball attendance? I suspect that it may be weather related – maybe it was unusually cold that day.
* I don’t see any general pattern in the rough when plotted as a function of time. Most of the rough values are between \-50 and \+50, which translates to attendances within about 5,000 of the general pattern.
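One way to follow up on these observations is to list the games with unusually large rough values. The cutoff of 75 (that is, 7,500 fans above the smooth) is an arbitrary choice made only for illustration; the weekend explanation could then be checked against a schedule.

```
# Games whose attendance sits well above the final smooth
braves.attendance %>%
  filter(FinalRough > 75) %>%
  select(Game, Attendance, FinalRough)
```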
17 Median Polish
================

17\.1 Meet the data
-------------------

In the table below, we have displayed the normal daily high temperatures (in degrees Fahrenheit) for four months for five selected cities. This is called a two\-way table. There is some measurement, here Temperature, that is classified with respect to two categorical variables, City and Month. We would like to understand how the temperature depends on the month and the city. For example, we know that it is generally colder in the winter months and warmer during the summer months. Also some cities, like Atlanta, that are located in the south tend to be warmer than northern cities like Minneapolis. We want to use a simple method to describe this relationship.

```
library(LearnEDAfunctions)
temperatures
```

```
##           City January April July October
## 1      Atlanta      50    73   88      73
## 2      Detroit      30    58   83      62
## 3  Kansas_City      35    65   89      68
## 4  Minneapolis      21    57   84      59
## 5 Philadelphia      38    63   86      66
```

17\.2 An additive model
-----------------------

A simple description of these data is an additive model of the form

\\\[ FIT \= {\\rm Overall \\, temperature} \+ {\\rm City \\, effect} \+ {\\rm Month \\, effect}. \\\]

What this additive model says is that different cities tend to have different temperatures, and one city, say Atlanta, will be so many degrees warmer than another city, say Detroit. This difference in temperatures for these two cities will be the same for all months under this model. Also, one month, say July, will tend to be a particular number of degrees warmer than another month, say January – this difference in monthly temperatures will be the same for all cities.

17\.3 Median polish
-------------------

We now describe median polish, a resistant method for fitting an additive model. Because the method is resistant, it will not be greatly affected by extreme values in the table. We will later compare this fit with the better\-known least\-squares method of fitting an additive model.

To begin median polish, we take the median of each row of the table. We place the row medians in a column REFF that stands for Row Effect.

```
temps <- temperatures[, -1]
dimnames(temps)[[1]] <- temperatures[, 1]
REFF <- apply(temps, 1, median)
cbind(temps, REFF)
```

```
##              January April July October REFF
## Atlanta           50    73   88      73 73.0
## Detroit           30    58   83      62 60.0
## Kansas_City       35    65   89      68 66.5
## Minneapolis       21    57   84      59 58.0
## Philadelphia      38    63   86      66 64.5
```

Next, we subtract out the row medians. For each temperature, we subtract the corresponding row median. For example, the temperature in Atlanta in January is 50 degrees – we subtract the Atlanta median 73, getting a difference of \-23\. If we do this operation for all temperatures in all rows, we get the following table:

```
Residual <- sweep(temps, 1, REFF)
RowSweep <- cbind(Residual, REFF)
RowSweep
```

```
##              January April July October REFF
## Atlanta        -23.0   0.0 15.0     0.0 73.0
## Detroit        -30.0  -2.0 23.0     2.0 60.0
## Kansas_City    -31.5  -1.5 22.5     1.5 66.5
## Minneapolis    -37.0  -1.0 26.0     1.0 58.0
## Philadelphia   -26.5  -1.5 21.5     1.5 64.5
```

We call the two steps {find row medians, subtract out the medians} a row sweep of the table.
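Before continuing with the sweeps, here is a tiny numerical illustration of the additive structure described in Section 17\.2\. The effects below are made\-up numbers chosen only for this illustration (not the fitted effects we are about to compute); the point is that the city\-to\-city difference is the same in every month.

```
# A purely additive two-way table built from hypothetical effects
common <- 60
city  <- c(Atlanta = 10, Detroit = -5)
month <- c(January = -30, July = 20)
common + outer(city, month, "+")
#         January July
# Atlanta      40   90
# Detroit      25   75   (Atlanta minus Detroit is 15 in both months)
```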
Next, we take the median of each column of the table (including the row effect column). We put these column medians in a row called CEFF (for column effects).

```
CEFF <- apply(RowSweep, 2, median)
rbind(RowSweep, CEFF)
```

```
##              January April July October REFF
## Atlanta        -23.0   0.0 15.0     0.0 73.0
## Detroit        -30.0  -2.0 23.0     2.0 60.0
## Kansas_City    -31.5  -1.5 22.5     1.5 66.5
## Minneapolis    -37.0  -1.0 26.0     1.0 58.0
## Philadelphia   -26.5  -1.5 21.5     1.5 64.5
## 6              -30.0  -1.5 22.5     1.5 64.5
```

We subtract out the column medians, similar to above. For each entry in the table, we subtract the corresponding column median. The two steps of taking column medians and subtracting them out are called a column sweep of the table.

```
Residual <- sweep(RowSweep, 2, CEFF)
ColSweep <- rbind(Residual, CEFF)
dimnames(ColSweep)[[1]][6] <- "CEFF"
ColSweep
```

```
##              January April July October REFF
## Atlanta          7.0   1.5 -7.5    -1.5  8.5
## Detroit          0.0  -0.5  0.5     0.5 -4.5
## Kansas_City     -1.5   0.0  0.0     0.0  2.0
## Minneapolis     -7.0   0.5  3.5    -0.5 -6.5
## Philadelphia     3.5   0.0 -1.0     0.0  0.0
## CEFF           -30.0  -1.5 22.5     1.5 64.5
```

At this point, we have performed one row sweep and one column sweep of the table. We continue by again taking the median of each row. We place these row medians in a column called `Rmed`.

```
Resid <- ColSweep[, -5]
REFF <- ColSweep[, 5]
Rmed <- apply(Resid, 1, median)
cbind(Resid, Rmed, REFF)
```

```
##              January April July October Rmed REFF
## Atlanta          7.0   1.5 -7.5    -1.5 0.00  8.5
## Detroit          0.0  -0.5  0.5     0.5 0.25 -4.5
## Kansas_City     -1.5   0.0  0.0     0.0 0.00  2.0
## Minneapolis     -7.0   0.5  3.5    -0.5 0.00 -6.5
## Philadelphia     3.5   0.0 -1.0     0.0 0.00  0.0
## CEFF           -30.0  -1.5 22.5     1.5 0.00 64.5
```

To adjust the values in the middle of the table and the row effects, we

* Add the row medians (`Rmed`) to the row effects (REFF).
* Subtract the row medians (`Rmed`) from the values in the middle.

After we do this, we get the following table:

```
REFF <- REFF + Rmed
Resid <- sweep(Resid, 1, Rmed)
RowSweep <- cbind(Resid, REFF)
RowSweep
```

```
##              January April  July October  REFF
## Atlanta         7.00  1.50 -7.50   -1.50  8.50
## Detroit        -0.25 -0.75  0.25    0.25 -4.25
## Kansas_City    -1.50  0.00  0.00    0.00  2.00
## Minneapolis    -7.00  0.50  3.50   -0.50 -6.50
## Philadelphia    3.50  0.00 -1.00    0.00  0.00
## CEFF          -30.00 -1.50 22.50    1.50 64.50
```

Finally, we take the medians of each column and put the values in a new row called `ceff`.

```
Resid <- RowSweep[-6, ]
CEFF <- RowSweep[6, ]
ceff <- apply(Resid, 2, median)
rbind(Resid, ceff, CEFF)
```

```
##              January April  July October  REFF
## Atlanta         7.00  1.50 -7.50   -1.50  8.50
## Detroit        -0.25 -0.75  0.25    0.25 -4.25
## Kansas_City    -1.50  0.00  0.00    0.00  2.00
## Minneapolis    -7.00  0.50  3.50   -0.50 -6.50
## Philadelphia    3.50  0.00 -1.00    0.00  0.00
## 6              -0.25  0.00  0.00    0.00  0.00
## CEFF          -30.00 -1.50 22.50    1.50 64.50
```

To adjust the remaining values, we

* Add the `ceff` values to the column effects in CEFF.
* Subtract the `ceff` values from the values in the middle.

```
CEFF <- CEFF + ceff
Resid <- sweep(Resid, 2, ceff)
ColSweep <- rbind(Resid, CEFF)
ColSweep
```

```
##              January April  July October  REFF
## Atlanta         7.25  1.50 -7.50   -1.50  8.50
## Detroit         0.00 -0.75  0.25    0.25 -4.25
## Kansas_City    -1.25  0.00  0.00    0.00  2.00
## Minneapolis    -6.75  0.50  3.50   -0.50 -6.50
## Philadelphia    3.75  0.00 -1.00    0.00  0.00
## CEFF          -30.25 -1.50 22.50    1.50 64.50
```

We could continue this procedure (take out row medians and take out column medians) many more times. If we do this by hand, it is usually sufficient to do 4 iterations – a row sweep, a column sweep, a row sweep, and a column sweep.

17\.4 Interpreting the additive model
-------------------------------------

What we have done is fit an additive model to this table.
Let’s interpret what this fitted model is telling us. Atlanta’s January temperature is 50\. We can represent this temperature as

```
Atlanta's temp in January = (Common) + (Atlanta Row effect) +
                            (January Col effect) + (Residual).
```

We can pick up these terms on the right hand side of the equation from the output of the median polish. We see the common value is 64\.5, the Atlanta effect is 8\.5, the January effect is \-30\.25 and the residual is 7\.25\. So Atlanta’s January temp is

\\\[ 50 \= 64\.5 \+ 8\.5 \- 30\.25 \+ 7\.25 . \\\]

Likewise, we can express all of the temperatures of the table as the sum of four different components

\\\[ COMMON \+ ROW \\, EFFECT \+ COL \\, EFFECT \+ RESIDUAL. \\\]

The base R function `medpolish` repeats this process, by default for a maximum of 10 iterations or until the fit stops changing appreciably. We illustrate using this function on the temperature data and display the components.

```
additive.fit <- medpolish(temps)
```

```
## 1: 36.5
## Final: 36.25
```

```
additive.fit
```

```
## 
## Median Polish Results (Dataset: "temps")
## 
## Overall: 64.5
## 
## Row Effects:
##      Atlanta      Detroit  Kansas_City  Minneapolis 
##         8.50        -4.25         2.00        -6.50 
## Philadelphia 
##         0.00 
## 
## Column Effects:
## January   April    July October 
##  -30.25   -1.50   22.50    1.50 
## 
## Residuals:
##              January April  July October
## Atlanta         7.25  1.50 -7.50   -1.50
## Detroit         0.00 -0.75  0.25    0.25
## Kansas_City    -1.25  0.00  0.00    0.00
## Minneapolis    -6.75  0.50  3.50   -0.50
## Philadelphia    3.75  0.00 -1.00    0.00
```

### 17\.4\.1 Interpreting the row and column effects

Let’s try to make sense of the additive model produced by the `medpolish` function. Our fit has the form

\\\[ COMMON \+ ROW \\, EFFECT \+ COL \\, EFFECT \+ RESIDUAL \\\]

and the common, row effects, and column effects are shown below.

|  | January | April | July | October | REFF |
| --- | --- | --- | --- | --- | --- |
| Atlanta | | | | | 8\.5 |
| Detroit | | | | | \-4\.25 |
| Kansas City | | | | | 2 |
| Minneapolis | | | | | \-6\.5 |
| Philadelphia | | | | | 0 |
| CEFF | \-30\.25 | \-1\.5 | 22\.5 | 1\.5 | 64\.5 |

If we wish to focus on the row effects (cities), we can combine the common and row effects to get the fit

\\\[ FIT \= \[COMMON \+ ROW \\, EFFECT] \+ COL \\, EFFECT \\\]
\\\[ \= ROW \\, FIT \+ COL \\, EFFECT \\\]

|  | January | April | July | October | REFF |
| --- | --- | --- | --- | --- | --- |
| Atlanta | | | | | 73 |
| Detroit | | | | | 60\.25 |
| Kansas City | | | | | 66\.5 |
| Minneapolis | | | | | 58 |
| Philadelphia | | | | | 64\.5 |
| CEFF | \-30\.25 | \-1\.5 | 22\.5 | 1\.5 | |

Looking at this table, specifically the row fits, we see that

* a typical Atlanta temperature across these four months is 73 degrees
* generally, Atlanta is 73 \- 60\.25 \= 12\.75 degrees warmer than Detroit
* Philadelphia tends to be 6\.5 degrees warmer than Minneapolis

If we were interested in the month effects (columns), we can combine the common and column effects to get the representation

\\\[ FIT \= \[COMMON \+ COL \\, EFFECT] \+ ROW \\, EFFECT \\\]
\\\[ \= COL \\, FIT \+ ROW \\, EFFECT \\\]

which is displayed below.

|  | January | April | July | October | REFF |
| --- | --- | --- | --- | --- | --- |
| Atlanta | | | | | 8\.5 |
| Detroit | | | | | \-4\.25 |
| Kansas City | | | | | 2 |
| Minneapolis | | | | | \-6\.5 |
| Philadelphia | | | | | 0 |
| CEFF | 34\.25 | 63 | 87 | 66 | |

We see

* the temperature in April is typically 63 degrees
* it tends to be 63 \- 34\.25 \= 28\.75 degrees warmer in April than January
* October and April have similar temps on average – October is 3 degrees warmer
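As a quick check on this decomposition, the pieces returned by `medpolish` (the overall value, the row and column effects, and the residuals) should add back up to the original table. A short sketch:

```
# Rebuild the fitted values from the medpolish components and verify that
# fit + residual reproduces the data (a table of zeros, up to rounding).
fit <- additive.fit$overall + outer(additive.fit$row, additive.fit$col, "+")
round(fit + additive.fit$residuals - as.matrix(temps), 6)
```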
17\.5 Looking at the residuals
------------------------------

After we gain some general understanding about the fit (which cities and months tend to be warm or cold), we can look at the residuals, shown below.

```
additive.fit$residuals
```

```
##              January April  July October
## Atlanta         7.25  1.50 -7.50   -1.50
## Detroit         0.00 -0.75  0.25    0.25
## Kansas_City    -1.25  0.00  0.00    0.00
## Minneapolis    -6.75  0.50  3.50   -0.50
## Philadelphia    3.75  0.00 -1.00    0.00
```

1. Are the residuals “large”? To be more clear, are the sizes of the residuals large relative to the row and column effects? Above we saw that the row effects (corresponding to cities) ranged from \-6\.5 to 8\.5 and the column effects (months) ranged from \-30\.25 to 22\.5\. We see a few large residuals

```
Atlanta in January:      7.25
Minneapolis in January: -6.75
Atlanta in July:        -7.5
```

These are large relative to the city effects but small relative to the month effects.

2. Do we see any pattern in the residuals? We’ll talk later about one specific pattern we might see in the residuals. Here, do we see any pattern in the large residuals? If we define large as 3 (in absolute value), then we see five large residuals, all in the months of January and July. These two months are the most extreme ones (in terms of temperature), and the city temps in these months might show more variability, which would contribute to these large residuals.
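A one\-line way to locate these cells is to flag the residuals that exceed 3 in absolute value; this picks out the five January and July entries discussed above.

```
# Which residuals are larger than 3 in absolute value?
which(abs(additive.fit$residuals) > 3, arr.ind = TRUE)
```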
17 Median Polish ================ 17\.1 Meet the data ------------------- In the table below, we have displayed the normal daily high temperatures (in degrees Fahrenheit) for four months for five selected cities. This is called a two\-way table. There is some measurement, here Temperature, that is classified with respect to two categorical variables, City and Month. We would like to understand how the temperature depends on the month and the city. For example, we know that it is generally colder in the winter months and warmer during the summer months. Also some cities, like Atlanta, that are located in the south tend to be warmer than northern cities like Minneapolis. We want to use a simple method to describe this relationship. ``` library(LearnEDAfunctions) temperatures ``` ``` ## City January April July October ## 1 Atlanta 50 73 88 73 ## 2 Detroit 30 58 83 62 ## 3 Kansas_City 35 65 89 68 ## 4 Minneapolis 21 57 84 59 ## 5 Philadelphia 38 63 86 66 ``` 17\.2 An additive model ----------------------- A simple description of these data is an additive model of the form \\\[ FIT \= {\\rm Overall \\, temperature} \+ {\\rm City \\, effect} \+ {Month \\, effect}. \\] What this additive model says is that different cities tend to have different temperatures, and one city, say Atlanta, will be so many degrees warmer than another city, say Detroit. This difference in temperatures for these two cities will be the same for all months using this model. Also, one month, say July, will tend to be a particular number of degrees warmer than another month, say January – this difference in monthly temperatures will be the same for all cities. 17\.3 Median polish ------------------- We describe a resistant method called median polish of fitting an additive model. This method is resistant so it will not be affected by extreme values in the table. We will later compare this fitting model with the better\-known least\-squares method of fitting an additive model. To begin median polish, we take the median of each row of the table. We place the row medians in a column REFF that stands for Row Effect. ``` temps <- temperatures[, -1] dimnames(temps)[[1]] <- temperatures[, 1] REFF <- apply(temps, 1, median) cbind(temps, REFF) ``` ``` ## January April July October REFF ## Atlanta 50 73 88 73 73.0 ## Detroit 30 58 83 62 60.0 ## Kansas_City 35 65 89 68 66.5 ## Minneapolis 21 57 84 59 58.0 ## Philadelphia 38 63 86 66 64.5 ``` Next, we subtract out the row medians. For each temperature, we subtract the corresponding row median. For example, the temperature in Atlanta in January is 50 degrees – we subtract the Atlanta median 73, getting a difference of \-23\. If we do this operation for all temperatures in all rows, we get the following table: ``` Residual <- sweep(temps, 1, REFF) RowSweep <- cbind(Residual, REFF) RowSweep ``` ``` ## January April July October REFF ## Atlanta -23.0 0.0 15.0 0.0 73.0 ## Detroit -30.0 -2.0 23.0 2.0 60.0 ## Kansas_City -31.5 -1.5 22.5 1.5 66.5 ## Minneapolis -37.0 -1.0 26.0 1.0 58.0 ## Philadelphia -26.5 -1.5 21.5 1.5 64.5 ``` We call the two steps {find row medians, subtract out the medians} a row sweep of the table. Next, we take the median of each column of the table (including the row effect column). We put these column medians in a row called CEFF (for column effects). 
``` CEFF <- apply(RowSweep, 2, median) rbind(RowSweep, CEFF) ``` ``` ## January April July October REFF ## Atlanta -23.0 0.0 15.0 0.0 73.0 ## Detroit -30.0 -2.0 23.0 2.0 60.0 ## Kansas_City -31.5 -1.5 22.5 1.5 66.5 ## Minneapolis -37.0 -1.0 26.0 1.0 58.0 ## Philadelphia -26.5 -1.5 21.5 1.5 64.5 ## 6 -30.0 -1.5 22.5 1.5 64.5 ``` We subtract out the column medians, similar to above. For each entry in the table, we subtract the corresponding column median. The steps of taking column medians and subtracting them out is called a column sweep of the table. ``` Residual <- sweep(RowSweep, 2, CEFF) ColSweep <- rbind(Residual, CEFF) dimnames(ColSweep)[[1]][6] <- "CEFF" ColSweep ``` ``` ## January April July October REFF ## Atlanta 7.0 1.5 -7.5 -1.5 8.5 ## Detroit 0.0 -0.5 0.5 0.5 -4.5 ## Kansas_City -1.5 0.0 0.0 0.0 2.0 ## Minneapolis -7.0 0.5 3.5 -0.5 -6.5 ## Philadelphia 3.5 0.0 -1.0 0.0 0.0 ## CEFF -30.0 -1.5 22.5 1.5 64.5 ``` At this point, we have performed one row sweep and one column sweep in the table. We continue by taking medians of each row. We place these row medians in a column called `Rmed`. ``` Resid <- ColSweep[, -5] REFF <- ColSweep[, 5] Rmed <- apply(Resid, 1, median) cbind(Resid, Rmed, REFF) ``` ``` ## January April July October Rmed REFF ## Atlanta 7.0 1.5 -7.5 -1.5 0.00 8.5 ## Detroit 0.0 -0.5 0.5 0.5 0.25 -4.5 ## Kansas_City -1.5 0.0 0.0 0.0 0.00 2.0 ## Minneapolis -7.0 0.5 3.5 -0.5 0.00 -6.5 ## Philadelphia 3.5 0.0 -1.0 0.0 0.00 0.0 ## CEFF -30.0 -1.5 22.5 1.5 0.00 64.5 ``` To adjust the values in the middle of the table and the row effects, we * Add the row medians (rmed) to the row effects (REFF) * Subtract the row medians (rmed) from the values in the middle. After we do this, we get the following table: ``` REFF <- REFF + Rmed Resid <- sweep(Resid, 1, Rmed) RowSweep <- cbind(Resid, REFF) RowSweep ``` ``` ## January April July October REFF ## Atlanta 7.00 1.50 -7.50 -1.50 8.50 ## Detroit -0.25 -0.75 0.25 0.25 -4.25 ## Kansas_City -1.50 0.00 0.00 0.00 2.00 ## Minneapolis -7.00 0.50 3.50 -0.50 -6.50 ## Philadelphia 3.50 0.00 -1.00 0.00 0.00 ## CEFF -30.00 -1.50 22.50 1.50 64.50 ``` Finally, we take the medians of each column and put the values in a new row called `ceff`. ``` Resid <- RowSweep[-6, ] CEFF <- RowSweep[6, ] ceff <- apply(Resid, 2, median) rbind(Resid, ceff, CEFF) ``` ``` ## January April July October REFF ## Atlanta 7.00 1.50 -7.50 -1.50 8.50 ## Detroit -0.25 -0.75 0.25 0.25 -4.25 ## Kansas_City -1.50 0.00 0.00 0.00 2.00 ## Minneapolis -7.00 0.50 3.50 -0.50 -6.50 ## Philadelphia 3.50 0.00 -1.00 0.00 0.00 ## 6 -0.25 0.00 0.00 0.00 0.00 ## CEFF -30.00 -1.50 22.50 1.50 64.50 ``` To adjust the remaining values, we * Add the ceff values to the column effects in CEFF. * Subtract the ceff values from the values in the middle. ``` CEFF <- CEFF + ceff Resid <- sweep(Resid, 2, ceff) ColSweep <- rbind(Resid, CEFF) ColSweep ``` ``` ## January April July October REFF ## Atlanta 7.25 1.50 -7.50 -1.50 8.50 ## Detroit 0.00 -0.75 0.25 0.25 -4.25 ## Kansas_City -1.25 0.00 0.00 0.00 2.00 ## Minneapolis -6.75 0.50 3.50 -0.50 -6.50 ## Philadelphia 3.75 0.00 -1.00 0.00 0.00 ## CEFF -30.25 -1.50 22.50 1.50 64.50 ``` We could continue this procedure (take out row medians and take out column medians) many more times. If we do this by hand, then it is usually sufficient to do 4 iterations – a row sweep, a column sweep, a row sweep, and a column sweep. 17\.4 Interpreting the additive model ------------------------------------- What we have done is fit an additive model to this table. 
Let’s interpret what this fitted model is telling us. Atlanta’s January temperature is 50\. We can represent this temperature as ``` Atlanta's temp in January = (Common) + (Atlanta Row effect) + (January Col effect) + (Residual). ``` We can pick up these terms on the right hand side of the equation from the output of the median polish. We see the common value is 64\.5, the Atlanta effect is 8\.5, the January effect is \-30\.25 and the residual is 7\.25\. So Atlanta’s January temp is \\\[ 50 \= 64\.5 \+ 8\.5 \- 30\.25 \+ 7\.25 . \\] Likewise, we can express all of the temperatures of the table as the sum of four different components \\\[ COMMON \+ ROW \\, EFFECT \+ COL \\, EFFECT \+ RESIDUAL. \\] The function `medpolish` repeats this process for a maximum of 10 iterations. We illustrate using this function on the temperature data and display the components. ``` additive.fit <- medpolish(temps) ``` ``` ## 1: 36.5 ## Final: 36.25 ``` ``` additive.fit ``` ``` ## ## Median Polish Results (Dataset: "temps") ## ## Overall: 64.5 ## ## Row Effects: ## Atlanta Detroit Kansas_City Minneapolis ## 8.50 -4.25 2.00 -6.50 ## Philadelphia ## 0.00 ## ## Column Effects: ## January April July October ## -30.25 -1.50 22.50 1.50 ## ## Residuals: ## January April July October ## Atlanta 7.25 1.50 -7.50 -1.50 ## Detroit 0.00 -0.75 0.25 0.25 ## Kansas_City -1.25 0.00 0.00 0.00 ## Minneapolis -6.75 0.50 3.50 -0.50 ## Philadelphia 3.75 0.00 -1.00 0.00 ``` ### 17\.4\.1 Interpreting the row and column effects Let’s try to make sense of the additive model produced by the `medpolish` function. Our fit has the form \\\[ COMMON \+ ROW \\, EFFECT \+ COL \\, EFFECT \+ RESIDUAL. \\] and the common, row effects, and column effects are shown below. | —\- | January | April | July | October | REFF | | --- | --- | --- | --- | --- | --- | | Atlanta | | | | | 8\.5 | | Detroit | | | | | \-4\.25 | | Kansas City | | | | | 2 | | Minneapolis | | | | | \-6\.5 | | Philadelphia | | | | | 0 | | CEFF | \-30\.25 | \-1\.5 | 22\.5 | 1\.5 | 64\.5 | If we wish to focus on the row effects (cities), we can combine the common and row effects to get the fit \\\[ FIT \= \[COMMON \+ ROW \\, EFFECT] \+ COL \\, EFFECT \\] \\\[ \= ROW \\, FIT \+ COL \\, EFFECT \\] | —\- | January | April | July | October | REFF | | --- | --- | --- | --- | --- | --- | | Atlanta | | | | | 73 | | Detroit | | | | | 60\.25 | | Kansas City | | | | | 66\.5 | | Minneapolis | | | | | 58 | | Philadelphia | | | | | 64\.5 | | CEFF | \-30\.25 | \-1\.5 | 22\.5 | 1\.5 | | Looking at this table, specifically the row fits, we see that * the average Atlanta temperature is 73 degrees * generally, Atlanta is 73 \- 60\.25 \= 12\.75 degrees warmer than Detroit * Philadelphia tends to be 6\.5 degrees warmer than Minneapolis If we were interested in the month effects (columns), we can combine the common and column effects to get the representation \\\[ FIT \= \[COMMON \+ COL \\, EFFECT] \+ ROW \\, EFFECT \\] \\\[ \= COL \\, FIT \+ ROW \\, EFFECT \\] which is displayed below. | —\- | January | April | July | October | REFF | | --- | --- | --- | --- | --- | --- | | Atlanta | | | | | 8\.5 | | Detroit | | | | | \-4\.25 | | Kansas City | | | | | 2 | | Minneapolis | | | | | \-6\.5 | | Philadelphia | | | | | 0 | | CEFF | 33\.25 | 63 | 87 | 66 | | We see * the temperature in April is on average 63 degrees. 
* it tends to be 63 – 33\.25 \= 29\.75 degrees warmer in April than January * October and April have similar temps on average – October is 3 degrees warmer 17\.5 Looking at the residuals ------------------------------ After we gain some general understanding about the fit (which cities and months tend to be warm or cold), we can look at the residuals, shown below. ``` additive.fit$residuals ``` ``` ## January April July October ## Atlanta 7.25 1.50 -7.50 -1.50 ## Detroit 0.00 -0.75 0.25 0.25 ## Kansas_City -1.25 0.00 0.00 0.00 ## Minneapolis -6.75 0.50 3.50 -0.50 ## Philadelphia 3.75 0.00 -1.00 0.00 ``` 1. Are the residuals “large”? To be more clear, are the sizes of the residuals large relative to the row and column effects? Above we saw that the row effects (corresponding to cities) ranged from \-6 to 8 and the column effects (months) ranged from \-30 to 22\. We see a few large residuals ``` Atlanta in January: 7.25 Minneapolis in January: -6.75 Atlanta in July: -7.5 ``` These are large relative to the city effects but small relative to the month effects. 2. Do we see any pattern in the residuals? We’ll talk later about one specific pattern we might see in the residuals. Here do we see any pattern in the large residuals? If we define large as 3 (in absolute value), then we see five large residuals that are all in the months of January and July. These two months are the most extreme ones (in terms of temperature) and the city temps in these months might show more variability, which would contribute to these large residuals. 17\.1 Meet the data ------------------- In the table below, we have displayed the normal daily high temperatures (in degrees Fahrenheit) for four months for five selected cities. This is called a two\-way table. There is some measurement, here Temperature, that is classified with respect to two categorical variables, City and Month. We would like to understand how the temperature depends on the month and the city. For example, we know that it is generally colder in the winter months and warmer during the summer months. Also some cities, like Atlanta, that are located in the south tend to be warmer than northern cities like Minneapolis. We want to use a simple method to describe this relationship. ``` library(LearnEDAfunctions) temperatures ``` ``` ## City January April July October ## 1 Atlanta 50 73 88 73 ## 2 Detroit 30 58 83 62 ## 3 Kansas_City 35 65 89 68 ## 4 Minneapolis 21 57 84 59 ## 5 Philadelphia 38 63 86 66 ``` 17\.2 An additive model ----------------------- A simple description of these data is an additive model of the form \\\[ FIT \= {\\rm Overall \\, temperature} \+ {\\rm City \\, effect} \+ {Month \\, effect}. \\] What this additive model says is that different cities tend to have different temperatures, and one city, say Atlanta, will be so many degrees warmer than another city, say Detroit. This difference in temperatures for these two cities will be the same for all months using this model. Also, one month, say July, will tend to be a particular number of degrees warmer than another month, say January – this difference in monthly temperatures will be the same for all cities. 17\.3 Median polish ------------------- We describe a resistant method called median polish of fitting an additive model. This method is resistant so it will not be affected by extreme values in the table. We will later compare this fitting model with the better\-known least\-squares method of fitting an additive model. 
Data Science
bayesball.github.io
https://bayesball.github.io/EDA/plotting-an-additive-plot.html
18 Plotting an Additive Plot ============================ Here we describe how one displays an additive fit. This graphical display may look a bit odd at first, but it is an effective way of portraying the additive fit found using median polish. 18\.1 The data -------------- We work with the temperature data that we used earlier to illustrate an additive fit. We will use the fit that we found earlier with the `medpolish` function. ``` library(LearnEDAfunctions) temps <- temperatures[, -1] dimnames(temps)[[1]] <- temperatures[, 1] additive.fit <- medpolish(temps) ``` ``` ## 1: 36.5 ## Final: 36.25 ``` In plotting the fit, we want to represent the fit as the sum of two terms: \\\[ ROW \\, PART \+ COLUMN \\, PART \\] In our usual representation of the fit, \\\[ FIT \= COMMON \+ ROW \\, EFFECT \+ COLUMN \\, EFFECT, \\] we have three terms, so we add the common term either to the row effects or to the column effects to obtain a sum of two terms. In the table below we’ve added the common to the row effects, getting row fits (abbreviated RFIT), and \\\[ FIT \= ROW \\, FIT \+ COLUMN \\, EFFECT . \\] | – | January | April | July | October | RFIT | | --- | --- | --- | --- | --- | --- | | Atlanta | | | | | 73 | | Detroit | | | | | 60\.25 | | Kansas City | | | | | 66\.5 | | Minneapolis | | | | | 58 | | Philadelphia | | | | | 64\.5 | | CEFF | \-30\.25 | \-1\.5 | 22\.5 | 1\.5 | | So the temperature of Atlanta in January is represented as ``` Atlanta fit + January effect = 73 - 30.25, ``` and the temperature of Minneapolis in October is fitted by ``` Minneapolis fit + October effect = 58 + 1.5. ``` 18\.2 Plotting the two terms of the additive fit ------------------------------------------------ To begin our graph, we set up a grid where values of the row fits are on the horizontal axis and values of the column effects are on the vertical axis. 1. We first graph vertical lines corresponding to the values of the row fits \- here 73, 60\.25, 66\.5, 58, and 64\.5\. Next, we graph horizontal lines corresponding to the values of the column effects. These sets of horizontal and vertical lines are shown as thick lines in the display below. Each intersection of a horizontal line and a vertical line corresponds to a cell in the table. For example, the intersection of the top horizontal line and the left vertical line in the upper left portion of the display corresponds to the temperature of Minneapolis in July. We have labeled all horizontal and vertical lines with the city names and months to emphasize the connection of the graph with the table. 2. Remember the additive fit is the sum FIT \= ROW FIT \+ COLUMN EFFECT. In the figure, we have drawn diagonal lines where the fit is a constant value. The diagonal line at the bottom corresponds to all values of ROW FIT and COLUMN EFFECT such that the sum is equal to FIT \= 30, the next line corresponds to all row fits and column effects that add up to FIT \= 40, and so on. 3. We want to graph the fit, so we pivot the above display by an angle to make the diagonal lines of constant fit horizontal. So intersections of horizontal and vertical lines that are high on the new figure correspond to large values of fit. 4. Last, we remove all extraneous parts of the figure, such as the original horizontal and vertical axes, that are not essential for the main message of the graph. We get the following display, which is our graph of an additive fit of a two\-way table.
``` Row.Part <- with(additive.fit, row + overall) Col.Part <- additive.fit$col plot2way(Row.Part, Col.Part, dimnames(temps)[[1]], dimnames(temps)[[2]]) ``` 18\.3 Making sense of the display --------------------------------- Since this graph may look weird to you, we should spend some time interpreting it. 1. First, note that the highest intersection (of horizontal and vertical lines) corresponds to the temperature of Atlanta in July. According to the additive fit, you should be in this city in July if you like heat. Conversely, Minneapolis in January is the coldest spot – this intersection has the smallest value of fit. 2. Which is a warmer spot – Philly in October or Kansas City in April? Looking at the figure, note that these two intersections are roughly at the same vertical position, which means that they have similar values of fit. 3. Looking at the figure, we see that the rectangular region is just slightly rotated off the vertical. This means that there is more variation between months in the table than between cities. The more critical dimension of fit is the time of the year. 18\.4 Adding information about the residuals to the display ----------------------------------------------------------- Of course, the fit is only one of two aspects of the data – we also need to examine the residuals to get a complete view of the data. Here are the residuals from the median polish: ``` additive.fit$residual ``` ``` ## January April July October ## Atlanta 7.25 1.50 -7.50 -1.50 ## Detroit 0.00 -0.75 0.25 0.25 ## Kansas_City -1.25 0.00 0.00 0.00 ## Minneapolis -6.75 0.50 3.50 -0.50 ## Philadelphia 3.75 0.00 -1.00 0.00 ``` To get an idea of the pattern of these residuals, we construct a stemplot: ``` aplpack::stem.leaf(c(additive.fit$residual), unit=1, m=5) ``` ``` ## 1 | 2: represents 12 ## leaf unit: 1 ## n: 20 ## 2 s | 76 ## f | ## t | ## 7 -0* | 11100 ## (10) 0* | 0000000001 ## 3 t | 33 ## f | ## 1 s | 7 ``` What we see is that most of the residuals are small – between \-1\.5 and 3\.75 – and there are only three large residuals that are set apart from the rest. These are 7\.25 (Atlanta in January), \-7\.5 (Atlanta in July) and \-6\.75 (Minneapolis in January). There are formal methods of deciding if these three extreme residuals are “large”, but it makes sense in this case to indicate on our graph that these three cells have large residuals. We do this by plotting a “\-” or “\+” on the display for these three cells. So this extra residual information on the graph indicates that Atlanta has a relatively cool temperature in July, that this city has a relatively warm January, and that Minneapolis is unusually cold in January.
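The comparisons we read off the display can also be checked numerically. Below is a small sketch (not part of the original text) that adds the two plotted parts, `Row.Part` and `Col.Part`, for the cells discussed above; each sum is the fitted value whose height the rotated display encodes.

```
# Fitted values FIT = ROW FIT + COLUMN EFFECT for the cells discussed above
Row.Part["Atlanta"] + Col.Part["July"]          # highest fit: 95.5 degrees
Row.Part["Minneapolis"] + Col.Part["January"]   # lowest fit: 27.75 degrees
Row.Part["Philadelphia"] + Col.Part["October"]  # 66 degrees ...
Row.Part["Kansas_City"] + Col.Part["April"]     # ... roughly the same as 65 degrees
```

Cells with nearly equal fitted values sit at nearly the same height on the display, which is exactly the Philadelphia/Kansas City comparison made in point 2 above.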
Data Science
bayesball.github.io
https://bayesball.github.io/EDA/multiplicative-fit.html
19 Multiplicative Fit ===================== We have illustrated the use of median polish to perform an additive fit to a two\-way table. But sometimes a multiplicative fit, rather than an additive fit, is appropriate. Here we illustrate fitting and interpreting a multiplicative fit. 19\.1 Meet the data ------------------- A fascinating data set is the collection of statistics from the Olympic games through the years. Here we focus on women’s swimming results – in particular, the times of the winning swimmer in the women’s freestyle races at the 100, 200, 400, and 800 meter distances for the ten Summer Olympics from 1968 through 2004\. (The source of the data is the ESPN Sports Almanac.) We present the data as a two\-way table – time (in seconds) classified by year and distance. ``` library(LearnEDAfunctions) times <- olympics.swim[, -1] row.names(times) <- olympics.swim[, 1] times ``` ``` ## X100m X200m X400m X800m ## 1968 60.00 130.50 271.80 564.00 ## 1972 58.59 123.56 259.44 533.68 ## 1976 55.65 119.26 249.89 517.14 ## 1980 54.79 118.33 248.76 508.90 ## 1984 55.92 119.23 247.10 504.95 ## 1988 54.93 117.65 243.85 500.20 ## 1992 54.65 117.90 247.18 505.52 ## 1996 54.50 118.16 247.25 507.89 ## 2000 53.83 118.24 245.80 499.67 ## 2004 53.84 118.03 245.34 504.54 ``` What patterns do we expect to see in this table? Certainly, we expect the times to get larger for the longer distances. Also, we might expect to see some decrease in the winning times as a function of year. There are improvements in training and swimming technique that can lead to better race times. 19\.2 An additive model isn’t suitable -------------------------------------- If we think about the structure of this table, it should be clear that an additive model is not the best fit for these data. If the table were additive, then one would expect the difference between the times in the first two columns to be a constant across rows. Likewise, the difference between any two columns should be a constant for each row. But what is the expected relationship between the times for the 100 m and 200 m races? Since the 200 meter race is twice the distance, you would expect the winning time to be roughly twice the winning time for the 100 meter race. So the relationship between the columns of the table is multiplicative, rather than additive. A multiplicative fit is equivalent to an additive fit to the logs. Here is a multiplicative model for these data: \\\[ TIME \= \[COMMON] \\times \[ROW \\, EFFECT] \\times \[COL \\, EFFECT] \\times \[RESIDUAL] \\] This model says that one row of the table will be a constant multiple of another row of the table. Similarly, we expect one column, say 400 m, to be a constant multiple of another column like 100 m. Also, notice that the residual is a multiplicative term rather than the additive term that we saw in the additive fit. How do we fit this seemingly more complicated model? Easy – we change this multiplicative model to an additive one by taking logs: \\\[ \\log TIME \= \\log \\, COMMON \+ \\log ROW \\, EFFECT \+ \\log COL \\, EFFECT \+ \\log RESIDUAL \\] Here is our strategy for fitting this multiplicative model. 1. We take logs (base 10\) of our response – here the times. 2. We fit an additive model to the log times using median polish. From this fit, we find the COMMON, row effects, and column effects. 3. We convert the additive fit back to a multiplicative fit by exponentiating the model terms – that is, taking all terms to the 10th power.
19\.3 Fitting our model to the data ----------------------------------- We start by taking the logs (base 10\) of our winning times – the two\-way table of log times is shown below. ``` log.times <- log10(times) log.times ``` ``` ## X100m X200m X400m X800m ## 1968 1.778151 2.115611 2.434249 2.751279 ## 1972 1.767823 2.091878 2.414037 2.727281 ## 1976 1.745465 2.076495 2.397749 2.713608 ## 1980 1.738701 2.073095 2.395781 2.706632 ## 1984 1.747567 2.076386 2.392873 2.703248 ## 1988 1.739810 2.070592 2.387123 2.699144 ## 1992 1.737590 2.071514 2.393013 2.703738 ## 1996 1.736397 2.072470 2.393136 2.705770 ## 2000 1.731024 2.072764 2.390582 2.698683 ## 2004 1.731105 2.071992 2.389768 2.702896 ``` Using R, we apply median polish to these data. The output of this additive fit consists of the common value, the row effects (reff), the column effects (ceff), and the residuals. We show all of these terms in the following table. ``` additive.fit <- medpolish(log.times) ``` ``` ## 1: 0.0857004 ## 2: 0.07810789 ## Final: 0.07788071 ``` ``` options(width=60) additive.fit ``` ``` ## ## Median Polish Results (Dataset: "log.times") ## ## Overall: 2.233155 ## ## Row Effects: ## 1968 1972 1976 1980 ## 0.0417754242 0.0213550737 0.0060742663 0.0005843024 ## 1984 1988 1992 1996 ## 0.0014745868 -0.0042972121 -0.0013812874 -0.0005843024 ## 2000 2004 ## -0.0046712810 -0.0029718761 ## ## Column Effects: ## X100m X200m X400m X800m ## -0.4946103 -0.1597096 0.1595565 0.4727421 ## ## Residuals: ## X100m X200m X400m X800m ## 1968 -0.00216840 0.00039015 -2.3702e-04 3.6070e-03 ## 1972 0.00792420 -0.00292211 -2.9188e-05 2.9188e-05 ## 1976 0.00084668 -0.00302440 -1.0364e-03 1.6372e-03 ## 1980 -0.00042723 -0.00093437 2.4852e-03 1.5148e-04 ## 1984 0.00754835 0.00146602 -1.3129e-03 -4.1229e-03 ## 1988 0.00556259 0.00144421 -1.2911e-03 -2.4558e-03 ## 1992 0.00042723 -0.00054984 1.6836e-03 -7.7704e-04 ## 1996 -0.00156342 -0.00039015 1.0096e-03 4.5730e-04 ## 2000 -0.00284856 0.00399077 2.5421e-03 -2.5421e-03 ## 2004 -0.00446730 0.00151935 2.9188e-05 -2.9188e-05 ``` To demonstrate the fit, note that the log time for the 100 m race in 1968 is 1\.77815\. We can express this log time as \\\[ 1\.77815 \= 2\.23315 \+ .04178 \- .49461 \- .00217 \\] where * 2\.23315 is the common value * .04178 is the additive effect due to the year 1968 * \\(\-.49461\\) is the additive effect due to the 100m swim * \\(\-.00217\\) is the residual (what’s left of the data after taking out the additive fit) To get a fit for the original time data, we take the common, row effects, column effects, and residuals each to the 10th power. For example, we reexpress the common value 2\.23315 to \\(10^{2\.23315}\\) \= 171\.06 and the 1968 row effect .04178 to \\(10^{.04178}\\) \= 1\.1010\.
If we do this operation to all terms, we get the following table of fits and residuals: ``` COMMON <- 10 ^ additive.fit$overall ROW <- 10 ^ additive.fit$row COL <- 10 ^ additive.fit$col RESIDUAL <- 10 ^ additive.fit$residual COMMON ``` ``` ## [1] 171.0624 ``` ``` ROW ``` ``` ## 1968 1972 1976 1980 1984 1988 ## 1.1009698 1.0504009 1.0140848 1.0013463 1.0034011 0.9901541 ## 1992 1996 2000 2004 ## 0.9968245 0.9986555 0.9893016 0.9931804 ``` ``` COL ``` ``` ## X100m X200m X400m X800m ## 0.3201767 0.6922937 1.4439644 2.9699019 ``` ``` RESIDUAL ``` ``` ## X100m X200m X400m X800m ## 1968 0.9950195 1.0008988 0.9994544 1.0083400 ## 1972 1.0184136 0.9932942 0.9999328 1.0000672 ## 1976 1.0019514 0.9930603 0.9976164 1.0037769 ## 1980 0.9990168 0.9978508 1.0057388 1.0003489 ## 1984 1.0175326 1.0033813 0.9969815 0.9905516 ## 1988 1.0128907 1.0033309 0.9970316 0.9943613 ## 1992 1.0009842 0.9987347 1.0038841 0.9982124 ## 1996 0.9964066 0.9991020 1.0023273 1.0010535 ## 2000 0.9934624 1.0092314 1.0058706 0.9941637 ## 2004 0.9897664 1.0035046 1.0000672 0.9999328 ``` 19\.4 Interpreting the fit -------------------------- Remember that we have performed an additive fit to the log times, which is equivalent to a multiplicative fit to the times. Remember that the log time for the 100 m swim in 1968 was represented as the sum \\\[ 1\.77815 \= 2\.23315 \+ .04178 \- .49461 \- .00217 \\] Equivalently, the time for the 100 m swim in 1968 is expressible as the product \\\[ 10^{1\.77815} \= 10^{2\.23315} \\times 10^{.04178} \\times 10^{\- .49461} \\times 10^{\-.00217} \\] or (looking at the output from the table) \\\[ 60 \= 171\.062 \\times 1\.1010 \\times .3202 \\times .9950 \\] Look at a different cell in the table, say the 800 m time in 1980 (508\.90 seconds). We can represent this time as ``` [common time] x [effect due to 1980] x [effect due to 800 m] x [residual] ``` \\\[ \= 171\.062 \\times 1\.0013 \\times 2\.9699 \\times 1\.0003 \\] Here we have (close to) a perfect fit, which means that the observed time is exactly equal to the fitted time, and the (multiplicative) residual is approximately equal to 1\. Let’s interpret the fit shown below: ``` YEAR 100m 200m 400m 800m REFF 1968 1.1010 1972 1.0504 1976 1.0141 1980 1.0013 1984 1.0034 1988 0.9902 1992 0.9968 1996 0.9987 2000 0.9893 2004 0.9932 CEFF 0.3202 0.6923 1.4440 2.9699 171.062 ``` If we wish to get fitted times for each year, we multiply the row effects (reff) by the common value to get the following table. We use the abbreviation RFITS to stand for the row fits. ``` YEAR 100m 200m 400m 800m RFITS 1968 188.33 1972 179.68 1976 173.47 1980 171.29 1984 171.64 1988 169.38 1992 170.52 1996 170.83 2000 169.23 2004 169.90 CEFF 0.3202 0.6923 1.4440 2.9699 ``` If we wish to compare years, then we look at ratios (not differences). For example, comparing 1968 and 2000, the ratio of the corresponding row fits is 188\.33/169\.23 \= 1\.11\. So we can say that times in 1968 were on the average 11% slower than they were in the year 2000\. Likewise, what are the effects of the different distances? Since the 200m, 400m, and 800m distances are 2, 4, and 8 times as long, respectively, as the 100m distance, it might be reasonable to expect column effect ratios of 2, 4, and 8\.
The estimated ratios from the table are shown below: | Distance | 100m | 200m | 400m | 800m | | --- | --- | --- | --- | --- | | Column Effect | .3202 | .6923 | 1\.4440 | 2\.9699 | | Effect / Effect (100m) | 1 | 2\.16 | 4\.51 | 9\.28 | Note that we see a fatigue effect – the 200m time is barely twice as long as the 100m, but the 400m time is 4\.5 times as long as the 100m time, and the 800m time is over 9 times as long. To gain a better understanding of the row and column effects, we can plot them. In the first graph below, we have plotted the row effects for the log time against the year. In the second graph, we’ve plotted the column effects (again for log time) against the logarithm of the length of the race. ``` Year <- seq(1968, 2004, by=4) Log.Distance <- log10(c(100, 200, 400, 800)) ggplot(data.frame(Year, Row_Effect = additive.fit$row), aes(Year, Row_Effect)) + geom_point() ``` ``` ggplot(data.frame(Log.Distance, Col_Effect = additive.fit$col), aes(Log.Distance, Col_Effect)) + geom_point() ``` From the graphs, we see * There was a clear decrease in log time from 1968 to 1980, but since 1980 the times have been pretty constant. * As we know, the log times increase linearly as a function of log distance. But this graph doesn’t show the fatigue effect – one could discover this by means of a residual plot from a linear fit to this graph. 19\.5 Interpreting the residuals -------------------------------- After we interpret the fit, we look at the residuals to find an interesting pattern or to detect unusual observations. In this multiplicative fit, a residual of 1 corresponds to a perfect fit in that cell, so we are looking for residual values that deviate from 1\. ``` round(RESIDUAL, 2) ``` ``` ## X100m X200m X400m X800m ## 1968 1.00 1.00 1.00 1.01 ## 1972 1.02 0.99 1.00 1.00 ## 1976 1.00 0.99 1.00 1.00 ## 1980 1.00 1.00 1.01 1.00 ## 1984 1.02 1.00 1.00 0.99 ## 1988 1.01 1.00 1.00 0.99 ## 1992 1.00 1.00 1.00 1.00 ## 1996 1.00 1.00 1.00 1.00 ## 2000 0.99 1.01 1.01 0.99 ## 2004 0.99 1.00 1.00 1.00 ``` I look for residuals that are either smaller than .99 or larger than 1\.01 – three “large” residuals stand out. In the 100m races of 1972, 1984, and 1988, the winning times were a bit slow considering the year and the length of the race.
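To close, the headline numbers quoted above can be reproduced directly from the back-transformed components. Here is a small sketch (not in the original text) that reuses the `COMMON`, `ROW`, and `COL` objects computed earlier:

```
# Distance effects relative to the 100m effect: 1, 2.16, 4.51, 9.28
round(COL / COL["X100m"], 2)

# Fitted "year" times (common x row effect): 1968 is about 11% slower than 2000
RFITS <- COMMON * ROW
round(RFITS[c("1968", "2000")], 2)
round(RFITS["1968"] / RFITS["2000"], 2)
```

These reproduce the ratio table and the 11% comparison between 1968 and 2000 given in the discussion of the fit.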
19\.2 An additive model isn’t suitable -------------------------------------- If we think about the structure of this table, it should be clear that an additive model is not the best fit for these data. If the table were additive, then one would expect the difference between the times in the first two columns to be a constant across rows. Likewise, the difference between any two columns should be a constant for each row. But what is the expected relationship between the times for the 100 m and 200 m races? Since the 200 meter race is twice the distance, you would expect the winning time to be roughly twice the winning time for the 100 meter race. So the relationship between the columns of the table is multiplicative, rather than additive. A multiplicative fit is equivalent to an additive fit to the logs Here is a multiplicative model for these data: \\\[ TIME \= \[COMMON] \\times \[ROW \\, EFFECT] \\times \[ COL \\,EFFECT] \\times \[RESIDUAL] \\] This model says that one row of the table will be a constant multiple of another row of the table. Similarly, we expect one column, say 400 m, to be a constant multiple of another column like 100 m. Also, notice that the residual is a multiplicative term rather than an additive term that we saw in the additive fit. How do we fit this seemingly more complicated model? Easy – we change this multiplicative model to an additive one by taking logs: \\\[ \\log TIME \= \\log \\, COMMON \+ \\log ROW \\, EFFECT \+ \\log COL \\, EFFECT \+ \\log RESIDUAL \\] Here is our strategy for fitting this multiplicative model. 1. Take logs (base 10\) of our response – here the times. 2. We fit an additive model using median polish. From this fit, we find the COMMON, row effects, and column effects. 3. We convert the additive fit back to a multiplicative fit by exponentiating the model terms – that is, taking all terms to the 10th power. 19\.3 Fitting our model to the data ----------------------------------- We start by taking the logs (base 10\) of our winning times – the two\-way table of log times is shown below. ``` log.times <- log10(times) log.times ``` ``` ## X100m X200m X400m X800m ## 1968 1.778151 2.115611 2.434249 2.751279 ## 1972 1.767823 2.091878 2.414037 2.727281 ## 1976 1.745465 2.076495 2.397749 2.713608 ## 1980 1.738701 2.073095 2.395781 2.706632 ## 1984 1.747567 2.076386 2.392873 2.703248 ## 1988 1.739810 2.070592 2.387123 2.699144 ## 1992 1.737590 2.071514 2.393013 2.703738 ## 1996 1.736397 2.072470 2.393136 2.705770 ## 2000 1.731024 2.072764 2.390582 2.698683 ## 2004 1.731105 2.071992 2.389768 2.702896 ``` Using R, we apply median polish to these data. The output of this additive fit are the common value, the row effects (reff), the column effects (ceff), and the residuals. We show all of these terms in the following table. 
``` additive.fit <- medpolish(log.times) ``` ``` ## 1: 0.0857004 ## 2: 0.07810789 ## Final: 0.07788071 ``` ``` options(width=60) additive.fit ``` ``` ## ## Median Polish Results (Dataset: "log.times") ## ## Overall: 2.233155 ## ## Row Effects: ## 1968 1972 1976 1980 ## 0.0417754242 0.0213550737 0.0060742663 0.0005843024 ## 1984 1988 1992 1996 ## 0.0014745868 -0.0042972121 -0.0013812874 -0.0005843024 ## 2000 2004 ## -0.0046712810 -0.0029718761 ## ## Column Effects: ## X100m X200m X400m X800m ## -0.4946103 -0.1597096 0.1595565 0.4727421 ## ## Residuals: ## X100m X200m X400m X800m ## 1968 -0.00216840 0.00039015 -2.3702e-04 3.6070e-03 ## 1972 0.00792420 -0.00292211 -2.9188e-05 2.9188e-05 ## 1976 0.00084668 -0.00302440 -1.0364e-03 1.6372e-03 ## 1980 -0.00042723 -0.00093437 2.4852e-03 1.5148e-04 ## 1984 0.00754835 0.00146602 -1.3129e-03 -4.1229e-03 ## 1988 0.00556259 0.00144421 -1.2911e-03 -2.4558e-03 ## 1992 0.00042723 -0.00054984 1.6836e-03 -7.7704e-04 ## 1996 -0.00156342 -0.00039015 1.0096e-03 4.5730e-04 ## 2000 -0.00284856 0.00399077 2.5421e-03 -2.5421e-03 ## 2004 -0.00446730 0.00151935 2.9188e-05 -2.9188e-05 ``` To demonstrate the fit, note that the log time for the 100 m race in 1968 is 1\.77815\. We can express this log time as \\\[ 1\.77815 \= 2\.23315 \+ .04178 \- .49461 \- .00217 \\] where * 2\.23315 is the common value * .04178 is the additive effect due to the year 1968 * \\(\-.49461\\) is the additive effect due to the 100m swim * \\(\-.00217\\) is the residual (what’s left of the data after taking out the additive fit) To get a fit for the original time data, we take the common, row effects, column effects, and residuals each to the 10th power. For example, we reexpress the common value 2\.23359 to \\(10^{2\.2339}\\) \= 171\.233 and the first row effect .0410 to \\(10^{.0410}\\) \= 1\.0990\. If we do this operation to all terms, we get the following table of fits and residuals: ``` COMMON <- 10 ^ additive.fit$overall ROW <- 10 ^ additive.fit$row COL <- 10 ^ additive.fit$col RESIDUAL <- 10 ^ additive.fit$residual COMMON ``` ``` ## [1] 171.0624 ``` ``` ROW ``` ``` ## 1968 1972 1976 1980 1984 1988 ## 1.1009698 1.0504009 1.0140848 1.0013463 1.0034011 0.9901541 ## 1992 1996 2000 2004 ## 0.9968245 0.9986555 0.9893016 0.9931804 ``` ``` COL ``` ``` ## X100m X200m X400m X800m ## 0.3201767 0.6922937 1.4439644 2.9699019 ``` ``` RESIDUAL ``` ``` ## X100m X200m X400m X800m ## 1968 0.9950195 1.0008988 0.9994544 1.0083400 ## 1972 1.0184136 0.9932942 0.9999328 1.0000672 ## 1976 1.0019514 0.9930603 0.9976164 1.0037769 ## 1980 0.9990168 0.9978508 1.0057388 1.0003489 ## 1984 1.0175326 1.0033813 0.9969815 0.9905516 ## 1988 1.0128907 1.0033309 0.9970316 0.9943613 ## 1992 1.0009842 0.9987347 1.0038841 0.9982124 ## 1996 0.9964066 0.9991020 1.0023273 1.0010535 ## 2000 0.9934624 1.0092314 1.0058706 0.9941637 ## 2004 0.9897664 1.0035046 1.0000672 0.9999328 ``` 19\.4 Interpreting the fit -------------------------- Remember that we have performed an additive fit to the log times which is equivalent to a multiplicative fit to the times. 
Remember the log time for the 200 m swim in 1968 was represented as the sum \\\[ 1\.77815 \= 2\.23315 \+ .04178 \- .49461 \- .00217 \\] Equivalently, the time for the 200 m swim in 1968 is expressible as the product \\\[ 10^{1\.77815} \= 10^{2\.23315} \\times 10^{.04178} \\times 10^{\- .49461} \\times 10^{\-.00217} \\] or (looking at the output from the table) \\\[ 60 \= 171\.062 \\times 1\.1010 \\times .3202 \\times .9950 \\] Looking at a different cell in the table, say the 800 m time in 1980 (508\.90 seconds). We can represent this time as ``` [common time] x [effect due to 1980] x [effect due to 800 m] x [residual] ``` \\\[ \= 171\.062 \\times 1\.0013 \\times 2\.9699 \\times 1\.0003 \\] Here we have (close to) a perfect fit, which means that the observed time is exactly equal to the fitted time, and the (multiplicative) residual is approximately equal to 1\. Let’s interpret the fit shown below: ``` YEAR 100m 200m 400m 800m REFF 1968 1.1010 1972 1.0504 1976 1.0141 1980 1.0013 1984 1.0034 1988 0.9902 1992 0.9968 1996 0.9987 2000 0.9893 2004 0.9932 CEFF 0.3202 0.6923 1.4440 2.9699 171.062 ``` If we wish to get fitted times for each year, we multiply the row effects (reff) by the common value to get the following table. We use the abbreviation RFITS to stand for the row fits. ``` YEAR 100m 200m 400m 800m RFITS 1968 188.33 1972 179.68 1976 173.47 1980 171.29 1984 171.64 1988 169.38 1992 170.52 1996 170.83 2000 169.23 2004 169.90 CEFF 0.3202 0.6923 1.4440 2.9699 ``` If we wish to compare years, then we look at ratios (not differences). For example, comparing 1968 and 2000, the ratio of the corresponding row fits is 188\.33/169\.23 \= 1\.11\. So we can say that times in 1968 were on the average 11% slower than they were in the year 2000\. Likewise, what are the effects of the different distances? Since the 200m, 400m, 800m distances are 2, 4, 8 times longer, respectively, than the 100m distance, it might be reasonable to expect column effect ratios of 2, 4, and 8\. The estimated ratios from the table are shown below: | Distance | 100m | 200m | 400m | | --- | --- | --- | --- | | Column Effect | .3202 | .6923 | 1\.4440 | | Effect / Effect (100m) | 1 | 2\.16 | 4\.51 | Note that we see a fatigue effect – the 200m time is barely twice as long as the 100m, but the 400m time is 4\.5 times as long as the 100m time, and the 800 time is over 9 times long. To gain a better understanding of the row and column effects, we can plot them. In the top graph of the below figure, we have plotted the row effects for the log time against the year. In the bottom graph, we’ve plotted the column effects (again for log time) against the logarithm of the length of the race. ``` Year <- seq(1968, 2004, by=4) Log.Distance <- log10(c(100, 200, 400, 800)) ggplot(data.frame(Year, Row_Effect = additive.fit$row), aes(Year, Row_Effect)) + geom_point() ``` ``` ggplot(data.frame(Log.Distance, Col_Effect = additive.fit$col), aes(Log.Distance, Col_Effect)) + geom_point() ``` From the graphs, we see * There was a clear decrease in log time from 1968 to 1980, but since 1980 the times have been pretty constant * As we know, the log times increase linearly as a function of log distance. But this graph doesn’t show the fatigue effect – one could discover this by means of a residual plot from a linear fit to this graph. 19\.5 Interpreting the residuals -------------------------------- After we interpret the fit, we look at the residuals to find an interesting pattern or to detect unusual observations. 
19\.5 Interpreting the residuals
--------------------------------

After we interpret the fit, we look at the residuals to find an interesting pattern or to detect unusual observations. In this multiplicative fit, a residual of 1 corresponds to a perfect fit in that cell, so we are looking for residual values that deviate from 1\.

```
round(RESIDUAL, 2)
```

```
##      X100m X200m X400m X800m
## 1968  1.00  1.00  1.00  1.01
## 1972  1.02  0.99  1.00  1.00
## 1976  1.00  0.99  1.00  1.00
## 1980  1.00  1.00  1.01  1.00
## 1984  1.02  1.00  1.00  0.99
## 1988  1.01  1.00  1.00  0.99
## 1992  1.00  1.00  1.00  1.00
## 1996  1.00  1.00  1.00  1.00
## 2000  0.99  1.01  1.01  0.99
## 2004  0.99  1.00  1.00  1.00
```

I look for residuals that deviate from 1 by more than about 1% – three such "large" residuals stand out. In the 100m races of 1972, 1984, and 1988, the winning times were a bit slow considering the year and the length of the race.
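Rather than scanning the rounded table by eye, we can ask R to flag the cells whose multiplicative residual is more than about 1% away from 1. This is a quick sketch using the RESIDUAL matrix computed above; it picks out the same three 100m cells.

```
# Flag cells whose multiplicative residual differs from 1 by more than 1 percent
which(abs(RESIDUAL - 1) > 0.01, arr.ind = TRUE)
```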
20 Extended Fit
===============

In the two previous chapters, we have talked about two ways of fitting a model to a two\-way table. We can fit an additive model using median polish – this seemed to work well for our temperature data that was classified by city and month. In other situations, like the Olympics swimming times classified by year and distance, it seemed better to use a multiplicative fit. For an arbitrary dataset, how can we tell if an additive fit or a multiplicative fit is more suitable? Here we describe a way of extending the additive model, called an extended fit, that will help us in situations where an additive model does not fit the data well. We will see that this extended fit has a strong connection with the multiplicative fit that we saw earlier.

20\.1 Meet the data
-------------------

The New York Times Almanac has a lot of interesting data about housing in the United States. When you are considering taking a new job, one factor in your decision\-making is the cost of living in the city where you are thinking of moving. The almanac gives the average price of apartments and houses for many cities in the country. The table shown below gives the market price of five different types of apartments for nine cities. These cities were chosen to get a good spread in cost of housing – we see that San Francisco is more than twice as expensive as Tulsa.

```
library(LearnEDAfunctions)
library(tidyverse)
prices <- rent.prices[, -1]
row.names(prices) <- rent.prices[, 1]
prices
```

```
##               Studio One.Bedroom Two.Bedroom Three.Bedroom
## Albuquerque      397         473         592           816
## Atlanta          613         682         795          1060
## Boston           695         782         979          1223
## Columbus         398         471         605           768
## Honolulu         595         713         839          1134
## Miami            461         579         722           991
## Philadelphia     497         611         755           945
## San_Francisco    891        1154        1459          2001
## Tulsa            392         470         625           869
##               Four.Bedroom
## Albuquerque            963
## Atlanta               1282
## Boston                1437
## Columbus               883
## Honolulu              1226
## Miami                 1149
## Philadelphia          1185
## San_Francisco         2118
## Tulsa                 1025
```

The goal here is to gain some understanding about the differences in rents between cities, and between apartments of different types. Is an additive model appropriate for these data? Well, you might not know, so we will be naïve and start by trying an additive fit.

20\.2 An additive fit
---------------------

We first use R to fit an additive model using median polish. The fitted model (common, row and column effects) and the residuals are shown in the table below.

```
(additive.fit <- medpolish(prices))
```

```
## 1: 2011
## 2: 1938
## Final: 1938
```

```
## 
## Median Polish Results (Dataset: "prices")
## 
## Overall: 754
## 
## Row Effects:
##   Albuquerque       Atlanta        Boston      Columbus 
##          -162            71           225          -149 
##      Honolulu         Miami  Philadelphia San_Francisco 
##            85           -32             0           705 
##         Tulsa 
##          -129 
## 
## Column Effects:
##        Studio   One.Bedroom   Two.Bedroom Three.Bedroom 
##          -244          -143             0           244 
##  Four.Bedroom 
##           427 
## 
## Residuals:
##               Studio One.Bedroom Two.Bedroom Three.Bedroom
## Albuquerque       49          24           0           -20
## Atlanta           32           0         -30            -9
## Boston           -40         -54           0             0
## Columbus          37           9           0           -81
## Honolulu           0          17           0            51
## Miami            -17           0           0            25
## Philadelphia     -13           0           1           -53
## San_Francisco   -324        -162           0           298
## Tulsa             11         -12           0             0
##               Four.Bedroom
## Albuquerque            -56
## Atlanta                 30
## Boston                  31
## Columbus              -149
## Honolulu               -40
## Miami                    0
## Philadelphia             4
## San_Francisco          232
## Tulsa                  -27
```

For the following, it will be helpful to order the rows and columns of the table by the effects.
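Before reordering, here is a quick sanity check (a sketch, not part of the original analysis): the four pieces of the additive fit should add back up to the original price table exactly.

```
# common + row effect + column effect + residual reproduces the prices
with(additive.fit, overall + outer(row, col, "+") + residuals)
```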
Actually, the columns are already ordered by effects – a studio apartment tends to be cheaper than a one\-bedroom apartment, which is cheaper than a two\-bedroom apartment, and so on. But the rows were not originally ordered by effects, so we've reordered the cities with the cheapest cities on top.

```
prices <- prices[order(additive.fit$row), ]
additive.fit <- medpolish(prices)
```

```
## 1: 2011
## 2: 1938
## Final: 1938
```

```
additive.fit$residual
```

```
##               Studio One.Bedroom Two.Bedroom Three.Bedroom
## Albuquerque       49          24           0           -20
## Columbus          37           9           0           -81
## Tulsa             11         -12           0             0
## Miami            -17           0           0            25
## Philadelphia     -13           0           1           -53
## Atlanta           32           0         -30            -9
## Honolulu           0          17           0            51
## Boston           -40         -54           0             0
## San_Francisco   -324        -162           0           298
##               Four.Bedroom
## Albuquerque            -56
## Columbus              -149
## Tulsa                  -27
## Miami                    0
## Philadelphia             4
## Atlanta                 30
## Honolulu               -40
## Boston                  31
## San_Francisco          232
```

Imagine lines dividing this table into quadrants: the rows in the top half (bottom half) have negative (positive) row effects, and the columns in the left half (right half) have negative (positive) column effects. Looking at the row and column effects, we see

* San Francisco is the most expensive place to rent. Comparing effects, San Francisco is $705 \- $225 \= $480 more expensive than the next most expensive city, Boston. Albuquerque and Columbus are the cheapest cities.
* A 4\-bedroom apartment is, on average, $183 more expensive than a 3\-bedroom apartment. A 1\-bedroom apartment is about $100 more expensive than a studio apartment.

20\.3 Looking at the residuals
------------------------------

Looking at the residuals, we see a lot of large values – in particular, one residual larger than 300 in absolute value, two residuals in the 200's, and two residuals in the 100's. These large residuals are of the same order of magnitude as the effects, so this additive model doesn't appear to fit very well.

The residuals are not just large – they also show a distinctive pattern. They are generally positive in the upper\-left and lower\-right quadrants, and negative in the upper\-right and lower\-left quadrants. Can we improve the additive model to remove this residual pattern?

20\.4 Comparison values
-----------------------

The signs of the row effects and the column effects define the same four quadrants. Suppose that we multiply the row effect by the column effect for each cell of the table. Then we would observe the same sign pattern that we saw in the residuals above. This observation motivates adding a single new term to improve our additive fit.

For each cell in the two\-way table, we define a comparison value (cv) as

\\\[ cv \= \\frac{ROWEFF \\times COLEFF}{COMMON}. \\]

To illustrate computing a comparison value, consider the upper\-left cell of the table, which corresponds to a studio apartment in Albuquerque. The corresponding row effect is \-162, the corresponding column effect is \-244, and the common value is 754; so the comparison value for this cell is (\-162\)(\-244\)/754 \= 52\.424\.
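This single comparison value can also be computed directly from the stored fit. A small sketch; the row and column names come from the prices table.

```
# Comparison value for the Albuquerque / Studio cell
with(additive.fit, row["Albuquerque"] * col["Studio"] / overall)
```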
The table below shows the comparison values for all cells of the table.

```
                 Studio  1 Bedroom  2 Bedrooms  3 Bedrooms  4 Bedrooms  REFF
Albuquerque      52.424     30.724           0     -52.424     -91.743  -162
Columbus         48.218     28.259           0     -48.218     -84.381  -149
Tulsa            41.745     24.466           0     -41.745     -73.054  -129
Miami            10.355      6.069           0     -10.355     -18.122   -32
Philadelphia      0.000      0.000           0       0.000       0.000     0
Atlanta         -22.976    -13.466           0      22.976      40.208    71
Honolulu        -27.507    -16.121           0      27.507      48.137    85
Boston          -72.812    -42.672           0      72.812     127.420   225
San Francisco  -228.143   -133.707           0     228.143     399.251   705
CEFF               -244       -143           0         244         427   754
```

Note that these comparison values exhibit the same basic sign pattern that we saw in the residuals.

20\.5 Extending the additive model by one more term
---------------------------------------------------

We add a single term to our additive model to account for the pattern that we see in the residuals. This new model, called an extended model, has the form

\\\[ FIT \= COMMON \+ ROW \\, EFF \+ COL \\, EFF \+ k \\, CV, \\]

where \\(CV\\) is the comparison value and \\(k\\) is a constant chosen so that the new model is a good fit to the data.

20\.6 Finding k
---------------

How do we find the coefficient \\(k\\) of the comparison value in our extended model? We plot the residuals from the additive fit (vertical) against the comparison values (horizontal). As expected, we see a positive relationship between the residuals and the cv's. The slope of a line fitted to this plot (here using `rline`) is our estimate of the coefficient \\(k\\).

```
cv <- with(additive.fit,
           outer(row, col, "*") / overall)
df <- data.frame(Comparison_Value = as.vector(cv),
                 Residual = as.vector(additive.fit$residuals))
ggplot(df, aes(Comparison_Value, Residual)) +
  geom_point()
```

```
rline(Residual ~ Comparison_Value, df)$b
```

```
## [1] 0.5669006
```

This slope, approximately .57, is our estimate of the coefficient \\(k\\); we round it to .56 in what follows. We check the suitability of this fit by subtracting \\(0.56 \\times cv\\) from each residual and seeing whether we have removed the trend from the scatterplot. (It appears from the graph that we have indeed removed the trend.)

```
ggplot(df, aes(Comparison_Value, Residual - 0.56 * Comparison_Value)) +
  geom_point() +
  geom_hline(yintercept = 0, color="red")
```

So our improved fit to these data has the form

\\\[ FIT \= COMMON \+ ROW \\, EFF \+ COL \\, EFF \+ 0\.56 \\, CV. \\]
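As a rough check that the extra term actually helps, we can remove 0.56 times the comparison value from each additive residual and compare the overall size of the residuals before and after. This is a sketch using the cv matrix computed above; the extended-fit residuals should come out noticeably smaller in total.

```
# Residuals from the extended fit: remove the 0.56 * cv term
extended.resid <- additive.fit$residuals - 0.56 * cv
round(extended.resid)

# Total absolute residual, before and after adding the extra term
sum(abs(additive.fit$residuals))
sum(abs(extended.resid))
```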
20\.7 Relationship with multiplicative fit
------------------------------------------

In our extended fit, we estimated the coefficient \\(k\\) of the comparison value to be about .57\. What happens if we instead choose the value \\(k \= 1\\)? Then the extended model is (using more concise notation)

\\\[ COMMON \+ ROW \+ COL \+ \\frac{ROW \\times COL}{COMMON} \\]

Using a little algebra, note that this fit can be rewritten as

\\\[ COMMON \\left(1 \+ \\frac{ROW}{COMMON} \+ \\frac{COL}{COMMON} \+ \\frac{ROW \\times COL}{COMMON^2}\\right) \\]

\\\[ \= COMMON \\left(1 \+ \\frac{ROW}{COMMON}\\right) \\left(1 \+ \\frac{COL}{COMMON}\\right), \\]

which is a multiplicative fit with

\\\[ ROWEFF \= \\left(1 \+ \\frac{ROW}{COMMON}\\right), \\, COLEFF \= \\left(1 \+ \\frac{COL}{COMMON}\\right). \\]

We have already discussed this type of fit. Specifically, we found that one could find a multiplicative fit for a two\-way table by taking an additive fit of the log response.

20\.8 Extended fits and transformations
---------------------------------------

Let's summarize what we've learned in our example.

1. The additive fit was unsatisfactory – there was a very clear pattern in the residuals.
2. The special pattern in the residuals motivated the consideration of an extended fit. By plotting the residuals against the comparison values, we estimated the slope to be about .57\.
3. If we use \\(k \= 1\\) in our extended fit, this is equivalent to a multiplicative model, which we can fit by fitting an additive model to the log rents (a sketch of this appears below). In that case we reexpress the response by taking a log, a power transformation with \\(p \= 0\\).
4. Actually, what we found here can be generalized. Suppose that the slope of the (residual, comparison value) plot is \\(k\\). Then the recommendation is to perform an additive fit to transformed data, where the data are reexpressed using a power transformation with power \\(p \= 1 \- k\\). (Here, with \\(k \\approx .57\\), the rule points to a power of roughly \\(p \= 0\.4\\), in between taking logs and taking square roots.)
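Here is a small sketch of the refit mentioned in item 3: median polish the log10 rents, which fits the multiplicative model of the previous chapter, and put the effects back on the original dollar scale. (The trace.iter argument only suppresses the iteration printout.)

```
# Fit the multiplicative model by median polishing the logged prices
log.fit <- medpolish(log10(prices), trace.iter = FALSE)

# City and apartment-type effects expressed as multipliers of the common value
round(10 ^ log.fit$row, 3)
round(10 ^ log.fit$col, 3)
round(10 ^ log.fit$overall, 1)   # common rent, in dollars
```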
Actually, the columns are already ordered by effects – a studio apartment tends to be cheaper than a 1 bedroom apartment which is cheaper than a 2 bedroom apartment and so on. But the rows were not originally ordered by effects and so we’ve reordered the cities with the cheapest cities on top. ``` prices <- prices[order(additive.fit$row), ] additive.fit <- medpolish(prices) ``` ``` ## 1: 2011 ## 2: 1938 ## Final: 1938 ``` ``` additive.fit$residual ``` ``` ## Studio One.Bedroom Two.Bedroom Three.Bedroom ## Albuquerque 49 24 0 -20 ## Columbus 37 9 0 -81 ## Tulsa 11 -12 0 0 ## Miami -17 0 0 25 ## Philadelphia -13 0 1 -53 ## Atlanta 32 0 -30 -9 ## Honolulu 0 17 0 51 ## Boston -40 -54 0 0 ## San_Francisco -324 -162 0 298 ## Four.Bedroom ## Albuquerque -56 ## Columbus -149 ## Tulsa -27 ## Miami 0 ## Philadelphia 4 ## Atlanta 30 ## Honolulu -40 ## Boston 31 ## San_Francisco 232 ``` We’ve drawn some lines to divide the table. The rows above (below) the center horizontal line have negative (positive) effects. Similarly, the columns to the left (right) of the center vertical line have negative (positive) effects. Looking at the row and column effects, we see * San Francisco is the most expensive place to rent. Comparing effects, San Francisco is $705 \- $225 \= $480 more expensive than the next most expensive city Boston. Albuquerque and Columbus are the cheapest cities. * A 4\-bedroom apartment is, on the average, $183 more expensive than a 3\-bedroom apartment. A 1\-bedroom apartment is about $100 more expensive than a studio apartment. 20\.3 Looking at the residuals ------------------------------ Looking at the residuals, we see a lot of large values – in particular, we note one residual larger than 300 in absolute value, two residual in the 200’s, and two residuals in the 100’s. These large residuals are in the same order of magnitude as the effects, so this additive model doesn’t appear to fit very well. The residuals appear large – also they show a distinctive pattern. The residuals generally are positive in the upper left and lower right quadrants, and negative in the upper right and lower left quadrants. Can we improve the additive model to remove this residual pattern? 20\.4 Comparison values ----------------------- In the above figure, we also show the sign of the row effects and the column effects. Suppose that we multiply the row effect by the column effect for each cell of the table. Then we would observe the same pattern as we saw in the residuals above. This observation motivates adding a single new term to improve our additive fit. For each cell in the two\-way table, we define a comparison value (cv) as \\\[ cv \= \\frac{ROWEFF \\times COLEFF}{COMMON}. \\] To illustrate computing a comparison value, consider the upper left cell of the table that corresponds to a studio apartment in Albuquerque. The corresponding row effect is \-162, the corresponding column effect is \-244, and the common value is 754; so the comparison value for this cell is (\-162\)(\-244\)/754 \= 52\.424\. The table below shows the comparison values for all cells of the table. 
``` 1 2 3 4 Studio Bedroom Bedrooms Bedrooms Bedrooms REFF Albuquerque 52.424 30.724 0 -52.424 -91.743 -162 Columbus 48.218 28.259 0 -48.218 -84.381 -149 Tulsa 41.745 24.466 0 -41.745 -73.054 -129 Miami 10.355 6.069 0 -10.355 -18.122 -32 Philadelphia 0.000 0.000 0 0.000 0.000 0 Atlanta -22.976 -13.466 0 22.976 40.208 71 Honolulu -27.507 -16.121 0 27.507 48.137 85 Boston -72.812 -42.672 0 72.812 127.420 225 San Francisco -228.143 -133.707 0 228.143 399.251 705 CEFF -244 -143 0 244 427 754 ``` Note that these comparison values exhibit the same basic pattern that we saw in the residuals. 20\.5 Extending the additive model by one more term --------------------------------------------------- We add a single term to our additive model to account for the pattern that we see in the residuals. This new model, called an extended model, has the form \\\[ FIT \= COMMON \+ ROW \\, EFF \+ COL \\, EFF \+ k CV, \\] where \\(CV\\) is the comparison value and \\(k\\) is a constant chosen that the new model is a good fit to the data. 20\.6 Finding k --------------- How do we find the coefficient \\(k\\) of the comparison value in our extended model? We plot the residuals from the additive fit (vertical) against the comparison values (horizontal). As expected, we see a positive relationship between the residuals and the cv’s. The slope of a line fitted to this plot is our estimate at the coefficient \\(k\\). ``` cv <- with(additive.fit, outer(row, col, "*") / overall) df <- data.frame(Comparison_Value = as.vector(cv), Residual = as.vector(additive.fit$residuals)) ggplot(df, aes(Comparison_Value, Residual)) + geom_point() ``` ``` rline(Residual ~ Comparison_Value, df)$b ``` ``` ## [1] 0.5669006 ``` We check the suitability of our line fit by looking at the residuals and seeing if we have removed the trend from the scatterplot. (It appears from the graph that we have indeed removed the trend from the plot.) ``` ggplot(df, aes(Comparison_Value, Residual - 0.56 * Comparison_Value)) + geom_point() + geom_hline(yintercept = 0, color="red") ``` The slope of this line, .56, is our estimate at the coefficient k. So our improved fit to these data has the form \\\[ FIT \= COMMON \+ ROW \\, EFF \+ COL \\, EFF \+ 0\.56 CV, \\] 20\.7 Relationship with multiplicative fit ------------------------------------------ In our extended fit, we found that a suitable choice for the coefficient k of the comparison value was k \= .8\. What if we chose the nearby value k \= 1? Then the extended model is (using more concise notation) \\\[ COMMON \+ ROW \+ COL \+ \\frac{ROW \\times COL}{COMMON} \\] Using a little algebra, note that this fit can be rewritten as \\\[ COMMON \\left(1 \+ \\frac{ROW}{COMMON} \+ \\frac{COL}{COMMON} \+ \\frac{ROW \\times COL}{COMMON^2}\\right) \\] \\\[ \= COMMON \\left(1 \+ \\frac{ROW}{COMMON}\\right) \\left(1 \+ \\frac{COL}{COMMON}\\right), \\] which is a multiplicative fit with \\\[ ROWEFF \= \\left(1 \+ \\frac{ROW}{COMMON}\\right), \\, COLEFF \= \\left(1 \+ \\frac{COL}{COMMON}\\right). \\] We have already discussed this type of fit. Specifically, we found that one could find a multiplicative fit for a two\-way table by taking an additive fit of the log response. 20\.8 Extended fits and transformations --------------------------------------- Let’s summarize what we’ve learned in our example. 1. The additive fit was unsatisfactory – there was a very clear pattern in the residuals. 2. The special pattern in the residuals motivated the consideration of an extended fit. 
By plotting the residuals against the comparison values, we saw that the slope of the comparison values was .8 which is pretty close to 1\. 3. If we use \\(k\=1\\) in our extended fit, this is equivalent to a multiplicative model which we can fit by fitting an additive model to the log rents. So this analysis suggests that we should reexpress the response by taking a log which is a power transformation with \\(p \= 0\\). 4. Actually, what we found here can be generalized. Suppose that the slope of the (residual, comparison value) plot is \\(k\\). Then the recommendation is to perform an additive fit to transformed data where the data is reexpressed using a power transformation with power \\(p \= 1 \- k\\).
Data Science
bayesball.github.io
https://bayesball.github.io/EDA/extended-fit.html
20 Extended Fit =============== In the two previous chapters, we have talked about two ways of fitting a model to a two\-way table. We can fit an additive model using median polish – this seemed to work well for our temperature data that was classified by city and month. In other situations, like the Olympics swimming times classified by year and distance, it seemed better to use a multiplicative fit. For an arbitrary dataset, how can we tell if an additive fit or a multiplicative fit is more suitable? Here we describe a way of extending the additive model, called an extended fit, that will help us in situations where an additive model does not fit the data well. We will see that this extended fit has a strong connection with the multiplicative fit that we saw earlier. 20\.1 Meet the data ------------------- The New York Times Almanac has a lot of interesting data about housing in the United States. When you are considering taking a new job, one factor in your decision\-making is the cost of living in the city where you are thinking of moving. The almanac gives the average price of apartments and houses for many cities in the country. The table shown below gives the market price of five different types of apartments for nine cities. These cities were chosen to get a good spread in cost of housing – we see that San Francisco is over twice as expensive than Tulsa. ``` library(LearnEDAfunctions) library(tidyverse) prices <- rent.prices[, -1] row.names(prices) <- rent.prices[, 1] prices ``` ``` ## Studio One.Bedroom Two.Bedroom Three.Bedroom ## Albuquerque 397 473 592 816 ## Atlanta 613 682 795 1060 ## Boston 695 782 979 1223 ## Columbus 398 471 605 768 ## Honolulu 595 713 839 1134 ## Miami 461 579 722 991 ## Philadelphia 497 611 755 945 ## San_Francisco 891 1154 1459 2001 ## Tulsa 392 470 625 869 ## Four.Bedroom ## Albuquerque 963 ## Atlanta 1282 ## Boston 1437 ## Columbus 883 ## Honolulu 1226 ## Miami 1149 ## Philadelphia 1185 ## San_Francisco 2118 ## Tulsa 1025 ``` The goal here is to gain some understanding about the difference in rentals between different cities, and between apartments of different types. Is an additive model appropriate for these data? Well, you might not know, so we will be na"{}ve and start by trying an additive fit. 20\.2 An additive fit --------------------- We first use R to fit an additive model using median polish. The fitted model (common, row and column effects) and the residuals are shown in the below table. ``` (additive.fit <- medpolish(prices)) ``` ``` ## 1: 2011 ## 2: 1938 ## Final: 1938 ``` ``` ## ## Median Polish Results (Dataset: "prices") ## ## Overall: 754 ## ## Row Effects: ## Albuquerque Atlanta Boston Columbus ## -162 71 225 -149 ## Honolulu Miami Philadelphia San_Francisco ## 85 -32 0 705 ## Tulsa ## -129 ## ## Column Effects: ## Studio One.Bedroom Two.Bedroom Three.Bedroom ## -244 -143 0 244 ## Four.Bedroom ## 427 ## ## Residuals: ## Studio One.Bedroom Two.Bedroom Three.Bedroom ## Albuquerque 49 24 0 -20 ## Atlanta 32 0 -30 -9 ## Boston -40 -54 0 0 ## Columbus 37 9 0 -81 ## Honolulu 0 17 0 51 ## Miami -17 0 0 25 ## Philadelphia -13 0 1 -53 ## San_Francisco -324 -162 0 298 ## Tulsa 11 -12 0 0 ## Four.Bedroom ## Albuquerque -56 ## Atlanta 30 ## Boston 31 ## Columbus -149 ## Honolulu -40 ## Miami 0 ## Philadelphia 4 ## San_Francisco 232 ## Tulsa -27 ``` For the following, it will be helpful to order the rows and columns of the table by the effects. 
Actually, the columns are already ordered by effects – a studio apartment tends to be cheaper than a 1 bedroom apartment which is cheaper than a 2 bedroom apartment and so on. But the rows were not originally ordered by effects and so we’ve reordered the cities with the cheapest cities on top. ``` prices <- prices[order(additive.fit$row), ] additive.fit <- medpolish(prices) ``` ``` ## 1: 2011 ## 2: 1938 ## Final: 1938 ``` ``` additive.fit$residual ``` ``` ## Studio One.Bedroom Two.Bedroom Three.Bedroom ## Albuquerque 49 24 0 -20 ## Columbus 37 9 0 -81 ## Tulsa 11 -12 0 0 ## Miami -17 0 0 25 ## Philadelphia -13 0 1 -53 ## Atlanta 32 0 -30 -9 ## Honolulu 0 17 0 51 ## Boston -40 -54 0 0 ## San_Francisco -324 -162 0 298 ## Four.Bedroom ## Albuquerque -56 ## Columbus -149 ## Tulsa -27 ## Miami 0 ## Philadelphia 4 ## Atlanta 30 ## Honolulu -40 ## Boston 31 ## San_Francisco 232 ``` We’ve drawn some lines to divide the table. The rows above (below) the center horizontal line have negative (positive) effects. Similarly, the columns to the left (right) of the center vertical line have negative (positive) effects. Looking at the row and column effects, we see * San Francisco is the most expensive place to rent. Comparing effects, San Francisco is $705 \- $225 \= $480 more expensive than the next most expensive city Boston. Albuquerque and Columbus are the cheapest cities. * A 4\-bedroom apartment is, on the average, $183 more expensive than a 3\-bedroom apartment. A 1\-bedroom apartment is about $100 more expensive than a studio apartment. 20\.3 Looking at the residuals ------------------------------ Looking at the residuals, we see a lot of large values – in particular, we note one residual larger than 300 in absolute value, two residual in the 200’s, and two residuals in the 100’s. These large residuals are in the same order of magnitude as the effects, so this additive model doesn’t appear to fit very well. The residuals appear large – also they show a distinctive pattern. The residuals generally are positive in the upper left and lower right quadrants, and negative in the upper right and lower left quadrants. Can we improve the additive model to remove this residual pattern? 20\.4 Comparison values ----------------------- In the above figure, we also show the sign of the row effects and the column effects. Suppose that we multiply the row effect by the column effect for each cell of the table. Then we would observe the same pattern as we saw in the residuals above. This observation motivates adding a single new term to improve our additive fit. For each cell in the two\-way table, we define a comparison value (cv) as \\\[ cv \= \\frac{ROWEFF \\times COLEFF}{COMMON}. \\] To illustrate computing a comparison value, consider the upper left cell of the table that corresponds to a studio apartment in Albuquerque. The corresponding row effect is \-162, the corresponding column effect is \-244, and the common value is 754; so the comparison value for this cell is (\-162\)(\-244\)/754 \= 52\.424\. The table below shows the comparison values for all cells of the table. 
``` 1 2 3 4 Studio Bedroom Bedrooms Bedrooms Bedrooms REFF Albuquerque 52.424 30.724 0 -52.424 -91.743 -162 Columbus 48.218 28.259 0 -48.218 -84.381 -149 Tulsa 41.745 24.466 0 -41.745 -73.054 -129 Miami 10.355 6.069 0 -10.355 -18.122 -32 Philadelphia 0.000 0.000 0 0.000 0.000 0 Atlanta -22.976 -13.466 0 22.976 40.208 71 Honolulu -27.507 -16.121 0 27.507 48.137 85 Boston -72.812 -42.672 0 72.812 127.420 225 San Francisco -228.143 -133.707 0 228.143 399.251 705 CEFF -244 -143 0 244 427 754 ``` Note that these comparison values exhibit the same basic pattern that we saw in the residuals. 20\.5 Extending the additive model by one more term --------------------------------------------------- We add a single term to our additive model to account for the pattern that we see in the residuals. This new model, called an extended model, has the form \\\[ FIT \= COMMON \+ ROW \\, EFF \+ COL \\, EFF \+ k CV, \\] where \\(CV\\) is the comparison value and \\(k\\) is a constant chosen that the new model is a good fit to the data. 20\.6 Finding k --------------- How do we find the coefficient \\(k\\) of the comparison value in our extended model? We plot the residuals from the additive fit (vertical) against the comparison values (horizontal). As expected, we see a positive relationship between the residuals and the cv’s. The slope of a line fitted to this plot is our estimate at the coefficient \\(k\\). ``` cv <- with(additive.fit, outer(row, col, "*") / overall) df <- data.frame(Comparison_Value = as.vector(cv), Residual = as.vector(additive.fit$residuals)) ggplot(df, aes(Comparison_Value, Residual)) + geom_point() ``` ``` rline(Residual ~ Comparison_Value, df)$b ``` ``` ## [1] 0.5669006 ``` We check the suitability of our line fit by looking at the residuals and seeing if we have removed the trend from the scatterplot. (It appears from the graph that we have indeed removed the trend from the plot.) ``` ggplot(df, aes(Comparison_Value, Residual - 0.56 * Comparison_Value)) + geom_point() + geom_hline(yintercept = 0, color="red") ``` The slope of this line, .56, is our estimate at the coefficient k. So our improved fit to these data has the form \\\[ FIT \= COMMON \+ ROW \\, EFF \+ COL \\, EFF \+ 0\.56 CV, \\] 20\.7 Relationship with multiplicative fit ------------------------------------------ In our extended fit, we found that a suitable choice for the coefficient k of the comparison value was k \= .8\. What if we chose the nearby value k \= 1? Then the extended model is (using more concise notation) \\\[ COMMON \+ ROW \+ COL \+ \\frac{ROW \\times COL}{COMMON} \\] Using a little algebra, note that this fit can be rewritten as \\\[ COMMON \\left(1 \+ \\frac{ROW}{COMMON} \+ \\frac{COL}{COMMON} \+ \\frac{ROW \\times COL}{COMMON^2}\\right) \\] \\\[ \= COMMON \\left(1 \+ \\frac{ROW}{COMMON}\\right) \\left(1 \+ \\frac{COL}{COMMON}\\right), \\] which is a multiplicative fit with \\\[ ROWEFF \= \\left(1 \+ \\frac{ROW}{COMMON}\\right), \\, COLEFF \= \\left(1 \+ \\frac{COL}{COMMON}\\right). \\] We have already discussed this type of fit. Specifically, we found that one could find a multiplicative fit for a two\-way table by taking an additive fit of the log response. 20\.8 Extended fits and transformations --------------------------------------- Let’s summarize what we’ve learned in our example. 1. The additive fit was unsatisfactory – there was a very clear pattern in the residuals. 2. The special pattern in the residuals motivated the consideration of an extended fit. 
By plotting the residuals against the comparison values, we saw that the slope of the comparison values was .8 which is pretty close to 1\. 3. If we use \\(k\=1\\) in our extended fit, this is equivalent to a multiplicative model which we can fit by fitting an additive model to the log rents. So this analysis suggests that we should reexpress the response by taking a log which is a power transformation with \\(p \= 0\\). 4. Actually, what we found here can be generalized. Suppose that the slope of the (residual, comparison value) plot is \\(k\\). Then the recommendation is to perform an additive fit to transformed data where the data is reexpressed using a power transformation with power \\(p \= 1 \- k\\). 20\.1 Meet the data ------------------- The New York Times Almanac has a lot of interesting data about housing in the United States. When you are considering taking a new job, one factor in your decision\-making is the cost of living in the city where you are thinking of moving. The almanac gives the average price of apartments and houses for many cities in the country. The table shown below gives the market price of five different types of apartments for nine cities. These cities were chosen to get a good spread in cost of housing – we see that San Francisco is over twice as expensive than Tulsa. ``` library(LearnEDAfunctions) library(tidyverse) prices <- rent.prices[, -1] row.names(prices) <- rent.prices[, 1] prices ``` ``` ## Studio One.Bedroom Two.Bedroom Three.Bedroom ## Albuquerque 397 473 592 816 ## Atlanta 613 682 795 1060 ## Boston 695 782 979 1223 ## Columbus 398 471 605 768 ## Honolulu 595 713 839 1134 ## Miami 461 579 722 991 ## Philadelphia 497 611 755 945 ## San_Francisco 891 1154 1459 2001 ## Tulsa 392 470 625 869 ## Four.Bedroom ## Albuquerque 963 ## Atlanta 1282 ## Boston 1437 ## Columbus 883 ## Honolulu 1226 ## Miami 1149 ## Philadelphia 1185 ## San_Francisco 2118 ## Tulsa 1025 ``` The goal here is to gain some understanding about the difference in rentals between different cities, and between apartments of different types. Is an additive model appropriate for these data? Well, you might not know, so we will be na"{}ve and start by trying an additive fit. 20\.2 An additive fit --------------------- We first use R to fit an additive model using median polish. The fitted model (common, row and column effects) and the residuals are shown in the below table. ``` (additive.fit <- medpolish(prices)) ``` ``` ## 1: 2011 ## 2: 1938 ## Final: 1938 ``` ``` ## ## Median Polish Results (Dataset: "prices") ## ## Overall: 754 ## ## Row Effects: ## Albuquerque Atlanta Boston Columbus ## -162 71 225 -149 ## Honolulu Miami Philadelphia San_Francisco ## 85 -32 0 705 ## Tulsa ## -129 ## ## Column Effects: ## Studio One.Bedroom Two.Bedroom Three.Bedroom ## -244 -143 0 244 ## Four.Bedroom ## 427 ## ## Residuals: ## Studio One.Bedroom Two.Bedroom Three.Bedroom ## Albuquerque 49 24 0 -20 ## Atlanta 32 0 -30 -9 ## Boston -40 -54 0 0 ## Columbus 37 9 0 -81 ## Honolulu 0 17 0 51 ## Miami -17 0 0 25 ## Philadelphia -13 0 1 -53 ## San_Francisco -324 -162 0 298 ## Tulsa 11 -12 0 0 ## Four.Bedroom ## Albuquerque -56 ## Atlanta 30 ## Boston 31 ## Columbus -149 ## Honolulu -40 ## Miami 0 ## Philadelphia 4 ## San_Francisco 232 ## Tulsa -27 ``` For the following, it will be helpful to order the rows and columns of the table by the effects. 
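Points 3 and 4 can also be tried directly. The sketch below, which reuses the `prices` data frame from above, simply refits after reexpressing the rents; the log fit corresponds to the \\(k \= 1\\) case, and the power \\(p \= 1 \- k\\) uses the estimated slope.

```
# k = 1 route: a multiplicative fit is an additive fit to the log rents
log.fit <- medpolish(log(prices))
# general rule: reexpress with power p = 1 - k before median polishing
p <- 1 - 0.56
power.fit <- medpolish(prices^p)
round(power.fit$residuals, 2)
```

In either case the check is the same as before: look at the residuals of the reexpressed fit and see whether the pattern tied to the comparison values has gone away.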
Data Science
bayesball.github.io
https://bayesball.github.io/EDA/binned-data.html
21 Binned Data ============== 21\.1 Meet the data ------------------- Our data are the times (in seconds) in running Grandma’s Marathon (run in Duluth, Minnesota) for women of ages 19 to 34\. (<http://www.grandmamarathon.org>) These data are stored as the dataset `grandma.19.40` in the `LearnEDAfunctions` package. ``` library(LearnEDAfunctions) library(tidyverse) ``` Here are the top ten times: ``` grandma.19.40 %>% arrange(time) %>% slice(1:10) ``` ``` ## time ## 1 9233 ## 2 9400 ## 3 9473 ## 4 10051 ## 5 10325 ## 6 10565 ## 7 10723 ## 8 10779 ## 9 10840 ## 10 10988 ``` There were 1208 women who ran this particular race. Some questions come to mind: * How do we display this batch of data effectively? * How do we compare this display with a standardized shape, such as the normal? * What are “good” residuals to compute that compare the batch with the standardized shape? 21\.2 Constructing a histogram ------------------------------ When we have a lot of data, we first group the data into bins and then construct a histogram of the group counts. Scanning the data, we see that the times (in seconds) range between 9233 and 21595\. Let’s cut the range into bins of width 1000 as follows: ``` bins <- seq(8000, 23000, 1000) bin.mids <- (bins[-1] + bins[-length(bins)]) / 2 ``` I used R to construct a histogram of the race times using this choice of bins. ``` ggplot(grandma.19.40, aes(time)) + geom_histogram(breaks = bins, fill = "white", color = "red") ``` Whenever we construct a bar graph or histogram, it is important to adhere to the area principle. **AREA PRINCIPLE:** Each data value should be represented by the same amount of area in the graphical display. This is an important principle since the visual impact of a bar is proportional to its area. (You can probably think of misleading graphs that you have seen in newspapers or magazines where the area principle is not followed and the resulting graphical display is not an accurate representation of the data.) The histogram with equally spaced bins (like the ones we chose) does obey the area principle. What do we see in this histogram of the marathon race times? 1. The first thing that we should notice is the approximate bell\-shape of the times. Most of the runners have times in the 13000\-20000 second range, and it is pretty uncommon to have a time close to 10000 or larger than 20000 seconds. Since the most popular bell\-shape is the normal or Gaussian curve, we might wonder how well these data fit a normal curve. 2. There is one problem with interpreting the shape of histograms. The heights of the bars correspond to the counts in the bins. There is a basic fact about counts: **Large counts show more variability than small counts.** Another way of saying this is that there is more variation in the bar heights in bins with long bars than in bins with short bars. We see this in our histogram. As shown in the figure below, there appears to be more spread in the heights of the tallest bars than in the heights of the shortest bars. This will make it harder to compare our histogram with a normal curve. 21\.3 A rootogram ----------------- What we see in the heights of the histogram bars illustrates a general pattern of counts. Large counts tend to have more variability than small counts. This histogram illustrates a dependence between spread and level that we talked about when we were comparing batches. We can remove the dependence between spread and level by using a reexpression.
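This behavior of counts is easy to check with a small simulation (not from the text): for Poisson\-like bin counts, the spread grows with the level, while the spread of the root counts stays roughly constant.

```
# spread of raw counts versus root counts at several typical count levels
set.seed(123)
sapply(c(5, 20, 100, 250), function(lambda) {
  counts <- rpois(10000, lambda)
  c(level = lambda, sd_count = sd(counts), sd_root = sd(sqrt(counts)))
})
```

The standard deviation of the raw counts grows with the level, while the standard deviation of the root counts should hover near 0.5 at every level, which is the sense in which a root reexpression removes the dependence between spread and level.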
From experience, the most helpful reexpression is a square root – so it is better to work with the square root of the frequencies than with the frequencies themselves. In the table below, we show the bins, the frequencies (we use the symbol \\(d\\) to denote a frequency) and the root frequencies. ``` p <- ggplot(grandma.19.40, aes(time)) + geom_histogram(breaks = bins) out <- ggplot_build(p)$data[[1]] select(out, count, x, xmin, xmax) ``` ``` ## count x xmin xmax ## 1 0 8500 8000 9000 ## 2 3 9500 9000 10000 ## 3 7 10500 10000 11000 ## 4 15 11500 11000 12000 ## 5 79 12500 12000 13000 ## 6 139 13500 13000 14000 ## 7 225 14500 14000 15000 ## 8 191 15500 15000 16000 ## 9 201 16500 16000 17000 ## 10 141 17500 17000 18000 ## 11 89 18500 18000 19000 ## 12 68 19500 19000 20000 ## 13 31 20500 20000 21000 ## 14 19 21500 21000 22000 ## 15 0 22500 22000 23000 ``` A rootogram is simply a bar graph where the heights of the bars are the root frequencies. Here is a rootogram for our data: ``` ggplot(out, aes(x, sqrt(count))) + geom_col() ``` How is this an improvement over the histogram? By taking a root reexpression, the variability of the heights of the bars is approximately the same for small and large counts. By making the variability constant, it will be easier to make comparisons between the observed counts and fitted counts from a Gaussian curve. But there is a downside to a rootogram. By taking the root, the areas in the bars are no longer proportional to the number of data values. So we are violating the area principle. But we think that the issue of ease of comparison of the observed and fitted counts is more important here than the accuracy of the display. 21\.4 Fitting a Gaussian comparison curve ----------------------------------------- Next, we’d like to fit a Gaussian (normal) curve to these data. You may have done this type of fitting before – specifically, in the context of learning about the chi\-square goodness of fit test procedure. We first want to find a Gaussian curve that matches the data. We could find the Gaussian curve that has the same mean and standard deviation as the observed data. But the mean and standard deviation of the data are nonresistant measures that can be distorted by extreme values in the sample. So we use resistant measures (median, fourths, etc.) to find the matching Gaussian parameters. 1. We find the Gaussian mean by taking the average of the lower and upper fourths: \\\[ m \= \\frac{F\_U \+ F\_L}{2}. \\] 2. We use the fourth\-spread of the sample to estimate the Gaussian standard deviation. Remember that the middle 50% of a normal curve has width 1\.349 \\(s\\), where \\(s\\) is the standard deviation. Using this fact, the Gaussian standard deviation \\(s\\) is given by \\\[ s \= \\frac{F\_U \- F\_L}{1\.349}. \\] Let’s illustrate finding the matching Gaussian parameters for our marathon times data. Here the fourths are 14296 and 17321\. So the matching mean is \\\[ m \= \\frac{14296 \+ 17321} {2} \= 15808 \\] and the matching standard deviation is \\\[ s \= \\frac{17321 \- 14296} {1\.349} \= 2242 \\] So our matching Gaussian curve is N(15808, 2242\). From our Gaussian curve, we can find expected counts for each bin. * First, we find the probability that a running time falls in each bin using the normal curve.
To illustrate, the probability that a time falls in the interval (9000, 10000\) is \\\[ PROB \= \\Phi\\left(\\frac{10000\-m}{s}\\right) \- \\Phi\\left(\\frac{9000\-m}{s}\\right), \\] where \\(m\\) and \\(s\\) are the normal mean and standard deviation found above and \\(\\Phi(z)\\) is the area under the standard normal curve for values smaller than \\(z\\). * We then find the expected number in a bin (we call this expected count \\(e\\)) by multiplying the total sample size (here 1208\) by the probability. The table below gives the observed count (\\(d\\)) and expected count (\\(e\\)) for all the intervals. ``` s <- fit.gaussian(grandma.19.40$time, bins, 15808, 2242) options(digits=3) (df <- data.frame(Mid=bin.mids, d=s$counts, sqrt.d=sqrt(s$counts), Prob=s$probs, e=s$expected, sqrt.e=sqrt(s$expected), Residual=s$residual)) ``` ``` ## Mid d sqrt.d Prob e sqrt.e Residual ## 1 8500 0 0.00 0.000948 1.15 1.07 -1.0702 ## 2 9500 3 1.73 0.003595 4.34 2.08 -0.3518 ## 3 10500 7 2.65 0.011205 13.54 3.68 -1.0333 ## 4 11500 15 3.87 0.028712 34.68 5.89 -2.0164 ## 5 12500 79 8.89 0.060494 73.08 8.55 0.3397 ## 6 13500 139 11.79 0.104797 126.59 11.25 0.5384 ## 7 14500 225 15.00 0.149277 180.33 13.43 1.5714 ## 8 15500 191 13.82 0.174846 211.21 14.53 -0.7129 ## 9 16500 201 14.18 0.168399 203.43 14.26 -0.0853 ## 10 17500 141 11.87 0.133366 161.11 12.69 -0.8184 ## 11 18500 89 9.43 0.086849 104.91 10.24 -0.8088 ## 12 19500 68 8.25 0.046504 56.18 7.50 0.7511 ## 13 20500 31 5.57 0.020474 24.73 4.97 0.5946 ## 14 21500 19 4.36 0.007411 8.95 2.99 1.3669 ## 15 22500 0 0.00 0.002205 2.66 1.63 -1.6322 ``` This table also gives the root expected counts for all bins. The figure below is a rootogram with a smooth curve on top that corresponds to the root expected counts. ``` ggplot(out, aes(x, sqrt(count))) + geom_col() + geom_line(data = df, aes(bin.mids, sqrt.e), color="red") ``` 21\.5 Residuals --------------- We want to see how well this Gaussian curve fits our histogram. To do this, we want to look at residuals which compare the counts with the expected counts using the normal model. What is a good definition of residual in this case? A simple rootogram residual is based on the difference between the root count and the root of the expected count: \\\[ r \= \\sqrt{d} \- \\sqrt{e}. \\] These residuals are displayed in the above table. Residuals that are unusually large (either positive or negative) correspond to bins that have counts that deviate from what would be expected from a Gaussian distribution. 21\.6 Hanging rootogram ----------------------- There is a clever alternative way of displaying the rootogram that gives attention to the residuals. 1. First, we graph \\(\\sqrt{e}\\), the square root of the fitted counts. This is shown as the red bell\-shaped curve on the figure below. 2. Next, we graph \\(\\sqrt{e}\-\\sqrt{d}\\) by subtracting the rootogram bars (of height \\(\\sqrt{d}\\)) from the bell\-shaped curve. This is called a **hanging rootogram**, since we are in effect hanging the rootogram on the Gaussian fit. ``` library(vcd) rootogram(s$counts, s$expected) ``` The focus of this graph is different from that of the rootogram. In the rootogram we were looking at the heights of the bars. In a hanging rootogram, we notice how the bars fall above or below the horizontal line at 0\. Bars that fall below (above) the line correspond to positive (negative) residuals. You might be more comfortable with a residual plot like the one below, where the hanging bars have been removed and the bars are plotted with heights given by the deviations between \\(\\sqrt{d}\\) and \\(\\sqrt{e}\\).
``` rootogram(s$counts, s$expected, type="deviation") ``` 21\.7 Interpreting the residuals -------------------------------- We see some lack of fit of the Gaussian curve: * the number of small times seems a bit low * we see too many times around 14000 seconds and larger than 18000 seconds How can we interpret this lack of fit? In a race such as a marathon, it makes sense that there are relatively few (even fewer than predicted by a normal curve) runners who are very fast. Also, there are a relatively large number of slow runners – these are the runners who are most interested in finishing the race, and the time they run isn’t that important. This is interesting. By looking beyond the general bell\-shape of the data, we get some extra insight about the times of a marathon race. 21\.8 Other definitions of residuals (Double Root Residuals) ------------------------------------------------------------ Although we focused on the use of R, the package MINITAB also constructs a suspended rootogram. But the output looks a bit different and needs some extra explanation. I entered the raw data (from “grandma.txt”) into MINITAB. I created a new column called “bins” that contained the cutpoints for the bins that I want to use – these are the numbers 9000, 10000, 11000, etc. I then executed the rootogram command – you indicate the column that contains the race times and the column that has the bin cutpoints. In the resulting output, the Bin and Count columns are self\-explanatory. The RawRes column contains the raw residuals, which are simply \\(d \- e\\) (no roots). The DRR column contains the so\-called “double\-root residuals”. This is a slight variation of the root residuals that we used above. It is defined by \\\[ DRR \= \\sqrt{2 \+ 4d} \- \\sqrt{1 \+ 4e} \\] A DRR is approximately equal to twice the residual we defined. (Can you guess why this is called a double\-root residual?) \\\[ DRR \\approx 2( \\sqrt{d} \- \\sqrt{e}) \\] The extra 2 and 1 inside the radicals help out when one has small bin counts – the DRR makes sense even when there are no observations in the bin.
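Although the MINITAB output itself is not reproduced here, the double\-root residuals are easy to compute in R from the objects already created. The sketch below uses the counts and expected counts returned by `fit.gaussian()` above; it is one way to tabulate the DRRs, not part of the original analysis.

```
# double-root residuals from the observed and Gaussian-expected bin counts
d <- s$counts
e <- s$expected
DRR <- sqrt(2 + 4 * d) - sqrt(1 + 4 * e)
# compare with twice the simple root residuals defined earlier
data.frame(Mid = bin.mids, d = d, e = round(e, 2),
           DRR = round(DRR, 2),
           two.root = round(2 * (sqrt(d) - sqrt(e)), 2))
```

The two columns should agree closely except in the nearly empty bins at the ends, where the extra 2 and 1 keep the DRR well behaved.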
Data Science
bayesball.github.io
https://bayesball.github.io/EDA/binned-data-ii.html
22 Binned Data II ================= 22\.1 Introduction ------------------ In the last lecture, we illustrated fitting a Gaussian (normal) curve to a histogram. But the Gaussian curve is just one of many possible symmetric curves that we can fit to binned data. Here we illustrate fitting a more general symmetric curve to grouped data. This is a nice closing topic for this class, since it will illustrate * reexpression using a power transformation * smoothing using a resistant smooth (3RSSH) * straightening a plot ``` library(LearnEDAfunctions) library(tidyverse) ``` 22\.2 Meet a simulated, but famous, dataset ------------------------------------------- I have to apologize at this point – we are going to look at a collection of simulated (fake) data. But it’s a famous type of simulated data. I believe a famous brewer in the 19th century looked at a similar type of data. \[ASIDE: Can you guess at the name of this brewer? A particular distribution is named after him. I’ll give you the name of this brewer later in this lecture.] Here’s how I generated this set of data. First, I took a random sample of eight normally distributed values. Then, I computed the mean \\(\\bar X\\) and the standard deviation \\(S\\) from this sample and computed the ratio \\\[ R \= \\frac{\\bar X}{S}. \\] I repeated this process (take a sample of 8 normals and compute \\(R\\)) 1000 times, resulting in 1000 values of R. ``` my.sim <- function() { x <- rnorm(8) mean(x) / sd(x) } df <- data.frame(R = replicate(1000, my.sim())) ``` I group these data using 14 equally spaced bins between the low and high values; the table below gives the bin counts ``` bins <- seq(min(df$R), max(df$R), length.out = 15) p <- ggplot(df, aes(R)) + geom_histogram(breaks = bins) out <- ggplot_build(p)$data[[1]] select(out, count, x, xmin, xmax) ``` ``` ## count x xmin xmax ## 1 1 -1.690 -1.8213 -1.5595 ## 2 4 -1.429 -1.5595 -1.2977 ## 3 4 -1.167 -1.2977 -1.0359 ## 4 22 -0.905 -1.0359 -0.7741 ## 5 54 -0.643 -0.7741 -0.5123 ## 6 146 -0.381 -0.5123 -0.2505 ## 7 260 -0.120 -0.2505 0.0113 ## 8 262 0.142 0.0113 0.2731 ## 9 155 0.404 0.2731 0.5349 ## 10 58 0.666 0.5349 0.7967 ## 11 22 0.928 0.7967 1.0585 ## 12 9 1.189 1.0585 1.3203 ## 13 2 1.451 1.3203 1.5821 ## 14 1 1.713 1.5821 1.8439 ``` 22\.3 Smoothing the root counts ------------------------------- Remember how we handled bin counts in the last lecture? We observed that the variation of large counts tends to be larger than the variation of small counts. To remove the dependence of spread and level of these counts, we reexpressed by a square root (the resulting graph was called a rootogram). So the first thing we do is to take the roots of the counts, shown below. ``` out <- mutate(out, ROOT = sqrt(count)) select(out, count, x, xmin, xmax, ROOT) ``` ``` ## count x xmin xmax ROOT ## 1 1 -1.690 -1.8213 -1.5595 1.00 ## 2 4 -1.429 -1.5595 -1.2977 2.00 ## 3 4 -1.167 -1.2977 -1.0359 2.00 ## 4 22 -0.905 -1.0359 -0.7741 4.69 ## 5 54 -0.643 -0.7741 -0.5123 7.35 ## 6 146 -0.381 -0.5123 -0.2505 12.08 ## 7 260 -0.120 -0.2505 0.0113 16.12 ## 8 262 0.142 0.0113 0.2731 16.19 ## 9 155 0.404 0.2731 0.5349 12.45 ## 10 58 0.666 0.5349 0.7967 7.62 ## 11 22 0.928 0.7967 1.0585 4.69 ## 12 9 1.189 1.0585 1.3203 3.00 ## 13 2 1.451 1.3203 1.5821 1.41 ## 14 1 1.713 1.5821 1.8439 1.00 ``` We plot the root counts using a line graph. ``` ggplot(out, aes(x, ROOT)) + geom_line() + xlab("Bin Mid Points") ``` Generally, we’ll see some unevenness in this plot of the root counts. 
Here the unevenness is most obvious near the peak of the graph. We smooth out the curve by applying our resistant smooth (a 3RSSH, twice) to the sequence of root counts. We use R to apply a 3RSSH smooth. The table below shows the values of the smoothed roots and the figure below the table plots the smooth in red. ``` out <- mutate(out, Smooth.Root = as.vector(smooth(ROOT, twiceit = TRUE))) select(out, count, x, ROOT, Smooth.Root) ``` ``` ## count x ROOT Smooth.Root ## 1 1 -1.690 1.00 2.00 ## 2 4 -1.429 2.00 2.00 ## 3 4 -1.167 2.00 2.00 ## 4 22 -0.905 4.69 4.69 ## 5 54 -0.643 7.35 7.35 ## 6 146 -0.381 12.08 12.08 ## 7 260 -0.120 16.12 16.12 ## 8 262 0.142 16.19 16.12 ## 9 155 0.404 12.45 12.45 ## 10 58 0.666 7.62 7.62 ## 11 22 0.928 4.69 4.69 ## 12 9 1.189 3.00 3.00 ## 13 2 1.451 1.41 1.41 ## 14 1 1.713 1.00 1.00 ``` ``` ggplot(out, aes(x, ROOT)) + geom_line() + geom_line(aes(x, Smooth.Root), color="red") + xlab("Bin Mid Points") ``` 22\.4 Fitting a symmetric curve ------------------------------- Now that we have a smoothed version of the root counts, we want to fit a symmetric curve. A general form of this symmetric curve is \\\[ ({\\rm some \\, reexpression \\, of } \\, \\sqrt{count}) \= a \+ b (bin \- peak) ^ 2, \\] where \- \\(bin\\) is a value of a bin midpoint \- \\(peak\\) is the value where the curve hits its peak We fit this symmetric curve in three steps. 1. First, we find where the histogram appears to peak. Here I have strong reason to believe that the histogram peaks at 0, so \\\[ peak \= 0\. \\] 2. Next, we find an appropriate power transformation (that is, a choice of power \\(p\\)) so that \\\[ {\\sqrt{count}} ^ p \\] is approximately a linear function of \\\[ (bin \- peak)^2 \= shift ^ 2, \\] where \\(shift\\) is the difference between the bin midpoint and the peak. In the table below, we show the bin midpoints (the `x` column) and the values of \\(shift^2 \= (bin \- peak)^2\\) for all bins. ``` out <- mutate(out, Shift.Sq = round((x - 0) ^ 2, 2)) select(out, count, x, ROOT, Smooth.Root, Shift.Sq) ``` ``` ## count x ROOT Smooth.Root Shift.Sq ## 1 1 -1.690 1.00 2.00 2.86 ## 2 4 -1.429 2.00 2.00 2.04 ## 3 4 -1.167 2.00 2.00 1.36 ## 4 22 -0.905 4.69 4.69 0.82 ## 5 54 -0.643 7.35 7.35 0.41 ## 6 146 -0.381 12.08 12.08 0.15 ## 7 260 -0.120 16.12 16.12 0.01 ## 8 262 0.142 16.19 16.12 0.02 ## 9 155 0.404 12.45 12.45 0.16 ## 10 58 0.666 7.62 7.62 0.44 ## 11 22 0.928 4.69 4.69 0.86 ## 12 9 1.189 3.00 3.00 1.41 ## 13 2 1.451 1.41 1.41 2.11 ## 14 1 1.713 1.00 1.00 2.93 ``` 3. To find the appropriate reexpression, we plot the smoothed root counts against the squared shift for all bins. ``` ggplot(out, aes(Shift.Sq, Smooth.Root)) + geom_point() ``` We see strong curvature in the graph, so a reexpression is needed to straighten it. We wish to reexpress the \\(y\\) variable (that is, the smoothed root count) so that the curve is approximately linear. We could choose three summary points and find a choice of reexpression so that the half\-slope ratio is close to 1\. But this method seems to give unsatisfactory results for this example. As an alternative, we will plot different power transformations of the smoothed roots against the shift squared, and look for a power \\(p\\) that seems to straighten the graph. In the following figure, we graph the reexpressed smoothed roots against the shift squared using the reexpressions \\(p \= 0\.5\\) (roots), \\(p \= 0\\) (logs), \\(p \= \-0\.5\\) (reciprocal roots), and \\(p \= \-1\\) (reciprocals).
We have placed a lowess smooth on each graph to help in detecting curvature. ``` library(gridExtra) p1 <- ggplot(out, aes(Shift.Sq, power.t(Smooth.Root, 0.5))) + geom_point() + ylab("New Y") + ggtitle("Power = 0.5") p2 <- ggplot(out, aes(Shift.Sq, power.t(Smooth.Root, 0))) + geom_point() + ylab("New Y") + ggtitle("Power = 0") p3 <- ggplot(out, aes(Shift.Sq, power.t(Smooth.Root, - 0.5))) + geom_point() + ylab("New Y") + ggtitle("Power = -0.5") p4 <- ggplot(out, aes(Shift.Sq, power.t(Smooth.Root, - 1))) + geom_point() + ylab("New Y") + ggtitle("Power = -1") grid.arrange(p1, p2, p3, p4) ``` Looking at the four graphs, we see substantial curvature in the \\(p \= 0\.5\\) and \\(p \= 0\\) plots, and the straightening reexpression appears to fall between the values \\(p \= \-0\.5\\) and \\(p \= \-1\\). Since a reexpression has straightened the graph, we can now fit a line. Our linear fit has the form \\\[ (smoothed \\, root)^p \= a \+ b \\, shift^2\. \\] If we write this fit in terms of the count, we have \\\[ smooth \= (a \+ b \\, shift ^ 2\)^{(2/p)} \\] Suppose we use the reexpression \\(p \= \-0\.75\\). A line fit to the reexpressed data (by least squares in the code below) gives \\\[ (smoothed \\, root)^{\-0\.75} \= a \+ b \\, shift^2, \\] which is equivalent to \\\[ smooth \= (a \+ b \\, shift ^ 2\)^{(2/\-0\.75\)} \\] In the R code, we estimate \\(a\\) and \\(b\\), use the formula \\\[ smooth \= (a \+ b \\, shift ^ 2\)^{(2/\-0\.75\)} \\] to find the fitted smoothed count, and then take the square root to find the fitted smooth root count. ``` out <- filter(out, Smooth.Root > 0) fit <- lm(I(Smooth.Root ^ (- 0.75)) ~ Shift.Sq, data=out) b <- coef(fit) out <- mutate(out, Final.Smooth = (b[1] + b[2] * Shift.Sq) ^ (-2 / 0.75), Final.S.Root = sqrt(Final.Smooth)) select(out, count, x, ROOT, Final.Smooth, Final.S.Root) ``` ``` ## count x ROOT Final.Smooth Final.S.Root ## 1 1 -1.690 1.00 1.61 1.27 ## 2 4 -1.429 2.00 3.40 1.84 ## 3 4 -1.167 2.00 7.80 2.79 ## 4 22 -0.905 4.69 19.50 4.42 ## 5 54 -0.643 7.35 52.69 7.26 ## 6 146 -0.381 12.08 129.59 11.38 ## 7 260 -0.120 16.12 248.08 15.75 ## 8 262 0.142 16.19 235.51 15.35 ## 9 155 0.404 12.45 124.40 11.15 ## 10 58 0.666 7.62 48.31 6.95 ## 11 22 0.928 4.69 18.01 4.24 ## 12 9 1.189 3.00 7.27 2.70 ## 13 2 1.451 1.41 3.16 1.78 ## 14 1 1.713 1.00 1.52 1.23 ``` In the following figure, we graph the original root counts against the bin midpoints and overlay the fitted symmetric curve. ``` ggplot(out, aes(x, ROOT)) + geom_line() + geom_line(aes(x, Final.S.Root), color="red") + xlab("Bin Mid Points") ``` 22\.5 Is it a reasonable fit? ----------------------------- Does this fit provide a good description of the counts? To check the goodness of fit, we compute residuals. Recall how we defined residuals in the last lecture: we look at the difference between the observed root count (which we called \\(\\sqrt{d}\\)) and the root of the expected count \\(\\sqrt{e}\\) from the smooth curve: \\\[ r \= \\sqrt{d} \- \\sqrt{e} \\] We plot the residuals against the bin midpoints below. ``` out <- mutate(out, Residual = ROOT - Final.S.Root) ggplot(out, aes(x, Residual)) + geom_point() + geom_hline(yintercept = 0) ``` If we don’t see any pattern in the residuals, then the fit appears reasonable. The curve we fit is actually a famous probability density. Does the curve \\\[ smooth \= (a \+ b \\, shift ^ 2\)^{(2/\-0\.75\)} \\] look familiar? It should if you are familiar with the popular sampling distributions in statistical inference.
If we take a random sample of size \\(n\\) from a normal population with mean \\(\\mu\\), then the distribution of the statistic \\\[ T \= \\frac{\\sqrt{n}(\\bar X \- \\mu)}{S} \\] has a \\(t\\) distribution with \\(n \- 1\\) degrees of freedom. The general form of the density function of a \\(t\\) curve with mean \\(\\mu\\), scale parameter \\(\\sigma\\), and degrees of freedom \\(\\nu\\) is \\\[ f(y) \\propto \\left(1 \+ \\frac{(y\-\\mu)^2}{\\sigma^2 \\nu}\\right)^{\-(\\nu \+1\)/2} \\] If we compare this density with our smooth curve, we see that the two have the same form. By matching up the powers \\\[ \- \\frac{\\nu \+ 1}{2} \= \-\\frac{2}{0\.75} \\] and solving for \\(\\nu\\), we obtain \\(\\nu \= 4\.3\\). So the fitted curve is a \\(t\\) curve with 4\.3 degrees of freedom. This is not quite right – the true distribution of \\(\\bar X / S\\) is a (rescaled) \\(t\\) with 7 degrees of freedom (remember we took samples of size 8\) – but we’re close to the true distribution. 22\.6 The famous brewer ----------------------- Who was the famous brewer/statistician? William Sealy Gosset worked for the Guinness brewery in Dublin around the turn of the 20th century. He did mathematics on the side and published a result about the \\(t\\) distribution under the pseudonym Student. (The brewery did not allow its employees to publish work under their own names.) So that’s why we call it the Student\-t distribution.
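As a quick numerical check of the exponent matching in the previous section, here is a minimal R sketch; it simply restates the arithmetic, using the reexpression power \\(p \= \-0\.75\\) chosen above.

```r
# the fitted curve has exponent 2 / p; the t density has exponent -(nu + 1) / 2
# setting 2 / p = -(nu + 1) / 2 and solving for nu:
p <- -0.75
nu <- -4 / p - 1
nu
## [1] 4.333333
```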
Data Science
bayesball.github.io
https://bayesball.github.io/EDA/fraction-data.html
23 Fraction Data ================ 23\.1 The Quality of Students at BGSU ------------------------------------- A general concern among faculty at BGSU is the quality of the incoming undergraduate freshmen class. Is the university admitting more students of questionable ability? If so, this has a great effect on the performance of the students that take precalculus mathematics in the department. Weak students generally don’t do well in their precalculus or introductory statistics classes. Back in 1991, the Office of the President was preparing some data to show the university community how much the university had advanced in the nine years between the academic years 1981\-1982 and 1990\-1991\. One statistic that they considered was the “Percentage of Freshmen in the Bottom 50% of High School Graduating Class” At first glance, one might wonder why they consider the percentage of students in the bottom 50% of their class – wouldn’t it be clearer to consider the percentage of students in the top 50% of their class? Anyway, the data sheet shows the following: | – | 1981\-1982 | 1990\-1991 | % Change | | --- | --- | --- | --- | | Percentage of Freshmen | | | | | in the Bottom 50% of High | 9\.4 % | 6\.9 % | (26\.6 %) | | School Graduating Class | | | | This is supposed to impress you – the percentage of students in the bottom half of their high school class decreased by 26\.6 % in the nine\-year period. But what if we considered instead the percentage of students in the top half of the class – we get the following table: | – | 1981\-1982 | 1990\-1991 | % Change | | --- | --- | --- | --- | | Percentage of Freshmen | | | | | in the Top 50% of High | 90\.6 % | 93\.1 % | 2\.8 % | | School Graduating Class | | | | We see that the percentage of freshmen in the top half increased by 2\.8 %. This doesn’t sound that impressive, so I think I know now why the President’s Office decided to consider the percentage in the bottom half. But wait – it shouldn’t matter if we consider the percentage of freshmen in the top half or the percentage in the bottom half. Why should our measure of change depend on this arbitrary definition? In this lecture, we’ll talk about accurate and effective ways of comparing proportions. This type of data suffers from the “change in variability” problem that we saw earlier in our comparison of batches and deserves an appropriate reexpression. 23\.2 Meet the Data ------------------- The Office of Undergraduate Admissions collects data on the high school ranks of regularly admitted undergraduate students. Using the admissions data, the Office of Institutional Research has the following table that shows the number and % of students in different high school ranks for the past five years. | – | 1996 | | 1997 | | 1998 | | 1999 | | 2000 | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | HS Rank | N | % | N | % | N | % | N | % | N | % | | 90% \- 100% | 321 | 13\.3 | 345 | 14\.2 | 390 | 13\.0 | 373 | 12\.3 | 357 | 12\.1 | | 80%\-89% | 434 | 18 | 356 | 14\.7 | 486 | 16\.2 | 504 | 16\.6 | 430 | 14\.6 | | 70%\-79% | 396 | 16\.4 | 387 | 15\.9 | 482 | 16\.0 | 478 | 15\.7 | 459 | 15\.6 | | 60%\-69% | 395 | 16\.4 | 393 | 16\.2 | 485 | 16\.1 | 518 | 17\.0 | 470 | 16\.0 | | 50%\-59% | 377 | 15\.6 | 366 | 15\.1 | 458 | 15\.2 | 439 | 14\.4 | 490 | 16\.7 | | Below 50% | 493 | 20\.4 | 581 | 23\.9 | 704 | 23\.4 | 727 | 23\.9 | 736 | 25\.0 | | Total | 2416 | 100 | 2428 | 100 | 3005 | 100 | 3039 | 100 | 2942 | 100 | We will focus here on the table of percentages. 
| HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 | | --- | --- | --- | --- | --- | --- | | 90% \- 100% | 13\.3 | 14\.2 | 13\.0 | 12\.3 | 12\.1 | | 80%\-89% | 18 | 14\.7 | 16\.2 | 16\.6 | 14\.6 | | 70%\-79% | 16\.4 | 15\.9 | 16\.0 | 15\.7 | 15\.6 | | 60%\-69% | 16\.4 | 16\.2 | 16\.1 | 17\.0 | 16\.0 | | 50%\-59% | 15\.6 | 15\.1 | 15\.2 | 14\.4 | 16\.7 | | Below 50% | 20\.4 | 23\.9 | 23\.4 | 23\.9 | 25\.0 | | Total | 100 | 100 | 100 | 100 | 100 | The objective here is to get an overall sense of how the percentages are changing over the 5\-year period. Certain low and high percentages might attract our eye (the 25\.0% of students in the bottom half of the HS class in 2000 certainly seems high), but that’s just an isolated value and may not reflect the general pattern of change across years. 23\.3 Counted Fractions ----------------------- To start off, how do we obtain data that is a fraction? This fraction is found by taking a COUNT and dividing by a total number. So \\\[ FRACTION \= \\frac{COUNT}{TOTAL}. \\] We call these data counted fractions, since the numerator of the fraction is some type of count. In many cases, we create these counts by cutting continuous data. Our data are of this type. A student’s high school rank is a percentage between 0 and 100, and we are cutting this in different places (90, 80, and so on) to get the fractions in the above table. 23\.4 Started Counts and Split Counts ------------------------------------- Suppose that we sampled 20 students and all of them were in the bottom 90% of their high school class. So the fraction of students in the top 10% of their class is \\\[ \\frac{0}{20} \= 0\. \\] This answer is a bit unsatisfactory, since we know that if we kept on sampling, we would find some students in the top 10% of their class. So we want to adjust the numerator of our fraction for the possibility that some would fall in this class. We adjust the counts of “in the top 10%” and “not in the top 10%” (that is, the two classifications) by adding 1/6 to each type. Then the fraction of students in the top 10% would be \\\[ \\frac{0 \+ 1/6}{20 \+ 1/3}. \\] We call (0 \+ 1/6\) a started count, and so the corresponding fraction is a started fraction. Another issue is that, when we cut continuous data to get our counts, it is possible that some observations will fall on the fence. Since it makes sense to treat the two classes (those in the class and those not in the class) symmetrically, we add half of the count on the fence to one class and the remaining half to the other class. Tukey defines the new count \\\[ {\\rm ss\-count \\, below} \= {\\rm count \\, below} \+ \\frac{1}{2} ({\\rm count \\, equal}) \+ \\frac{1}{6} \\] That is, we add three quantities: \- the count below the boundary \- one half of the count that falls on the boundary (we call this the split count) \- 1/6 (the started count) and the corresponding fraction is given by \\\[ {\\rm ss\-fraction \\, below} \= \\frac{{\\rm ss\-count \\,below}} {{\\rm total \\, count} \+1/3}. \\] We won’t say any more about starting and split counts here since they won’t be needed for our example. 23\.5 Three Matched Scales for Counted Fractions (folded fractions, roots, and logs) ------------------------------------------------------------------------------------ The main issue that we want to address is how to properly express fractions. Fraction data are hard to analyze for several reasons: * People aren’t sure if they should work with the “fraction that is” or the “fraction that isn’t”.
In the Office of the President example above, the person who made the table thought that there was some advantage to working with the fraction of students in the lower half of their class instead of the fraction of students in the top half. * Small fractions near 0 and large fractions near 1 have small variation, and fractions close to .5 have large variation. We all know that the standard error of a sample proportion \\(f\\) based on a sample of size \\(n\\) is \\\[ SE(f) \= \\sqrt{\\frac{f(1\-f)}{n}}. \\] This standard error is 0 when the fraction \\(f\\) is 0 or 1, and is maximized when \\(f\\) is equal to .5\. If we reexpress a fraction, what properties would we like of the reexpressed fraction? * We would like to treat “those who are” and “those who aren’t” in a symmetric fashion. * Since \\(f \= .5\\) is a central fraction value, it would be desirable if our reexpressed fraction is equal to 0 when the fraction is equal to .5\. * If we swap a fraction \\(f\\) with the fraction \\(1\-f\\), the reexpressed fraction should change in sign but not in size. The simplest reexpression that has these properties is the folded fraction, which we abbreviate by ff: \\\[ ff \= f \- (1\-f). \\] The table below gives some folded fractions for values of the fraction f. | f | .05 | .1 | .3 | .5 | .7 | .9 | .95 | | --- | --- | --- | --- | --- | --- | --- | --- | | ff | \-.9 | \-.8 | \-.4 | 0 | .4 | .8 | .9 | Note that \\(ff\\) satisfies our basic properties. The folded fraction for \\(f \= .1\\) is \\(ff \= \-.8\\); if we replace \\(f \= .1\\) by \\(f \= .9\\), the value of \\(ff\\) is changed from \-.8 to \+.8, but the size of ff doesn’t change. A folded fraction \\(ff \= 0\\) corresponds to a fraction \\(f \= .5\\). We can obtain alternative folded reexpressions by taking the fractions \\(f, 1\-f\\) to the \\(p\\)th power and then folding: \\\[ f^p \- (1\-f)^p. \\] If we use a \\(p \= \\frac{1}{2}\\) power, we get a folded root, or froot \\\[ f^{1/2} \- (1\-f)^{1/2}. \\] If we use \\(p \= 0\\) (one more half\-step), we get a folded log, or flog \\\[ log(f) \- log(1\-f). \\] All three reexpressions (\\(ff\\), \\(froot\\), \\(flog\\)) are equal to 0 when \\(f\\) \= .5\. Also, if you replace \\(f\\) by \\(1\-f\\), then the measure will change in sign. To compare these reexpressions, we slightly modify the definitions of \\(froot\\) and \\(flog\\) so that they are matched with the folded fractions: \\\[ froot \= (2 f)^{1/2} \- (2 (1 \- f))^{1/2} \\] \\\[ flog \= 1\.15 log(f) \- 1\.15 log(1\-f) \\] (here log denotes the base\-10 logarithm). The table below displays values of the reexpressions for some values of the fraction \\(f\\). The figure below graphs values of these three reexpressions. | f | .05 | .1 | .3 | .5 | .7 | .9 | .95 | | --- | --- | --- | --- | --- | --- | --- | --- | | ff | \-.9 | \-.8 | \-.4 | 0 | .4 | .8 | .9 | | froot | \-1\.06 | \-.89 | \-.41 | 0 | .41 | .89 | 1\.06 | | flog | \-1\.47 | \-1\.10 | \-.42 | 0 | .42 | 1\.10 | 1\.47 | The folded fraction \\(ff\\) is just a linear transformation of \\(f\\) that changes the support from (0, 1\) to (\-1, 1\). Due to the matching, note that \\(ff\\), \\(froot\\) and \\(flog\\) agree closely for values of \\(f\\) between .3 and .7\. The differences are primarily in how the reexpressions handle extreme values of \\(f\\) near 0 and 1\. By taking the root and the log, the \\(froots\\) and \\(flogs\\) stretch the scale for these extreme values. By stretching values of \\(f\\) near 0 and 1, these reexpressions adjust for the small spread in fraction values that are close to 0 or 1\.
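Here is a small R sketch that reproduces the matched values in the table above; the one assumption is that the matched flog uses base\-10 logs, which is what gives the tabled values.

```r
f <- c(.05, .1, .3, .5, .7, .9, .95)
ff    <- f - (1 - f)                        # folded fraction
froot <- sqrt(2 * f) - sqrt(2 * (1 - f))    # matched folded root
flog  <- 1.15 * (log10(f) - log10(1 - f))   # matched folded log
round(rbind(f, ff, froot, flog), 2)
```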
Here is another graph – we are plotting the three reexpressions (\(ff, froot, flog\)) against the fraction \(f\). This again illustrates the stretching effect of the \(froot\) and \(flog\) reexpressions.

23.6 An Example Illustrating the Benefits of Reexpressing Fractions
--------------------------------------------------------------------

The dataset `college.ratings` contains a number of different measurements of a group of national universities in the United States based on a 2001 survey. One of the interesting variables is the fraction `Top.10`, the proportion of students enrolled who were in the top 10 percent of their high school class. The variable `Tier` with four levels provides a general classification of the universities – the `Tier 1` schools are the top-rated schools, followed by the `Tier 2` schools, the `Tier 3` schools, and the `Tier 4` schools.

We are interested in how the `Top.10` variable varies between tiers, and in whether there is any advantage to reexpressing the fraction `Top.10` to a different scale. Using the `geom_boxplot()` function, we construct parallel boxplots of the Top 10 variable across tiers.

```
library(LearnEDAfunctions)
ggplot(college.ratings, aes(factor(Tier), Top.10)) +
  geom_boxplot() + coord_flip() +
  xlab("Tier")
```

```
## Warning: Removed 28 rows containing non-finite values
## (stat_boxplot).
```

It is difficult to compare these Top 10 rates due to the difference in variability across tiers. If we focus on the fourth-spreads (or interquartile ranges), the Tier 1 and Tier 2 rates have the largest variation, followed by Tier 3 and Tier 4. Using the `summarize` function in the `dplyr` package, we compare the interquartile range of `Top.10` for each tier.

```
college.ratings %>%
  group_by(Tier) %>%
  summarize(IQR = IQR(Top.10, na.rm = TRUE))
```

```
## # A tibble: 4 × 2
##    Tier    IQR
##   <int>  <dbl>
## 1     1 0.235
## 2     2 0.155
## 3     3 0.09
## 4     4 0.0925
```

If we focus on comparisons of tiers 2, 3, and 4, we see that the spread of the Tier 2 values (0.155) is about 1.7 times larger than the spread of the Tier 3 values (0.090).

Next, let's see the effect of transforming the Top 10 variable to the \(froot\) and \(flog\) scales. We write short functions computing these reexpressions, and then show boxplots using these two scales.

```
froot <- function(p) sqrt(p) - sqrt(1 - p)
flog <- function(p) log(p) - log(1 - p)
```

```
ggplot(college.ratings, aes(factor(Tier), froot(Top.10))) +
  geom_boxplot() + coord_flip() +
  ylab("Froot(Top 10)") + xlab("Tier")
```

```
ggplot(college.ratings, aes(factor(Tier), flog(Top.10))) +
  geom_boxplot() + coord_flip() +
  ylab("Flog(Top 10)") + xlab("Tier")
```

Have these reexpressions helped in equalizing spread? We compute the IQRs of the groups using the \(froot\) and \(flog\) scales.

```
college.ratings %>%
  group_by(Tier) %>%
  summarize(IQR_froot = IQR(froot(Top.10), na.rm = TRUE),
            IQR_flog = IQR(flog(Top.10), na.rm = TRUE))
```

```
## # A tibble: 4 × 3
##    Tier IQR_froot IQR_flog
##   <int>     <dbl>    <dbl>
## 1     1     0.393    1.48
## 2     2     0.231    0.717
## 3     3     0.148    0.540
## 4     4     0.173    0.763
```

If we focus again on comparisons between tiers 2, 3, and 4, it appears that the \(flog\) reexpression is best for equalizing spreads. If we look at the ratio of the largest IQR to the smallest IQR, we get a ratio of 1.56 for \(froot\)s and 1.41 for \(flog\)s. Since \(flog\)s have approximately equalized the spreads, we can compare the Top 10 \(flog\) fractions across tiers 2, 3, and 4 by computing medians.
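Before computing the medians, here is a quick sketch (mine, not from the original text) that checks the largest-to-smallest IQR ratios quoted above for tiers 2 through 4; it reuses the `froot` and `flog` functions defined earlier, and `iqr_234` is just a hypothetical name for the summarized data frame.

```
# Ratio of largest to smallest IQR across tiers 2-4 on each reexpressed scale
iqr_234 <- college.ratings %>%
  filter(Tier %in% 2:4) %>%
  group_by(Tier) %>%
  summarize(IQR_froot = IQR(froot(Top.10), na.rm = TRUE),
            IQR_flog  = IQR(flog(Top.10), na.rm = TRUE))
max(iqr_234$IQR_froot) / min(iqr_234$IQR_froot)  # about 1.56
max(iqr_234$IQR_flog)  / min(iqr_234$IQR_flog)   # about 1.41
```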
```
college.ratings %>%
  group_by(Tier) %>%
  summarize(M_flog = median(flog(Top.10), na.rm = TRUE))
```

```
## # A tibble: 4 × 2
##    Tier M_flog
##   <int>  <dbl>
## 1     1  1.39
## 2     2 -0.800
## 3     3 -1.32
## 4     4 -1.82
```

On the \(flog\) scale, the Tier 2 "top 10" fractions tend to be -0.80 - (-1.32) = 0.52 higher than the Tier 3 fractions. Similarly, on the \(flog\) scale, the Tier 3 "top 10" fractions tend to be -1.32 - (-1.82) = 0.50 higher than the Tier 4 fractions.

23.7 A Tukey Example Where Careful Reexpression Pays Off
---------------------------------------------------------

Tukey's EDA book has a great example that illustrates the benefit of reexpressing fractions. This table comes from a newspaper article – the Washington Post February 2, 1962 article titled "Protestants shifts support to Kennedy". As you might or might not know, John Kennedy was the first Catholic to have a serious chance of being elected president, and one suspected that Catholics would be more supportive of Kennedy than Protestants in the election between Kennedy and Nixon. There were polls taken on the dates 11/60 and 1/62. Kennedy's support increased by 21 percentage points – from 38% to 59% – among the Protestants, in contrast to the 11-point change in support – 78% to 89% – among the Catholics. On the surface, it would appear that Kennedy's support increased more among the Protestants.

| – | Protestants | Protestants | Catholics | Catholics |
| --- | --- | --- | --- | --- |
| Date | 11/60 | 1/62 | 11/60 | 1/62 |
| Kennedy | 38% | 59% | 78% | 89% |
| Nixon | 62% | 41% | 22% | 11% |

But this conclusion is deceptive, since there is smaller variation among the large fractions (in the 78%-89% range) than among the middle-range fractions between 38% and 59%. Suppose that we reexpress to flogs. For example, we reexpress \(f = .38\) to the matched flog \(1.15[\log(.38) - \log(.62)] = -0.24\), and so on for all the fractions in the table.

| – | Protestants | Protestants | Catholics | Catholics |
| --- | --- | --- | --- | --- |
| Date | 11/60 | 1/62 | 11/60 | 1/62 |
| Kennedy | -0.24 | 0.18 | 0.63 | 1.05 |

Now let's look at the change in support on the \(flog\) scale. The change in support (from 11/60 to 1/62) among the Protestants was \(.18 - (-.24) = .42\), and the change in support among the Catholics was also \(1.05 - .63 = 0.42\). So, by adjusting for the variability problem in the fractions, we see that the Protestant and Catholic changes are really similar. Our conclusion is that JFK's popularity improved by 0.42 on the \(flog\) scale for both Protestants and Catholics.

23.8 Back to the Data – An Analysis Based on Flogs
---------------------------------------------------

Let's return to our BGSU admissions example. We are interested in looking at any possible trend in the HS rank data across years. Here are the data, where the numbers represent the percentages of students in the different HS Rank classes for a given year.

| HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 |
| --- | --- | --- | --- | --- | --- |
| 90% - 100% | 13.3 | 14.2 | 13.0 | 12.3 | 12.1 |
| 80%-89% | 18.0 | 14.7 | 16.2 | 16.6 | 14.6 |
| 70%-79% | 16.4 | 15.9 | 16.0 | 15.7 | 15.6 |
| 60%-69% | 16.4 | 16.2 | 16.1 | 17.0 | 16.0 |
| 50%-59% | 15.6 | 15.1 | 15.2 | 14.4 | 16.7 |
| Below 50% | 20.4 | 23.9 | 23.4 | 23.9 | 25.0 |
| Total | 100 | 100 | 100 | 100 | 100 |

What we will do is cut the HS Rank values at different places and compute the corresponding flogs. For example, suppose that we cut HS Rank at 90%.
Then we have the data table

| HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 |
| --- | --- | --- | --- | --- | --- |
| 90% - 100% | 13.3 | 14.2 | 13.0 | 12.3 | 12.1 |
| Below 90% | 86.7 | 85.8 | 87.0 | 87.7 | 87.9 |
| Total | 100 | 100 | 100 | 100 | 100 |

where I have combined the percentages below the cut into a single "Below 90%" class. I convert the yearly fractions (.133, .142, .130, .123, .121) to flogs (I use the basic definition \(\log(f) - \log(1-f)\) here).

| – | 1996 | 1997 | 1998 | 1999 | 2000 |
| --- | --- | --- | --- | --- | --- |
| flog (90%-100% vs. below 90%) | -1.87 | -1.80 | -1.90 | -1.96 | -1.98 |

Next, we cut the HS Rank data at 80% as follows:

| HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 |
| --- | --- | --- | --- | --- | --- |
| 80% - 100% | 31.3 | 28.9 | 29.2 | 28.9 | 26.7 |
| Below 80% | 68.7 | 71.1 | 70.8 | 71.1 | 73.3 |
| Total | 100 | 100 | 100 | 100 | 100 |

Again we convert the fractions to flogs.

| – | 1996 | 1997 | 1998 | 1999 | 2000 |
| --- | --- | --- | --- | --- | --- |
| flog (80%-100% vs. below 80%) | -0.79 | -0.90 | -0.89 | -0.90 | -1.01 |

If we do this procedure for all 5 possible cuts, then we get the following table of flogs:

| CUT | 1996 | 1997 | 1998 | 1999 | 2000 |
| --- | --- | --- | --- | --- | --- |
| 90-100% vs. 0-89% | -1.87 | -1.80 | -1.90 | -1.96 | -1.98 |
| 80-100% vs. 0-79% | -0.79 | -0.90 | -0.89 | -0.90 | -1.01 |
| 70-100% vs. 0-69% | -0.09 | -0.21 | -0.19 | -0.22 | -0.31 |
| 60-100% vs. 0-59% | 0.58 | 0.45 | 0.46 | 0.47 | 0.34 |
| 50-100% vs. 0-49% | 1.37 | 1.16 | 1.18 | 1.15 | 1.10 |

This table essentially gives an analysis of the fractions where all possible cuts are considered. We are interested in how the flogs change across years, so we compute the differences between consecutive years. The first column shows the change in flog between 1996 and 1997, the second column the change in flog between 1997 and 1998, and so on.

| CUT | 1996-1997 | 1997-1998 | 1998-1999 | 1999-2000 |
| --- | --- | --- | --- | --- |
| 90-100% vs. 0-89% | 0.08 | -0.10 | -0.06 | -0.02 |
| 80-100% vs. 0-79% | -0.11 | 0.01 | -0.01 | -0.11 |
| 70-100% vs. 0-69% | -0.12 | 0.02 | -0.02 | -0.09 |
| 60-100% vs. 0-59% | -0.13 | 0.01 | 0.01 | -0.14 |
| 50-100% vs. 0-49% | -0.21 | 0.02 | -0.03 | -0.05 |
| MEDIAN | -0.12 | 0.01 | -0.01 | -0.09 |

By looking down each column, we get a general sense of how the HS rank percentages change in consecutive years. The MEDIAN row gives the median change in flog for each pair of consecutive years. It is pretty clear that the HS rank of the incoming BGSU freshmen noticeably decreased from 1996 to 1997 and from 1999 to 2000. In contrast, the HS ranks stayed roughly the same between 1997 and 1999. It would be interesting to see if there were any changes in the admission procedure that may have impacted the quality of the incoming BGSU freshmen.
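For readers who want to reproduce the cut-and-flog analysis, here is a short R sketch (mine, not from the original text). It rebuilds the table of flogs for the five cuts, the consecutive-year changes, and the MEDIAN row from the percentage table above, using the basic flog definition; the printed values agree with the tables above up to rounding.

```
# HS rank percentages by year (rows are the rank classes in the table above)
pct <- matrix(c(13.3, 14.2, 13.0, 12.3, 12.1,
                18.0, 14.7, 16.2, 16.6, 14.6,
                16.4, 15.9, 16.0, 15.7, 15.6,
                16.4, 16.2, 16.1, 17.0, 16.0,
                15.6, 15.1, 15.2, 14.4, 16.7,
                20.4, 23.9, 23.4, 23.9, 25.0),
              nrow = 6, byrow = TRUE,
              dimnames = list(c("90-100%", "80-89%", "70-79%",
                                "60-69%", "50-59%", "below 50%"),
                              1996:2000))

flog <- function(f) log(f) - log(1 - f)   # the basic flog definition

# Fraction of students at or above each of the five cuts (90, 80, ..., 50)
f_above <- apply(pct / 100, 2, cumsum)[1:5, ]
flogs <- flog(f_above)
round(flogs, 2)                           # the table of flogs for the cuts

# Changes in flog between consecutive years, and the MEDIAN row
changes <- flogs[, -1] - flogs[, -5]
colnames(changes) <- c("1996-1997", "1997-1998", "1998-1999", "1999-2000")
round(changes, 2)
round(apply(changes, 2, median), 2)
```

The last printed line is the MEDIAN row of the difference table above.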
23 Fraction Data ================ 23\.1 The Quality of Students at BGSU ------------------------------------- A general concern among faculty at BGSU is the quality of the incoming undergraduate freshmen class. Is the university admitting more students of questionable ability? If so, this has a great effect on the performance of the students that take precalculus mathematics in the department. Weak students generally don’t do well in their precalculus or introductory statistics classes. Back in 1991, the Office of the President was preparing some data to show the university community how much the university had advanced in the nine years between the academic years 1981\-1982 and 1990\-1991\. One statistic that they considered was the “Percentage of Freshmen in the Bottom 50% of High School Graduating Class” At first glance, one might wonder why they consider the percentage of students in the bottom 50% of their class – wouldn’t it be clearer to consider the percentage of students in the top 50% of their class? Anyway, the data sheet shows the following: | – | 1981\-1982 | 1990\-1991 | % Change | | --- | --- | --- | --- | | Percentage of Freshmen | | | | | in the Bottom 50% of High | 9\.4 % | 6\.9 % | (26\.6 %) | | School Graduating Class | | | | This is supposed to impress you – the percentage of students in the bottom half of their high school class decreased by 26\.6 % in the nine\-year period. But what if we considered instead the percentage of students in the top half of the class – we get the following table: | – | 1981\-1982 | 1990\-1991 | % Change | | --- | --- | --- | --- | | Percentage of Freshmen | | | | | in the Top 50% of High | 90\.6 % | 93\.1 % | 2\.8 % | | School Graduating Class | | | | We see that the percentage of freshmen in the top half increased by 2\.8 %. This doesn’t sound that impressive, so I think I know now why the President’s Office decided to consider the percentage in the bottom half. But wait – it shouldn’t matter if we consider the percentage of freshmen in the top half or the percentage in the bottom half. Why should our measure of change depend on this arbitrary definition? In this lecture, we’ll talk about accurate and effective ways of comparing proportions. This type of data suffers from the “change in variability” problem that we saw earlier in our comparison of batches and deserves an appropriate reexpression. 23\.2 Meet the Data ------------------- The Office of Undergraduate Admissions collects data on the high school ranks of regularly admitted undergraduate students. Using the admissions data, the Office of Institutional Research has the following table that shows the number and % of students in different high school ranks for the past five years. | – | 1996 | | 1997 | | 1998 | | 1999 | | 2000 | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | HS Rank | N | % | N | % | N | % | N | % | N | % | | 90% \- 100% | 321 | 13\.3 | 345 | 14\.2 | 390 | 13\.0 | 373 | 12\.3 | 357 | 12\.1 | | 80%\-89% | 434 | 18 | 356 | 14\.7 | 486 | 16\.2 | 504 | 16\.6 | 430 | 14\.6 | | 70%\-79% | 396 | 16\.4 | 387 | 15\.9 | 482 | 16\.0 | 478 | 15\.7 | 459 | 15\.6 | | 60%\-69% | 395 | 16\.4 | 393 | 16\.2 | 485 | 16\.1 | 518 | 17\.0 | 470 | 16\.0 | | 50%\-59% | 377 | 15\.6 | 366 | 15\.1 | 458 | 15\.2 | 439 | 14\.4 | 490 | 16\.7 | | Below 50% | 493 | 20\.4 | 581 | 23\.9 | 704 | 23\.4 | 727 | 23\.9 | 736 | 25\.0 | | Total | 2416 | 100 | 2428 | 100 | 3005 | 100 | 3039 | 100 | 2942 | 100 | We will focus here on the table of percentages. 
| HS Rank | 1996 | 1997 | 1998 | 1999 | | --- | --- | --- | --- | --- | | 90% \- 100% | 13\.3 | 14\.2 | 13\.0 | 12\.3 | | 80%\-89% | 18 | 14\.7 | 16\.2 | 16\.6 | | 70%\-79% | 16\.4 | 15\.9 | 16\.0 | 15\.7 | | 60%\-69% | 16\.4 | 16\.2 | 16\.1 | 17\.0 | | 50%\-59% | 15\.6 | 15\.1 | 15\.2 | 14\.4 | | Below 50% | 20\.4 | 23\.9 | 23\.4 | 23\.9 | | Total | 100 | 100 | 100 | 100 | The objective here is to get an overall sense how the percentages are changing over the 5\-year period. Certain low and high percentages might attract our eye (the 25\.0% of students in the bottom half of the HS class in 2000 certainly seems high), but that’s just an isolated value and may not reflect the general pattern of change across years. 23\.3 Counted Fractions ----------------------- To start off, how do we obtain data that is a fraction? This fraction is found by taking a COUNT and dividing by a total number. So \\\[ FRACTION \= \\frac{COUNT}{TOTAL}. \\] We call these data counted fractions, since the numerator of the fraction is some type of count. In many cases, we create these counts by cutting continuous data. Our data is this type. A student’s high school rank is a percentage between 0 and 100, and we are cutting this in different places (90, 80, and so on) to get the fractions in the above table. 23\.4 Started Counts and Split Counts ------------------------------------- Suppose that we sampled 20 students and all of them were in the bottom 90% of their high school class. So the fraction of students in the top 10% of their class is \\\[ \\frac{0}{20} \= 0\. \\] This answer is a bit unsatisfactory, since we know that if we kept on sampling, we would find some students in the top 10% of their class. So we want to adjust our numerator in our fraction for the possibility that some would fall in this class. We adjust the counts of “in the top 10%” and “not in the top 10%” (that is, the two classifications), by adding 1/6 to each type. Then the fraction of students in the top 10% would be \\\[ \\frac{0 \+ 1/6}{20 \+ 1/3}. \\] We call (0 \+ 1/6\) a started count, and so the corresponding fraction is a started fraction. Another issue is that, when we cut continuous data to get our counts, it is possible that some observations will fall on the fence. Since it makes sense to treat the two classes (those in the class and those not in the class) symmetrically, we add half of the counts on the fence to one class and the remaining half of the count to the other class. Tukey defines the new count \\\[ {\\rm ss\-count \\, below} \= {\\rm count \\, below} \+ \\frac{1}{2} ({\\rm count \\, equal}) \+ \\frac{1}{6} \\] That is, we add three quantities: \- the count below the boundary \- one half of the count that falls on the boundary (we call this the split count) \- 1/6 (the started count) and the corresponding fraction is given by \\\[ {\\rm ss\-fraction \\, below} \= \\frac{{\\rm ss\-count \\,below}} {{\\rm total \\, count} \+1/3}. \\] We won’t say any more about starting and split counts here since they won’t be needed for our example. 23\.5 Three Matched Scales for Counted Fractions (folded fractions, roots, and logs) ------------------------------------------------------------------------------------ The main issue that we want to address is how to properly express fractions. Fraction data are hard to analyze for several reasons: * People aren’t sure if they should work with the “fraction that is” or the “fraction that isn’t”. 
In the Office of the President example above, the person who made the table thought that there was some advantage to working with the fraction of students in the lower half of their class instead of the fraction of students in the top talk. * Small fractions near 0, and large fractions near 1 have small variation, and fractions close to .5 have large variation. We all know that the standard error of a sample proportion is This standard error is 0 when the fraction f is 0 or 1, and is maximized when f is equal to .5\. If we reexpress a fraction, what properties would we like of the reexpressed fraction? * We would like to treat “those who are” and “whose who aren’t” in a symmetric fashion. * Since \\(f \= .5\\) is a central fraction value, it would be desirable if our reexpressed fraction is equal to 0 when the fraction is equal to .5\. * If we swap a fraction \\(f\\) with the fraction \\(1\-f\\), the reexpressed fraction will change in size but the size of the fraction won’t change. The simplest reexpression that has these properties is the folded fraction, which we abbreviate by ff: \\\[ ff \= f \- (1\-f). \\] The table below gives some folded fractions for values of the fraction f. | f | .05 | .1 | .3 | .5 | .7 | .9 | .95 | | --- | --- | --- | --- | --- | --- | --- | --- | | ff | \-.9 | \-.8 | \-.4 | 0 | .4 | .8 | .9 | Note that \\(ff\\) satisfies our basic properties. The folded fraction for \\(f \= .1\\) is \\(ff \= \-.8\\); if we replace \\(f \= .1\\) by \\(f \= .9\\), the value of \\(ff\\) is changed from \-8 to \+.8, but the size of ff doesn’t change. A folded fraction \\(ff \= 0\\) corresponds to a fraction \\(f \= .5\\). We can obtain alternative folded reexpressions by taking the fractions \\(f, 1\-f\\) to the \\(p\\)th power and then folding: \\\[ f^p \- (1\-f)^p. \\] If we use a \\(p \= \\frac{1}{2}\\) power, we get a folded root, or froot \\\[ f^{1/2} \- (1\-f)^{1/2}. \\] If we use a p \= 0 (one more half\-step), we get a folded log, or flog \\\[ log(f) \- log(1\-f). \\] All three rexpressions (\\(ff\\), \\(froot\\), \\(flog\\)) are equal to 0 when \\(f\\) \= .5\. Also if you replace \\(f\\) by \\(1\-f\\), then the measure will change in sign. To compare these reexpressions, we slightly modify the definitions of \\(froot\\) and \\(flog\\) so that they are matched with the folded fractions: \\\[ froot \= (2 f)^{1/2} \- (2 (1 \- f))^{1/2} \\] \\\[ flog \= 1\.15 log(f) \- 1\.15 log(1\-f) \\] The table below displays values of the reexpressions for some values of the fraction \\(f\\). The figure below graphs values of these three reexpressions. | f | .05 | .1 | .3 | .5 | .7 | .9 | .95 | | --- | --- | --- | --- | --- | --- | --- | --- | | ff | \-.9 | \-.8 | \-.4 | 0 | .4 | .8 | .9 | | froot | \-1\.06 | \-.89 | \-.41 | 0 | .41 | .89 | 1\.06 | | flog | \-1\.47 | \-1\.10 | \-.42 | 0 | .42 | 1\.10 | 1\.47 | The folded fraction \\(ff\\) is just a linear transformation of \\(f\\) that changes the support from (0, 1\) to (\-1, 1\). Due to the matching, note that \\(ff\\), \\(froot\\) and \\(flog\\) agree closely for values of \\(f\\) between .3 and .7\. The differences are primarily how the reexpressions handle extreme values of \\(f\\) near 0 and 1\. By taking the root and log, the \\(froots\\) and \\(flogs\\) stretch the scale for these extreme values. By stretching values of \\(f\\) near 0 and 1, this reexpression is adjusting for the small spread in fraction values that are close to 0 or 1\. 
Here is another graph – we are plotting the three rexpressions (\\(ff, froot, flog\\)) against the fraction \\(f\\). This again illustrates the stretching effect of the \\(froot\\) and \\(flog\\) reexpressions. 23\.6 An Example Illustrating the Benefits of Reexpressing Fractions -------------------------------------------------------------------- The dataset `college.ratings` contains a number of different measurements of a group of national universities in the United States based on a 2001 survey. One of the interesting variables is the fraction `Top.10`, the proportion of students enrolled who were in the top 10 percent of their high school class. The variable `Tier` with four levels provides a general classification of the universities – the `Tier 1` schools are the top\-rated schools, followed by the `Tier 2` schools, the `Tier 3` schools, and the `Tier 4` schools. We are interested how the `Top.10` variable varies between tiers, and if there is any advantage to reexpressing the fraction `Top.10` to a different scale. Using the `geom_boxplot()` function, we construct parallel boxplots of the Top 10 variable across tier. ``` library(LearnEDAfunctions) ggplot(college.ratings, aes(factor(Tier), Top.10)) + geom_boxplot() + coord_flip() + xlab("Tier") ``` ``` ## Warning: Removed 28 rows containing non-finite values ## (stat_boxplot). ``` It is difficult to compare these Top 10 rates due to the difference in variability across tiers. If we focus on the fourth\-spreads (or interquartile ranges) the Tier 1 and Tier 2 rates have the small variation, followed by Tier 3 and Tier 4\. Using the `summarize` function in the `dplyr` package, we compare the interquartile range of `Top.10` for each tier. ``` college.ratings %>% group_by(Tier) %>% summarize(IQR = IQR(Top.10, na.rm = TRUE)) ``` ``` ## # A tibble: 4 × 2 ## Tier IQR ## <int> <dbl> ## 1 1 0.235 ## 2 2 0.155 ## 3 3 0.09 ## 4 4 0.0925 ``` If we focus on comparisons of tiers 2, 3, 4, we see that the spread of the Tier 2 values (0\.155\) is about 1\.7 times larger than the spread of the Tier 3 values (0\.090\). Next, let’s see the effect of transformating the top 10 variable to the \\(froot\\) and \\(flog\\) scales. We write short functions defining these functions, and then show boxplots using these two scales. ``` froot <- function(p) sqrt(p) - sqrt(1- p) flog <- function(p) log(p) - log(1 - p) ``` ``` ggplot(college.ratings, aes(factor(Tier), froot(Top.10))) + geom_boxplot() + coord_flip() + ylab("Froot(Top 10)") + xlab("Tier") ``` ``` ggplot(college.ratings, aes(factor(Tier), flog(Top.10))) + geom_boxplot() + coord_flip() + ylab("Flog(Top 10)") + xlab("Tier") ``` Have these reexpressions helped in equalizing spread? We compute the IQRs of the groups using the \\(froot\\) and \\(flog\\) scales. ``` college.ratings %>% group_by(Tier) %>% summarize(IQR_froot = IQR(froot(Top.10), na.rm = TRUE), IQR_flog = IQR(flog(Top.10), na.rm = TRUE)) ``` ``` ## # A tibble: 4 × 3 ## Tier IQR_froot IQR_flog ## <int> <dbl> <dbl> ## 1 1 0.393 1.48 ## 2 2 0.231 0.717 ## 3 3 0.148 0.540 ## 4 4 0.173 0.763 ``` If we focus again on comparisons between tiers 2, 3, and 4, it appears that the \\(flog\\) expression is best for equalizing spreads. If look at the ratio of the largest IQR to the smallest IQR, then we compute the ratio 1\.56 for \\(froot\\)s and \\(1\.41\\) for \\(flog\\)s. Since \\(flog\\)s have approximately equalized spreads, we can make comparisons between the top 10 flog fractions between tiers 2, 3, and 4 by computing medians. 
``` college.ratings %>% group_by(Tier) %>% summarize(M_flog = median(flog(Top.10), na.rm = TRUE)) ``` ``` ## # A tibble: 4 × 2 ## Tier M_flog ## <int> <dbl> ## 1 1 1.39 ## 2 2 -0.800 ## 3 3 -1.32 ## 4 4 -1.82 ``` On the \\(flog\\) scale, the Tier 2 “top 10” fractions tend to be \-0\.80 \- (\-1\.32\) \= 0\.52 higher than the Tier 3 fractions. Similarly, on the flog scale, the Tier 3 “top 10” fractions tend to be \-1\.32 \- (\-1\.82\) \= 0\.50 higher than the Tier 4 fractions. 23\.7 A Tukey Example Where Careful Reexpression Pays Off --------------------------------------------------------- Tukey’s EDA book has a great example that illustrates the benefit of reexpressing fractions. This table comes from a newspaper article – the Washington Post February 2, 1962 article titled “Protestants shifts support to Kennedy”. As you might or might not know, John Kennedy was the first Catholic to have a serious chance of being elected president and one suspected that the Catholics would be more behind Kennedy than Protestants in the election between Kennedy and Nixon. There were polls taken on the dates 11/60 and 1/62\. Kennedy’s support increased from 38% to 59% among the Protestants, in contrast to the 11% change in support – 78% to 89% – among the Catholics. On the surface, it would appear that Kennedy’s support has increased more among the Protestants. | – | Protestants | Protestants | Catholics | Catholics | | --- | --- | --- | --- | --- | | Date | 11/60 | 1/62 | 11/60 | 1/62 | | Kennedy | 38% | 59% | 78% | 89% | | Nixon | 62% | 41% | 22% | 11% | But this conclusion is deceptive since there is smaller variation among the large fractions (in the 70\-80% range) than among the middle\-range fractions between 38\-59%. Suppose that we reexpress to flogs. For example, we reexpress \\(f\\) \= .38 to \\(flog \= \\log(.38\)\-\\log(.62\)\\), and so on for all the fractions in the table. | – | Protestants | Protestants | Catholics | Catholics | | --- | --- | --- | --- | --- | | Date | 11/60 | 1/62 | 11/60 | 1/62 | | Kennedy | \-0\.24 | 0\.18 | 0\.63 | 1\.05 | Now let’s look at the change in support in the \\(flog\\) scale. The change in support (from 11/60 to 1/62\) among the Protestants was \\(.18 \- (\-.24\) \= .42\\) and the change in support among the Catholics was also 1\.05 \- .63 \= 0\.42\. So, by adjusting for the variability problem in the fractions, we see that really Protestant and Catholic changes are similar. Our conclusion is that JFK’s popularity improved by 0\.42 on \\(flog\\) scale (for both Protestants and Catholics) 23\.8 Back to the Data – An Analysis Based on Flogs --------------------------------------------------- Let’s return to our BGSU admissions example. We are interested in looking at any possible trend in the HS rank data across years. Here is our data, where the numbers represent the percentages of students in the different HS Rank classes for a given year. | HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 | | --- | --- | --- | --- | --- | --- | | 90% \- 100% | 13\.3 | 14\.2 | 13\.0 | 12\.3 | 12\.1 | | 80%\-89% | 18 | 14\.7 | 16\.2 | 16\.6 | 14\.6 | | 70%\-79% | 16\.4 | 15\.9 | 16\.0 | 15\.7 | 15\.6 | | 60%\-69% | 16\.4 | 16\.2 | 16\.1 | 17\.0 | 16\.0 | | 50%\-59% | 15\.6 | 15\.1 | 15\.2 | 14\.4 | 16\.7 | | Below 50% | 20\.4 | 23\.9 | 23\.4 | 23\.9 | 25\.0 | | Total | 100 | 100 | 100 | 100 | 100 | What we will do is to cut the HS Rank values different places and compute the corresponding flogs. For example, suppose that we cut HS Rank at 90%. 
Then we have the data table | HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 | | --- | --- | --- | --- | --- | --- | | 90% \- 100% | 13\.3 | 14\.2 | 13\.0 | 12\.3 | 12\.1 | | 80%\-89% | | | | | | | 70%\-79% | | | | | | | 60%\-69% | 86\.7 | 85\.8 | 87\.0 | 87\.7 | 87\.9 | | 50%\-59% | | | | | | | Below 50% | | | | | | | Total | 100 | 100 | 100 | 100 | 100 | where I have combined the percentages below the cut. I convert the year fractions (.133, .142, .130, .123, .121\) to flogs (I use the basic definition \\(\\log(f) \- \\log(1\-f)\\) here.) | HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 | | --- | --- | --- | --- | --- | --- | | 90% \- 100% | \\(\-1\.87\\) | \\(\-1\.80\\) | \\(\-1\.90\\) | \\(\-1\.96\\) | \\(\-1\.98\\) | | 80%\-89% | | | | | | | 70%\-79% | | | | | | | 60%\-69% | | | | | | | 50%\-59% | | | | | | | Below 50% | | | | | | Next, we cut the HS Rank data at 80% as follows: | HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 | | --- | --- | --- | --- | --- | --- | | 90% \- 100% | 31\.3 | 28\.9 | 29\.2 | 28\.9 | 26\.7 | | 80%\-89% | | | | | | | 70%\-79% | | | | | | | 60%\-69% | 68\.7 | 71\.1 | 71\.8 | 71\.1 | 73\.3 | | 50%\-59% | | | | | | | Below 50% | | | | | | Again we convert the fractions to flogs. | HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 | | --- | --- | --- | --- | --- | --- | | 90% \- 100% | \\(\-0\.79\\) | \\(\-0\.90\\) | \\(\-0\.89\\) | \\(\-0\.90\\) | \\(\-1\.01\\) | | 80%\-89% | | | | | | | 70%\-79% | | | | | | | 60%\-69% | | | | | | | 50%\-59% | | | | | | | Below 50% | | | | | | If we do this procedure for all 5 possible cuts, then we get the following table of flogs: | CUT | 1996 | 1997 | 1998 | 1999 | 2000 | | --- | --- | --- | --- | --- | --- | | 90\-100% between 0\-89% | \-1\.87 | \-1\.80 | \-1\.90 | \-1\.96 | \-1\.98 | | 80\-100% between 0\-79% | \-0\.79 | \-0\.90 | \-0\.89 | \-0\.90 | \-1\.01 | | 70\-100% between 0\-69% | \-0\.09 | \-0\.21 | \-0\.19 | \-0\.22 | \-0\.31 | | 60\-100% between 0\-59% | 0\.58 | 0\.45 | 0\.46 | 0\.47 | 0\.34 | | 50\-100% between 0\-49% | 1\.37 | 1\.16 | 1\.18 | 1\.15 | 1\.10 | This table essentially gives an analysis of fractions where all possible cuts are considered. We are interested in how the flogs change across years. So we compute the differences between consecutive years. The first column shows the change in the flog between 1996 and 1997, the second column the change in flog between 1997 and 1998, and so on. | CUT | 1996\-1997 | 1997\-1998 | 1998\-1999 | 1999\-2000 | | --- | --- | --- | --- | --- | | 90\-100% between 0\-89% | 0\.08 | \-0\.10 | \-0\.06 | \-0\.02 | | 80\-100% between 0\-79% | \-0\.11 | 0\.01 | \-0\.01 | \-0\.11 | | 70\-100% between 0\-69% | \-0\.12 | 0\.02 | \-0\.02 | \-0\.09 | | 60\-100% between 0\-59% | \-0\.13 | 0\.01 | 0\.01 | \-0\.14 | | 50\-100% between 0\-49% | \-0\.21 | 0\.02 | \-0\.03 | \-0\.05 | | MEDIAN | \-0\.12 | 0\.01 | \-0\.01 | \-0\.09 | By looking down each column, we get a general sense of how the HS rank percentages change in consecutive years. The MEDIAN row gives the median change in flog for each two\-year sequence. It is pretty clear that the BGSU freshmen’s HS rank significantly decreased from 1996 to 1997 and from 1999 to 2000\. In contrast, the HS ranks stayed roughly the same between 1997 and 1999\. It would be interesting to see if there were any changes in the admission procedure that may have impacted the quality of the incoming BGSU freshmen. 
23\.1 The Quality of Students at BGSU ------------------------------------- A general concern among faculty at BGSU is the quality of the incoming undergraduate freshmen class. Is the university admitting more students of questionable ability? If so, this has a great effect on the performance of the students that take precalculus mathematics in the department. Weak students generally don’t do well in their precalculus or introductory statistics classes. Back in 1991, the Office of the President was preparing some data to show the university community how much the university had advanced in the nine years between the academic years 1981\-1982 and 1990\-1991\. One statistic that they considered was the “Percentage of Freshmen in the Bottom 50% of High School Graduating Class” At first glance, one might wonder why they consider the percentage of students in the bottom 50% of their class – wouldn’t it be clearer to consider the percentage of students in the top 50% of their class? Anyway, the data sheet shows the following: | – | 1981\-1982 | 1990\-1991 | % Change | | --- | --- | --- | --- | | Percentage of Freshmen | | | | | in the Bottom 50% of High | 9\.4 % | 6\.9 % | (26\.6 %) | | School Graduating Class | | | | This is supposed to impress you – the percentage of students in the bottom half of their high school class decreased by 26\.6 % in the nine\-year period. But what if we considered instead the percentage of students in the top half of the class – we get the following table: | – | 1981\-1982 | 1990\-1991 | % Change | | --- | --- | --- | --- | | Percentage of Freshmen | | | | | in the Top 50% of High | 90\.6 % | 93\.1 % | 2\.8 % | | School Graduating Class | | | | We see that the percentage of freshmen in the top half increased by 2\.8 %. This doesn’t sound that impressive, so I think I know now why the President’s Office decided to consider the percentage in the bottom half. But wait – it shouldn’t matter if we consider the percentage of freshmen in the top half or the percentage in the bottom half. Why should our measure of change depend on this arbitrary definition? In this lecture, we’ll talk about accurate and effective ways of comparing proportions. This type of data suffers from the “change in variability” problem that we saw earlier in our comparison of batches and deserves an appropriate reexpression. 23\.2 Meet the Data ------------------- The Office of Undergraduate Admissions collects data on the high school ranks of regularly admitted undergraduate students. Using the admissions data, the Office of Institutional Research has the following table that shows the number and % of students in different high school ranks for the past five years. | – | 1996 | | 1997 | | 1998 | | 1999 | | 2000 | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | HS Rank | N | % | N | % | N | % | N | % | N | % | | 90% \- 100% | 321 | 13\.3 | 345 | 14\.2 | 390 | 13\.0 | 373 | 12\.3 | 357 | 12\.1 | | 80%\-89% | 434 | 18 | 356 | 14\.7 | 486 | 16\.2 | 504 | 16\.6 | 430 | 14\.6 | | 70%\-79% | 396 | 16\.4 | 387 | 15\.9 | 482 | 16\.0 | 478 | 15\.7 | 459 | 15\.6 | | 60%\-69% | 395 | 16\.4 | 393 | 16\.2 | 485 | 16\.1 | 518 | 17\.0 | 470 | 16\.0 | | 50%\-59% | 377 | 15\.6 | 366 | 15\.1 | 458 | 15\.2 | 439 | 14\.4 | 490 | 16\.7 | | Below 50% | 493 | 20\.4 | 581 | 23\.9 | 704 | 23\.4 | 727 | 23\.9 | 736 | 25\.0 | | Total | 2416 | 100 | 2428 | 100 | 3005 | 100 | 3039 | 100 | 2942 | 100 | We will focus here on the table of percentages. 
| HS Rank | 1996 | 1997 | 1998 | 1999 | | --- | --- | --- | --- | --- | | 90% \- 100% | 13\.3 | 14\.2 | 13\.0 | 12\.3 | | 80%\-89% | 18 | 14\.7 | 16\.2 | 16\.6 | | 70%\-79% | 16\.4 | 15\.9 | 16\.0 | 15\.7 | | 60%\-69% | 16\.4 | 16\.2 | 16\.1 | 17\.0 | | 50%\-59% | 15\.6 | 15\.1 | 15\.2 | 14\.4 | | Below 50% | 20\.4 | 23\.9 | 23\.4 | 23\.9 | | Total | 100 | 100 | 100 | 100 | The objective here is to get an overall sense how the percentages are changing over the 5\-year period. Certain low and high percentages might attract our eye (the 25\.0% of students in the bottom half of the HS class in 2000 certainly seems high), but that’s just an isolated value and may not reflect the general pattern of change across years. 23\.3 Counted Fractions ----------------------- To start off, how do we obtain data that is a fraction? This fraction is found by taking a COUNT and dividing by a total number. So \\\[ FRACTION \= \\frac{COUNT}{TOTAL}. \\] We call these data counted fractions, since the numerator of the fraction is some type of count. In many cases, we create these counts by cutting continuous data. Our data is this type. A student’s high school rank is a percentage between 0 and 100, and we are cutting this in different places (90, 80, and so on) to get the fractions in the above table. 23\.4 Started Counts and Split Counts ------------------------------------- Suppose that we sampled 20 students and all of them were in the bottom 90% of their high school class. So the fraction of students in the top 10% of their class is \\\[ \\frac{0}{20} \= 0\. \\] This answer is a bit unsatisfactory, since we know that if we kept on sampling, we would find some students in the top 10% of their class. So we want to adjust our numerator in our fraction for the possibility that some would fall in this class. We adjust the counts of “in the top 10%” and “not in the top 10%” (that is, the two classifications), by adding 1/6 to each type. Then the fraction of students in the top 10% would be \\\[ \\frac{0 \+ 1/6}{20 \+ 1/3}. \\] We call (0 \+ 1/6\) a started count, and so the corresponding fraction is a started fraction. Another issue is that, when we cut continuous data to get our counts, it is possible that some observations will fall on the fence. Since it makes sense to treat the two classes (those in the class and those not in the class) symmetrically, we add half of the counts on the fence to one class and the remaining half of the count to the other class. Tukey defines the new count \\\[ {\\rm ss\-count \\, below} \= {\\rm count \\, below} \+ \\frac{1}{2} ({\\rm count \\, equal}) \+ \\frac{1}{6} \\] That is, we add three quantities: \- the count below the boundary \- one half of the count that falls on the boundary (we call this the split count) \- 1/6 (the started count) and the corresponding fraction is given by \\\[ {\\rm ss\-fraction \\, below} \= \\frac{{\\rm ss\-count \\,below}} {{\\rm total \\, count} \+1/3}. \\] We won’t say any more about starting and split counts here since they won’t be needed for our example. 23\.5 Three Matched Scales for Counted Fractions (folded fractions, roots, and logs) ------------------------------------------------------------------------------------ The main issue that we want to address is how to properly express fractions. Fraction data are hard to analyze for several reasons: * People aren’t sure if they should work with the “fraction that is” or the “fraction that isn’t”. 
23\.5 Three Matched Scales for Counted Fractions (folded fractions, roots, and logs) ------------------------------------------------------------------------------------ The main issue that we want to address is how to properly express fractions. Fraction data are hard to analyze for several reasons: * People aren’t sure if they should work with the “fraction that is” or the “fraction that isn’t”. In the Office of the President example above, the person who made the table thought that there was some advantage to working with the fraction of students in the lower half of their class instead of the fraction of students in the top half. * Small fractions near 0, and large fractions near 1, have small variation, and fractions close to .5 have large variation. We all know that the standard error of a sample proportion \\(f\\) based on a sample of size \\(n\\) is \\\[ \\sqrt{\\frac{f(1\-f)}{n}}. \\\] This standard error is 0 when the fraction \\(f\\) is 0 or 1, and is maximized when \\(f\\) is equal to .5\. If we reexpress a fraction, what properties would we like of the reexpressed fraction? * We would like to treat “those who are” and “those who aren’t” in a symmetric fashion. * Since \\(f \= .5\\) is a central fraction value, it would be desirable if our reexpressed fraction is equal to 0 when the fraction is equal to .5\. * If we swap a fraction \\(f\\) with the fraction \\(1\-f\\), the reexpressed fraction will change in sign but its size won’t change. The simplest reexpression that has these properties is the folded fraction, which we abbreviate by ff: \\\[ ff \= f \- (1\-f). \\\] The table below gives some folded fractions for values of the fraction f. | f | .05 | .1 | .3 | .5 | .7 | .9 | .95 | | --- | --- | --- | --- | --- | --- | --- | --- | | ff | \-.9 | \-.8 | \-.4 | 0 | .4 | .8 | .9 | Note that \\(ff\\) satisfies our basic properties. The folded fraction for \\(f \= .1\\) is \\(ff \= \-.8\\); if we replace \\(f \= .1\\) by \\(f \= .9\\), the value of \\(ff\\) is changed from \-.8 to \+.8, but the size of \\(ff\\) doesn’t change. A folded fraction \\(ff \= 0\\) corresponds to a fraction \\(f \= .5\\). We can obtain alternative folded reexpressions by taking the fractions \\(f, 1\-f\\) to the \\(p\\)th power and then folding: \\\[ f^p \- (1\-f)^p. \\\] If we use a \\(p \= \\frac{1}{2}\\) power, we get a folded root, or froot \\\[ f^{1/2} \- (1\-f)^{1/2}. \\\] If we use \\(p \= 0\\) (one more half\-step), we get a folded log, or flog \\\[ log(f) \- log(1\-f). \\\] All three reexpressions (\\(ff\\), \\(froot\\), \\(flog\\)) are equal to 0 when \\(f\\) \= .5\. Also, if you replace \\(f\\) by \\(1\-f\\), then the measure will change in sign. To compare these reexpressions, we slightly modify the definitions of \\(froot\\) and \\(flog\\) so that they are matched with the folded fractions: \\\[ froot \= (2 f)^{1/2} \- (2 (1 \- f))^{1/2} \\\] \\\[ flog \= 1\.15 log(f) \- 1\.15 log(1\-f) \\\] (in the matched \\(flog\\), the logs are base\-10 logs). The table below displays values of the reexpressions for some values of the fraction \\(f\\). The figure below graphs values of these three reexpressions. | f | .05 | .1 | .3 | .5 | .7 | .9 | .95 | | --- | --- | --- | --- | --- | --- | --- | --- | | ff | \-.9 | \-.8 | \-.4 | 0 | .4 | .8 | .9 | | froot | \-1\.06 | \-.89 | \-.41 | 0 | .41 | .89 | 1\.06 | | flog | \-1\.47 | \-1\.10 | \-.42 | 0 | .42 | 1\.10 | 1\.47 | The folded fraction \\(ff\\) is just a linear transformation of \\(f\\) that changes the support from (0, 1\) to (\-1, 1\). Due to the matching, note that \\(ff\\), \\(froot\\) and \\(flog\\) agree closely for values of \\(f\\) between .3 and .7\. The differences lie primarily in how the reexpressions handle extreme values of \\(f\\) near 0 and 1\. By taking the root and log, the \\(froots\\) and \\(flogs\\) stretch the scale for these extreme values. By stretching values of \\(f\\) near 0 and 1, these reexpressions adjust for the small spread in fraction values that are close to 0 or 1\.
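The matched reexpressions are easy to compute directly. The sketch below (the function names `ff`, `froot_m`, and `flog_m` are illustrative, not from the text) reproduces the rows of the table above; note that the matched flog uses base-10 logs.

```
ff      <- function(f) f - (1 - f)                       # folded fraction
froot_m <- function(f) sqrt(2 * f) - sqrt(2 * (1 - f))   # matched folded root
flog_m  <- function(f) 1.15 * log10(f) - 1.15 * log10(1 - f)  # matched folded log

f <- c(.05, .1, .3, .5, .7, .9, .95)
# each row should match (up to rounding) the corresponding row of the table above
round(rbind(ff = ff(f), froot = froot_m(f), flog = flog_m(f)), 2)
```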
Here is another graph – we are plotting the three reexpressions (\\(ff, froot, flog\\)) against the fraction \\(f\\). This again illustrates the stretching effect of the \\(froot\\) and \\(flog\\) reexpressions. 23\.6 An Example Illustrating the Benefits of Reexpressing Fractions -------------------------------------------------------------------- The dataset `college.ratings` contains a number of different measurements of a group of national universities in the United States based on a 2001 survey. One of the interesting variables is the fraction `Top.10`, the proportion of students enrolled who were in the top 10 percent of their high school class. The variable `Tier` with four levels provides a general classification of the universities – the `Tier 1` schools are the top\-rated schools, followed by the `Tier 2` schools, the `Tier 3` schools, and the `Tier 4` schools. We are interested in how the `Top.10` variable varies between tiers, and in whether there is any advantage to reexpressing the fraction `Top.10` to a different scale. Using the `geom_boxplot()` function, we construct parallel boxplots of the Top 10 variable across tier. ``` library(LearnEDAfunctions) ggplot(college.ratings, aes(factor(Tier), Top.10)) + geom_boxplot() + coord_flip() + xlab("Tier") ``` ``` ## Warning: Removed 28 rows containing non-finite values ## (stat_boxplot). ``` It is difficult to compare these Top 10 rates due to the difference in variability across tiers. If we focus on the fourth\-spreads (or interquartile ranges), the Tier 1 and Tier 2 rates have the largest variation, followed by Tier 3 and Tier 4\. Using the `summarize` function in the `dplyr` package, we compare the interquartile range of `Top.10` for each tier. ``` college.ratings %>% group_by(Tier) %>% summarize(IQR = IQR(Top.10, na.rm = TRUE)) ``` ``` ## # A tibble: 4 × 2 ## Tier IQR ## <int> <dbl> ## 1 1 0.235 ## 2 2 0.155 ## 3 3 0.09 ## 4 4 0.0925 ``` If we focus on comparisons of tiers 2, 3, 4, we see that the spread of the Tier 2 values (0\.155\) is about 1\.7 times the spread of the Tier 3 values (0\.090\). Next, let’s see the effect of transforming the top 10 variable to the \\(froot\\) and \\(flog\\) scales. We write short functions implementing these reexpressions, and then show boxplots using these two scales. ``` froot <- function(p) sqrt(p) - sqrt(1- p) flog <- function(p) log(p) - log(1 - p) ``` ``` ggplot(college.ratings, aes(factor(Tier), froot(Top.10))) + geom_boxplot() + coord_flip() + ylab("Froot(Top 10)") + xlab("Tier") ``` ``` ggplot(college.ratings, aes(factor(Tier), flog(Top.10))) + geom_boxplot() + coord_flip() + ylab("Flog(Top 10)") + xlab("Tier") ``` Have these reexpressions helped in equalizing spread? We compute the IQRs of the groups using the \\(froot\\) and \\(flog\\) scales. ``` college.ratings %>% group_by(Tier) %>% summarize(IQR_froot = IQR(froot(Top.10), na.rm = TRUE), IQR_flog = IQR(flog(Top.10), na.rm = TRUE)) ``` ``` ## # A tibble: 4 × 3 ## Tier IQR_froot IQR_flog ## <int> <dbl> <dbl> ## 1 1 0.393 1.48 ## 2 2 0.231 0.717 ## 3 3 0.148 0.540 ## 4 4 0.173 0.763 ``` If we focus again on comparisons between tiers 2, 3, and 4, it appears that the \\(flog\\) reexpression is best for equalizing spreads. If we look at the ratio of the largest IQR to the smallest IQR, then we compute the ratio 1\.56 for \\(froot\\)s and \\(1\.41\\) for \\(flog\\)s.
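The two ratios quoted above can be computed directly, reusing the `froot` and `flog` helpers just defined. A minimal sketch (the object name `iqr_by_tier` is illustrative):

```
iqr_by_tier <- college.ratings %>%
  filter(Tier %in% 2:4) %>%          # compare tiers 2, 3, and 4 only
  group_by(Tier) %>%
  summarize(IQR_froot = IQR(froot(Top.10), na.rm = TRUE),
            IQR_flog  = IQR(flog(Top.10), na.rm = TRUE))

# ratio of the largest to the smallest IQR on each scale
with(iqr_by_tier, max(IQR_froot) / min(IQR_froot))  # about 1.56
with(iqr_by_tier, max(IQR_flog) / min(IQR_flog))    # about 1.41
```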
Since \\(flog\\)s have approximately equalized spreads, we can make comparisons of the top 10 \\(flog\\) fractions across tiers 2, 3, and 4 by computing medians. ``` college.ratings %>% group_by(Tier) %>% summarize(M_flog = median(flog(Top.10), na.rm = TRUE)) ``` ``` ## # A tibble: 4 × 2 ## Tier M_flog ## <int> <dbl> ## 1 1 1.39 ## 2 2 -0.800 ## 3 3 -1.32 ## 4 4 -1.82 ``` On the \\(flog\\) scale, the Tier 2 “top 10” fractions tend to be \-0\.80 \- (\-1\.32\) \= 0\.52 higher than the Tier 3 fractions. Similarly, on the flog scale, the Tier 3 “top 10” fractions tend to be \-1\.32 \- (\-1\.82\) \= 0\.50 higher than the Tier 4 fractions. 23\.7 A Tukey Example Where Careful Reexpression Pays Off --------------------------------------------------------- Tukey’s EDA book has a great example that illustrates the benefit of reexpressing fractions. This table comes from a newspaper article – the Washington Post February 2, 1962 article titled “Protestants shifts support to Kennedy”. As you might or might not know, John Kennedy was the first Catholic to have a serious chance of being elected president, and one suspected that the Catholics would be more behind Kennedy than Protestants in the election between Kennedy and Nixon. There were polls taken on the dates 11/60 and 1/62\. Kennedy’s support increased from 38% to 59% among the Protestants, in contrast to the 11% change in support – 78% to 89% – among the Catholics. On the surface, it would appear that Kennedy’s support has increased more among the Protestants. | – | Protestants | Protestants | Catholics | Catholics | | --- | --- | --- | --- | --- | | Date | 11/60 | 1/62 | 11/60 | 1/62 | | Kennedy | 38% | 59% | 78% | 89% | | Nixon | 62% | 41% | 22% | 11% | But this conclusion is deceptive since there is smaller variation among the large fractions (in the 78\-89% range) than among the middle\-range fractions between 38\-59%. Suppose that we reexpress to flogs. For example, we reexpress \\(f\\) \= .38 to \\(flog \= \\log(.38\)\-\\log(.62\)\\), and so on for all the fractions in the table. (The values below use the matched \\(flog\\) with the 1\.15 factor and base\-10 logs, so \\(f \= .38\\) becomes \-0\.24\.) | – | Protestants | Protestants | Catholics | Catholics | | --- | --- | --- | --- | --- | | Date | 11/60 | 1/62 | 11/60 | 1/62 | | Kennedy | \-0\.24 | 0\.18 | 0\.63 | 1\.05 | Now let’s look at the change in support on the \\(flog\\) scale. The change in support (from 11/60 to 1/62\) among the Protestants was \\(.18 \- (\-.24\) \= .42\\) and the change in support among the Catholics was also 1\.05 \- .63 \= 0\.42\. So, by adjusting for the variability problem in the fractions, we see that the Protestant and Catholic changes are really quite similar. Our conclusion is that JFK’s popularity improved by 0\.42 on the \\(flog\\) scale for both Protestants and Catholics.
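The flog table for the Kennedy data takes only a few lines to reproduce. A minimal sketch, using the matched flog with the 1.15 factor (the helper name `flog_m` is illustrative, not from the text):

```
# matched flog: base-10 logs scaled by 1.15
flog_m <- function(f) 1.15 * (log10(f) - log10(1 - f))

# Kennedy's share of support in the two polls
kennedy <- c(Prot_1160 = .38, Prot_0162 = .59,
             Cath_1160 = .78, Cath_0162 = .89)
round(flog_m(kennedy), 2)   # compare with the flog row of the table above

# change from 11/60 to 1/62 on the flog scale
flog_m(.59) - flog_m(.38)   # Protestants: about 0.43
flog_m(.89) - flog_m(.78)   # Catholics: about 0.41
# both agree with the 0.42 computed from the rounded table values
```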
23\.8 Back to the Data – An Analysis Based on Flogs --------------------------------------------------- Let’s return to our BGSU admissions example. We are interested in looking at any possible trend in the HS rank data across years. Here is our data, where the numbers represent the percentages of students in the different HS Rank classes for a given year. | HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 | | --- | --- | --- | --- | --- | --- | | 90% \- 100% | 13\.3 | 14\.2 | 13\.0 | 12\.3 | 12\.1 | | 80%\-89% | 18 | 14\.7 | 16\.2 | 16\.6 | 14\.6 | | 70%\-79% | 16\.4 | 15\.9 | 16\.0 | 15\.7 | 15\.6 | | 60%\-69% | 16\.4 | 16\.2 | 16\.1 | 17\.0 | 16\.0 | | 50%\-59% | 15\.6 | 15\.1 | 15\.2 | 14\.4 | 16\.7 | | Below 50% | 20\.4 | 23\.9 | 23\.4 | 23\.9 | 25\.0 | | Total | 100 | 100 | 100 | 100 | 100 | What we will do is to cut the HS Rank values at different places and compute the corresponding flogs. For example, suppose that we cut HS Rank at 90%. Then we have the data table | HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 | | --- | --- | --- | --- | --- | --- | | 90% \- 100% | 13\.3 | 14\.2 | 13\.0 | 12\.3 | 12\.1 | | Below 90% | 86\.7 | 85\.8 | 87\.0 | 87\.7 | 87\.9 | | Total | 100 | 100 | 100 | 100 | 100 | where I have combined the percentages below the cut into a single row. I convert the year fractions (.133, .142, .130, .123, .121\) to flogs (I use the basic definition \\(\\log(f) \- \\log(1\-f)\\), with natural logs, here.) | HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 | | --- | --- | --- | --- | --- | --- | | 90% \- 100% | \\(\-1\.87\\) | \\(\-1\.80\\) | \\(\-1\.90\\) | \\(\-1\.96\\) | \\(\-1\.98\\) | Next, we cut the HS Rank data at 80% as follows: | HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 | | --- | --- | --- | --- | --- | --- | | 80% \- 100% | 31\.3 | 28\.9 | 29\.2 | 28\.9 | 26\.7 | | Below 80% | 68\.7 | 71\.1 | 70\.8 | 71\.1 | 73\.3 | Again we convert the fractions to flogs. | HS Rank | 1996 | 1997 | 1998 | 1999 | 2000 | | --- | --- | --- | --- | --- | --- | | 80% \- 100% | \\(\-0\.79\\) | \\(\-0\.90\\) | \\(\-0\.89\\) | \\(\-0\.90\\) | \\(\-1\.01\\) | If we do this procedure for all 5 possible cuts, then we get the following table of flogs: | CUT | 1996 | 1997 | 1998 | 1999 | 2000 | | --- | --- | --- | --- | --- | --- | | 90\-100% vs. 0\-89% | \-1\.87 | \-1\.80 | \-1\.90 | \-1\.96 | \-1\.98 | | 80\-100% vs. 0\-79% | \-0\.79 | \-0\.90 | \-0\.89 | \-0\.90 | \-1\.01 | | 70\-100% vs. 0\-69% | \-0\.09 | \-0\.21 | \-0\.19 | \-0\.22 | \-0\.31 | | 60\-100% vs. 0\-59% | 0\.58 | 0\.45 | 0\.46 | 0\.47 | 0\.34 | | 50\-100% vs. 0\-49% | 1\.37 | 1\.16 | 1\.18 | 1\.15 | 1\.10 | This table essentially gives an analysis of fractions where all possible cuts are considered. We are interested in how the flogs change across years. So we compute the differences between consecutive years. The first column shows the change in the flog between 1996 and 1997, the second column the change in flog between 1997 and 1998, and so on. | CUT | 1996\-1997 | 1997\-1998 | 1998\-1999 | 1999\-2000 | | --- | --- | --- | --- | --- | | 90\-100% vs. 0\-89% | 0\.08 | \-0\.10 | \-0\.06 | \-0\.02 | | 80\-100% vs. 0\-79% | \-0\.11 | 0\.01 | \-0\.01 | \-0\.11 | | 70\-100% vs. 0\-69% | \-0\.12 | 0\.02 | \-0\.02 | \-0\.09 | | 60\-100% vs. 0\-59% | \-0\.13 | 0\.01 | 0\.01 | \-0\.14 | | 50\-100% vs. 0\-49% | \-0\.21 | 0\.02 | \-0\.03 | \-0\.05 | | MEDIAN | \-0\.12 | 0\.01 | \-0\.01 | \-0\.09 | By looking down each column, we get a general sense of how the HS rank percentages change in consecutive years. The MEDIAN row gives the median change in flog for each pair of consecutive years. It is pretty clear that the HS rank of the BGSU freshmen decreased noticeably from 1996 to 1997 and from 1999 to 2000\. In contrast, the HS ranks stayed roughly the same between 1997 and 1999\. It would be interesting to see if there were any changes in the admission procedure that may have impacted the quality of the incoming BGSU freshmen.
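The whole table of cut flogs, and the year-to-year differences, can also be computed programmatically. Below is a minimal sketch, not from the text: the object names (`pct`, `frac_above`, `cut_flogs`) are illustrative, and the percentages are typed in from the table at the start of this section.

```
# percentages of freshmen in each HS rank class, highest class first
pct <- matrix(
  c(13.3, 14.2, 13.0, 12.3, 12.1,
    18.0, 14.7, 16.2, 16.6, 14.6,
    16.4, 15.9, 16.0, 15.7, 15.6,
    16.4, 16.2, 16.1, 17.0, 16.0,
    15.6, 15.1, 15.2, 14.4, 16.7,
    20.4, 23.9, 23.4, 23.9, 25.0),
  nrow = 6, byrow = TRUE,
  dimnames = list(
    c("90-100%", "80-89%", "70-79%", "60-69%", "50-59%", "Below 50%"),
    as.character(1996:2000)))

# fraction of students above each of the five possible cuts
frac_above <- apply(pct, 2, cumsum)[1:5, ] / 100

# basic (unmatched) flog, as used in the text for this example
flog <- function(f) log(f) - log(1 - f)
cut_flogs <- flog(frac_above)
round(cut_flogs, 2)          # compare with the table of flogs above

# change in flog between consecutive years, plus the median change;
# the column labeled 1997 holds the 1996-to-1997 change, and so on
diffs <- cut_flogs[, -1] - cut_flogs[, -5]
round(rbind(diffs, MEDIAN = apply(diffs, 2, median)), 2)
```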
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/Chap-01-00-intro-installation.html
1\.6 Installation ----------------- To follow the code in this book, you need a recent version of [R](https://www.r-project.org/) (an R version of at least 4\.0 is recommended). We recommend the use of [RStudio](https://rstudio.com/). The course also has its own R package. The `aida` package contains convenience functions introduced and used in this book. It can be installed by executing the following code. (More on R, packages and their installation in the next chapter.) ``` install.packages('remotes') remotes::install_github('michael-franke/aida-package') ``` This course also requires the packages `rstan` and `brms`, which let R interact with the probabilistic programming language Stan. Installation of these packages can be difficult. If you run into trouble during the installation of these packages, please follow the instructions on the [Stan homepage](https://mc-stan.org/) for the most recent installation recommendations for your OS.
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/ch1-first-steps.html
2\.1 First steps ---------------- R is an interpreted language. This means that you do not have to compile it. You can just evaluate it line by line, in a so\-called **session**. The session stores the current values of all variables. Usually, code is stored in a **script**, so one does not have to retype it when starting a new session. [2](#fn2) Try this out by either typing `r` to open an R session in a terminal or loading RStudio.[3](#fn3) You can immediately calculate stuff: ``` 6 * 7 ``` ``` ## [1] 42 ``` **Exercise 2\.1** Use R to calculate 5 times the result of 659 minus 34\. Solution ``` 5 * (659 - 34) ``` ``` ## [1] 3125 ``` ### 2\.1\.1 Functions R has many built\-in functions. The most common situation is that the function is called by its name using **prefix notation**, followed by round brackets that enclose the function’s arguments (separated by commas if there are multiple arguments). For example, the function `round` takes a number and, by default, returns the closest integer: ``` # the function `round` takes a number as an argument and # returns the closest integer (default) round(0.6) ``` ``` ## [1] 1 ``` Actually, `round` allows several arguments. It takes as input the number `x` to be rounded and another integer number `digits` which gives the number of digits after the comma to which `x` should be rounded. We can then specify these arguments in a function call of `round` by providing the named arguments. ``` # rounds the number `x` to the number `digits` of digits round(x = 0.138, digits = 2) ``` ``` ## [1] 0.14 ``` If all of the passed arguments are named, then their order does not matter. But all non\-named arguments have to appear in the positions the function expects once the named arguments have been removed from the ordered list of arguments (to find out the right order one should use `help`, as explained below in [2\.1\.6](ch1-first-steps.html#Chap-01-01-R-help)). Here are examples for illustration: ``` round(x = 0.138, digits = 2) # works as intended round(digits = 2, x = 0.138) # works as intended round(0.138, digits = 2) # works as intended round(0.138, 2) # works as intended round(x = 0.138, 2) # works as intended round(digits = 2, 0.138) # works as intended round(2, x = 0.138) # works as intended round(2, 0.138) # does not work as intended (returns 2) ``` Functions can have default values for some or for all of their arguments. In the case of `round`, the default is `digits = 0`. There is obviously no default for `x` in the function `round`. ``` round(x = 6.138) # returns 6 ``` ``` ## [1] 6 ``` Some functions can take an arbitrary number of arguments. The function `sum`, which sums up numbers, is a case in point. ``` # adds all of its arguments together sum(1, 2, 3) ``` ``` ## [1] 6 ``` Selected functions can also be expressed as operators in **infix notation**. This applies to frequently recurring operations, such as mathematical operations or logical comparisons. ``` # both of these calls sum 1, 2, and 3 together sum(1, 2, 3) # prefix notation 1 + 2 + 3 # infix notation ``` An expression like `3 + 5` is internally processed as the function call `` `+`(3, 5) ``, which is equivalent to `sum(3, 5)`. Section [2\.3](Chap-01-01-functions.html#Chap-01-01-functions) will list some of the most important built\-in functions. It will also explain how to define your own functions.
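To see the prefix/infix equivalence for yourself, you can call the operator by its backtick-quoted name (a tiny check, not from the text):

```
`+`(3, 5)   # prefix call of the addition operator, returns 8
3 + 5       # the usual infix notation, also returns 8
sum(3, 5)   # the same result with the n-ary function `sum`
```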
### 2\.1\.2 Variables You can assign values to variables using three assignment operators: `->`, `<-` and `=`, like so: ``` x <- 6 # assigns 6 to variable x 7 -> y # assigns 7 to variable y z = 3 # assigns 3 to variable z x * y / z # returns 6 * 7 / 3 = 14 ``` ``` ## [1] 14 ``` Use of `=` is discouraged.[4](#fn4) It is good practice to use a consistent naming scheme for variables. This book uses `snake_case_variable_names` and tends towards using `long_and_excessively_informative_names` for important variables, and short variable names, like `i`, `j` or `x`, for local variables, indices etc. **Exercise 2\.2** Create two variables, `a` and `b`, and assign the values 103 and 14 to them, respectively. Next, divide variable `a` by variable `b` and produce an output with three digits after the comma. Solution ``` a <- 103 b <- 14 round(x = a / b, digits = 3) ``` ``` ## [1] 7.357 ``` ### 2\.1\.3 Literate coding It is good practice to document code with short but informative comments. Comments in R are demarcated with `#`. ``` x <- 4711 # a nice number from Cologne ``` Since everything on a line after an occurrence of `#` is treated as a comment, it is possible to break long function calls across several lines, and to add comments to each line: ``` round( # call the function `round` x = 0.138, # number to be rounded digits = 2 # number of after-comma digits to round to ) ``` In RStudio, you can use `Command+Shift+C` (on Mac) and `Ctrl+Shift+C` (on Windows/Linux) to comment or uncomment code, and you can use comments to structure your scripts. Any comment followed by `----` is treated as a (foldable) section in RStudio. ``` # SECTION: variable assignments ---- x <- 6 y <- 7 # SECTION: some calculations ---- x * y ``` **Exercise 2\.3** Provide extensive comments to all operations in the solution code of the previous exercise. Solution ``` a <- 103 # assign value 103 to variable `a` b <- 14 # assign value 14 to variable `b` round( # produce a rounded number x = a / b, # number to be rounded is a/b digits = 3 # show three digits after the comma ) ``` ``` ## [1] 7.357 ``` ### 2\.1\.4 Objects Strictly speaking, all entities in R are *objects* but that is not always apparent or important for everyday practical purposes ([see the manual for more information](https://colinfay.me/intro-to-r/objects-their-modes-and-attributes.html)). R supports an object\-oriented programming style, but we will not make (explicit) use of this functionality. In fact, this book heavily uses and encourages a functional programming style (see Section [2\.4](ch-01-01-loops-and-maps.html#ch-01-01-loops-and-maps)). However, some functions (e.g., optimizers or fitting functions for statistical models) return objects, and we will use this output in various ways. For example, if we run some model on a data set the output is an object. Here, for example, we run a regression model, that will be discussed later on in the book, on a dataset called `cars`. ``` # you do not need to understand this code model_fit = lm(formula = speed~dist, data = cars) # just notice that the function `lm` returns an object is.object(model_fit) ``` ``` ## [1] TRUE ``` ``` # printing an object on the screen usually gives you summary information print(model_fit) ``` ``` ## ## Call: ## lm(formula = speed ~ dist, data = cars) ## ## Coefficients: ## (Intercept) dist ## 8.2839 0.1656 ``` ### 2\.1\.5 Packages Much of R’s charm unfolds through the use of packages. [CRAN](https://cran.r-project.org) has the official package repository. 
To install a new package from a CRAN mirror use the `install.packages` function. For example, to install the package `remotes`, you would use: ``` install.packages("remotes") ``` Once installed, you need to load your desired packages for each fresh session, using a command like the following:[5](#fn5) ``` library(remotes) ``` Once loaded, all functions, data, etc. that ship with a package are available without additional reference to the package name. If you want to be careful or courteous to an admirer of your code, you can reference a function from a package also by explicitly referring to that package. For example, the following code calls the function `install_github` from the package `remotes` explicitly.[6](#fn6) ``` remotes::install_github("SOME-URL") ``` Indeed, the `install_github` function allows you to install bleeding\-edge packages from GitHub. You can install all packages relevant for this book using the following code (after installing the `remotes` package): ``` remotes::install_github("michael-franke/aida-package") ``` After this installation, you can load all packages for this book simply by using: ``` library(aida) ``` In RStudio, there is a special tab in the pane with information on “files”, “plots” etc. to show all installed packages. This also shows which packages are currently loaded. ### 2\.1\.6 Getting help If you encounter a function like `lm` that you do not know about, you can access its documentation with the `help` function or just typing `?lm`. For example, the following call summons the documentation for `lm`, the first parts of which are shown in Figure [2\.2](ch1-first-steps.html#fig:R-doc-example). ``` help(lm) ``` Figure 2\.2: Excerpt from the documentation of the `lm` function. If you are looking for help on a more general topic, use the function `help.search`. It takes a regular expression as input and outputs a list of occurrences in the available documentation. A useful shortcut for `help.search` is just to type `??` followed by the (unquoted) string to search for. For example, calling either of the following lines might produce a display like in Figure [2\.3](ch1-first-steps.html#fig:R-doc-search-example). ``` # two equivalent ways for obtaining help on search term 'linear' help.search("linear") ??linear ``` Figure 2\.3: Result of calling `help.search` for the term ‘linear’. The top entries in Figure [2\.3](ch1-first-steps.html#fig:R-doc-search-example) are **vignettes**. These are compact manuals or tutorials on particular topics or functions, and they are directly available in R. If you want to browse through the vignettes available on your machine (which depend on which packages you have installed), go ahead with: ``` browseVignettes() ``` **Exercise 2\.4** Look up the help page for the command `round`. As you know about this function already, focus on getting a feeling for how the help text is structured and the most important bits of information are conveyed. Try to understand what the other functions covered in this entry do and when which one would be most useful.
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/ch1-data-types.html
2\.2 Data types --------------- Learning a new programming language entails first learning something about what kinds of objects (elements, first\-class citizens) you will have to deal with. Let’s therefore briefly go through the data types that are most important for our later purposes. We will see how to deal with numeric information, Booleans, strings and so forth. In general, we can assess the type of an object stored in variable `x` with the function `typeof(x)`. Let’s just try this for a bunch of things, just to give you an overview of some of R’s data types (not all of which are important to know about right from the start): ``` typeof(3) # returns type "double" typeof(TRUE) # returns type "logical" typeof(cars) # returns "list" (includes data.frames, tibbles, objects, ...) typeof("huhu") # returns "character" (= string) typeof(mean) # returns "closure" (= function) typeof(c) # returns "builtin" (= deep system internal stuff) typeof(round) # returns type "special" (= well, special stuff?) ``` If you really wonder, you can sometimes learn more about an object by just printing it out as a string: ``` # `lm` is actually a function ("linear model") # the function `str` casts this function into a string # the result is then printed to screen str(lm) ``` ``` ## function (formula, data, subset, weights, na.action, method = "qr", model = TRUE, ## x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE, contrasts = NULL, ## offset, ...) ``` It is sometimes possible to cast objects of one type into another type `XXX` using functions `as.XXX` in base R or `as_XXX` in the tidyverse. ``` # casting Boolean value `TRUE` into number format as.numeric(TRUE) # returns 1 ``` ``` ## [1] 1 ``` Casting can also happen implicitly. The expressions `TRUE` and `FALSE` are built\-in variables for the Boolean values “true” and “false”. But when we use them in mathematical expressions, we can do math with them, like so: ``` TRUE + TRUE + FALSE + TRUE + TRUE ``` ``` ## [1] 4 ``` ### 2\.2\.1 Numeric vectors \& matrices R is essentially an array\-based language. Arrays are arbitrary but finite\-dimensional matrices. We will discuss what is usually referred to as vectors (\= one\-dimensional arrays), matrices (\= two\-dimensional arrays), and arrays (\= more\-than\-two\-dimensional) in this section with a focus on numeric information. But it is important to keep in mind that arrays can contain objects of other types than numeric information (as long as all objects in the array are of the same type). #### 2\.2\.1\.1 Numeric information Standard number format in R is double. ``` typeof(3) ``` ``` ## [1] "double" ``` We can also represent numbers as integers and complex numbers. ``` typeof(as.integer(3)) # returns 'integer' ``` ``` ## [1] "integer" ``` ``` typeof(as.complex(3)) # returns 'complex' ``` ``` ## [1] "complex" ``` #### 2\.2\.1\.2 Numeric vectors As a generally useful heuristic, expect every piece of numerical information to be treated as a vector (or higher\-order: matrix, array, … ; see below), and expect any (basic, mathematical) operation in R to (most likely) apply to the whole vector, matrix, array, collection.[7](#fn7) This makes it possible to ask for the length of a variable to which we assign a single number, for instance: ``` x <- 7 length(x) ``` ``` ## [1] 1 ``` We can even index such a variable: ``` x <- 7 x[1] # what is the entry in position 1 of the vector x? 
``` ``` ## [1] 7 ``` Or assign a new value to a hitherto unused index: ``` x[3] <- 6 # assign the value 6 to the 3rd entry of vector x x # notice that the 2nd entry is undefined, or "NA", not available ``` ``` ## [1] 7 NA 6 ``` Vectors in general can be declared with the built\-in function `c()`. To memorize this, think of *concatenation* or *combination*. ``` x <- c(4, 7, 1, 1) # this is now a 4-place vector x ``` ``` ## [1] 4 7 1 1 ``` There are also helpful functions to generate sequences of numbers: ``` 1:10 # returns 1, 2, 3, ..., 10 seq(from = 1, to = 10, by = 1) # returns 1, 2, 3, ..., 10 seq(from = 1, to = 10, by = 0.5) # returns 1, 1.5, 2, ..., 9.5, 10 seq(from = 0, to = 1 , length.out = 11) # returns 0, 0.1, ..., 0.9, 1 ``` Indexing in R starts with 1, not 0! ``` x <- c(4, 7, 1, 1) # this is now a 4-place vector x[2] ``` ``` ## [1] 7 ``` And now we see what is meant above when we said that (almost) every mathematical operation can be expected to apply to a vector: ``` x <- c(4, 7, 1, 1) # 4-placed vector as before x + 1 ``` ``` ## [1] 5 8 2 2 ``` **Exercise 2\.5** Create a vector that contains all even numbers from 0 to 20 and assign it to a variable. Now transform the variable such that it contains only odd numbers up to 20 using mathematical operation. Notice that the numbers above 20 should not be included! \[**Hint**: use indexing.] Solution ``` a <- seq(from = 0, to = 20, by = 2) a <- a + 1 a <- a[1:10] a ``` ``` ## [1] 1 3 5 7 9 11 13 15 17 19 ``` #### 2\.2\.1\.3 Numeric matrices Matrices are declared with the function `matrix`. This function takes, for instance, a vector as an argument. ``` x <- c(4, 7, 1, 1) # 4-placed vector as before (m <- matrix(x)) # cast x into matrix format ``` ``` ## [,1] ## [1,] 4 ## [2,] 7 ## [3,] 1 ## [4,] 1 ``` Notice that the result is a matrix with a single column. This is important. R uses so\-called *column\-major mode*.[8](#fn8) This means that it will fill columns first. For example, a matrix with three columns based on a six\-placed vector 1, 2, \\(\\dots\\), 6 will be built by filling the first column from top to bottom, then the second column top to bottom, and so on.[9](#fn9) ``` m <- matrix(1:6, ncol = 3) m ``` ``` ## [,1] [,2] [,3] ## [1,] 1 3 5 ## [2,] 2 4 6 ``` In line with a column\-major mode, vectors are treated as column vectors in matrix operations: ``` x = c(1, 0, 1) # 3-place vector m %*% x # dot product with previous matrix 'm' ``` ``` ## [,1] ## [1,] 6 ## [2,] 8 ``` As usual, and independently of a column\- or row\-major mode, matrix indexing starts with the row index: ``` m[1,] # produces first row of matrix 'm' ``` ``` ## [1] 1 3 5 ``` **Exercise 2\.6** Create a sequence of 9 numbers, equally spaced, starting from 0 and ending with 1\. Assign this sequence to a vector called `x`. Now, create a matrix, stored in variable `X`, with three columns and three rows that contain the numbers of this vector in the usual column\-major fashion. Solution ``` x <- seq(from = 0, to = 1, length.out = 9) X <- matrix(x, ncol = 3) X ``` ``` ## [,1] [,2] [,3] ## [1,] 0.000 0.375 0.750 ## [2,] 0.125 0.500 0.875 ## [3,] 0.250 0.625 1.000 ``` We have not yet covered this, but give it a try and guess what might be a convenient and very short statement to compute the sum of all numbers in matrix `X`. Solution ``` sum(X) ``` ``` ## [1] 4.5 ``` #### 2\.2\.1\.4 Arrays Arrays are simply higher\-dimensional matrices. We will not make (prominent) use of arrays in this book. 
#### 2\.2\.1\.5 Names for vectors, matrices and arrays The positions in a vector can be given names. This is extremely useful for good “literate coding” and therefore highly recommended. The names of vector `x`’s positions are retrieved and set by the `names` function:[10](#fn10) ``` students <- c("Jax", "Jamie", "Jason") # names of students grades <- c(1.3, 2.7, 2.0) # a vector of grades names(grades) # retrieve names: with no names so far ``` ``` ## NULL ``` ``` names(grades) <- students # assign names names(grades) # retrieve names again: names assigned ``` ``` ## [1] "Jax" "Jamie" "Jason" ``` ``` grades # output shows names ``` ``` ## Jax Jamie Jason ## 1.3 2.7 2.0 ``` But we can also do this in one swoop, like so: ``` c(Jax = 1.3, Jamie = 2.7, Jason = 2.0) ``` ``` ## Jax Jamie Jason ## 1.3 2.7 2.0 ``` Names for matrices are retrieved or set with functions `rownames` and `colnames`. ``` # declare matrix m <- matrix(1:6, ncol = 3) # assign row and column names, using function # `str_c` which is described below rownames(m) <- str_c("row", 1:nrow(m), sep = "_") colnames(m) <- str_c("col", 1:ncol(m), sep = "_") m ``` ``` ## col_1 col_2 col_3 ## row_1 1 3 5 ## row_2 2 4 6 ``` ### 2\.2\.2 Booleans There are built\-in names for Boolean values “true” and “false”, predictably named `TRUE` and `FALSE`. Equivalent shortcuts are `T` and `F`. If we attempt to do math with Boolean vectors, the outcome is what any reasonable logician would expect: ``` x <- c(T, F, T) 1 - x ``` ``` ## [1] 0 1 0 ``` ``` x + 3 ``` ``` ## [1] 4 3 4 ``` Boolean vectors can be used as index sets to extract elements from other vectors. ``` # vector 1, 2, ..., 5 number_vector <- 1:5 # index of odd numbers set to `TRUE` boolean_vector <- c(T, F, T, F, T) # returns the elements from number vector, for which # the corresponding element in the Boolean vector is true number_vector[boolean_vector] ``` ``` ## [1] 1 3 5 ``` ### 2\.2\.3 Special values There are a couple of keywords reserved in R for special kinds of objects: * `NA`: “not available”; represent missing values in data * `NaN`: “not a number”; e.g., division zero by zero * `Inf` or `-Inf`: infinity and negative infinity; returned when a number is too big or divided by zero * `NULL`: the NULL object; often returned when a function is undefined for the provided input ### 2\.2\.4 Characters (\= strings) Strings are called characters in R. We will be stubborn and call them strings for most of the time here. We can assign a string value to a variable by putting the string in double\-quotes: ``` x <- "huhu" typeof(x) ``` ``` ## [1] "character" ``` We can create vectors of characters in the obvious way: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") chr_vector ``` ``` ## [1] "huhu" "hello" "huhu" "ciao" ``` The package `stringr` from the tidyverse also provides very useful and, in comparison to base R, more uniform functions for string manipulation. The [cheat sheet](http://edrub.in/CheatSheets/cheatSheetStringr.pdf) for the `stringr` package is highly recommended for a quick overview. Below are some examples. Function `str_c` concatenates strings: ``` str_c("Hello", "Hi", "Hey", sep = "! ") ``` ``` ## [1] "Hello! Hi! 
Hey" ``` We can find the indices of matches in a character vector with `str_which`: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") str_which(chr_vector, "hu") ``` ``` ## [1] 1 3 ``` Similarly, `str_detect` gives a Boolean vector of matching: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") str_detect(chr_vector, "hu") ``` ``` ## [1] TRUE FALSE TRUE FALSE ``` If we want to get the strings matching a pattern, we can use `str_subset`: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") str_subset(chr_vector, "hu") ``` ``` ## [1] "huhu" "huhu" ``` Replacing all matches with another string works with `str_replace_all`: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") str_replace_all(chr_vector, "h", "B") ``` ``` ## [1] "BuBu" "Bello" "BuBu" "ciao" ``` For data preparation, we often need to split strings by a particular character. For instance, a set of reaction times could be separated by a character line “\|”. We can split this string representation to get individual measurements like so: ``` # three measures of reaction time in a single string reaction_times <- "123|234|345" # notice that we need to doubly (!) escape character | # notice also that the result is a list (see below) str_split(reaction_times, "\\|", n = 3) ``` ``` ## [[1]] ## [1] "123" "234" "345" ``` ### 2\.2\.5 Factors Factors are special vectors, which treat their elements as instances of a finite set of categories. To create a factor, we can use the function `factor`. The following code creates a factor from a character vector. Notice that, when printing, we get information of the kinds of entries (\= categories) that occurred in the original character vector: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") factor(chr_vector) ``` ``` ## [1] huhu hello huhu ciao ## Levels: ciao hello huhu ``` For plotting or other representational purposes, it can help to manually specify an ordering on the levels of a factor using the `levels` argument: ``` # the order of levels is changed manually factor(chr_vector, levels = c("huhu", "ciao", "hello")) ``` ``` ## [1] huhu hello huhu ciao ## Levels: huhu ciao hello ``` Even though we specified an ordering among factor levels, the last code chunk nonetheless creates what R treats as an *unordered factor*. There are also genuine *ordered factors*. An *ordered factor* is created by setting the argument `ordered = T`, and optionally also specifying a specific ordering of factor levels, like so: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") factor( chr_vector, # the vector to treat as factor ordered = T, # make sure it's treated as ordered factor levels = c("huhu", "ciao", "hello") # specify order of levels by hand ) ``` ``` ## [1] huhu hello huhu ciao ## Levels: huhu < ciao < hello ``` Having both unordered and ordered factors is useful for representing data from experiments, e.g., from categorical or ordinal variables (see Chapter [3](Chap-02-01-data.html#Chap-02-01-data)). The difference between an unordered factor with explicit ordering information and an ordered factor is subtle and not important in the beginning. (This only matters, for example, in the context of regression modeling.) Factors are trickier to work with than mere vectors because they are rigid about the represented factor levels. 
Adding an item that does not belong to any of a factor’s levels, leads to trouble: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") my_factor <- factor( chr_vector, # the vector to treat as factor ordered = T, # make sure it's treated as ordered factor levels = c("huhu", "ciao", "hello") # specify order of levels ) my_factor[5] <- "huhu" # adding a "known category" is okay my_factor[6] <- "moin" # adding an "unknown category" does not work my_factor ``` ``` ## [1] huhu hello huhu ciao huhu <NA> ## Levels: huhu < ciao < hello ``` The `forcats` package from the tidyverse helps in dealing with factors. You should check the [Cheat Sheet](http://www.flutterbys.com.au/stats/downloads/slides/figure/factors.pdf) for more helpful functionality. Here is an example of how to expand the levels of a factor: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") my_factor <- factor( chr_vector, # the vector to treat as factor ordered = T, # make sure it's treated as ordered factor levels = c("huhu", "ciao", "hello") # specify order of levels ) my_factor[5] <- "huhu" # adding a "known category" is okay my_factor <- fct_expand(my_factor, "moin") # add new category my_factor[6] <- "moin" # adding new item now works my_factor ``` ``` ## [1] huhu hello huhu ciao huhu moin ## Levels: huhu < ciao < hello < moin ``` It is sometimes useful (especially for plotting) to flexibly reorder the levels of an ordered factor. Here are some useful functions from the `forcats` package: ``` my_factor # original factor ``` ``` ## [1] huhu hello huhu ciao huhu moin ## Levels: huhu < ciao < hello < moin ``` ``` fct_rev(my_factor) # reverse level order ``` ``` ## [1] huhu hello huhu ciao huhu moin ## Levels: moin < hello < ciao < huhu ``` ``` fct_relevel( # manually supply new level order my_factor, c("hello", "ciao", "huhu") ) ``` ``` ## [1] huhu hello huhu ciao huhu moin ## Levels: hello < ciao < huhu < moin ``` ### 2\.2\.6 Lists, data frames \& tibbles Lists are key\-value pairs. They are created with the built\-in function `list`. The difference between a list and a named vector is that in the latter, all elements must be of the same type. In a list, the elements can be of arbitrary type. They can also be vectors or even lists themselves. For example: ``` my_list <- list( single_number = 42, chr_vector = c("huhu", "ciao"), nested_list = list(x = 1, y = 2, z = 3) ) my_list ``` ``` ## $single_number ## [1] 42 ## ## $chr_vector ## [1] "huhu" "ciao" ## ## $nested_list ## $nested_list$x ## [1] 1 ## ## $nested_list$y ## [1] 2 ## ## $nested_list$z ## [1] 3 ``` To access a list element by its name (\= key), we can use the `$` sign followed by the unquoted name, double square brackets `[[ "name" ]]` with the quoted name inside, or indices in double brackets, like so: ``` # all of these return the same list element my_list$chr_vector ``` ``` ## [1] "huhu" "ciao" ``` ``` my_list[["chr_vector"]] ``` ``` ## [1] "huhu" "ciao" ``` ``` my_list[[2]] ``` ``` ## [1] "huhu" "ciao" ``` Lists are very important in R because almost all structured data that belongs together is stored as lists. Objects are special kinds of lists. Data is stored in special kinds of lists, so\-called *data frames* or so\-called *tibbles*. A data frame is base R’s standard format to store data in. A data frame is a list of vectors of equal length. 
Data sets are instantiated with the function `data.frame`: ``` # fake experimental data exp_data <- data.frame( trial = 1:5, condition = factor( c("C1", "C2", "C1", "C3", "C2"), ordered = T ), response = c(121, 133, 119, 102, 156) ) exp_data ``` ``` ## trial condition response ## 1 1 C1 121 ## 2 2 C2 133 ## 3 3 C1 119 ## 4 4 C3 102 ## 5 5 C2 156 ``` **Exercise 2\.7** Create a vector `a` that contains the names of three of your best (imaginary) friends and a vector `b` with their (imaginary) age. Create a data frame that represents this information (one column with names and one with respective age). Notice that column names should represent the information they contain! Solution ``` a <- c("M", "N", "H") b <- c(23, 41, 13) best_friends <- data.frame(name = a, age = b) best_friends ``` ``` ## name age ## 1 M 23 ## 2 N 41 ## 3 H 13 ``` We can access columns of a data frame, just like we access elements in a list. Additionally, we can also use index notation, like in a matrix: ``` # gives the value of the cell in row 2, column 3 exp_data[2, 3] # returns 133 ``` ``` ## [1] 133 ``` **Exercise 2\.8** Display the column of names of your (imaginary) friends from the `best_friends` data frame. Solution ``` best_friends["name"] ``` ``` ## name ## 1 M ## 2 N ## 3 H ``` ``` best_friends[1] ``` ``` ## name ## 1 M ## 2 N ## 3 H ``` Now show only the names of friends who are younger than 22 (or some other age that makes sense for your friends and their ages). \[**Hint:** you can write `x <= 22` to get a Boolean vector of the same length as `x` with an entry `TRUE` at all indices where `x` is no bigger than 22\.] Solution ``` best_friends[best_friends$age <= 22, "name"] ``` ``` ## [1] "H" ``` In RStudio, you can inspect data in data frames (and tibbles (see below)) with the function `View`. *Tibbles* are the tidyverse counterpart of data frames. We can cast a data frame into a tibble, using `as_tibble`. Notice that the information shown for a tibble is much richer than what is provided when printing the content of a data frame. ``` as_tibble(exp_data) ``` ``` ## # A tibble: 5 × 3 ## trial condition response ## <int> <ord> <dbl> ## 1 1 C1 121 ## 2 2 C2 133 ## 3 3 C1 119 ## 4 4 C3 102 ## 5 5 C2 156 ``` We can also create a tibble directly with the keyword `tibble`. Indeed, the creation of tibbles is conveniently more flexible than the creation of data frames: the former allows dynamic look\-up of previously defined elements. ``` my_tibble <- tibble(x = 1:10, y = x^2) # dynamic construction possible my_dataframe <- data.frame(x = 1:10, y = x^2) # ERROR :/ ``` Another important difference between data frames and tibbles concerns the default treatment of character (\= string) vectors. When reading in data from a CSV file as a data frame (using function `read.csv`), each character vector is treated as a factor by default. But when using `read_csv` to read CSV data into a tibble character vector are not treated as factors. There is also a very convenient function, called `tribble`, which allows you to create a tibble by explicitly writing out the information in the rows. 
``` hw_points <- tribble( ~hw_nr, ~Jax, ~Jamie, ~Jason, "HW1", 33, 24, 17, "HW2", 41, 23, 8 ) hw_points ``` ``` ## # A tibble: 2 × 4 ## hw_nr Jax Jamie Jason ## <chr> <dbl> <dbl> <dbl> ## 1 HW1 33 24 17 ## 2 HW2 41 23 8 ``` **Exercise 2\.9** Assign to the variable `bff` a tibble with the following columns (with reasonable names): at least four names of your (imaginary) best friends, their current country of residence, their age, and a Boolean column storing whether they are not older than 23\. Ideally, use dynamic construction and the `<=` operator as in previous exercises. Solution ``` bff <- tibble( name = c("A", "B", "C", "D"), residence = c("UK", "JP", "CH", "JA"), age = c(24, 45, 72, 12), young = age <= 23 ) bff ``` ``` ## # A tibble: 4 × 4 ## name residence age young ## <chr> <chr> <dbl> <lgl> ## 1 A UK 24 FALSE ## 2 B JP 45 FALSE ## 3 C CH 72 FALSE ## 4 D JA 12 TRUE ```
Now transform the variable such that it contains only odd numbers up to 20 using mathematical operation. Notice that the numbers above 20 should not be included! \[**Hint**: use indexing.] Solution ``` a <- seq(from = 0, to = 20, by = 2) a <- a + 1 a <- a[1:10] a ``` ``` ## [1] 1 3 5 7 9 11 13 15 17 19 ``` #### 2\.2\.1\.3 Numeric matrices Matrices are declared with the function `matrix`. This function takes, for instance, a vector as an argument. ``` x <- c(4, 7, 1, 1) # 4-placed vector as before (m <- matrix(x)) # cast x into matrix format ``` ``` ## [,1] ## [1,] 4 ## [2,] 7 ## [3,] 1 ## [4,] 1 ``` Notice that the result is a matrix with a single column. This is important. R uses so\-called *column\-major mode*.[8](#fn8) This means that it will fill columns first. For example, a matrix with three columns based on a six\-placed vector 1, 2, \\(\\dots\\), 6 will be built by filling the first column from top to bottom, then the second column top to bottom, and so on.[9](#fn9) ``` m <- matrix(1:6, ncol = 3) m ``` ``` ## [,1] [,2] [,3] ## [1,] 1 3 5 ## [2,] 2 4 6 ``` In line with a column\-major mode, vectors are treated as column vectors in matrix operations: ``` x = c(1, 0, 1) # 3-place vector m %*% x # dot product with previous matrix 'm' ``` ``` ## [,1] ## [1,] 6 ## [2,] 8 ``` As usual, and independently of a column\- or row\-major mode, matrix indexing starts with the row index: ``` m[1,] # produces first row of matrix 'm' ``` ``` ## [1] 1 3 5 ``` **Exercise 2\.6** Create a sequence of 9 numbers, equally spaced, starting from 0 and ending with 1\. Assign this sequence to a vector called `x`. Now, create a matrix, stored in variable `X`, with three columns and three rows that contain the numbers of this vector in the usual column\-major fashion. Solution ``` x <- seq(from = 0, to = 1, length.out = 9) X <- matrix(x, ncol = 3) X ``` ``` ## [,1] [,2] [,3] ## [1,] 0.000 0.375 0.750 ## [2,] 0.125 0.500 0.875 ## [3,] 0.250 0.625 1.000 ``` We have not yet covered this, but give it a try and guess what might be a convenient and very short statement to compute the sum of all numbers in matrix `X`. Solution ``` sum(X) ``` ``` ## [1] 4.5 ``` #### 2\.2\.1\.4 Arrays Arrays are simply higher\-dimensional matrices. We will not make (prominent) use of arrays in this book. #### 2\.2\.1\.5 Names for vectors, matrices and arrays The positions in a vector can be given names. This is extremely useful for good “literate coding” and therefore highly recommended. The names of vector `x`’s positions are retrieved and set by the `names` function:[10](#fn10) ``` students <- c("Jax", "Jamie", "Jason") # names of students grades <- c(1.3, 2.7, 2.0) # a vector of grades names(grades) # retrieve names: with no names so far ``` ``` ## NULL ``` ``` names(grades) <- students # assign names names(grades) # retrieve names again: names assigned ``` ``` ## [1] "Jax" "Jamie" "Jason" ``` ``` grades # output shows names ``` ``` ## Jax Jamie Jason ## 1.3 2.7 2.0 ``` But we can also do this in one swoop, like so: ``` c(Jax = 1.3, Jamie = 2.7, Jason = 2.0) ``` ``` ## Jax Jamie Jason ## 1.3 2.7 2.0 ``` Names for matrices are retrieved or set with functions `rownames` and `colnames`. 
``` # declare matrix m <- matrix(1:6, ncol = 3) # assign row and column names, using function # `str_c` which is described below rownames(m) <- str_c("row", 1:nrow(m), sep = "_") colnames(m) <- str_c("col", 1:ncol(m), sep = "_") m ``` ``` ## col_1 col_2 col_3 ## row_1 1 3 5 ## row_2 2 4 6 ``` 
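Once names are assigned, they can also be used for indexing, for vectors and matrices alike. The following is a small added sketch (not from the original text), re-creating the objects from above so that it is self-contained:

```
# a named vector can be indexed by name instead of by position
grades <- c(Jax = 1.3, Jamie = 2.7, Jason = 2.0)
grades["Jamie"]              # returns the named element 2.7

# the same works for matrices once row and column names are set
m <- matrix(1:6, ncol = 3)
rownames(m) <- c("row_1", "row_2")
colnames(m) <- c("col_1", "col_2", "col_3")
m["row_1", "col_2"]          # returns 3, the cell in row 1, column 2
```

Indexing by name is often more robust than indexing by position, since it still works if the order of elements changes.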
### 2\.2\.2 Booleans There are built\-in names for Boolean values “true” and “false”, predictably named `TRUE` and `FALSE`. Equivalent shortcuts are `T` and `F`. If we attempt to do math with Boolean vectors, the outcome is what any reasonable logician would expect: ``` x <- c(T, F, T) 1 - x ``` ``` ## [1] 0 1 0 ``` ``` x + 3 ``` ``` ## [1] 4 3 4 ``` Boolean vectors can be used as index sets to extract elements from other vectors. 
``` # vector 1, 2, ..., 5 number_vector <- 1:5 # index of odd numbers set to `TRUE` boolean_vector <- c(T, F, T, F, T) # returns the elements from number vector, for which # the corresponding element in the Boolean vector is true number_vector[boolean_vector] ``` ``` ## [1] 1 3 5 ``` ### 2\.2\.3 Special values There are a couple of keywords reserved in R for special kinds of objects: * `NA`: “not available”; represent missing values in data * `NaN`: “not a number”; e.g., division zero by zero * `Inf` or `-Inf`: infinity and negative infinity; returned when a number is too big or divided by zero * `NULL`: the NULL object; often returned when a function is undefined for the provided input ### 2\.2\.4 Characters (\= strings) Strings are called characters in R. We will be stubborn and call them strings for most of the time here. We can assign a string value to a variable by putting the string in double\-quotes: ``` x <- "huhu" typeof(x) ``` ``` ## [1] "character" ``` We can create vectors of characters in the obvious way: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") chr_vector ``` ``` ## [1] "huhu" "hello" "huhu" "ciao" ``` The package `stringr` from the tidyverse also provides very useful and, in comparison to base R, more uniform functions for string manipulation. The [cheat sheet](http://edrub.in/CheatSheets/cheatSheetStringr.pdf) for the `stringr` package is highly recommended for a quick overview. Below are some examples. Function `str_c` concatenates strings: ``` str_c("Hello", "Hi", "Hey", sep = "! ") ``` ``` ## [1] "Hello! Hi! Hey" ``` We can find the indices of matches in a character vector with `str_which`: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") str_which(chr_vector, "hu") ``` ``` ## [1] 1 3 ``` Similarly, `str_detect` gives a Boolean vector of matching: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") str_detect(chr_vector, "hu") ``` ``` ## [1] TRUE FALSE TRUE FALSE ``` If we want to get the strings matching a pattern, we can use `str_subset`: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") str_subset(chr_vector, "hu") ``` ``` ## [1] "huhu" "huhu" ``` Replacing all matches with another string works with `str_replace_all`: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") str_replace_all(chr_vector, "h", "B") ``` ``` ## [1] "BuBu" "Bello" "BuBu" "ciao" ``` For data preparation, we often need to split strings by a particular character. For instance, a set of reaction times could be separated by a character line “\|”. We can split this string representation to get individual measurements like so: ``` # three measures of reaction time in a single string reaction_times <- "123|234|345" # notice that we need to doubly (!) escape character | # notice also that the result is a list (see below) str_split(reaction_times, "\\|", n = 3) ``` ``` ## [[1]] ## [1] "123" "234" "345" ``` ### 2\.2\.5 Factors Factors are special vectors, which treat their elements as instances of a finite set of categories. To create a factor, we can use the function `factor`. The following code creates a factor from a character vector. 
Notice that, when printing, we get information of the kinds of entries (\= categories) that occurred in the original character vector: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") factor(chr_vector) ``` ``` ## [1] huhu hello huhu ciao ## Levels: ciao hello huhu ``` For plotting or other representational purposes, it can help to manually specify an ordering on the levels of a factor using the `levels` argument: ``` # the order of levels is changed manually factor(chr_vector, levels = c("huhu", "ciao", "hello")) ``` ``` ## [1] huhu hello huhu ciao ## Levels: huhu ciao hello ``` Even though we specified an ordering among factor levels, the last code chunk nonetheless creates what R treats as an *unordered factor*. There are also genuine *ordered factors*. An *ordered factor* is created by setting the argument `ordered = T`, and optionally also specifying a specific ordering of factor levels, like so: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") factor( chr_vector, # the vector to treat as factor ordered = T, # make sure it's treated as ordered factor levels = c("huhu", "ciao", "hello") # specify order of levels by hand ) ``` ``` ## [1] huhu hello huhu ciao ## Levels: huhu < ciao < hello ``` Having both unordered and ordered factors is useful for representing data from experiments, e.g., from categorical or ordinal variables (see Chapter [3](Chap-02-01-data.html#Chap-02-01-data)). The difference between an unordered factor with explicit ordering information and an ordered factor is subtle and not important in the beginning. (This only matters, for example, in the context of regression modeling.) Factors are trickier to work with than mere vectors because they are rigid about the represented factor levels. Adding an item that does not belong to any of a factor’s levels, leads to trouble: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") my_factor <- factor( chr_vector, # the vector to treat as factor ordered = T, # make sure it's treated as ordered factor levels = c("huhu", "ciao", "hello") # specify order of levels ) my_factor[5] <- "huhu" # adding a "known category" is okay my_factor[6] <- "moin" # adding an "unknown category" does not work my_factor ``` ``` ## [1] huhu hello huhu ciao huhu <NA> ## Levels: huhu < ciao < hello ``` The `forcats` package from the tidyverse helps in dealing with factors. You should check the [Cheat Sheet](http://www.flutterbys.com.au/stats/downloads/slides/figure/factors.pdf) for more helpful functionality. Here is an example of how to expand the levels of a factor: ``` chr_vector <- c("huhu", "hello", "huhu", "ciao") my_factor <- factor( chr_vector, # the vector to treat as factor ordered = T, # make sure it's treated as ordered factor levels = c("huhu", "ciao", "hello") # specify order of levels ) my_factor[5] <- "huhu" # adding a "known category" is okay my_factor <- fct_expand(my_factor, "moin") # add new category my_factor[6] <- "moin" # adding new item now works my_factor ``` ``` ## [1] huhu hello huhu ciao huhu moin ## Levels: huhu < ciao < hello < moin ``` It is sometimes useful (especially for plotting) to flexibly reorder the levels of an ordered factor. 
Here are some useful functions from the `forcats` package: ``` my_factor # original factor ``` ``` ## [1] huhu hello huhu ciao huhu moin ## Levels: huhu < ciao < hello < moin ``` ``` fct_rev(my_factor) # reverse level order ``` ``` ## [1] huhu hello huhu ciao huhu moin ## Levels: moin < hello < ciao < huhu ``` ``` fct_relevel( # manually supply new level order my_factor, c("hello", "ciao", "huhu") ) ``` ``` ## [1] huhu hello huhu ciao huhu moin ## Levels: hello < ciao < huhu < moin ``` ### 2\.2\.6 Lists, data frames \& tibbles Lists are key\-value pairs. They are created with the built\-in function `list`. The difference between a list and a named vector is that in the latter, all elements must be of the same type. In a list, the elements can be of arbitrary type. They can also be vectors or even lists themselves. For example: ``` my_list <- list( single_number = 42, chr_vector = c("huhu", "ciao"), nested_list = list(x = 1, y = 2, z = 3) ) my_list ``` ``` ## $single_number ## [1] 42 ## ## $chr_vector ## [1] "huhu" "ciao" ## ## $nested_list ## $nested_list$x ## [1] 1 ## ## $nested_list$y ## [1] 2 ## ## $nested_list$z ## [1] 3 ``` To access a list element by its name (\= key), we can use the `$` sign followed by the unquoted name, double square brackets `[[ "name" ]]` with the quoted name inside, or indices in double brackets, like so: ``` # all of these return the same list element my_list$chr_vector ``` ``` ## [1] "huhu" "ciao" ``` ``` my_list[["chr_vector"]] ``` ``` ## [1] "huhu" "ciao" ``` ``` my_list[[2]] ``` ``` ## [1] "huhu" "ciao" ``` Lists are very important in R because almost all structured data that belongs together is stored as lists. Objects are special kinds of lists. Data is stored in special kinds of lists, so\-called *data frames* or so\-called *tibbles*. A data frame is base R’s standard format to store data in. A data frame is a list of vectors of equal length. 
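Because a data frame literally is a list of equal-length vectors, list-style access carries over directly. The following short sketch is an added illustration (not part of the original text); it assumes the `exp_data` data frame created with `data.frame()` further above:

```
# a data frame behaves like a list of its columns
is.list(exp_data)        # TRUE: a data frame is a special kind of list
exp_data$response        # extract the `response` column as a vector
exp_data[["condition"]]  # list-style double-bracket access also works
length(exp_data)         # 3: the number of columns (= list elements)
```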
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/Chap-01-01-functions.html
2\.3 Functions -------------- ### 2\.3\.1 Some important built\-in functions Many helpful functions are defined in base R or supplied by packages. We recommend browsing the [Cheat Sheets](https://rstudio.com/resources/cheatsheets/) every now and then to pick up more useful stuff for your inventory. Here are some functions that are very basic and generally useful. #### 2\.3\.1\.1 Standard logic * `&`: “and” * `|`: “or” * `!`: “not” * `negate()`: a pipe\-friendly `!` (see Section [2\.5](Chap-01-01-piping.html#Chap-01-01-piping) for more on piping) * `all()`: returns `TRUE` if all elements of a vector are `TRUE` * `any()`: returns `TRUE` if at least one element of a vector is `TRUE` #### 2\.3\.1\.2 Comparisons * `<`: smaller * `>`: greater * `==`: equal (you can also use `near()` instead of `==`, e.g., `near(3/3, 1)` returns `TRUE`) * `>=`: greater or equal * `<=`: less or equal * `!=`: not equal #### 2\.3\.1\.3 Set theory * `%in%`: whether an element is in a vector * `union(x, y)`: union of `x` and `y` * `intersect(x, y)`: intersection of `x` and `y` * `setdiff(x, y)`: all elements in `x` that are not in `y` #### 2\.3\.1\.4 Sampling and combinatorics * `runif()`: random number from unit interval \[0;1] * `sample(x, size, replace)`: take `size` samples from `x` (with replacement if `replace` is `T`) * `choose(n, k)`: number of subsets of size `k` out of a set of size `n` (binomial coefficient) ### 2\.3\.2 Defining your own functions If you find yourself in a situation in which you would like to copy\-paste some code, possibly with minor amendments, this usually means that you should wrap some recurring operations into a custom\-defined function. There are two ways of defining your own functions: as a *named function*, or an *anonymous function*. #### 2\.3\.2\.1 Named functions The special operator supplied by base R to create new functions is the keyword `function`. Here is an example of defining a new function with two input variables `x` and `y` that returns a computation based on these numbers. We assign the newly created function to the variable `cool_function` so that we can use this name to call the function later. Notice that the use of the `return` keyword is optional here. If it is left out, the evaluation of the last line is returned. ``` # define a new function # - takes two numbers x & y as argument # - returns x * y + 1 cool_function <- function(x, y) { return(x * y + 1) } # apply `cool_function` to some numbers: cool_function(3, 3) # returns 10 cool_function(1, 1) # returns 2 cool_function(1:2, 1) # returns vector [2,3] cool_function(1) # throws error: 'argument "y" is missing, with no default' cool_function() # throws error: 'argument "x" is missing, with no default' ``` We can give default values for the parameters passed to a function: ``` # same function as before but with # default values for each argument cool_function_2 <- function(x = 2, y = 3) { return(x * y + 1) } # apply `cool_function_2` to some numbers: cool_function_2(3, 3) # returns 10 cool_function_2(1, 1) # returns 2 cool_function_2(1:2, 1) # returns vector [2,3] cool_function_2(1) # returns 4 (= 1 * 3 + 1) cool_function_2() # returns 7 (= 2 * 3 + 1) ``` **Exercise 2\.10** Create a function called `bigger_100` which takes two numbers as input and outputs 0 if their product is less than or equal to 100, and 1 otherwise. (**Hint:** remember that you can cast a Boolean value to an integer with `as.integer`.) 
Solution ``` bigger_100 <- function(x, y) { return(as.integer(x * y > 100)) } bigger_100(40, 3) ``` ``` ## [1] 1 ``` #### 2\.3\.2\.2 Anonymous functions Notice that we can feed functions as parameters to other functions. This is an important ingredient of a functional\-style of programming, and something that we will rely on heavily in this book (see Section [2\.4](ch-01-01-loops-and-maps.html#ch-01-01-loops-and-maps)). When supplying a function as an argument to another function, we might not want to name the function that is passed. Here’s a (stupid, but hopefully illustrating) example. We first define the named function `new_applier_function` which takes two arguments as input: an input vector, which is locally called `input` in the scope of the function’s body, and a function, which is locally called `function_to_apply`. Our new function `new_applier_function` first checks whether the input vector has more than one element, throws an error if not, and otherwise applies the argument function `function_to_apply` to the vector `input`. ``` # define a function that takes a vector and a function as an argument new_applier_function <- function(input, function_to_apply) { # check if input vector has at least 2 elements if (length(input) <= 1) { # terminate and show informative error message stop("Error in 'new_applier_function': input vector has length <= 1.") } # otherwise apply the function to the input vector return(function_to_apply(input)) } ``` We use this new function to show the difference between named and unnamed functions, in particular why the latter can be very handy and elegant. First, we consider a case where we use `new_applier_function` in connection with the named built\-in function `sum`: ``` # sum vector with built-in & named function new_applier_function( input = 1:3, # input vector function_to_apply = sum # built-in & named function to apply ) # returns 6 ``` If instead of an existing named function, we want to use a new function to supply to `new_applier_function`, we could define that function first and give it a name, but if we only need it “in situ” for calling `new_applier_function` once, we can also write this: ``` # Sum vector with anonymous function new_applier_function( input = 1:3, # input vector function_to_apply = function(in_vec) { return(in_vec[1] + in_vec[2]) } ) # returns 3 (as it only sums the first two arguments) ``` **Exercise 2\.11** How many arguments should you pass to a function that… 1. …tells if the sum of two numbers is even? 2. …applies two different operations on a variable and sums the results? Operations are not fixed in the function. Solution 1. Two arguments. 2. Three arguments. Call the function `new_applier_function` with `input = 1:3` and an anonymous function that returns just the first two elements of the input vector in reverse order (as a vector). Solution ``` new_applier_function( input = 1:3, # input vector function_to_apply = function(in_vec) { return(c(in_vec[c(2,1)])) } ) ``` 
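To see the length check in `new_applier_function` at work, we can feed it a vector that is too short. This is an added demonstration (not from the original text):

```
# a single number is a length-1 vector, so the check fires
new_applier_function(
  input = 42,
  function_to_apply = sum
)
# stops with the message:
# "Error in 'new_applier_function': input vector has length <= 1."
```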
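As a side note that goes beyond the text above: since version 4\.1\.0, base R also offers the compact lambda syntax `\(x)` as a shorthand for `function(x)`, which is handy for short anonymous functions like the ones used here. A minimal sketch, assuming R 4\.1\.0 or later:

```
# `\(in_vec) ...` is equivalent to `function(in_vec) ...` (R >= 4.1.0)
new_applier_function(
  input = 1:3,
  function_to_apply = \(in_vec) in_vec[1] + in_vec[2]
)
# returns 3, just like the anonymous-function example above
```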
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/ch-01-01-loops-and-maps.html
2\.4 Loops and maps ------------------- ### 2\.4\.1 For\-loops For iteratively performing computation steps, R has a special syntax for `for` loops. Here is an (again, stupid, but illustrative) example of a `for` loop in R: ``` # fix a vector to transform input_vector <- 1:6 # create output vector for memory allocation output_vector <- integer(length(input_vector)) # iterate over length of input for (i in 1:length(input_vector)) { # multiply by 10 if even if (input_vector[i] %% 2 == 0) { output_vector[i] <- input_vector[i] * 10 } # otherwise leave unchanged else { output_vector[i] <- input_vector[i] } } output_vector ``` ``` ## [1] 1 20 3 40 5 60 ``` **Exercise 2\.12** Let’s practice for\-loops and if/else statements! Create a vector `a` with 10 random integers from range (1:50\). Create a second vector `b` that has the same length as vector `a`. Then fill vector `b` such that the \\(i\\)th entry in `b` is the mean of a\[(i\-1\):(i\+1\)]. Do that using a for\-loop. Note that entries outside the vector (such as a\[0] before the first position) count as 0 (see the example below). Print out the result as a tibble whose columns are `a` and `b`. Example: If `a` has the values \[25, 39, 12, 33, 47, 3, 48, 14, 45, 8], then vector `b` should contain the values \[21, 25, 28, 31, 28, 33, 22, 36, 22, 18] when rounded to whole integers. The value in the fourth position of `b` (value 31\) is obtained with (a\[3] \+ a\[4] \+ a\[5])/3\. The value in the first position of `b` (value 21\) is obtained with (0 \+ a\[1] \+ a\[2])/3 and similarly the last value with (a\[9] \+ a\[10] \+ 0\)/3\. (**Hint:** use conditional statements `if`, `else if` and `else` to deal specifically with the edge cases (first and last entry in the vectors).) Solution ``` a <- c(sample((1:50), 10, replace = T)) b <- c(integer(length(a))) for (i in 1:length(a)){ if (i == 1) { b[i] <- (sum(a[i:(i+1)])/3) } else if (i == length(a)) { b[i] <- (sum((a[(i-1):i]))/3) } else { b[i] <- (mean(a[(i-1):(i+1)])) } } tibble(a, b) ``` ``` ## # A tibble: 10 × 2 ## a b ## <int> <dbl> ## 1 27 18.3 ## 2 28 24 ## 3 17 28.7 ## 4 41 28.3 ## 5 27 26.3 ## 6 11 13.3 ## 7 2 19.7 ## 8 46 27.7 ## 9 35 43.3 ## 10 49 28 ``` ### 2\.4\.2 Functional iterators Base R provides functional iterators (e.g., `apply`), but we will use the functional iterators from the `purrr` package. The main functional operator from `purrr` is `map`, which takes a vector and a function, applies the function to each element in the vector and returns a list with the outcome. There are also versions of `map`, written as `map_dbl` (double), `map_lgl` (logical) or `map_df` (data frame), which return a vector of doubles, a vector of Booleans or a data frame. The following code repeats the previous example which used a for\-loop, but now in a functional style using the functional iterator `map_dbl`: ``` input_vector <- 1:6 map_dbl( input_vector, function(i) { if (input_vector[i] %% 2 == 0) { return(input_vector[i] * 10) } else { return (input_vector[i]) } } ) ``` ``` ## [1] 1 20 3 40 5 60 ``` We can write this even shorter, using `purrr`’s short\-hand notation for functions:[11](#fn11) ``` input_vector <- 1:6 map_dbl( input_vector, ~ ifelse(.x %% 2 == 0, .x * 10, .x) ) ``` ``` ## [1] 1 20 3 40 5 60 ``` The `~` at the start indicates that we define an anonymous function. It therefore replaces the usual `function(...)` call which indicates which arguments the anonymous function expects. To make up for this, after the `~` we can use `.x` for the first (and only) argument of our anonymous function. 
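To make the difference between `map` and its type-specific variants concrete, here is a small added comparison (an illustration, not part of the original text):

```
# `map` always returns a list, even if each result is a single number
map(1:3, ~ .x^2)               # list(1, 4, 9)

# `map_dbl` simplifies the results into a numeric (double) vector
map_dbl(1:3, ~ .x^2)           # c(1, 4, 9)

# `map_lgl` expects each result to be a single Boolean value
map_lgl(1:3, ~ .x %% 2 == 0)   # c(FALSE, TRUE, FALSE)
```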
To apply a function to more than one input vector, element per element, we can use `pmap` and its derivatives, like `pmap_dbl` etc. `pmap` takes a list of vectors and a function. In short\-hand notation, we can define an anonymous function with `~` and integers like `..1`, `..2` etc, for the first, second … argument. For example: ``` x <- 1:3 y <- 4:6 z <- 7:9 pmap_dbl( list(x, y, z), ~ ..1 - ..2 + ..3 ) ``` ``` ## [1] 4 5 6 ``` **Exercise 2\.13** Use `map_dbl` and an anonymous function to take the following input vector and return a vector whose \\(i\\)th element is the cumulative product of `input` up to the \\(i\\)th position divided by the cumulative sum of `input` up to that position. (**Hint:** the cumulative product up to position \\(i\\) is produced by `prod(input[1:i])`; notice that you need to “loop over”, so to speak, the index \\(i\\), not the elements of the vector `input`.) ``` input <- c(12, 6, 18) ``` Solution ``` map_dbl( 1:length(input), function(i) { prod(input[1:i]) / sum(input[1:i]) } ) ``` ``` ## [1] 1 4 36 ``` 
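The text introduces `pmap` for arbitrarily many input vectors; for the common special case of exactly two inputs, `purrr` also provides `map2` and typed variants such as `map2_dbl`, where `.x` and `.y` refer to the first and second argument in the short-hand notation. A brief added sketch:

```
x <- 1:3
y <- 4:6
# element-wise product of the two vectors
map2_dbl(x, y, ~ .x * .y)   # returns c(4, 10, 18)
```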
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/Chap-01-01-piping.html
2\.5 Piping ----------- When we use a functional style of programming, piping is your best friend. Consider the standard example of applying functions in what linguists would call “center\-embedding”. We start with the input (written inside the inner\-most bracketing), then apply the first function `round`, then the second `mean`, writing each next function call “around” the previous. ``` # define input input_vector <- c(0.4, 0.5, 0.6) # first round, then take mean mean(round(input_vector)) ``` ``` ## [1] 0.3333333 ``` Things quickly get out of hand when more commands are nested. A common practice is to store intermediate results of computations in new variables which are only used to pass the result into the next step. ``` # define input input_vector <- c(0.4, 0.5, 0.6) # rounded input rounded_input <- round(input_vector) # mean of rounded input mean(rounded_input) ``` ``` ## [1] 0.3333333 ``` Piping lets you pass the result of a previous function call into the next. The `magrittr` package supplies a special infix operator `%>%` for piping.[12](#fn12) The pipe `%>%` essentially takes what results from evaluating the expression on its left\-hand side and inputs it as the first argument in the function on its right\-hand side. So `x %>% f` is equivalent to `f(x)`. Or, to continue the example from above, we can now write: ``` input_vector %>% round %>% mean ``` ``` ## [1] 0.3333333 ``` The functions defined as part of the tidyverse are all constructed in such a way that the first argument is the most likely input you would like to pipe into them. But if you want to pipe the left\-hand side into an argument slot other than the first, you can do that by using the `.` notation to mark the slot where the left\-hand side should be piped into: `y %>% f(x, .)` is equivalent to `f(x, y)`. **Exercise 2\.14** A friendly colleague has sent reaction time data in a weird format: ``` weird_RTs <- c("RT = 323", "RT = 345", "RT = 421", "RT = 50") ``` Starting with that vector, use a chain of pipes to: extract the numeric information from the string, cast the information into a vector of type `numeric`, take the log, take the mean, round to 2 significant digits. (**Hint:** to get the numeric information use `stringr::str_sub`, which works in this case because the numeric information starts after the exact same number of characters.) Solution ``` weird_RTs %>% stringr::str_sub(start = 6) %>% as.numeric() %>% log %>% mean %>% signif(digits = 2) ``` ``` ## [1] 5.4 ``` ### 2\.5\.1 Excursion: More on pipes in R When you load the `tidyverse` package the pipe operator `%>%` is automatically imported from the `magrittr` package, but not the whole `magrittr` package. But the `magrittr` package has three more useful pipe operators, which are only available if you also explicitly load the `magrittr` package. ``` library(magrittr) ``` The **tee pipe** `%T>%` from the `magrittr` package pipes the value on its LHS into the function on its RHS (just like `%>%`), but then passes that LHS value, rather than the result of the current command, on to the next step in the piping chain. This is particularly useful for printing or plotting intermediate results: ``` input_vector <- c(0.4, 0.5, 0.6) input_vector %>% # get the mean mean %T>% # output intermediate result print %>% # do more computations sum(3) ``` ``` ## [1] 0.5 ``` ``` ## [1] 3.5 ``` The **exposition pipe** `%$%` from the `magrittr` package is like the base pipe `%>%` but makes the names (e.g., columns in a data frame) in the LHS available on the RHS, even when the function on the RHS normally does not allow for this. 
So, this does not work with the normal pipe: ``` tibble( x = 1:3 ) %>% # normal pipe sum(x) # error: object `x` not found ``` But it works with the exposition pipe: ``` tibble( x = 1:3 ) %$% # exposition pipe makes 'x' available sum(x) # works! ``` ``` ## [1] 6 ``` Finally, the **assignment pipe** `%<>%` from the `magrittr` package pipes the LHS into a chain of computations, as usual, but then assigns the final value back to the LHS. ``` x <- c(0.4, 0.5, 0.6) # x is changed in place x %<>% sum(3) %>% mean print(x) ``` ``` ## [1] 4.5 ``` Base R has introduced a native pipe operator `|>` in version 4\.1\.0\. It differs slightly from the `magrittr` version, e.g., in that it requires function brackets: ``` 1:10 |> mean # error! 1:10 |> mean() # 5.5 ``` You can read more about the history of the pipe in R in this [blog post](http://adolfoalvarez.cl/blog/2021-09-16-plumbers-chains-and-famous-painters-the-history-of-the-pipe-operator-in-r/). ### 2\.5\.2 Excursion: Multiple assignments, or “unpacking” The `zeallot` package can be additionally loaded to obtain a “multiple assignment” operator `%<-%` which looks like a pipe, but isn’t. ``` library(zeallot) ``` It allows for several variables to be instantiated at the same time: ``` c(x, y) %<-% c(3, "huhu") print(x) ``` ``` ## [1] "3" ``` ``` print(y) ``` ``` ## [1] "huhu" ``` This is particularly helpful for functions that return several outputs in a list or vector: ``` input_vector <- c(0.4, 0.5, 0.6) some_function <- function(input) { return( list(sum = sum(input), mean = mean(input))) } c(x, y) %<-% some_function(input_vector) print(x) ``` ``` ## [1] 1.5 ``` ``` print(y) ``` ``` ## [1] 0.5 ``` 
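Without loading `zeallot`, the same effect can be achieved in base R by storing the returned list first and then extracting its elements by name; a minimal added sketch reusing `some_function` and `input_vector` from above:

```
results <- some_function(input_vector)  # a list with elements `sum` and `mean`
x <- results$sum     # 1.5
y <- results$mean    # 0.5
```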
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/Chap-02-01-data-what-is-data.html
3\.1 What is data? ------------------ Some say we live in the **data age**. But what is data actually? Purist pedants say: “The plural of datum” and add that a datum is just an observation. But when we say “data”, we usually mean a bit more than a bunch of observations. The observation that Jones had apple *and* banana for breakfast is maybe interesting but not what we usually call “data”. The Merriam\-Webster offers the following definition: > Factual information (such as measurements or statistics) used as a basis for reasoning, discussion, or calculation. This is a teleological definition in the sense that it refers to a purpose: data is something that is “used as basis for reasoning, discussion, or calculation”. So, what we mean by “data” is, in large part, defined by what we intend to do with it. Another important aspect of this definition is that we usually consider data to be systematically structured in some way or another. Even when we speak of “raw data”, we expect there to be some structure (maybe labels, categories etc.) that distinguishes data from uninterpretable noise (e.g., the notion of a “variable”, discussed in Section [3\.3](Chap-02-01-data-variables.html#Chap-02-01-data-variables)). In sum, we can say that **data is a representation of information stored in a systematic way for the purpose of inference, argument or decision making**. Let us consider an example of data from a behavioral experiment, namely the [King of France experiment](app-93-data-sets-king-of-france.html#app-93-data-sets-king-of-france). It is not important to know about this experiment for now. We just want to have a first glimpse at what data frequently looks like. Using R (in ways that we will discuss in the next chapter), we can show the content of part of the data as follows: ``` ## # A tibble: 6 × 4 ## submission_id trial_number trial_type response ## <dbl> <dbl> <chr> <lgl> ## 1 192 1 practice FALSE ## 2 192 2 practice TRUE ## 3 192 3 practice FALSE ## 4 192 4 practice TRUE ## 5 192 5 practice TRUE ## 6 192 1 filler TRUE ``` We see that the data is represented as a tibble and that there are different kinds of columns with different kinds of information. The `submission_id` is an anonymous identifier for the person whose data is shown here. The `trial_number` is a consecutive numbering of the different stages of the experiment (at each of which the participant gave one response, listed in the `response` column). The `trial_type` tells us which kind of trial each observation is from. There are more columns in this data set, but this is just for a first, rough impression of what “data” might look like. The most important thing to see here is that, following the definition above, data is “information stored in a systematic way”.
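For concreteness, here is a hedged sketch (not part of the original data set; the values are invented purely for illustration) of how a tibble with the same four columns and column types could be assembled by hand:

```
library(tidyverse)

# a toy tibble mirroring the structure shown above (invented values)
toy_data <- tribble(
  ~submission_id, ~trial_number, ~trial_type, ~response,
  999,            1,             "practice",  TRUE,
  999,            2,             "filler",    FALSE
)
toy_data
```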
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/Chap-02-01-data-variables.html
3\.3 On the notion of “variables” --------------------------------- Data used for data analysis, even if it is “raw data”, i.e., data before preprocessing and cleaning, is usually structured or labeled in some way or other. Even if the whole data we have is a vector of numbers, we would usually know what these numbers represent. For instance, we might just have a quintuple of numbers, but we would (usually/ideally) know that these represent the results of an IQ test. ``` # a simple data vector of IQ-scores IQ_scores <- c(102, 115, 97, 126, 87) ``` Or we might have a Boolean vector with the information of whether each of five students passed an exam. But even then we would (usually/ideally) know the association between names and test results, as in a table like this: ``` # who passed the exam exam_results <- tribble( ~student, ~pass, "Jax", TRUE, "Jason", FALSE, "Jamie", TRUE ) ``` Association of information, as between different columns in a table like the one above, is crucial. Most often, we have more than one kind of observation that we care about. Most often, we care about systematic relationships between different observables in the world. For instance, we might want to look at a relation between, on the one hand, the chance of passing an exam and, on the other hand, the proportion of attendance of the course’s tutorial sessions: ``` # proportion of tutorials attended and exam pass/fail exam_results <- tribble( ~student, ~tutorial_proportion, ~pass, "Jax", 0.0, TRUE, "Jason", 0.78, FALSE, "Jamie", 0.39, TRUE ) exam_results ``` ``` ## # A tibble: 3 × 3 ## student tutorial_proportion pass ## <chr> <dbl> <lgl> ## 1 Jax 0 TRUE ## 2 Jason 0.78 FALSE ## 3 Jamie 0.39 TRUE ``` Data of this kind is also called **rectangular data**, i.e., data that fits into a rectangular table (More on the structure of rectangular data in Section [4\.2](Chap-02-02-data-tidy-data.html#Chap-02-02-data-tidy-data).). In the example above, every column represents a **variable** of interest. A *(data) variable* stores the observations that are of the same kind.[15](#fn15) Different kinds of variables are distinguished based on the structural properties of the kinds of observations that they represent. Common types of variables are, for instance: * **nominal variable**: each observation is an instance of a (finite) set of clearly distinct categories, lacking a natural ordering; * **binary variable**: special case of a nominal variable where there are only two categories; * **Boolean variable**: special case of a binary variable where the two categories are Boolean values “true” and “false”; * **ordinal variable**: each observation is an instance of a (finite) set of clearly distinct and naturally ordered categories, but there is no natural meaning of distance between categories (i.e., it makes sense to say that A is “more” than B but not that A is three times “more” than B); * **metric variable**: each observation is isomorphic to a subset of the reals and interval\-scaled (i.e., it makes sense to say that A is three times “more” than B); Examples of some different kinds of variables are shown in Figure [3\.2](Chap-02-01-data-variables.html#fig:Ch-02-01-factor-levels), and Table [3\.2](Chap-02-01-data-variables.html#tab:Ch-02-01-variable-types-in-R) lists common and/or natural ways of representing different kinds of (data) variables in R. Figure 3\.2: Examples of different kinds of (data) variables. Artwork by allison\_horst. Table 3\.2: Common / natural formats for representing data of different kinds in R. 
| variable type | representation in R | | --- | --- | | nominal / binary | unordered factor | | Boolean | logical vector | | ordinal | ordered factor | | metric | numeric vector | In experimental data, we also distinguish the **dependent variable(s)** from the **independent variable(s)**. The dependent variables are the variables that we do not control or manipulate in the experiment, but the ones that we are curious to record (e.g., whether a patient recovered from an illness within a week). Dependent variables are also called **to\-be\-explained variables**. The independent variables are the variables in the experiment that we manipulate (e.g., which drug to administer), usually with the intention of seeing a particular effect on the dependent variables. Independent variables are also called **explanatory variables**. **Exercise 3\.1: Variables** You are given the following table of observational data: ``` ## # A tibble: 7 × 8 ## name age gender handedness height education has_pets mood ## <chr> <dbl> <chr> <chr> <dbl> <chr> <lgl> <chr> ## 1 A 24 female right 1.74 undergraduate FALSE neutral ## 2 B 32 non-binary right 1.68 graduate TRUE happy ## 3 C 23 male left 1.62 high school TRUE OK ## 4 D 27 male right 1.84 graduate FALSE very happy ## 5 E 26 non-binary left 1.59 undergraduate FALSE very happy ## 6 F 28 female right 1.66 graduate TRUE OK ## 7 G 35 male right 1.68 high school FALSE neutral ``` For each column, decide which type of variable (nominal, binary, etc.) is stored. Solution * `name`: nominal variable * `age`: metric variable * `gender`: nominal variable * `handedness`: binary variable * `height`: metric variable * `education`: ordinal variable * `has_pets`: Boolean variable * `mood`: ordinal variable
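To make Table 3.2 concrete, here is a minimal sketch (column names are borrowed from the exercise table above; the values are made up) showing one natural R representation per variable type:

```
# nominal / binary -> unordered factor
handedness <- factor(c("right", "left", "right"))

# Boolean -> logical vector
has_pets <- c(FALSE, TRUE, TRUE)

# ordinal -> ordered factor (with an explicit ordering of levels)
education <- factor(
  c("high school", "graduate", "undergraduate"),
  levels  = c("high school", "undergraduate", "graduate"),
  ordered = TRUE
)

# metric -> numeric vector
height <- c(1.74, 1.68, 1.62)
```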
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/Chap-02-01-data-exp-design.html
3\.4 Basics of experimental design ---------------------------------- The most basic template for an experiment is to just measure a quantity of interest (the dependent variable), without taking into account any kind of variation in any kind of independent variables. For instance, we measure the time it takes for an object with a specific shape and weight to hit the ground when dropped from a height of exactly 2 meters. To filter out **measurement noise**, we do not just record one observation, but, ideally, as much as we possibly and practically can. We use the measurements, in our concrete example: time measurements, to test a theory about acceleration and gravity. Data from such a simple measurement experiment would be just a single vector of numbers. A more elaborate kind of experiment would allow for at least one independent variable. Another archetypical example of an empirical experiment would be a medical study, e.g., one in which we are interested in the effect of a particular drug on the blood pressure of patients. We would then randomly allocate each participant to one of two groups. One group, the **treatment group**, receives the drug in question; the other group, **the control group**, receives a placebo (and nobody, not even the experimenter, knows who receives what). After a pre\-defined exposure to either drug or placebo, blood pressure (for simplicity, just systolic blood pressure) is measured. The interesting question is whether there is a difference between the measurements across groups. This is a simple example of a **one\-factor design**. The factor in question is which group any particular measurement belongs to. Data from such an experiment could look like this: ``` tribble( ~subj_id, ~group, ~systolic, 1, "treatment", 118, 2, "control", 132, 3, "control", 116, 4, "treatment", 127, 5, "treatment", 122 ) ``` ``` ## # A tibble: 5 × 3 ## subj_id group systolic ## <dbl> <chr> <dbl> ## 1 1 treatment 118 ## 2 2 control 132 ## 3 3 control 116 ## 4 4 treatment 127 ## 5 5 treatment 122 ``` For the purposes of this course, which is not a course on experimental design, just a few key concepts of experimental design are important to be aware of. We will go through some of these issues in the following. ### 3\.4\.1 What to analyze? – Dependent variables To begin with, it is important to realize that there is quite some variation in what counts as a dependent variable. Not only can there be more than one dependent variable, but each dependent variable can also be of quite a different type (nominal, ordinal, metric, …), as discussed in the previous section. Moreover, we need to carefully distinguish between the actual measurement/observation and the dependent variable itself. The dependent variable is (usually) what we plot, analyze and discuss, but very often, we measure much more or something else. The dependent variable (of analysis) could well just be one part of the measurement. For example, a standard measure of blood pressure has a number for systolic and another for diastolic pressure. Focussing on just one of these numbers is a (hopefully: theoretically motivated; possibly: arbitrary; in the worst case: result\-oriented) decision of the analyst. 
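As a small illustration of this point, here is a hedged sketch (the tibble and its values are invented) in which the raw measurement records both blood pressure readings, while only the systolic reading is carried forward as the dependent variable of the analysis:

```
library(tidyverse)

# raw measurements: both readings per participant (invented values)
raw_measurements <- tribble(
  ~subj_id, ~group,      ~systolic, ~diastolic,
  1,        "treatment", 118,       76,
  2,        "control",   132,       84,
  3,        "control",   116,       79
)

# the analyst's (ideally theoretically motivated) choice of dependent variable
analysis_data <- raw_measurements %>%
  select(subj_id, group, systolic)
analysis_data
```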
More interesting examples of such **data preprocessing** frequently arise in the cognitive sciences, for example: * **eye\-tracking**: the measured data are triples consisting of a time\-point and two spatial coordinates, but what might be analyzed is just the relative proportion of looks at a particular spatial region of interest (some object on the screen) in a particular temporal region of interest (up to 200 ms after the image appeared) * **EEG**: individual measurements obtained by EEG are very noisy, so that the dependent measure in many analyses is an aggregation over the mean voltage recorded by selected electrodes, where averages are taken for a particular subject over many trials of the same condition (repeated measures) that this subject has seen But we do not need to go fancy in our experimental methods to see how issues of data processing affect data analysis at its earliest stages, namely by selecting the dependent variable (that which is to be analyzed). Just take the distinction between **closed questions** and **open questions** in text\-based surveys. In closed questions, participants select an answer from a finite (usually) small number of choices. In open questions, however, they can write text freely, or they can draw, sing, pronounce, gesture, etc. Open response formats are great and naturalistic, but they, too, often require the analyst to carve out a particular aspect of the (rich, natural) observed reality to enter the analysis. ### 3\.4\.2 Conditions, trials, items A **factorial design** is an experiment with at least two independent variables, all of which are (ordered or unordered) factors.[16](#fn16) Many psychological studies are factorial designs. Whole batteries of analysis techniques have been developed specifically tuned to these kinds of experiments. Factorial designs are often described in terms of short abbreviations. For example, an experiment described as a “\\(n \\times m\\) factorial design” would have two factors of interest, the first of which has \\(n\\) levels, the second of which has \\(m\\) levels. For example, a \\(2 \\times 3\\) factorial design could have one independent variable recording a binary distinction between control and treatment group, and another independent variable representing an orthogonal distinction of gender in categories ‘male’, ‘female’ and ‘non\-binary’. For a \\(2 \\times 2 \\times 3\\) factorial design, there are `2 * 2 * 3 = 12` different **experimental conditions** (also sometimes called **design cells**). An important distinction in experimental design is whether all participants contribute data to all of the experimental conditions, or whether each only contributes to a part of it. If participants only contribute data to a part of all experimental conditions, this is called a **between\-subjects design**. If all participants contribute data to all experimental conditions, we speak of a **within\-subjects design**. Clearly, sometimes the nature of a design factor determines whether the study can be within\-subjects. For example, switching gender for the purpose of a medical study on blood pressure drugs is perhaps a tad much to ask of a participant (though possibly a very enlightening experience). If there is room for the experimenter’s choice of study type, it pays to be aware of some of the clear advantages and drawbacks of either method, as listed in Table [3\.3](Chap-02-01-data-exp-design.html#tab:Ch-02-01-comparison-designs). 
Table 3\.3: Comparison of the pros and cons of between\- and within\-subjects designs. | between\-subjects | within\-subjects | | --- | --- | | no confound between conditions | possible cross\-contamination between conditions | | more participants needed | fewer participants needed | | less associated information for analysis | more associated data for analysis | No matter whether we are dealing with a between\- or within\-subjects design, another important question is whether each participant gives us only one observation per design cell, or more than one. If participants contribute more than one observation to a design cell, we speak of a *repeated\-measures* design. Such designs are useful as they help separate the signal from the noise (recall the initial example of time measurement from physics). They are also economical because getting several observations worth of relevant data from a single participant for each design cell means that we have to get fewer people to do the experiment (normally). However, exposing a participant repeatedly to the same experimental condition can be detrimental to an experiment’s purpose. Participants might recognize the repetition and develop quick coping strategies to deal with the boredom, for example. For this reason, repeated\-measures designs usually include different kinds of trials: * **Critical trials** belong to, roughly put, the actual experiment, e.g., one of the experiment’s design cells. * **Filler trials** are packaged around the critical trials to prevent blatant repetition, predictability or recognition of the experiment’s purpose. * **Control trials** are trials whose data is not used for statistical inference but for checking the quality of the data (e.g., attention checks or tests of whether a participant understood the task correctly). When participants are exposed to several different kinds of trials and even several instances of the same experimental condition, it is also often important to introduce some variability between the instances of the same types of trials. Therefore, psychological experiments often use different **items**, i.e., different (theoretically exchangeable) instantiations of the same (theoretically important) pattern. For example, if a careful psycholinguist designs a study on the processing of garden\-path sentences, she will include not just one example (“The horse raced past the barn fell”) but several (e.g., “Since Jones frequently jogs a mile is a short distance to her”). Item\-variability is also important for statistical analyses, as we will see when we talk about hierarchical modeling. In longer experiments, especially within\-subjects repeated\-measures designs in which participants encounter a lot of different items for each experimental condition, clever regimes of **randomization** are important to minimize the possible effect of carry\-over artifacts, for example. A frequent method is **pseudo\-randomization**, where the trial sequence is not completely arbitrary but arbitrary within certain constraints, such as a particular **block design**, where each block presents an identical number of trials of each type, but each block shuffles the sequence of its types completely at random. The complete opposite of a within\-participants repeated measures design is a so\-called **single\-shot experiment** in which any participant gives exactly one data point for one experimental condition. 
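Returning to the block design just mentioned, here is a minimal sketch (the trial types and the number of blocks are made up; this is not code from the original text): each block contains exactly one trial of every type, and the order within each block is shuffled independently.

```
library(tidyverse)

trial_types <- c("critical", "filler", "control")
n_blocks    <- 4

# one shuffled block per iteration, stacked into a single pseudo-randomized sequence
trial_sequence <- map_dfr(1:n_blocks, function(b) {
  tibble(block = b, trial_type = sample(trial_types))
})
trial_sequence
```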
### 3\.4\.3 Sample size A very important question for experimental design is that of the **sample size**: how many data points do we need (per experimental condition)? We will come back to this issue only much later in this course when we talk about statistical inference. This is because the decision of how many, say, participants to invite for a study should ideally be influenced not only by the available time and money, but also by statistical considerations of the kind: how many data points do I need in order to obtain a reasonable level of confidence in the resulting statistical inferences I care about? **Exercise 3\.2: Experimental Design** Suppose that we want to investigate the effect of caffeine ingestion and time of day on reaction times in solving simple math tasks. The following table shows the measurements of two participants: ``` ## # A tibble: 12 × 4 ## subject_id `RT (ms)` caffeine `time of day` ## <dbl> <dbl> <chr> <chr> ## 1 1 43490 none morning ## 2 1 35200 medium morning ## 3 1 33186 high morning ## 4 1 26350 none afternoon ## 5 1 27004 medium afternoon ## 6 1 26492 high afternoon ## 7 2 42904 none morning ## 8 2 36129 medium morning ## 9 2 30340 high morning ## 10 2 28455 none afternoon ## 11 2 40593 medium afternoon ## 12 2 23992 high afternoon ``` 1. Is this experiment a one\-factor or a full factorial design? What is/are the factor(s)? How many levels does each factor have? Solution This experiment is a \\(3 \\times 2\\) full factorial design. It has two factors, `caffeine` (levels: none, medium, high) and `time of day` (levels: morning, afternoon). 2. How many experimental conditions are there? Solution There are `3 * 2 = 6` different experimental conditions. 3. Is it a between\- or within\-subjects design? Solution Within\-subjects design (each participant contributes data to *all* experimental conditions). 4. What is the dependent variable, what is/are the independent variable(s)? Solution Dependent variable: `RT` (the reaction time) Independent variable 1: `caffeine` (the caffeine dosage) Independent variable 2: `time of day` 5. Is this experiment a repeated measures design? Explain your answer. Solution No, each participant contributes exactly one data point per design cell.
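As a quick cross-check of the answer to question 2, the design cells can also be enumerated programmatically; here is a sketch using `tidyr::expand_grid` (not part of the original solution):

```
library(tidyverse)

expand_grid(
  caffeine      = c("none", "medium", "high"),
  `time of day` = c("morning", "afternoon")
)
# 6 rows, i.e., 3 * 2 = 6 experimental conditions
```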
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/Chap-02-02-data-IO.html
4\.1 Data in, data out ---------------------- The `readr` package handles the reading and writing of data stored in text files.[17](#fn17) Here is a cheat sheet on the topic: [data I/O cheat sheet](https://rawgit.com/rstudio/cheatsheets/master/data-import.pdf). The data sets we will mainly deal with in this course are included in the `aida` package for convenience. Occasionally, we will also read in data stored in CSV files. Reading a data set from a CSV file works with the `read_csv` function: ``` fresh_raw_data <- read_csv("PATH/FILENAME_RAW_DATA.csv") ``` Writing to a CSV file can be done with the `write_csv` function: ``` write_csv(processed_data, "PATH/FILENAME_PROCESSED_DATA.csv") ``` If you want to use a different delimiter (between cells) than a comma, you can use `read_delim` and `write_delim` for example, which take an additional argument `delim` to be set to the delimiter in question. ``` # reading data from a file where cells are (unconventionally) delimited by string "|" data_from_weird_file <- read_delim("WEIRD_DATA_FILE.TXT", delim = "|") ```
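For completeness, here is a sketch of the corresponding write operation with the same unconventional delimiter (the object and file names are placeholders):

```
# writing data to a file where cells are delimited by the string "|"
write_delim(processed_data, "WEIRD_OUTPUT_FILE.TXT", delim = "|")
```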
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/Chap-02-02-data-tidy-data.html
4\.2 Tidy data -------------- The same data can be represented in multiple ways. There is even room for variance in the class of rectangular representations of data. Some manners of representations are more useful for certain purposes than for others. For data analysis (plotting, statistical analyses) we prefer to represent our data as (rectangular) **tidy data**. A concise rationale for using tidy data is given in Figure [4\.1](Chap-02-02-data-tidy-data.html#fig:tidy-data-allison-horst). Figure 4\.1: Artwork by allison\_horst ### 4\.2\.1 Running example Consider the example of student grades for two exams in a course. A compact way of representing the data for visual digestion is the following representation: ``` exam_results_visual <- tribble( ~exam, ~"Rozz", ~"Andrew", ~"Siouxsie", "midterm", "1.3", "2.0", "1.7", "final" , "2.3", "1.7", "1.0" ) exam_results_visual ``` ``` ## # A tibble: 2 × 4 ## exam Rozz Andrew Siouxsie ## <chr> <chr> <chr> <chr> ## 1 midterm 1.3 2.0 1.7 ## 2 final 2.3 1.7 1.0 ``` This is how such data would frequently be represented, e.g., in tables in a journal. Indeed, Rmarkdown helps us present this data in an appetizing manner, e.g., in Table [4\.1](Chap-02-02-data-tidy-data.html#tab:Ch-02-01-exam-results-untidy), which is produced by the code below: ``` knitr::kable( exam_results_visual, caption = "Fictitious exam results of fictitious students.", booktabs = TRUE ) ``` Table 4\.1: Fictitious exam results of fictitious students. | exam | Rozz | Andrew | Siouxsie | | --- | --- | --- | --- | | midterm | 1\.3 | 2\.0 | 1\.7 | | final | 2\.3 | 1\.7 | 1\.0 | Though highly perspicuous, this representation of the data is not tidy, in the special technical sense we endorse here. A tidy representation of the course results could be this: ``` exam_results_tidy <- tribble( ~student, ~exam, ~grade, "Rozz", "midterm", 1.3, "Andrew", "midterm", 2.0, "Siouxsie", "midterm", 1.7, "Rozz", "final", 2.3, "Andrew", "final", 1.7, "Siouxsie", "final", 1.0 ) exam_results_tidy ``` ``` ## # A tibble: 6 × 3 ## student exam grade ## <chr> <chr> <dbl> ## 1 Rozz midterm 1.3 ## 2 Andrew midterm 2 ## 3 Siouxsie midterm 1.7 ## 4 Rozz final 2.3 ## 5 Andrew final 1.7 ## 6 Siouxsie final 1 ``` ### 4\.2\.2 Definition of *tidy data* Following Wickham ([2014](#ref-Wickham2014:Tidy-Data)), a tidy representation of (rectangular) data is defined as one where: 1. each variable forms a column, 2. each observation forms a row, and 3. each type of observational unit forms a table. Any data set that is not tidy is **messy data**. Messy data that satisfies the first two constraints, but not the third will be called **almost tidy data** in this course. We will work, wherever possible, with data that is at least almost tidy. Figure [4\.2](Chap-02-02-data-tidy-data.html#fig:02-02-tidy-data-picture) shows a graphical representation of the concept of tidy data. Figure 4\.2: Organization of tidy data (taken from Wickham and Grolemund ([2016](#ref-wickham2016))). 
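To see one payoff of the tidy format right away, here is a small sketch (not from the original text) that computes the mean grade per exam directly from `exam_results_tidy` with standard tidyverse verbs, something the wide representation above would not allow without prior reshaping:

```
exam_results_tidy %>%
  group_by(exam) %>%
  summarize(mean_grade = mean(grade))
```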
### 4\.2\.3 Excursion: non\-redundant data The final condition in the definition of tidy data is not particularly important for us here (since we will make do with ‘almost tidy data’), but to understand it nonetheless consider the following data set: ``` exam_results_overloaded <- tribble( ~student, ~stu_number, ~exam, ~grade, "Rozz", "666", "midterm", 1.3, "Andrew", "1969", "midterm", 2.0, "Siouxsie", "3.14", "midterm", 1.7, "Rozz", "666", "final", 2.3, "Andrew", "1969", "final", 1.7, "Siouxsie", "3.14", "final", 1.0 ) exam_results_overloaded ``` ``` ## # A tibble: 6 × 4 ## student stu_number exam grade ## <chr> <chr> <chr> <dbl> ## 1 Rozz 666 midterm 1.3 ## 2 Andrew 1969 midterm 2 ## 3 Siouxsie 3.14 midterm 1.7 ## 4 Rozz 666 final 2.3 ## 5 Andrew 1969 final 1.7 ## 6 Siouxsie 3.14 final 1 ``` This table is not tidy in an intuitive sense because it includes redundancy. Why list the student numbers twice, once with each observation of exam score? The table is not tidy in the technical sense that not every observational unit forms a table, i.e., the observation of student numbers and the observation of exam scores should be stored independently in different tables, like so: ``` # same as before exam_results_tidy <- tribble( ~student, ~exam, ~grade, "Rozz", "midterm", 1.3, "Andrew", "midterm", 2.0, "Siouxsie", "midterm", 1.7, "Rozz", "final", 2.3, "Andrew", "final", 1.7, "Siouxsie", "final", 1.0 ) # additional table with student numbers student_numbers <- tribble( ~student, ~student_number, "Rozz", "666", "Andrew", "1969", "Siouxsie", "3.14" ) ``` Notice that, although the information is distributed over two tibbles, it is linked by the common column `student`. If we really need to bring all of the information together, the tidyverse has a quick and elegant solution: ``` full_join(exam_results_tidy, student_numbers, by = "student") ``` ``` ## # A tibble: 6 × 4 ## student exam grade student_number ## <chr> <chr> <dbl> <chr> ## 1 Rozz midterm 1.3 666 ## 2 Andrew midterm 2 1969 ## 3 Siouxsie midterm 1.7 3.14 ## 4 Rozz final 2.3 666 ## 5 Andrew final 1.7 1969 ## 6 Siouxsie final 1 3.14 ``` **Exercise 4\.1: Tidy or Untidy?** Let’s take a look at this made\-up data set: ``` data <- tribble( ~subject_id, ~choices, ~reaction_times, 1, "A,B,B", "312 433 365", 2, "B,A,B", "393 491 327", 3, "B,A,A", "356 313 475", 4, "A,B,B", "292 352 378") ``` Is this data tidy or untidy? Explain your reasoning. Solution This data is *untidy* for the following reasons: 1. Each row contains more than one observation. 2. Most fields contain more than one value.
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/Chap-02-02-data-preprocessing-cleaning.html
4\.3 Data manipulation: the basics ---------------------------------- ### 4\.3\.1 Pivoting The tidyverse strongly encourages the use of tidy data, or at least almost tidy data. If your data is (almost) tidy, you can be reasonably sure that you can plot and analyze the data without additional wrangling. If your data is not (almost) tidy because it is too wide or too long (see below), what is required is a joyful round of pivoting. There are two directions of pivoting: making data longer, and making data wider. #### 4\.3\.1\.1 Making too wide data longer with `pivot_longer` Consider the previous example of messy data again: ``` exam_results_visual <- tribble( ~exam, ~"Rozz", ~"Andrew", ~"Siouxsie", "midterm", "1.3", "2.0", "1.7", "final" , "2.3", "1.7", "1.0" ) exam_results_visual ``` ``` ## # A tibble: 2 × 4 ## exam Rozz Andrew Siouxsie ## <chr> <chr> <chr> <chr> ## 1 midterm 1.3 2.0 1.7 ## 2 final 2.3 1.7 1.0 ``` This data is “too wide”. We can make it longer with the function `pivot_longer` from the `tidyr` package. Check out the example below before we plunge into a description of `pivot_longer`. ``` exam_results_visual %>% pivot_longer( # pivot every column except the first # (a negative number here means "exclude column with that index number") cols = - 1, # name of new column which contains the # names of the columns to be "gathered" names_to = "student", # name of new column which contains the values # of the cells which now form a new column values_to = "grade" ) %>% # optional reordering of columns (to make # the output exactly like `exam_results_tidy`) select(student, exam, grade) ``` ``` ## # A tibble: 6 × 3 ## student exam grade ## <chr> <chr> <chr> ## 1 Rozz midterm 1.3 ## 2 Andrew midterm 2.0 ## 3 Siouxsie midterm 1.7 ## 4 Rozz final 2.3 ## 5 Andrew final 1.7 ## 6 Siouxsie final 1.0 ``` What `pivot_longer` does, in general, is take a bunch of columns and gather the values of all cells in these columns into a single, new column, the so\-called *value column*, i.e., the column with the values of the cells to be gathered. If `pivot_longer` stopped here, we would lose information about which cell values belonged to which original column. Therefore, `pivot_longer` also creates a second new column, the so\-called *name column*, i.e., the column with the names of the original columns that we gathered together. Consequently, in order to do its job, `pivot_longer` minimally needs three pieces of information:[18](#fn18) 1. which columns to spin around (function argument `cols`) 2. the name of the to\-be\-created new value column (function argument `values_to`) 3. the name of the to\-be\-created new name column (function argument `names_to`) For different ways of selecting columns to pivot around, see Section [4\.3\.3](Chap-02-02-data-preprocessing-cleaning.html#Chap-02-02-tidy-selection) below. 
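One small follow-up is worth noting (a sketch, not from the original text): because the wide table stored the grades as strings, the gathered `grade` column above is still of type character; a `mutate` call converts it to numeric so that the result matches `exam_results_tidy` exactly.

```
exam_results_visual %>%
  pivot_longer(cols = -1, names_to = "student", values_to = "grade") %>%
  # convert the character grades to numbers
  mutate(grade = as.numeric(grade)) %>%
  select(student, exam, grade)
```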
#### 4\.3\.1\.2 Making too long data wider with `pivot_wider` Consider the following example of data which is untidy because it is too long: ``` mixed_results_too_long <- tibble(student = rep(c('Rozz', 'Andrew', 'Siouxsie'), times = 2), what = rep(c('grade', 'participation'), each = 3), howmuch = c(2.7, 2.0, 1.0, 75, 93, 33)) mixed_results_too_long ``` ``` ## # A tibble: 6 × 3 ## student what howmuch ## <chr> <chr> <dbl> ## 1 Rozz grade 2.7 ## 2 Andrew grade 2 ## 3 Siouxsie grade 1 ## 4 Rozz participation 75 ## 5 Andrew participation 93 ## 6 Siouxsie participation 33 ``` This data is untidy because it lumps two different types of measurements (a course grade, and the percentage of participation) in a single column. These are different variables, and so should be represented in different columns. To fix a data representation that is too long, we can make it wider with the help of the `pivot_wider` function from the `tidyr` package. We look at an example before looking at the general behavior of the `pivot_wider` function. ``` mixed_results_too_long %>% pivot_wider( # column containing the names of the new columns names_from = what, # column containing the values of the new columns values_from = howmuch ) ``` ``` ## # A tibble: 3 × 3 ## student grade participation ## <chr> <dbl> <dbl> ## 1 Rozz 2.7 75 ## 2 Andrew 2 93 ## 3 Siouxsie 1 33 ``` In general, `pivot_wider` picks out two columns, one column of values to distribute into new to\-be\-created columns, and one vector of names or groups which contains the information about the, well, names of the to\-be\-created new columns. There are more refined options for `pivot_wider`, some of which we will encounter in the context of concrete cases of application. ### 4\.3\.2 Subsetting rows \& columns If a data set contains too much information for your current purposes, you can discard irrelevant (or unhelpful) rows and columns. The function `filter` takes a Boolean expression and returns only those rows of which the Boolean expression is true: ``` exam_results_tidy %>% # keep only entries with grades better than # or equal to 1.7 filter(grade <= 1.7) ``` ``` ## # A tibble: 4 × 3 ## student exam grade ## <chr> <chr> <dbl> ## 1 Rozz midterm 1.3 ## 2 Siouxsie midterm 1.7 ## 3 Andrew final 1.7 ## 4 Siouxsie final 1 ``` To select rows by an index or a vector of indices, use the `slice` function: ``` exam_results_tidy %>% # keep only entries from rows with an even index slice(c(2, 4, 6)) ``` ``` ## # A tibble: 3 × 3 ## student exam grade ## <chr> <chr> <dbl> ## 1 Andrew midterm 2 ## 2 Rozz final 2.3 ## 3 Siouxsie final 1 ``` The function `select` allows you to pick out a subset of columns. Interestingly, it can also be used to reorder columns, because the order in which column names are specified matches the order in the returned tibble. ``` exam_results_tidy %>% # select columns `grade` and `exam` select(grade, exam) ``` ``` ## # A tibble: 6 × 2 ## grade exam ## <dbl> <chr> ## 1 1.3 midterm ## 2 2 midterm ## 3 1.7 midterm ## 4 2.3 final ## 5 1.7 final ## 6 1 final ``` ### 4\.3\.3 Tidy selection of column names To select the columns in several functions within the tidyverse, such as `pivot_longer` or `select`, there are useful helper functions from the `tidyselect` package. Here are some examples:[19](#fn19) ``` # bogus code for illustration of possibilities! SOME_DATA %>% select( ...
# could be one of the following # all columns indexed 2, 3, ..., 10 2:10 # all columns except the one called "COLNAME" - COLNAME # all columns with names starting with "STRING" starts_with("STRING") # all columns with names ending with "STRING" ends_with("STRING") # all columns with names containing "STRING" contains("STRING") # all columns with names of the form "Col_i" with i = 1, ..., 10 num_range("Col_", 1:10) ) ``` ### 4\.3\.4 Adding, changing and renaming columns To add a new column, or to change an existing one use the function `mutate`, like so: ``` exam_results_tidy %>% mutate( # add a new column called 'passed' depending on grade # [NB: severe passing conditions in this class!!] passed = grade <= 1.7, # change an existing column; here: change # character column 'exam' to ordered factor exam = factor(exam, ordered = T) ) ``` ``` ## # A tibble: 6 × 4 ## student exam grade passed ## <chr> <ord> <dbl> <lgl> ## 1 Rozz midterm 1.3 TRUE ## 2 Andrew midterm 2 FALSE ## 3 Siouxsie midterm 1.7 TRUE ## 4 Rozz final 2.3 FALSE ## 5 Andrew final 1.7 TRUE ## 6 Siouxsie final 1 TRUE ``` If you want to rename a column, function `rename` is what you want: ``` exam_results_tidy %>% # rename existing column "student" to new name "participant" # [NB: rename takes the new name first] rename(participant = student) ``` ``` ## # A tibble: 6 × 3 ## participant exam grade ## <chr> <chr> <dbl> ## 1 Rozz midterm 1.3 ## 2 Andrew midterm 2 ## 3 Siouxsie midterm 1.7 ## 4 Rozz final 2.3 ## 5 Andrew final 1.7 ## 6 Siouxsie final 1 ``` ### 4\.3\.5 Splitting and uniting columns Here is data from course homework: ``` homework_results_untidy <- tribble( ~student, ~results, "Rozz", "1.0,2.3,3.0", "Andrew", "2.3,2.7,1.3", "Siouxsie", "1.7,4.0,1.0" ) ``` This is not a useful representation format. Results of three homework sets are mushed together in a single column. Each value is separated by a comma, but it is all stored as a character vector. To disentangle information in a single column, use the `separate` function: ``` homework_results_untidy %>% separate( # which column to split up col = results, # names of the new column to store results into = str_c("HW_", 1:3), # separate by which character / reg-exp sep = ",", # automatically (smart-)convert the type of the new cols convert = T ) ``` ``` ## # A tibble: 3 × 4 ## student HW_1 HW_2 HW_3 ## <chr> <dbl> <dbl> <dbl> ## 1 Rozz 1 2.3 3 ## 2 Andrew 2.3 2.7 1.3 ## 3 Siouxsie 1.7 4 1 ``` If you have a reason to perform the reverse operation, i.e., join together several columns, use the `unite` function. ### 4\.3\.6 Sorting a data set If you want to indicate a fixed order of the reoccurring elements in a (character) vector, e.g., for plotting in a particular order, you should make this column an ordered factor. But if you want to order a data set along a column, e.g., for inspection or printing as a table, then you can do that by using the `arrange` function. You can specify several columns to sort alpha\-numerically in ascending order, and also indicate a descending order using the `desc` function: ``` exam_results_tidy %>% arrange(desc(student), grade) ``` ``` ## # A tibble: 6 × 3 ## student exam grade ## <chr> <chr> <dbl> ## 1 Siouxsie final 1 ## 2 Siouxsie midterm 1.7 ## 3 Rozz midterm 1.3 ## 4 Rozz final 2.3 ## 5 Andrew final 1.7 ## 6 Andrew midterm 2 ``` ### 4\.3\.7 Combining tibbles There are frequent occasions on which data from two separate variables need to be combined. 
The simplest case is where two entirely disjoint data sets merely need to be glued together, either horizontally (binding columns together with function `cbind`) or vertically (binding rows together with function `rbind`). ``` new_exam_results_tidy <- tribble( ~student, ~exam, ~grade, "Rozz", "bonus", 1.7, "Andrew", "bonus", 2.3, "Siouxsie", "bonus", 1.0 ) rbind( exam_results_tidy, new_exam_results_tidy ) ``` ``` ## # A tibble: 9 × 3 ## student exam grade ## <chr> <chr> <dbl> ## 1 Rozz midterm 1.3 ## 2 Andrew midterm 2 ## 3 Siouxsie midterm 1.7 ## 4 Rozz final 2.3 ## 5 Andrew final 1.7 ## 6 Siouxsie final 1 ## 7 Rozz bonus 1.7 ## 8 Andrew bonus 2.3 ## 9 Siouxsie bonus 1 ``` If two data sets have information in common, and the combination should respect that commonality, the `join` family of functions is of great help. Consider the case of distributed information again that we looked at to understand the third constraint of the concept of “tidy data”. There are two tibbles, both of which contain information about the same students. They share the column `student` (this does not necessarily have to be in the same order!) and we might want to join the information from both sources into a single (messy but almost tidy) representation, using `full_join`. We have seen an example already, which is repeated here: ``` # same as before exam_results_tidy <- tribble( ~student, ~exam, ~grade, "Rozz", "midterm", 1.3, "Andrew", "midterm", 2.0, "Siouxsie", "midterm", 1.7, "Rozz", "final", 2.3, "Andrew", "final", 1.7, "Siouxsie", "final", 1.0 ) # additional table with student numbers student_numbers <- tribble( ~student, ~student_number, "Rozz", "666", "Andrew", "1969", "Siouxsie", "3.14" ) full_join(exam_results_tidy, student_numbers, by = "student") ``` ``` ## # A tibble: 6 × 4 ## student exam grade student_number ## <chr> <chr> <dbl> <chr> ## 1 Rozz midterm 1.3 666 ## 2 Andrew midterm 2 1969 ## 3 Siouxsie midterm 1.7 3.14 ## 4 Rozz final 2.3 666 ## 5 Andrew final 1.7 1969 ## 6 Siouxsie final 1 3.14 ``` If two data sets are to be joined by a column that is not exactly shared by both sets (one contains entries in this column that the other doesn’t) then a `full_join` will retain all information from both. If that is not what you want, check out alternative functions like `right_join`, `semi_join` etc. using the [data wrangling cheat sheet](https://rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf). **Exercise 4\.2: Data Wrangling in R** We are working with the same example as in the earlier exercise: ``` data <- tribble( ~subject_id, ~choices, ~reaction_times, 1, "A,B,B", "312 433 365", 2, "B,A,B", "393 491 327", 3, "B,A,A", "356 313 475", 4, "A,B,B", "292 352 378" ) ``` Take a look at the following code snippet. Explain what the individual parts (indicated by the numbers) do. What will the result look like? ``` choice_data <- data %>% #1 select(subject_id, choices) %>% #2 separate( col = choices, into = str_c("C_", 1:3), sep = ",") %>% #3 pivot_longer( cols = -1, names_to = "condition", values_to = "response") ``` Solution 1. Selecting two columns (`subject_id` and `choices`) out of the data set. 2. In the data set, each cell in the `choices` column contains more than one value. To separate them, we take this column and divide the strings by the “,”. The names are then given for each line from one to three. 3. Now we are making the data set longer, so that each condition is its own row. We are pivoting each column apart from the first. 
The names of the columns are combined in a column called `condition` and the values are put into a column called `response`. The result: ``` choice_data ``` ``` ## # A tibble: 12 × 3 ## subject_id condition response ## <dbl> <chr> <chr> ## 1 1 C_1 A ## 2 1 C_2 B ## 3 1 C_3 B ## 4 2 C_1 B ## 5 2 C_2 A ## 6 2 C_3 B ## 7 3 C_1 B ## 8 3 C_2 A ## 9 3 C_3 A ## 10 4 C_1 A ## 11 4 C_2 B ## 12 4 C_3 B ```
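As an optional extension of this exercise (a sketch, not part of the original solution), the `reaction_times` column can be tidied in the same way and then joined back to `choice_data` by the shared columns:

```
rt_data <- data %>%
  select(subject_id, reaction_times) %>%
  separate(
    col     = reaction_times,
    into    = str_c("C_", 1:3),
    sep     = " ",
    convert = TRUE
  ) %>%
  pivot_longer(cols = -1, names_to = "condition", values_to = "RT")

full_join(choice_data, rt_data, by = c("subject_id", "condition"))
```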
#### 4\.3\.1\.2 Making too long data wider with `pivot_wider` Consider the following example of data which is untidy because it is too long: ``` mixed_results_too_long <- tibble(student = rep(c('Rozz', 'Andrew', 'Siouxsie'), times = 2), what = rep(c('grade', 'participation'), each = 3), howmuch = c(2.7, 2.0, 1.0, 75, 93, 33)) mixed_results_too_long ``` ``` ## # A tibble: 6 × 3 ## student what howmuch ## <chr> <chr> <dbl> ## 1 Rozz grade 2.7 ## 2 Andrew grade 2 ## 3 Siouxsie grade 1 ## 4 Rozz participation 75 ## 5 Andrew participation 93 ## 6 Siouxsie participation 33 ``` This data is untidy because it lumps two types of different measurements (a course grade, and the percentage of participation) in a single column. These are different variables, and so should be represented in different columns. To fix a data representation that is too long, we can make it wider with the help of the `pivot_wider` function from the `tidyr` package. We look at an example before looking at the general behavior of the `pivot_wider` function. ``` mixed_results_too_long %>% pivot_wider( # column containing the names of the new columns names_from = what, # column containing the values of the new columns values_from = howmuch ) ``` ``` ## # A tibble: 3 × 3 ## student grade participation ## <chr> <dbl> <dbl> ## 1 Rozz 2.7 75 ## 2 Andrew 2 93 ## 3 Siouxsie 1 33 ``` In general, `pivot_wider` picks out two columns, one column of values to distribute into new to\-be\-created columns, and one vector of names or groups which contains the information about the, well, names of the to\-be\-created new columns. There are more refined options for `pivot_wider`, some of which we will encounter in the context of concrete cases of application. 
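As a quick plausibility check (a sketch added here for illustration), pivoting wider and then longer again should recover a table equivalent to the one we started with, up to the order of rows:

```
mixed_results_too_long %>%
  pivot_wider(names_from = what, values_from = howmuch) %>%
  # reverse the operation: gather the two measurement columns again
  pivot_longer(cols = -student, names_to = "what", values_to = "howmuch")
```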
### 4\.3\.2 Subsetting rows \& columns If a data set contains too much information for your current purposes, you can discard irrelevant (or unhelpful) rows and columns. The function `filter` takes a Boolean expression and returns only those rows for which the Boolean expression is true: ``` exam_results_tidy %>% # keep only entries with grades better than # or equal to 1.7 filter(grade <= 1.7) ``` ``` ## # A tibble: 4 × 3 ## student exam grade ## <chr> <chr> <dbl> ## 1 Rozz midterm 1.3 ## 2 Siouxsie midterm 1.7 ## 3 Andrew final 1.7 ## 4 Siouxsie final 1 ``` To select rows by an index or a vector of indices, use the `slice` function: ``` exam_results_tidy %>% # keep only entries from rows with an even index slice(c(2, 4, 6)) ``` ``` ## # A tibble: 3 × 3 ## student exam grade ## <chr> <chr> <dbl> ## 1 Andrew midterm 2 ## 2 Rozz final 2.3 ## 3 Siouxsie final 1 ``` The function `select` allows you to pick out a subset of columns. Interestingly, it can also be used to reorder columns, because the order in which column names are specified matches the order in the returned tibble. 
``` exam_results_tidy %>% # select columns `grade` and `exam` select(grade, exam) ``` ``` ## # A tibble: 6 × 2 ## grade exam ## <dbl> <chr> ## 1 1.3 midterm ## 2 2 midterm ## 3 1.7 midterm ## 4 2.3 final ## 5 1.7 final ## 6 1 final ``` ### 4\.3\.3 Tidy selection of column names To select the columns in several functions within the tidyverse, such as `pivot_longer` or `select`, there are useful helper functions from the `tidyselect` package. Here are some examples:[19](#fn19) ``` # bogus code for illustration of possibilities! SOME_DATA %>% select( ... # could be one of the following # all columns indexed 2, 3, ..., 10 2:10 # all columns except the one called "COLNAME" - COLNAME # all columns with names starting with "STRING" starts_with("STRING") # all columns with names ending with "STRING" ends_with("STRING") # all columns with names containing "STRING" contains("STRING") # all columns with names of the form "Col_i" with i = 1, ..., 10 num_range("Col_", 1:10) ) ``` ### 4\.3\.4 Adding, changing and renaming columns To add a new column, or to change an existing one use the function `mutate`, like so: ``` exam_results_tidy %>% mutate( # add a new column called 'passed' depending on grade # [NB: severe passing conditions in this class!!] passed = grade <= 1.7, # change an existing column; here: change # character column 'exam' to ordered factor exam = factor(exam, ordered = T) ) ``` ``` ## # A tibble: 6 × 4 ## student exam grade passed ## <chr> <ord> <dbl> <lgl> ## 1 Rozz midterm 1.3 TRUE ## 2 Andrew midterm 2 FALSE ## 3 Siouxsie midterm 1.7 TRUE ## 4 Rozz final 2.3 FALSE ## 5 Andrew final 1.7 TRUE ## 6 Siouxsie final 1 TRUE ``` If you want to rename a column, function `rename` is what you want: ``` exam_results_tidy %>% # rename existing column "student" to new name "participant" # [NB: rename takes the new name first] rename(participant = student) ``` ``` ## # A tibble: 6 × 3 ## participant exam grade ## <chr> <chr> <dbl> ## 1 Rozz midterm 1.3 ## 2 Andrew midterm 2 ## 3 Siouxsie midterm 1.7 ## 4 Rozz final 2.3 ## 5 Andrew final 1.7 ## 6 Siouxsie final 1 ``` ### 4\.3\.5 Splitting and uniting columns Here is data from course homework: ``` homework_results_untidy <- tribble( ~student, ~results, "Rozz", "1.0,2.3,3.0", "Andrew", "2.3,2.7,1.3", "Siouxsie", "1.7,4.0,1.0" ) ``` This is not a useful representation format. Results of three homework sets are mushed together in a single column. Each value is separated by a comma, but it is all stored as a character vector. To disentangle information in a single column, use the `separate` function: ``` homework_results_untidy %>% separate( # which column to split up col = results, # names of the new column to store results into = str_c("HW_", 1:3), # separate by which character / reg-exp sep = ",", # automatically (smart-)convert the type of the new cols convert = T ) ``` ``` ## # A tibble: 3 × 4 ## student HW_1 HW_2 HW_3 ## <chr> <dbl> <dbl> <dbl> ## 1 Rozz 1 2.3 3 ## 2 Andrew 2.3 2.7 1.3 ## 3 Siouxsie 1.7 4 1 ``` If you have a reason to perform the reverse operation, i.e., join together several columns, use the `unite` function. ### 4\.3\.6 Sorting a data set If you want to indicate a fixed order of the reoccurring elements in a (character) vector, e.g., for plotting in a particular order, you should make this column an ordered factor. But if you want to order a data set along a column, e.g., for inspection or printing as a table, then you can do that by using the `arrange` function. 
You can specify several columns to sort alpha\-numerically in ascending order, and also indicate a descending order using the `desc` function: ``` exam_results_tidy %>% arrange(desc(student), grade) ``` ``` ## # A tibble: 6 × 3 ## student exam grade ## <chr> <chr> <dbl> ## 1 Siouxsie final 1 ## 2 Siouxsie midterm 1.7 ## 3 Rozz midterm 1.3 ## 4 Rozz final 2.3 ## 5 Andrew final 1.7 ## 6 Andrew midterm 2 ``` 
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/Chap-02-02-data-grouping-nesting.html
4\.4 Grouped operations ----------------------- A frequently occurring problem in data analysis is to obtain a summary statistic (see Chapter [5](Chap-02-03-summary-statistics.html#Chap-02-03-summary-statistics)) for different subsets of data. For example, we might want to calculate the average score for each student in our class. We could do that by filtering like so (notice that `pull` gives you the column vector specified): ``` # extracting mean grade for Rozz mean_grade_Rozz <- exam_results_tidy %>% filter(student == "Rozz") %>% pull(grade) %>% mean mean_grade_Rozz ``` ``` ## [1] 1.8 ``` But then we need to do that two more times. So, as we shouldn’t copy\-paste code, we write a function and use `map_dbl` to compute the mean for each student: ``` get_mean_for_student <- function(student_name) { exam_results_tidy %>% filter(student == student_name) %>% pull(grade) %>% mean } map_dbl( exam_results_tidy %>% pull(student) %>% unique, get_mean_for_student ) ``` ``` ## [1] 1.80 1.85 1.35 ``` This, too, is not quite satisfactory: it is clumsy and error\-prone. Enter grouping in the tidyverse. If we want to apply a particular operation to all combinations of levels of different variables (no matter whether they are encoded as factors or not when we group), we can do this with the function `group_by`, followed by either a call to `mutate` or `summarise`. Check this example: ``` exam_results_tidy %>% group_by(student) %>% summarise( student_mean = mean(grade) ) ``` ``` ## # A tibble: 3 × 2 ## student student_mean ## <chr> <dbl> ## 1 Andrew 1.85 ## 2 Rozz 1.8 ## 3 Siouxsie 1.35 ``` The function `summarise` returns a single row for each combination of levels of grouping variables. If we use the function `mutate` instead, the summary statistic is added (repeatedly) to each of the original rows: ``` exam_results_tidy %>% group_by(student) %>% mutate( student_mean = mean(grade) ) ``` ``` ## # A tibble: 6 × 4 ## # Groups: student [3] ## student exam grade student_mean ## <chr> <chr> <dbl> <dbl> ## 1 Rozz midterm 1.3 1.8 ## 2 Andrew midterm 2 1.85 ## 3 Siouxsie midterm 1.7 1.35 ## 4 Rozz final 2.3 1.8 ## 5 Andrew final 1.7 1.85 ## 6 Siouxsie final 1 1.35 ``` The latter can sometimes be handy, for example when overlaying a plot of the data with grouped means. It is important to remember that after a call to `group_by`, the resulting tibble retains the grouping information for *all* subsequent operations. To remove grouping information, use the function `ungroup`.
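To make the effect of persistent grouping concrete, here is a small sketch (added for illustration, not from the original text): the first `mutate` computes a per-student mean because the tibble is grouped, and after `ungroup` the second `mutate` operates on the whole tibble again:

```
exam_results_tidy %>%
  group_by(student) %>%
  mutate(student_mean = mean(grade)) %>%  # per-student mean (grouped)
  ungroup() %>%
  mutate(overall_mean = mean(grade))      # mean over all six rows (ungrouped)
```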
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/Chap-02-02-data-case-study-KoF.html
4\.5 Case study: the King of France ----------------------------------- Let’s go through one case study of data preprocessing and cleaning. We look at the example introduced and fully worked out in Appendix [D.3](app-93-data-sets-king-of-france.html#app-93-data-sets-king-of-france). (Please read Section [D.3\.1](app-93-data-sets-king-of-france.html#app-93-data-sets-king-of-france-background) to find out more about where this data set is coming from.) The raw data set is part of the `aida` package and can be loaded using: ``` data_KoF_raw <- aida::data_KoF_raw ``` We then take a glimpse at the data: ``` glimpse(data_KoF_raw ) ``` ``` ## Rows: 2,813 ## Columns: 16 ## $ submission_id <dbl> 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, … ## $ RT <dbl> 8110, 35557, 3647, 16037, 11816, 6024, 4986, 13019, 538… ## $ age <dbl> 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57,… ## $ comments <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,… ## $ item_version <chr> "none", "none", "none", "none", "none", "none", "none",… ## $ correct_answer <lgl> FALSE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE, FALSE, FAL… ## $ education <chr> "Graduated College", "Graduated College", "Graduated Co… ## $ gender <chr> "female", "female", "female", "female", "female", "fema… ## $ languages <chr> "English", "English", "English", "English", "English", … ## $ question <chr> "World War II was a global war that lasted from 1914 to… ## $ response <lgl> FALSE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE, FALSE, FAL… ## $ timeSpent <dbl> 39.48995, 39.48995, 39.48995, 39.48995, 39.48995, 39.48… ## $ trial_name <chr> "practice_trials", "practice_trials", "practice_trials"… ## $ trial_number <dbl> 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 1… ## $ trial_type <chr> "practice", "practice", "practice", "practice", "practi… ## $ vignette <chr> "undefined", "undefined", "undefined", "undefined", "un… ``` The variables in this data set are: * `submission_id`: unique identifier for each participant * `RT`: the reaction time for each decision * `age`: the (self\-reported) age of the participant * `comments`: the (optional) comments each participant may have given * `item_version`: the condition which the test sentence belongs to (only given for trials of type `main` and `special`) * `correct_answer`: for trials of type `filler` and `special` what the true answer should have been * `education`: the (self\-reported) education level with options `Graduated College`, `Graduated High School`, `Higher Degree` * `gender`: (self\-reported) gender * `languages`: (self\-reported) native languages * `question`: the sentence to be judged true or false * `response`: the answer (“TRUE” or “FALSE”) on each trial * `trial_name`: whether the trial is a main or practice trials (levels `main_trials` and `practice_trials`) * `trial_number`: consecutive numbering of each participant’s trial * `trial_type`: whether the trial was of the category `filler`, `main`, `practice` or `special`, where the latter encodes the “background checks” * `vignette`: the current item’s vignette number (applies only to trials of type `main` and `special`) Let’s have a brief look at the comments (sometimes helpful, usually entertaining) and the self\-reported native languages: ``` data_KoF_raw %>% pull(comments) %>% unique ``` ``` ## [1] NA ## [2] "I hope I was right most of the time!" ## [3] "My level of education is Some Highschool, not finished. So I couldn't input what was correct, so I'm leaving a comment here." 
## [4] "It was interesting, and made re-read questions to make sure they weren't tricks. I hope I got them all correct." ## [5] "Worked well" ## [6] "A surprisingly tricky study! Thoroughly enjoyed completing it, despite several red herrings!!" ## [7] "n/a" ## [8] "Thank you for the opportunity." ## [9] "this was challenging" ## [10] "I'm not good at learning history so i might of made couple of mistakes. I hope I did well. :)" ## [11] "Interesting survey - thanks!" ## [12] "no" ## [13] "Regarding the practice question - I'm aware that Alexander Bell invented the telephone, but in reality, it was a collaborative effort by a team of people" ## [14] "Fun study!" ## [15] "Fun stuff" ``` ``` data_KoF_raw %>% pull(languages) %>% unique ``` ``` ## [1] "English" "english" "English, Italian" ## [4] "English/ ASL" "English and Polish" "Chinese" ## [7] "English, Mandarin" "Polish" "Turkish" ## [10] NA "English, Sarcasm" "English, Portuguese" ``` We might wish to exclude people who do not include “English” as one of their native languages in some studies. Here, we do not since we also have strong, more specific filters on comprehension (see below). Since we are not going to use this information later on, we might as well discard it now: ``` data_KoF_raw <- data_KoF_raw %>% select(-languages, -comments, -age, -RT, -education, -gender) ``` But even after pruning irrelevant columns, this data set is still not ideal. We need to preprocess it more thoroughly to make it more intuitively manageable. For example, the information in column `trial_name` does not give the trial’s name in an intuitive sense, but its type: whether it is a practice or a main trial. But this information, and more, is also represented in the column `trial_type`. The column `item_version` contains information about the experimental condition. To see this (mess), the code below prints the selected information from the main trials of only one participant in an order that makes it easier to see what is what. ``` data_KoF_raw %>% # ignore practice trials for the moment # focus on one participant only filter(trial_type != "practice", submission_id == 192) %>% select(trial_type, item_version, question) %>% arrange(desc(trial_type), item_version) %>% print(n = Inf) ``` ``` ## # A tibble: 24 × 3 ## trial_type item_version question ## <chr> <chr> <chr> ## 1 special none The Pope is currently not married. ## 2 special none Germany has volcanoes. ## 3 special none France has a king. ## 4 special none Canada is a democracy. ## 5 special none Belgium has rainforests. ## 6 main 0 The volcanoes of Germany dominate the landscape. ## 7 main 1 Canada has an emperor, and he is fond of sushi. ## 8 main 10 Donald Trump, his favorite nature spot is not the Be… ## 9 main 6 The King of France isn’t bald. ## 10 main 9 The Pope’s wife, she did not invite Angela Merkel fo… ## 11 filler none The Solar System includes the planet Earth. ## 12 filler none Vatican City is the world's largest country by land … ## 13 filler none Big Ben is a very large building in the middle of Pa… ## 14 filler none Harry Potter is a series of fantasy novels written b… ## 15 filler none Taj Mahal is a mausoleum on the bank of the river in… ## 16 filler none James Bond is a spanish dancer from Madrid. ## 17 filler none The Pacific Ocean is a large ocean between Japan and… ## 18 filler none Australia has a very large border with Brazil. ## 19 filler none Steve Jobs was an American inventor and co-founder o… ## 20 filler none Planet Earth is part of the galaxy ‘Milky Way’. 
## 21 filler none Germany shares borders with France, Belgium and Denm… ## 22 filler none Antarctica is a continent covered almost completely … ## 23 filler none The Statue of Liberty is a colossal sculpture on Lib… ## 24 filler none English is the main language in Australia, Britain a… ``` We see that the information in `item_version` specifies the critical condition. To make this more intuitively manageable, we would like to have a column called `condition` and it should, ideally, also contain useful information for the cases where `trial_type` is not `main` or `special`. We will therefore remove the column `trial_name` completely, and create an informative column `condition` which tells us for every row whether it belongs to one of the five experimental conditions, and if not, whether it is a filler or a “background check” (\= special) trial. ``` data_KoF_processed <- data_KoF_raw %>% # drop redundant information in column `trial_name` select(-trial_name) %>% # discard practice trials filter(trial_type != "practice") %>% mutate( # add a 'condition' variable condition = case_when( trial_type == "special" ~ "background check", trial_type == "main" ~ str_c("Condition ", item_version), TRUE ~ "filler" ) %>% # make the new 'condition' variable a factor factor( ordered = T, levels = c( str_c("Condition ", c(0, 1, 6, 9, 10)), "background check", "filler" ) ) ) ``` ### 4\.5\.1 Cleaning the data We clean the data in two consecutive steps: 1. Remove all data from any participant who got more than 50% of the answers to the filler material wrong. 2. Remove individual main trials if the corresponding “background check” question was answered wrongly. #### 4\.5\.1\.1 Cleaning by\-participant ``` # look at error rates for filler sentences by subject # mark every subject as an outlier when they # have a proportion of correct responses of less than 0.5 subject_error_rate <- data_KoF_processed %>% filter(trial_type == "filler") %>% group_by(submission_id) %>% summarise( proportion_correct = mean(correct_answer == response), outlier_subject = proportion_correct < 0.5 ) %>% arrange(proportion_correct) ``` Apply the cleaning step: ``` # add info about error rates and exclude outlier subject(s) d_cleaned <- full_join(data_KoF_processed, subject_error_rate, by = "submission_id") %>% filter(outlier_subject == FALSE) ``` #### 4\.5\.1\.2 Cleaning by\-trial ``` # exclude every critical trial whose 'background' test question was answered wrongly d_cleaned <- d_cleaned %>% # select only the 'background question' trials filter(trial_type == "special") %>% # is the background question answered correctly? mutate( background_correct = correct_answer == response ) %>% # select only the relevant columns select(submission_id, vignette, background_correct) %>% # right join lines to original data set right_join(d_cleaned, by = c("submission_id", "vignette")) %>% # remove all special trials, as well as main trials with incorrect background check filter(trial_type == "main" & background_correct == TRUE) ``` For later reuse, both the preprocessed and the cleaned data set are included in the `aida` package as well. They are loaded by calling `aida::data_KoF_preprocessed` and `aida::data_KoF_cleaned`, respectively. 
Data Science
michael-franke.github.io
https://michael-franke.github.io/intro-data-analysis/Chap-02-03-summary-statistics-counts.html
5\.1 Counts and proportions --------------------------- Very familiar instances of summary statistics are counts and frequencies. While there is no conceptual difficulty in understanding these numerical measures, we have yet to see how to obtain counts for categorical data in R. The [Bio\-Logic Jazz\-Metal data set](app-93-data-sets-BLJM.html#app-93-data-sets-BLJM) provides nice material for doing so. If you are unfamiliar with the data and the experiment that generated it, please have a look at Appendix Chapter [D.4](app-93-data-sets-BLJM.html#app-93-data-sets-BLJM). ### 5\.1\.1 Loading and inspecting the data We load the preprocessed data immediately (see Appendix [D.4](app-93-data-sets-BLJM.html#app-93-data-sets-BLJM) for details on how this preprocessing was performed). ``` data_BLJM_processed <- aida::data_BLJM ``` The preprocessed data lists, for each participant (in column `submission_id`) the binary choice (in column `response`) in a particular condition (in column `condition`). ``` head(data_BLJM_processed) ``` ``` ## # A tibble: 6 × 3 ## submission_id condition response ## <dbl> <chr> <chr> ## 1 379 BM Beach ## 2 379 LB Logic ## 3 379 JM Metal ## 4 378 JM Metal ## 5 378 LB Logic ## 6 378 BM Beach ``` ### 5\.1\.2 Obtaining counts with `n`, `count` and `tally` To obtain counts, the `dplyr` package offers the functions `n`, `count` and `tally`, among others.[20](#fn20) The function `n` does not take arguments and is useful for counting rows. It works inside of `summarise` and `mutate` and is usually applied to grouped data sets. For example, we can get a count of how many observations the data in `data_BLJM_processed` contains for each condition by first grouping by variable `condition` and then calling `n` (without arguments) inside of `summarise`: ``` data_BLJM_processed %>% group_by(condition) %>% summarise(nr_observation_per_condition = n()) %>% ungroup() ``` ``` ## # A tibble: 3 × 2 ## condition nr_observation_per_condition ## <chr> <int> ## 1 BM 102 ## 2 JM 102 ## 3 LB 102 ``` Notice that calling `n` without grouping just gives you the number of rows in the data set: ``` data_BLJM_processed %>% summarize(n_rows = n()) ``` ``` ## # A tibble: 1 × 1 ## n_rows ## <int> ## 1 306 ``` This can also be obtained simply by (although in a different output format!): ``` nrow(data_BLJM_processed) ``` ``` ## [1] 306 ``` Counting can be helpful also when getting acquainted with a data set, or when checking whether the data is complete. For example, we can verify that every participant in the experiment contributed three data points like so: ``` data_BLJM_processed %>% group_by(submission_id) %>% summarise(nr_data_points = n()) ``` ``` ## # A tibble: 102 × 2 ## submission_id nr_data_points ## <dbl> <int> ## 1 278 3 ## 2 279 3 ## 3 280 3 ## 4 281 3 ## 5 282 3 ## 6 283 3 ## 7 284 3 ## 8 285 3 ## 9 286 3 ## 10 287 3 ## # … with 92 more rows ``` The functions `tally` and `count` are essentially just convenience wrappers around `n`. While `tally` expects that the data is already grouped in the relevant way, `count` takes a column specification as an argument and does the grouping (and final ungrouping) implicitly. 
For instance, the following code blocks produce the same output, one using `n`, the other using `count`, namely the total number of times a particular response has been given in a particular condition: ``` data_BLJM_processed %>% group_by(condition, response) %>% summarise(n = n()) ``` ``` ## # A tibble: 6 × 3 ## # Groups: condition [3] ## condition response n ## <chr> <chr> <int> ## 1 BM Beach 44 ## 2 BM Mountains 58 ## 3 JM Jazz 64 ## 4 JM Metal 38 ## 5 LB Biology 58 ## 6 LB Logic 44 ``` ``` data_BLJM_processed %>% # use function `count` from `dplyr` package dplyr::count(condition, response) ``` ``` ## # A tibble: 6 × 3 ## condition response n ## <chr> <chr> <int> ## 1 BM Beach 44 ## 2 BM Mountains 58 ## 3 JM Jazz 64 ## 4 JM Metal 38 ## 5 LB Biology 58 ## 6 LB Logic 44 ``` So, these counts suggest that there is an overall preference for mountains over beaches, Jazz over Metal and Biology over Logic. Who would have known!? These counts are overall numbers. They do not tell us anything about any potentially interesting relationship between preferences. So, let’s have a closer look at the number of people who selected which music\-subject pair. We collect these counts in variable `BLJM_associated_counts`. We first need to pivot the data, using `pivot_wider`, to make sure each participant’s choices are associated with each other, and then take the counts of interest: ``` BLJM_associated_counts <- data_BLJM_processed %>% pivot_wider(names_from = condition, values_from = response) %>% # drop the Beach vs. Mountain condition select(-BM) %>% dplyr::count(JM, LB) BLJM_associated_counts ``` ``` ## # A tibble: 4 × 3 ## JM LB n ## <chr> <chr> <int> ## 1 Jazz Biology 38 ## 2 Jazz Logic 26 ## 3 Metal Biology 20 ## 4 Metal Logic 18 ``` We can also produce a table of proportions from this, simply by dividing the column called `n` by the total number of observations using `sum(n)`. We can also flip the table around into a more convenient (though messy) representation: ``` BLJM_associated_counts %>% # look at relative frequency, not total counts mutate(n = n / sum(n)) %>% pivot_wider(names_from = LB, values_from = n) ``` ``` ## # A tibble: 2 × 3 ## JM Biology Logic ## <chr> <dbl> <dbl> ## 1 Jazz 0.373 0.255 ## 2 Metal 0.196 0.176 ``` Eye\-balling this table of relative frequencies, we might indeed hypothesize that preference for musical style is not independent of preference for an academic subject. The impression is corroborated by looking at the plot in Figure [5\.1](Chap-02-03-summary-statistics-counts.html#fig:chap-02-03-BLJM-proportions). More on this later! Figure 5\.1: Proportions of jointly choosing a musical style and an academic subfield in the Bio\-Logic Jazz\-Metal data set.
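As an aside, `tally` was mentioned above but not demonstrated; a minimal sketch (added here for illustration) shows that it simply counts the rows of an already-grouped tibble, yielding the same table as `summarise(n = n())`:

```
data_BLJM_processed %>%
  group_by(condition, response) %>%
  # `tally` counts rows per group, just like summarise(n = n())
  tally()
```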
Data Science
bradleyboehmke.github.io
https://bradleyboehmke.github.io/HOML/kmeans.html
Chapter 20 *K*\-means Clustering ================================ In PART III of this book we focused on methods for reducing the dimension of our feature space (\\(p\\)). The remaining chapters concern methods for reducing the dimension of our observation space (\\(n\\)); these methods are commonly referred to as *clustering*. *K*\-means clustering is one of the most commonly used clustering algorithms for partitioning observations into a set of \\(k\\) groups (i.e., \\(k\\) clusters), where \\(k\\) is pre\-specified by the analyst. *k*\-means, like other clustering algorithms, tries to classify observations into mutually exclusive groups (or clusters), such that observations within the same cluster are as similar as possible (i.e., high intra\-class similarity), whereas observations from different clusters are as dissimilar as possible (i.e., low inter\-class similarity). In *k*\-means clustering, each cluster is represented by its center (i.e., centroid) which corresponds to the mean of the observation values assigned to the cluster. The procedure used to find these clusters is similar to the *k*\-nearest neighbor (KNN) algorithm discussed in Chapter [8](knn.html#knn); albeit without the need to predict an average response value. 20\.1 Prerequisites ------------------- For this chapter we’ll use the following packages (note that the primary function to perform *k*\-means, `kmeans()`, is provided in the **stats** package that comes with your basic R installation): ``` # Helper packages library(dplyr) # for data manipulation library(ggplot2) # for data visualization library(stringr) # for string functionality # Modeling packages library(cluster) # for general clustering algorithms library(factoextra) # for visualizing cluster results ``` To illustrate *k*\-means concepts we’ll use the `mnist` and `my_basket` data sets from previous chapters. We’ll also discuss clustering with mixed data (e.g., data with both numeric and categorical data types) using the Ames housing data later in the chapter. ``` mnist <- dslabs::read_mnist() url <- "https://koalaverse.github.io/homlr/data/my_basket.csv" my_basket <- readr::read_csv(url) ``` 20\.2 Distance measures ----------------------- The classification of observations into groups requires some method for computing the distance or the (dis)similarity between each pair of observations, which together form a distance or dissimilarity matrix. There are many approaches to calculating these distances; the choice of distance measure is a critical step in clustering (as it was with KNN). It defines how the similarity of two observations (\\(x\_a\\) and \\(x\_b\\) for all \\(j\\) features) is calculated and it will influence the shape and size of the clusters. Recall from Section [8\.2\.1](knn.html#knn-distance) that the classical methods for distance measures are the Euclidean and Manhattan distances; however, alternative distance measures exist such as correlation\-based distances, which are widely used for gene expression data; the *Gower distance* measure (discussed later in Section [20\.7](kmeans.html#cluster-mixed)), which is commonly used for data sets containing categorical and ordinal features; and *cosine distance*, which is commonly used in the field of *text mining*. So how do you decide on a particular distance measure? Unfortunately, there is no straightforward answer and several considerations come into play. 
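For concreteness, the sketch below (added here; it is not part of the original text) shows how some of these distance matrices can be computed in R with functions that reappear later in this chapter; how to choose among them is discussed next:

```
# Pairwise distances between the rows of the numeric my_basket data
d_euclidean <- dist(my_basket, method = "euclidean")
d_manhattan <- dist(my_basket, method = "manhattan")

# Correlation-based distance (used again with the elbow method below)
d_spearman <- factoextra::get_dist(my_basket, method = "spearman")

# Gower distance for mixed data (used later for the Ames housing data)
d_gower <- cluster::daisy(AmesHousing::make_ames(), metric = "gower")
```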
Euclidean distance (i.e., straight line distance, or *as the crow flies*) is very sensitive to outliers; when outliers exist, they can skew the cluster results, which gives false confidence in the compactness of the cluster. If your features follow an approximate Gaussian distribution then Euclidean distance is a reasonable measure to use. However, if your features deviate significantly from normality or if you just want to be more robust to existing outliers, then Manhattan, Minkowski, or Gower distances are often better choices. If you are analyzing unscaled data where observations may have large differences in magnitude but similar behavior then a correlation\-based distance is preferred. For example, say you want to cluster customers based on common purchasing characteristics. It is possible for large volume and low volume customers to exhibit similar behaviors; however, due to their purchasing magnitude the scale of the data may skew the clusters if not using a correlation\-based distance measure. Figure [20\.1](kmeans.html#fig:correlation-distance-example) illustrates this phenomenon where observations one and two purchase similar quantities of items; however, observations two and three have nearly perfect correlation in their purchasing behavior. A non\-correlation distance measure would group observations one and two together whereas a correlation\-based distance measure would group observations two and three together. Figure 20\.1: Correlation\-based distance measures will capture the correlation between two observations better than a non\-correlation\-based distance measure; regardless of magnitude differences. 20\.3 Defining clusters ----------------------- The basic idea behind *k*\-means clustering is constructing clusters so that the total *within\-cluster variation* is minimized. There are several *k*\-means algorithms available for doing this. The standard algorithm is the *Hartigan\-Wong algorithm* (Hartigan and Wong [1979](#ref-hartigan1979algorithm)), which defines the total within\-cluster variation as the sum of the squared Euclidean distances between observation \\(i\\)’s feature values and the corresponding centroid: \\\[\\begin{equation} \\tag{20\.1} W\\left(C\_k\\right) \= \\sum\_{x\_i \\in C\_k}\\left(x\_{i} \- \\mu\_k\\right) ^ 2, \\end{equation}\\] where: * \\(x\_i\\) is an observation belonging to the cluster \\(C\_k\\); * \\(\\mu\_k\\) is the mean value of the points assigned to the cluster \\(C\_k\\). Each observation (\\(x\_i\\)) is assigned to a given cluster such that the sum of squared (SS) distances of each observation to their assigned cluster centers (\\(\\mu\_k\\)) is minimized. We define the total within\-cluster variation as follows: \\\[\\begin{equation} \\tag{20\.2} SS\_{within} \= \\sum^k\_{k\=1}W\\left(C\_k\\right) \= \\sum^k\_{k\=1}\\sum\_{x\_i \\in C\_k}\\left(x\_i \- \\mu\_k\\right)^2 \\end{equation}\\] The \\(SS\_{within}\\) measures the compactness (i.e., goodness) of the resulting clusters and we want it to be as small as possible as illustrated in Figure [20\.2](kmeans.html#fig:kmeans-clusters-good-better-best). Figure 20\.2: Total within\-cluster variation captures the total distances between a cluster’s centroid and the individual observations assigned to that cluster. The more compact these distances, the more defined and isolated the clusters are. The underlying assumptions of *k*\-means require points to be closer to their own cluster center than to others. 
This assumption breaks down when the clusters have complicated geometries, as *k*\-means requires convex boundaries. For example, consider the data in Figure [20\.3](kmeans.html#fig:non-linear-boundaries) (A). These data are clearly grouped; however, their groupings do not have nice convex boundaries (like the convex boundaries used to illustrate the hard margin classifier in Chapter [14](svm.html#svm)). Consequently, *k*\-means clustering does not capture the appropriate groups as Figure [20\.3](kmeans.html#fig:non-linear-boundaries) (B) illustrates. However, *spectral clustering methods* apply the same kernel trick discussed in Chapter [14](svm.html#svm) to allow *k*\-means to discover non\-convex boundaries (Figure [20\.3](kmeans.html#fig:non-linear-boundaries) (C)). See J. Friedman, Hastie, and Tibshirani ([2001](#ref-esl)) for a thorough discussion of spectral clustering and the **kernlab** package for an R implementation. We’ll also discuss model\-based clustering methods in Chapter [22](model-clustering.html#model-clustering) which provide an alternative approach to capture non\-convex cluster shapes. Figure 20\.3: The assumptions of k\-means render it ineffective at capturing complex geometric groupings; however, spectral clustering allows you to cluster data that is connected but not necessarily clustered within convex boundaries. 20\.4 *k*\-means algorithm -------------------------- The first step when using *k*\-means clustering is to indicate the number of clusters (\\(k\\)) that will be generated in the final solution. Unfortunately, unless our data set is very small, we cannot evaluate every possible cluster combination because there are almost \\(k^n\\) ways to partition \\(n\\) observations into \\(k\\) clusters. Consequently, we need to estimate a *greedy local optimum* solution for our specified \\(k\\) (Hartigan and Wong [1979](#ref-hartigan1979algorithm)). To do so, the algorithm starts by randomly selecting \\(k\\) observations from the data set to serve as the initial centers for the clusters (i.e., centroids). Next, each of the remaining observations is assigned to its closest centroid, where closest is defined using the distance between the object and the cluster mean (based on the selected distance measure). This is called the *cluster assignment step*. Next, the algorithm computes the new center (i.e., mean value) of each cluster. The term *centroid update* is used to define this step. Now that the centers have been recalculated, every observation is checked again to see if it might be closer to a different cluster. All the objects are reassigned again using the updated cluster means. The cluster assignment and centroid update steps are iteratively repeated until the cluster assignments stop changing (i.e., when convergence is achieved). That is, the clusters formed in the current iteration are the same as those obtained in the previous iteration. Due to randomization of the initial \\(k\\) observations used as the starting centroids, we can get slightly different results each time we apply the procedure. Consequently, most algorithms use several *random starts* and choose the iteration with the lowest \\(W\\left(C\_k\\right)\\) (Equation [(20\.1\)](kmeans.html#eq:within-cluster-variation)). Figure [20\.4](kmeans.html#fig:random-starts) illustrates the variation in \\(W\\left(C\_k\\right)\\) for different random starts. A good rule for the number of random starts to apply is 10–20\. 
Figure 20\.4: Each application of the k\-means algorithm can achieve slight differences in the final results based on the random start. The *k*\-means algorithm can be summarized as follows: 1. Specify the number of clusters (\\(k\\)) to be created (this is done by the analyst). 2. Select \\(k\\) observations at random from the data set to use as the initial cluster centroids. 3. Assign each observation to their closest centroid based on the distance measure selected. 4. For each of the \\(k\\) clusters update the cluster centroid by calculating the new mean values of all the data points in the cluster. The centroid for the \\(i\\)\-th cluster is a vector of length \\(p\\) containing the means of all \\(p\\) features for the observations in cluster \\(i\\). 5. Iteratively minimize \\(SS\_{within}\\). That is, iterate steps 3–4 until the cluster assignments stop changing (beyond some threshold) or the maximum number of iterations is reached. A good rule of thumb is to perform 10–20 iterations. 20\.5 Clustering digits ----------------------- Let’s illustrate an example by performing *k*\-means clustering on the MNIST pixel features and see if we can identify unique clusters of digits without using the response variable. Here, we declare \\(k \= 10\\) only because we already know there are 10 unique digits represented in the data. We also use 10 random starts (`nstart = 10`). The output of our model contains many of the metrics we’ve already discussed such as total within\-cluster variation (`withinss`), total within\-cluster sum of squares (`tot.withinss`), the size of each cluster (`size`), and the iteration out of our 10 random starts used (`iter`). It also includes the `cluster` each observation is assigned to and the `centers` of each cluster. Training *k*\-means on the MNIST data with 10 random starts took about 4\.5 minutes for us using the code below. ``` features <- mnist$train$images # Use k-means model with 10 centers and 10 random starts mnist_clustering <- kmeans(features, centers = 10, nstart = 10) # Print contents of the model output str(mnist_clustering) ## List of 9 ## $ cluster : int [1:60000] 5 9 3 8 10 7 4 5 4 6 ... ## $ centers : num [1:10, 1:784] 0 0 0 0 0 0 0 0 0 0 ... ## ..- attr(*, "dimnames")=List of 2 ## .. ..$ : chr [1:10] "1" "2" "3" "4" ... ## .. ..$ : NULL ## $ totss : num 205706725984 ## $ withinss : num [1:10] 23123576673 14119007546 16438261395 7950166288 ... ## $ tot.withinss: num 153017742761 ## $ betweenss : num 52688983223 ## $ size : int [1:10] 7786 5384 5380 5515 7051 6706 4634 5311 4971 7262 ## $ iter : int 8 ## $ ifault : int 0 ## - attr(*, "class")= chr "kmeans" ``` The `centers` output is a 10x784 matrix. This matrix contains the average value of each of the 784 features for the 10 clusters. We can plot this as in Figure [20\.5](kmeans.html#fig:plot-kmeans-mnist-centers) which shows us what the typical digit is in each cluster. We clearly see recognizable digits even though *k*\-means had no insight into the response variable. ``` # Extract cluster centers mnist_centers <- mnist_clustering$centers # Plot typical cluster digits par(mfrow = c(2, 5), mar=c(0.5, 0.5, 0.5, 0.5)) layout(matrix(seq_len(nrow(mnist_centers)), 2, 5, byrow = FALSE)) for(i in seq_len(nrow(mnist_centers))) { image(matrix(mnist_centers[i, ], 28, 28)[, 28:1], col = gray.colors(12, rev = TRUE), xaxt="n", yaxt="n") } ``` Figure 20\.5: Cluster centers for the 10 clusters identified in the MNIST training data. 
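A quick additional way to gauge the fit (a sketch added here, not part of the original text) is to check how much of the total sum of squares the clustering accounts for, using the components shown in the `str()` output above; with the values printed there, this ratio comes out at roughly 0.26:

```
# Proportion of the total sum of squares captured between clusters
mnist_clustering$betweenss / mnist_clustering$totss
```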
We can compare the cluster digits with the actual digit labels to see how well our clustering is performing. To do so, we compare the most common digit in each cluster (i.e., with the mode) to the actual training labels. Figure [20\.6](kmeans.html#fig:mnist-clustering-confusion-matrix) illustrates the results. We see that *k*\-means does a decent job of clustering some of the digits. In fact, most of the digits are clustered more often with like digits than with different digits. However, we also see some digits are grouped often with different digits (e.g., 6s are often grouped with 0s and 9s are often grouped with 7s). We also see that 0s and 5s are never the dominant digit in a cluster. Consequently, our clustering is grouping many digits that have some resemblance (3s, 5s, and 8s are often grouped together) and since this is an unsupervised task, there is no mechanism to supervise the algorithm otherwise. ``` # Create mode function mode_fun <- function(x){ which.max(tabulate(x)) } mnist_comparison <- data.frame( cluster = mnist_clustering$cluster, actual = mnist$train$labels ) %>% group_by(cluster) %>% mutate(mode = mode_fun(actual)) %>% ungroup() %>% mutate_all(factor, levels = 0:9) # Create confusion matrix and plot results yardstick::conf_mat( mnist_comparison, truth = actual, estimate = mode ) %>% autoplot(type = 'heatmap') ``` Figure 20\.6: Confusion matrix illustrating how the k\-means algorithm clustered the digits (x\-axis) and the actual labels (y\-axis). 20\.6 How many clusters? ------------------------ When clustering the MNIST data, the number of clusters we specified was based on prior knowledge of the data. However, often we do not have this kind of *a priori* information and the reason we are performing cluster analysis is to identify what clusters may exist. So how do we go about determining the right number of *k*? Choosing the number of clusters requires a delicate balance. Larger values of \\(k\\) can improve homogeneity of the clusters; however it risks overfitting. Best case (or maybe we should say easiest case) scenario, \\(k\\) is predetermined. This often occurs when we have deterministic resources to allocate. For example, a company may employ \\(k\\) sales people and they would like to partition their customers into one of \\(k\\) segments so that they can be assigned to one of the sales folks. In this case \\(k\\) is predetermined by external resources or knowledge. A more common case is that \\(k\\) is unknown; however, we can often still apply *a priori* knowledge for potential groupings. For example, maybe you need to cluster customer experience survey responses for an automobile sales company. You may start by setting \\(k\\) to the number of car brands the company carries. If you lack any *a priori* knowledge for setting \\(k\\), then a commonly used rule of thumb is \\(k \= \\sqrt{n/2}\\), where \\(n\\) is the number of observations to cluster. However, this rule can result in very large values of \\(k\\) for larger data sets (e.g., this would have us use \\(k \= 173\\) for the MNIST data set). When the goal of the clustering procedure is to ascertain what natural distinct groups exist in the data, without any *a priori* knowledge, there are multiple statistical methods we can apply. However, many of these measures suffer from the *curse of dimensionality* as they require multiple iterations and clustering large data sets is not efficient, especially when clustering repeatedly. See Charrad et al. 
([2015](#ref-R-NbClust)) for a thorough review of the vast assortment of measures of cluster performance. The **NbClust** package implements many of these methods, providing you with over 30 indices to determine the optimal \\(k\\).

One of the more popular methods is the *elbow method*. Recall that the basic idea behind cluster partitioning methods, such as *k*\-means clustering, is to define clusters such that the total within\-cluster variation is minimized (Equation [(20\.2\)](kmeans.html#eq:tot-within-ss)). The total within\-cluster sum of squares measures the compactness of the clustering and we want it to be as small as possible. Thus, we can use the following approach to define the optimal clusters:

1. Compute *k*\-means clustering for different values of \\(k\\). For instance, by varying \\(k\\) from 1–20 clusters.
2. For each \\(k\\), calculate the total within\-cluster sum of squares (WSS).
3. Plot the curve of WSS according to the number of clusters \\(k\\).
4. The location of a bend (i.e., elbow) in the plot is generally considered as an indicator of the appropriate number of clusters.

When using small to moderate sized data sets this process can be performed conveniently with `factoextra::fviz_nbclust()`. However, this function requires you to specify a single max \\(k\\) value and it will train *k*\-means models for \\(1\-k\\) clusters. When dealing with large data sets, such as MNIST, this is unreasonable, so you will want to manually implement the procedure (e.g., with a `for` loop that iterates over the values of \\(k\\) you want to assess).

The following assesses clustering the `my_basket` data into 1–25 clusters. The `method = 'wss'` argument specifies that our search criterion is the elbow method discussed above, and since we are assessing quantities across different baskets of goods we use the non\-parametric Spearman correlation\-based distance measure. The results show the “elbow” appears to happen when \\(k \= 5\\).

```
fviz_nbclust(
  my_basket, 
  kmeans, 
  k.max = 25, 
  method = "wss", 
  diss = get_dist(my_basket, method = "spearman")
)
```

Figure 20\.7: Using the elbow method to identify the preferred number of clusters in the my basket data set.

`fviz_nbclust()` also implements other popular methods such as the Silhouette method (Rousseeuw [1987](#ref-rousseeuw1987silhouettes)) and Gap statistic (Tibshirani, Walther, and Hastie [2001](#ref-tibshirani2001estimating)). Luckily, applications requiring the exact optimal set of clusters are fairly rare. In most applications, it suffices to choose a \\(k\\) based on convenience rather than strict performance requirements. But if necessary, the elbow method and other performance metrics can point you in the right direction.

20\.7 Clustering with mixed data
--------------------------------

Often textbook examples of clustering include only numeric data. However, most real\-life data sets contain a mixture of numeric, categorical, and ordinal variables, and whether an observation is similar to another observation should depend on these data type attributes. There are a few options for performing clustering with mixed data and we’ll demonstrate on the full Ames housing data set (minus the response variable `Sale_Price`). To perform *k*\-means clustering on mixed data we can convert any ordinal categorical variables to numeric and one\-hot encode the remaining nominal categorical variables.
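To make the encoding step concrete, here is a minimal illustration of what one\-hot encoding does to a single nominal variable (the tiny `toy` data frame below is made up purely for illustration); the next code chunk then applies the same idea to the full Ames data with `caret::dummyVars()`.

```
# A made-up nominal feature with three levels
toy <- data.frame(roof = factor(c("Gable", "Hip", "Flat", "Gable")))

# model.matrix() creates one indicator column per non-reference level,
# mirroring the full-rank encoding (fullRank = TRUE) used below
model.matrix(~ roof, data = toy)[, -1]
```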
```
# Full ames data set --> recode ordinal variables to numeric
ames_full <- AmesHousing::make_ames() %>%
  mutate_if(str_detect(names(.), 'Qual|Cond|QC|Qu'), as.numeric)

# One-hot encode --> retain only the features and not sale price
full_rank <- caret::dummyVars(Sale_Price ~ ., data = ames_full, fullRank = TRUE)
ames_1hot <- predict(full_rank, ames_full)

# Scale data
ames_1hot_scaled <- scale(ames_1hot)

# New dimensions
dim(ames_1hot_scaled)
## [1] 2930 240
```

Now that all our variables are represented numerically, we can perform *k*\-means clustering as we did in the previous sections. Using the elbow method, there does not appear to be a definitive number of clusters to use.

```
set.seed(123)

fviz_nbclust(
  ames_1hot_scaled, 
  kmeans, 
  method = "wss", 
  k.max = 25, 
  verbose = FALSE
)
```

Figure 20\.8: Suggested number of clusters for one\-hot encoded Ames data using k\-means clustering and the elbow criterion.

Unfortunately, this is a common issue. As the number of features expands, the performance of *k*\-means tends to break down and both *k*\-means and hierarchical clustering (Chapter [21](hierarchical.html#hierarchical)) approaches become slow and ineffective. This happens, typically, as your data becomes more sparse. An additional option for heavily mixed data is to use the Gower distance (Gower [1971](#ref-gower1971general)) measure, which applies a particular distance calculation that works well for each data type. The metrics used for each data type include:

* **quantitative (interval)**: range\-normalized Manhattan distance;
* **ordinal**: variable is first ranked, then Manhattan distance is used with a special adjustment for ties;
* **nominal**: variables with \\(k\\) categories are first converted into \\(k\\) binary columns (i.e., one\-hot encoded) and then the *Dice coefficient* is used. To compute the Dice metric for two observations \\(\\left(X, Y\\right)\\), the algorithm looks across all one\-hot encoded categorical variables and scores them as:
    + **a** — number of dummies 1 for both observations
    + **b** — number of dummies 1 for \\(X\\) and 0 for \\(Y\\)
    + **c** — number of dummies 0 for \\(X\\) and 1 for \\(Y\\)
    + **d** — number of dummies 0 for both

and then uses the following formula:

\\\[\\begin{equation}
\\tag{20\.3}
D \= \\frac{2a}{2a \+ b \+ c}
\\end{equation}\\]

We can use the `cluster::daisy()` function to create a Gower distance matrix from our data; this function performs the categorical data transformations so you can supply the data in the original format.

```
# Original data minus Sale_Price
ames_full <- AmesHousing::make_ames() %>% select(-Sale_Price)

# Compute Gower distance for original data
gower_dst <- daisy(ames_full, metric = "gower")
```

We can now feed the results into any clustering algorithm that accepts a distance matrix. This primarily includes `cluster::pam()`, `cluster::diana()`, and `cluster::agnes()` (`stats::kmeans()` and `cluster::clara()` do not accept distance matrices as inputs). `cluster::diana()` and `cluster::agnes()` are hierarchical clustering algorithms that you will learn about in Chapter 21\. `cluster::pam()` and `cluster::clara()` are discussed in the next section.
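As a quick worked illustration of the Dice scoring in Equation (20\.3), consider two hypothetical one\-hot encoded observations (the vectors below are made up for illustration and are not taken from the Ames data; `daisy()` performs this bookkeeping internally for every pair of observations):

```
# Two hypothetical one-hot encoded observations
x <- c(1, 1, 0, 0, 1, 0)
y <- c(1, 0, 0, 1, 1, 0)

a  <- sum(x == 1 & y == 1)  # a: dummies equal to 1 for both observations
b  <- sum(x == 1 & y == 0)  # b: 1 for x, 0 for y
cc <- sum(x == 0 & y == 1)  # c: 0 for x, 1 for y (named cc to avoid masking c())

# Dice similarity from Equation (20.3): 2a / (2a + b + c)
2 * a / (2 * a + b + cc)
## [1] 0.6666667
```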
```
# You can supply the Gower distance matrix to several clustering algos
pam_gower <- pam(x = gower_dst, k = 8, diss = TRUE)
diana_gower <- diana(x = gower_dst, diss = TRUE)
agnes_gower <- agnes(x = gower_dst, diss = TRUE)
```

20\.8 Alternative partitioning methods
--------------------------------------

As your data grow in dimensions you are likely to introduce more outliers; since *k*\-means uses the mean, it is not robust to outliers. An alternative to this is to use *partitioning around medoids* (PAM), which has the same algorithmic steps as *k*\-means but uses medoids (representative observations) rather than means as the cluster centers, making it more robust to outliers. Unfortunately, this robustness comes with an added computational expense (J. Friedman, Hastie, and Tibshirani [2001](#ref-esl)). To perform PAM clustering use `cluster::pam()` instead of `kmeans()`.

If you compare *k*\-means and PAM clustering results for a given criterion and obtain similar results, then that is a good indication that outliers are not affecting your results. Figure [20\.9](kmeans.html#fig:pam) illustrates the total within sum of squares for 1–25 clusters using PAM clustering on the one\-hot encoded Ames data. We see very similar results to those obtained with \\(k\\)\-means (Figure [20\.8](kmeans.html#fig:kmeans-silhouette-mixed)), which tells us that outliers are not negatively influencing the \\(k\\)\-means results.

```
fviz_nbclust(
  ames_1hot_scaled, 
  pam, 
  method = "wss", 
  k.max = 25, 
  verbose = FALSE
)
```

Figure 20\.9: Total within sum of squares for 1\-25 clusters using PAM clustering.

As your data set becomes larger, hierarchical clustering, *k*\-means, and PAM all become slower. An alternative is *clustering large applications* (CLARA), which performs the same algorithmic process as PAM; however, instead of finding the *medoids* for the entire data set, it draws small samples and applies PAM to each of them. Medoids are similar in spirit to the cluster centers or means, but medoids are always restricted to be members of the data set (similar to the difference between the sample mean and median when you have an odd number of observations and no ties). CLARA performs the following algorithmic steps:

1. Randomly split the data set into multiple subsets with fixed size.
2. Compute the PAM algorithm on each subset and choose the corresponding \\(k\\) medoids. Assign each observation of the entire data set to the closest medoid.
3. Calculate the mean (or sum) of the dissimilarities of the observations to their closest medoid. This is used as a measure of the goodness of fit of the clustering.
4. Retain the sub\-data set for which the mean (or sum) is minimal.

To perform CLARA clustering use `cluster::clara()` instead of `cluster::pam()` and `kmeans()`. If you compute CLARA on the Ames mixed data or on the MNIST data you will find very similar results to both *k*\-means and PAM; however, as the code below illustrates, it takes less than \\(\\frac{1}{5}\\)\-th of the time!

```
# k-means computation time on MNIST data
system.time(kmeans(features, centers = 10))
##    user  system elapsed 
## 230.875   4.659 237.404

# CLARA computation time on MNIST data
system.time(clara(features, k = 10))
##   user  system elapsed 
## 37.975   0.286  38.966
```
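If you want to check the claim about similar results (and not just the speed\-up), a quick way is to cross\-tabulate the CLARA labels against the *k*\-means labels. Cluster numbers are arbitrary, so agreement shows up as each CLARA cluster being dominated by a single *k*\-means cluster. This is an optional sketch that reuses the `features` and `mnist_clustering` objects created earlier in this chapter:

```
# Fit CLARA with the same number of clusters used for k-means
mnist_clara <- clara(features, k = 10)

# Cross-tabulate the two labelings; look for one dominant count per row/column
table(kmeans = mnist_clustering$cluster, clara = mnist_clara$clustering)
```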
20\.9 Final thoughts
--------------------

*K*\-means clustering is probably the most popular clustering algorithm and usually the first applied when solving clustering tasks. Although there have been methods to help analysts identify the optimal number of clusters, *k*, this task is still largely based on subjective inputs and decisions by the analyst, given the unsupervised nature of the algorithm. In the next two chapters we’ll explore alternative approaches that help reduce the burden of the analyst needing to define *k*. These methods also address other limitations of *k*\-means, such as the fact that the algorithm performs well primarily when the clusters have convex, non\-overlapping boundaries, attributes that rarely exist in real data sets.
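To see the convexity limitation concretely, here is a small simulated illustration (synthetic data, not part of the original example): two nested rings form two obvious groups, yet *k*\-means mixes them because it can only carve the feature space into convex regions.

```
set.seed(123)

# Simulate two nested rings: an obvious two-group structure whose
# boundaries are not convex
n      <- 300
radius <- rep(c(1, 4), each = n / 2)
theta  <- runif(n, 0, 2 * pi)
rings  <- data.frame(
  x     = radius * cos(theta) + rnorm(n, sd = 0.1),
  y     = radius * sin(theta) + rnorm(n, sd = 0.1),
  truth = rep(c("inner", "outer"), each = n / 2)
)

# k-means with k = 2 splits the plane with a roughly linear boundary,
# so each resulting cluster contains points from both rings
rings$cluster <- kmeans(rings[, c("x", "y")], centers = 2, nstart = 10)$cluster
table(rings$truth, rings$cluster)
```

As discussed in Section 20\.3, spectral clustering methods address this by applying the kernel trick so that *k*\-means can discover non\-convex boundaries.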
Machine Learning
bradleyboehmke.github.io
https://bradleyboehmke.github.io/HOML/hierarchical.html
Chapter 21 Hierarchical Clustering
==================================

Hierarchical clustering is an alternative approach to *k*\-means clustering for identifying groups in a data set. In contrast to *k*\-means, hierarchical clustering will create a hierarchy of clusters and therefore does not require us to pre\-specify the number of clusters. Furthermore, hierarchical clustering has an added advantage over *k*\-means clustering in that its results can be easily visualized using an attractive tree\-based representation called a *dendrogram*.

Figure 21\.1: Illustrative dendrogram.

21\.1 Prerequisites
-------------------

For this chapter we’ll use the following packages:

```
# Helper packages
library(dplyr)       # for data manipulation
library(ggplot2)     # for data visualization

# Modeling packages
library(cluster)     # for general clustering algorithms
library(factoextra)  # for visualizing cluster results
```

The major concepts of hierarchical clustering will be illustrated using the Ames housing data. For simplicity we’ll just use the 34 numeric features but refer to our discussion in Section [20\.7](kmeans.html#cluster-mixed) if you’d like to replicate this analysis with the full set of features. Since these features are measured on significantly different magnitudes we standardize the data first:

```
ames_scale <- AmesHousing::make_ames() %>%
  select_if(is.numeric) %>%  # select numeric columns
  select(-Sale_Price) %>%    # remove target column
  mutate_all(as.double) %>%  # coerce to double type
  scale()                    # center & scale the resulting columns
```

21\.2 Hierarchical clustering algorithms
----------------------------------------

Hierarchical clustering can be divided into two main types:

1. **Agglomerative clustering:** Commonly referred to as AGNES (AGglomerative NESting), this approach works in a bottom\-up manner. That is, each observation is initially considered as a single\-element cluster (leaf). At each step of the algorithm, the two clusters that are the most similar are combined into a new bigger cluster (nodes). This procedure is iterated until all points are a member of just one single big cluster (root) (see Figure [21\.2](hierarchical.html#fig:dendrogram2)). The result is a tree which can be displayed using a dendrogram.
2. **Divisive hierarchical clustering:** Commonly referred to as DIANA (DIvisive ANAlysis), this approach works in a top\-down manner. DIANA is like the reverse of AGNES. It begins with the root, in which all observations are included in a single cluster. At each step of the algorithm, the current cluster is split into two clusters that are considered most heterogeneous. The process is iterated until all observations are in their own cluster.

Note that agglomerative clustering is good at identifying small clusters. Divisive hierarchical clustering, on the other hand, is better at identifying large clusters.

Figure 21\.2: AGNES (bottom\-up) versus DIANA (top\-down) clustering.

Similar to *k*\-means (Chapter [20](kmeans.html#kmeans)), we measure the (dis)similarity of observations using distance measures (e.g., Euclidean distance, Manhattan distance, etc.); the Euclidean distance is most commonly the default. However, a fundamental question in hierarchical clustering is: *How do we measure the dissimilarity between two clusters of observations?* A number of different cluster agglomeration methods (i.e., linkage methods) have been developed to answer this question.
The most common methods are:

* **Maximum or complete linkage clustering:** Computes all pairwise dissimilarities between the elements in cluster 1 and the elements in cluster 2, and considers the largest value of these dissimilarities as the distance between the two clusters. It tends to produce more compact clusters.
* **Minimum or single linkage clustering:** Computes all pairwise dissimilarities between the elements in cluster 1 and the elements in cluster 2, and considers the smallest of these dissimilarities as a linkage criterion. It tends to produce long, “loose” clusters.
* **Mean or average linkage clustering:** Computes all pairwise dissimilarities between the elements in cluster 1 and the elements in cluster 2, and considers the average of these dissimilarities as the distance between the two clusters. Can vary in the compactness of the clusters it creates.
* **Centroid linkage clustering:** Computes the dissimilarity between the centroid for cluster 1 (a mean vector of length \\(p\\), one element for each variable) and the centroid for cluster 2\.
* **Ward’s minimum variance method:** Minimizes the total within\-cluster variance. At each step the pair of clusters with the smallest between\-cluster distance are merged. Tends to produce more compact clusters.

Other methods have been introduced such as measuring cluster descriptors after merging two clusters (Ma et al. [2007](#ref-ma2007segmentation); Zhao and Tang [2009](#ref-zhao2009cyclizing); Zhang, Zhao, and Wang [2013](#ref-zhang2013agglomerative)) but the above methods are, by far, the most popular and commonly used (Hair [2006](#ref-hair2006multivariate)).

There are multiple agglomeration methods to define clusters when performing a hierarchical cluster analysis; however, complete linkage and Ward’s method are often preferred for AGNES clustering. For DIANA, clusters are divided based on the maximum average dissimilarity, which is very similar to the mean or average linkage clustering method outlined above. See Kaufman and Rousseeuw ([2009](#ref-kaufman2009finding)) for details.

We can see the differences these approaches produce in the dendrograms displayed in Figure [21\.3](hierarchical.html#fig:dendrogram3).

Figure 21\.3: Differing hierarchical clustering outputs based on similarity measures.

21\.3 Hierarchical clustering in R
----------------------------------

There are many functions available in R for hierarchical clustering. The most commonly used functions are `stats::hclust()` and `cluster::agnes()` for agglomerative hierarchical clustering (HC) and `cluster::diana()` for divisive HC.

### 21\.3\.1 Agglomerative hierarchical clustering

To perform agglomerative HC with `hclust()`, we first compute the dissimilarity values with `dist()` and then feed these values into `hclust()` and specify the agglomeration method to be used (i.e. `"complete"`, `"average"`, `"single"`, or `"ward.D"`).

```
# For reproducibility
set.seed(123)

# Dissimilarity matrix
d <- dist(ames_scale, method = "euclidean")

# Hierarchical clustering using Complete Linkage
hc1 <- hclust(d, method = "complete" )
```

You could plot the dendrogram with `plot(hc1, cex = 0.6, hang = -1)`; however, due to the large number of observations the output is not discernable. Alternatively, we can use the `agnes()` function. This function behaves similarly to `hclust()`; however, with the `agnes()` function you can also get the *agglomerative coefficient* (AC), which measures the amount of clustering structure found.
Generally speaking, the AC describes the strength of the clustering structure. Values closer to 1 suggest a more balanced clustering structure such as the complete linkage and Ward’s method dendrograms in Figure 21\.3\. Values closer to 0 suggest less well\-formed clusters such as the single linkage dendrogram in Figure 21\.3\. However, the AC tends to become larger as \\(n\\) increases, so it should not be used to compare across data sets of very different sizes.

```
# For reproducibility
set.seed(123)

# Compute maximum or complete linkage clustering with agnes
hc2 <- agnes(ames_scale, method = "complete")

# Agglomerative coefficient
hc2$ac
## [1] 0.926775
```

This allows us to find certain hierarchical clustering methods that can identify stronger clustering structures. Here we see that Ward’s method identifies the strongest clustering structure of the four methods assessed. This grid search took a little over 3 minutes.

```
# methods to assess
m <- c( "average", "single", "complete", "ward")
names(m) <- c( "average", "single", "complete", "ward")

# function to compute coefficient
ac <- function(x) {
  agnes(ames_scale, method = x)$ac
}

# get agglomerative coefficient for each linkage method
purrr::map_dbl(m, ac)
##   average    single  complete      ward 
## 0.9139303 0.8712890 0.9267750 0.9766577
```

### 21\.3\.2 Divisive hierarchical clustering

The R function `diana()` in package **cluster** allows us to perform divisive hierarchical clustering. `diana()` works similarly to `agnes()`; however, there is no agglomeration method to provide (see Kaufman and Rousseeuw ([2009](#ref-kaufman2009finding)) for details). Analogous to the AC, a *divisive coefficient* (DC) closer to one suggests stronger group distinctions. Since the DC below (0\.92\) is lower than the AC we obtained with Ward’s linkage (0\.98\), it appears that an agglomerative approach with Ward’s linkage provides the optimal results.

```
# compute divisive hierarchical clustering
hc4 <- diana(ames_scale)

# Divisive coefficient; amount of clustering structure found
hc4$dc
## [1] 0.9191094
```

21\.4 Determining optimal clusters
----------------------------------

Although hierarchical clustering provides a fully connected dendrogram representing the cluster relationships, you may still need to choose the preferred number of clusters to extract. Fortunately we can execute approaches similar to those discussed for *k*\-means clustering (Section [20\.6](kmeans.html#determine-k)). The following compares results provided by the elbow, silhouette, and gap statistic methods. There is no definitively clear optimal number of clusters in this case, although the silhouette method and the gap statistic suggest 8–9 clusters:

```
# Plot cluster results
p1 <- fviz_nbclust(ames_scale, FUN = hcut, method = "wss", k.max = 10) +
  ggtitle("(A) Elbow method")
p2 <- fviz_nbclust(ames_scale, FUN = hcut, method = "silhouette", k.max = 10) +
  ggtitle("(B) Silhouette method")
p3 <- fviz_nbclust(ames_scale, FUN = hcut, method = "gap_stat", k.max = 10) +
  ggtitle("(C) Gap statistic")

# Display plots side by side
gridExtra::grid.arrange(p1, p2, p3, nrow = 1)
```

Figure 21\.4: Comparison of three different methods to identify the optimal number of clusters.
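If you prefer a single summary number for a candidate \\(k\\) over reading it off the plots, you can also compute the average silhouette width for that particular cut directly. The short sketch below does this for \\(k \= 8\\), reusing the distance matrix `d` from Section 21\.3\.1 and assuming Ward’s linkage (the linkage we use in the next section); values closer to 1 indicate better\-separated clusters.

```
# Cut a Ward's-linkage dendrogram into 8 clusters and compute the
# average silhouette width of the resulting partition
hc_ward  <- hclust(d, method = "ward.D2")
sil_vals <- cluster::silhouette(cutree(hc_ward, k = 8), dist = d)
mean(sil_vals[, "sil_width"])
```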
21\.5 Working with dendrograms
------------------------------

The nice thing about hierarchical clustering is that it provides a complete dendrogram illustrating the relationships between clusters in our data. In Figure [21\.5](hierarchical.html#fig:illustrative-dendrogram-plot), each leaf in the dendrogram corresponds to one observation (in our data this represents an individual house). As we move up the tree, observations that are similar to each other are combined into branches, which are themselves fused at a higher height.

```
# Construct dendrogram for the Ames housing example
hc5 <- hclust(d, method = "ward.D2" )
dend_plot <- fviz_dend(hc5)
dend_data <- attr(dend_plot, "dendrogram")
dend_cuts <- cut(dend_data, h = 8)
fviz_dend(dend_cuts$lower[[2]])
```

Figure 21\.5: A subsection of the dendrogram for illustrative purposes.

However, dendrograms are often misinterpreted. Conclusions about the proximity of two observations should not be implied by their relationship on the horizontal axis nor by the vertical connections. Rather, the height of the branch between an observation and the clusters of observations below it indicates the distance between the observation and the cluster it is joined to. For example, consider observations 9 \& 2 in Figure [21\.6](hierarchical.html#fig:comparing-dendrogram-to-distances). They appear close on the dendrogram (right) but, in fact, their closeness on the dendrogram implies only that they are approximately the same distance from the cluster that they are fused to (observations 5, 7, \& 8\). It by no means implies that observations 9 \& 2 are close to one another.

Figure 21\.6: Comparison of nine observations measured across two features (left) and the resulting dendrogram created based on hierarchical clustering (right).

In order to identify sub\-groups (i.e., clusters), we can *cut* the dendrogram with `cutree()`. The height of the cut to the dendrogram controls the number of clusters obtained. It plays the same role as the \\(k\\) in *k*\-means clustering. Here, we cut our agglomerative hierarchical clustering model into eight clusters. We can see that the concentration of observations is in clusters 1–3\.

```
# Ward's method
hc5 <- hclust(d, method = "ward.D2" )

# Cut tree into 8 groups
sub_grp <- cutree(hc5, k = 8)

# Number of members in each cluster
table(sub_grp)
## sub_grp
##    1    2    3    4    5    6    7    8 
## 1363  567  650   36  123  156   24   11
```

We can plot the entire dendrogram with `fviz_dend` and highlight the eight clusters with `k = 8`.

```
# Plot full dendrogram
fviz_dend(
  hc5,
  k = 8,
  horiz = TRUE,
  rect = TRUE,
  rect_fill = TRUE,
  rect_border = "jco",
  k_colors = "jco",
  cex = 0.1
)
```

Figure 21\.7: The complete dendrogram highlighting all 8 clusters.

However, due to the size of the Ames housing data, the dendrogram is not very legible. Consequently, we may want to zoom into one particular region or cluster. This allows us to see which observations are most similar within a particular group. There is no easy way to get the exact height required to capture all eight clusters. This is largely trial and error, using different heights until the contents of `dend_cuts` match the cluster totals identified previously.

```
dend_plot <- fviz_dend(hc5)                 # create full dendrogram
dend_data <- attr(dend_plot, "dendrogram")  # extract plot info
dend_cuts <- cut(dend_data, h = 70.5)       # cut the dendrogram at
                                            # designated height

# Create sub dendrogram plots
p1 <- fviz_dend(dend_cuts$lower[[1]])
p2 <- fviz_dend(dend_cuts$lower[[1]], type = 'circular')

# Side by side plots
gridExtra::grid.arrange(p1, p2, nrow = 1)
```

Figure 21\.8: A subsection of the dendrogram highlighting cluster 7\.
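Once observations have been assigned to clusters, a natural next step is to attach those assignments back to the original data and profile each group. A minimal sketch using the cluster vector `sub_grp` created above (`Gr_Liv_Area` is chosen merely as an illustrative feature to summarize):

```
# Attach cluster assignments to the unscaled numeric features and
# summarize each cluster
AmesHousing::make_ames() %>%
  select_if(is.numeric) %>%
  select(-Sale_Price) %>%
  mutate(cluster = sub_grp) %>%
  group_by(cluster) %>%
  summarize(n = n(), avg_gr_liv_area = mean(Gr_Liv_Area))
```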
21\.6 Final thoughts
--------------------

Hierarchical clustering may have some benefits over *k*\-means such as not having to pre\-specify the number of clusters and the fact that it can produce a nice hierarchical illustration of the clusters (that’s useful for smaller data sets). However, from a practical perspective, hierarchical clustering analysis still involves a number of decisions that can have large impacts on the interpretation of the results. First, like *k*\-means, you still need to make a decision on the dissimilarity measure to use. Second, you need to make a decision on the linkage method. Each linkage method has different systematic tendencies (or biases) in the way it groups observations and can result in significantly different results. For example, the centroid method has a bias toward producing irregularly shaped clusters. Ward’s method tends to produce clusters with roughly the same number of observations and the solutions it provides tend to be heavily distorted by outliers. Given such tendencies, there should be a match between the algorithm selected and the underlying structure of the data (e.g., sample size, distribution of observations, and what types of variables are included: nominal, ordinal, ratio, or interval). For example, the centroid method should primarily be used when (a) data are measured with interval or ratio scales and (b) clusters are expected to be very dissimilar from each other. Likewise, Ward’s method is best suited for analyses where (a) the number of observations in each cluster is expected to be approximately equal and (b) there are no outliers (Ketchen and Shook [1996](#ref-ketchen1996application)). Third, although we do not need to pre\-specify the number of clusters, we often still need to decide where to cut the dendrogram in order to obtain the final clusters to use. So the onus of deciding the number of clusters is still on us, albeit at the end of the process. In Chapter [22](model-clustering.html#model-clustering) we discuss a method that relieves us of this decision.
DIANA is like the reverse of AGNES. It begins with the root, in which all observations are included in a single cluster. At each step of the algorithm, the current cluster is split into two clusters that are considered most heterogeneous. The process is iterated until all observations are in their own cluster. Note that agglomerative clustering is good at identifying small clusters. Divisive hierarchical clustering, on the other hand, is better at identifying large clusters. Figure 21\.2: AGNES (bottom\-up) versus DIANA (top\-down) clustering. Similar to *k*\-means (Chapter [20](kmeans.html#kmeans)), we measure the (dis)similarity of observations using distance measures (e.g., Euclidean distance, Manhattan distance, etc.); the Euclidean distance is most commonly the default. However, a fundamental question in hierarchical clustering is: *How do we measure the dissimilarity between two clusters of observations?* A number of different cluster agglomeration methods (i.e., linkage methods) have been developed to answer this question. The most common methods are: * **Maximum or complete linkage clustering:** Computes all pairwise dissimilarities between the elements in cluster 1 and the elements in cluster 2, and considers the largest value of these dissimilarities as the distance between the two clusters. It tends to produce more compact clusters. * **Minimum or single linkage clustering:** Computes all pairwise dissimilarities between the elements in cluster 1 and the elements in cluster 2, and considers the smallest of these dissimilarities as a linkage criterion. It tends to produce long, “loose” clusters. * **Mean or average linkage clustering:** Computes all pairwise dissimilarities between the elements in cluster 1 and the elements in cluster 2, and considers the average of these dissimilarities as the distance between the two clusters. Can vary in the compactness of the clusters it creates. * **Centroid linkage clustering:** Computes the dissimilarity between the centroid for cluster 1 (a mean vector of length \\(p\\), one element for each variable) and the centroid for cluster 2\. * **Ward’s minimum variance method:** Minimizes the total within\-cluster variance. At each step the pair of clusters with the smallest between\-cluster distance are merged. Tends to produce more compact clusters. Other methods have been introduced such as measuring cluster descriptors after merging two clusters (Ma et al. [2007](#ref-ma2007segmentation); Zhao and Tang [2009](#ref-zhao2009cyclizing); Zhang, Zhao, and Wang [2013](#ref-zhang2013agglomerative)) but the above methods are, by far, the most popular and commonly used (Hair [2006](#ref-hair2006multivariate)). There are multiple agglomeration methods to define clusters when performing a hierarchical cluster analysis; however, complete linkage and Ward’s method are often preferred for AGNES clustering. For DIANA, clusters are divided based on the maximum average dissimilarity which is very similar to the mean or average linkage clustering method outlined above. See Kaufman and Rousseeuw ([2009](#ref-kaufman2009finding)) for details. We can see the differences these approaches produce in the dendrograms displayed in Figure [21\.3](hierarchical.html#fig:dendrogram3). Figure 21\.3: Differing hierarchical clustering outputs based on similarity measures. 21\.3 Hierarchical clustering in R ---------------------------------- There are many functions available in R for hierarchical clustering. 
The most commonly used functions are `stats::hclust()` and `cluster::agnes()` for agglomerative hierarchical clustering (HC) and `cluster::diana()` for divisive HC. ### 21\.3\.1 Agglomerative hierarchical clustering To perform agglomerative HC with `hclust()`, we first compute the dissimilarity values with `dist()` and then feed these values into `hclust()` and specify the agglomeration method to be used (i.e. `"complete"`, `"average"`, `"single"`, or `"ward.D"`). ``` # For reproducibility set.seed(123) # Dissimilarity matrix d <- dist(ames_scale, method = "euclidean") # Hierarchical clustering using Complete Linkage hc1 <- hclust(d, method = "complete" ) ``` You could plot the dendrogram with `plot(hc1, cex = 0.6, hang = -1)`; however, due to the large number of observations the output is not discernable. Alternatively, we can use the `agnes()` function. This function behaves similar to `hclust()`; however, with the `agnes()` function you can also get the *agglomerative coefficient* (AC), which measures the amount of clustering structure found. Generally speaking, the AC describes the strength of the clustering structure. Values closer to 1 suggest a more balanced clustering structure such as the complete linkage and Ward’s method dendrograms in Figure 21\.3\. Values closer to 0 suggest less well\-formed clusters such as the single linkage dendrogram in Figure 21\.3\. However, the AC tends to become larger as \\(n\\) increases, so it should not be used to compare across data sets of very different sizes. ``` # For reproducibility set.seed(123) # Compute maximum or complete linkage clustering with agnes hc2 <- agnes(ames_scale, method = "complete") # Agglomerative coefficient hc2$ac ## [1] 0.926775 ``` This allows us to find certain hierarchical clustering methods that can identify stronger clustering structures. Here we see that Ward’s method identifies the strongest clustering structure of the four methods assessed. This grid search took a little over 3 minutes. ``` # methods to assess m <- c( "average", "single", "complete", "ward") names(m) <- c( "average", "single", "complete", "ward") # function to compute coefficient ac <- function(x) { agnes(ames_scale, method = x)$ac } # get agglomerative coefficient for each linkage method purrr::map_dbl(m, ac) ## average single complete ward ## 0.9139303 0.8712890 0.9267750 0.9766577 ``` ### 21\.3\.2 Divisive hierarchical clustering The R function `diana()` in package **cluster** allows us to perform divisive hierarchical clustering. `diana()` works similar to `agnes()`; however, there is no agglomeration method to provide (see Kaufman and Rousseeuw ([2009](#ref-kaufman2009finding)) for details). As before, a *divisive coefficient* (DC) closer to one suggests stronger group distinctions. Consequently, it appears that an agglomerative approach with Ward’s linkage provides the optimal results. ``` # compute divisive hierarchical clustering hc4 <- diana(ames_scale) # Divise coefficient; amount of clustering structure found hc4$dc ## [1] 0.9191094 ```
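If you prefer to continue with base R tooling after fitting with `agnes()` or `diana()`, the fitted objects can usually be coerced to `hclust` objects and handled with the familiar functions. This is a minimal sketch, not from the original text, reusing the `hc2` and `hc4` objects created above; it relies on `as.hclust()`, which the **cluster** package supports for its fitted objects.

```
# Coerce the agglomerative (agnes) and divisive (diana) fits to
# "hclust" objects so base R functions such as cutree() apply
hc2_hclust <- as.hclust(hc2)
hc4_hclust <- as.hclust(hc4)

# For example, extract eight groups from the agnes fit and count them
agnes_grp <- cutree(hc2_hclust, k = 8)
table(agnes_grp)
```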
21\.4 Determining optimal clusters ---------------------------------- Although hierarchical clustering provides a fully connected dendrogram representing the cluster relationships, you may still need to choose the preferred number of clusters to extract. Fortunately we can execute approaches similar to those discussed for *k*\-means clustering (Section [20\.6](kmeans.html#determine-k)). The following compares results provided by the elbow, silhouette, and gap statistic methods.
There is no definitively clear optimal number of clusters in this case, although the silhouette method and the gap statistic suggest 8–9 clusters: ``` # Plot cluster results p1 <- fviz_nbclust(ames_scale, FUN = hcut, method = "wss", k.max = 10) + ggtitle("(A) Elbow method") p2 <- fviz_nbclust(ames_scale, FUN = hcut, method = "silhouette", k.max = 10) + ggtitle("(B) Silhouette method") p3 <- fviz_nbclust(ames_scale, FUN = hcut, method = "gap_stat", k.max = 10) + ggtitle("(C) Gap statistic") # Display plots side by side gridExtra::grid.arrange(p1, p2, p3, nrow = 1) ``` Figure 21\.4: Comparison of three different methods to identify the optimal number of clusters. 21\.5 Working with dendrograms ------------------------------ The nice thing about hierarchical clustering is that it provides a complete dendrogram illustrating the relationships between clusters in our data. In Figure [21\.5](hierarchical.html#fig:illustrative-dendrogram-plot), each leaf in the dendrogram corresponds to one observation (in our data this represents an individual house). As we move up the tree, observations that are similar to each other are combined into branches, which are themselves fused at a higher height. ``` # Construct dendrogram for the Ames housing example hc5 <- hclust(d, method = "ward.D2" ) dend_plot <- fviz_dend(hc5) dend_data <- attr(dend_plot, "dendrogram") dend_cuts <- cut(dend_data, h = 8) fviz_dend(dend_cuts$lower[[2]]) ``` Figure 21\.5: A subsection of the dendrogram for illustrative purposes. However, dendrograms are often misinterpreted. Conclusions about the proximity of two observations should not be implied by their relationship on the horizontal axis nor by the vertical connections. Rather, the height of the branch between an observation and the cluster of observations below it indicates the distance between the observation and the cluster it is joined to. For example, consider observations 9 \& 2 in Figure [21\.6](hierarchical.html#fig:comparing-dendrogram-to-distances). They appear close on the dendrogram (right) but, in fact, their closeness on the dendrogram implies only that they are approximately the same distance from the cluster that they are fused to (observations 5, 7, \& 8\). It by no means implies that observations 9 \& 2 are close to one another. Figure 21\.6: Comparison of nine observations measured across two features (left) and the resulting dendrogram created based on hierarchical clustering (right). In order to identify sub\-groups (i.e., clusters), we can *cut* the dendrogram with `cutree()`. The height of the cut to the dendrogram controls the number of clusters obtained. It plays the same role as the \\(k\\) in *k*\-means clustering. Here, we cut our agglomerative hierarchical clustering model into eight clusters. We can see that the concentration of observations is in clusters 1–3\. ``` # Ward's method hc5 <- hclust(d, method = "ward.D2" ) # Cut tree into 8 groups sub_grp <- cutree(hc5, k = 8) # Number of members in each cluster table(sub_grp) ## sub_grp ## 1 2 3 4 5 6 7 8 ## 1363 567 650 36 123 156 24 11 ``` We can plot the entire dendrogram with `fviz_dend()` and highlight the eight clusters with `k = 8`. ``` # Plot full dendrogram fviz_dend( hc5, k = 8, horiz = TRUE, rect = TRUE, rect_fill = TRUE, rect_border = "jco", k_colors = "jco", cex = 0.1 ) ``` Figure 21\.7: The complete dendrogram highlighting all 8 clusters. However, due to the size of the Ames housing data, the dendrogram is not very legible.
Consequently, we may want to zoom into one particular region or cluster. This allows us to see which observations are most similar within a particular group. There is no easy way to get the exact height required to capture all eight clusters. This is largely trial and error by using different heights until the output of `dend_cuts()` matches the cluster totals identified previously. ``` dend_plot <- fviz_dend(hc5) # create full dendogram dend_data <- attr(dend_plot, "dendrogram") # extract plot info dend_cuts <- cut(dend_data, h = 70.5) # cut the dendogram at # designated height # Create sub dendrogram plots p1 <- fviz_dend(dend_cuts$lower[[1]]) p2 <- fviz_dend(dend_cuts$lower[[1]], type = 'circular') # Side by side plots gridExtra::grid.arrange(p1, p2, nrow = 1) ``` Figure 21\.8: A subsection of the dendrogram highlighting cluster 7\. 21\.6 Final thoughts -------------------- Hierarchical clustering may have some benefits over *k*\-means such as not having to pre\-specify the number of clusters and the fact that it can produce a nice hierarchical illustration of the clusters (that’s useful for smaller data sets). However, from a practical perspective, hierarchical clustering analysis still involves a number of decisions that can have large impacts on the interpretation of the results. First, like *k*\-means, you still need to make a decision on the dissimilarity measure to use. Second, you need to make a decision on the linkage method. Each linkage method has different systematic tendencies (or biases) in the way it groups observations and can result in significantly different results. For example, the centroid method has a bias toward producing irregularly shaped clusters. Ward’s method tends to produce clusters with roughly the same number of observations and the solutions it provides tend to be heavily distorted by outliers. Given such tendencies, there should be a match between the algorithm selected and the underlying structure of the data (e.g., sample size, distribution of observations, and what types of variables are included\-nominal, ordinal, ratio, or interval). For example, the centroid method should primarily be used when (a) data are measured with interval or ratio scales and (b) clusters are expected to be very dissimilar from each other. Likewise, Ward’s method is best suited for analyses where (a) the number of observations in each cluster is expected to be approximately equal and (b) there are no outliers (Ketchen and Shook [1996](#ref-ketchen1996application)). Third, although we do not need to pre\-specify the number of clusters, we often still need to decide where to cut the dendrogram in order to obtain the final clusters to use. So the onus is still on us to decide the number of clusters, albeit in the end. In Chapter [22](model-clustering.html#model-clustering) we discuss a method that relieves us of this decision.
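Before moving on, here is a small follow\-up sketch (not from the original text) of a common next step after cutting the dendrogram in Section 21\.5: attach the `cutree()` labels stored in `sub_grp` to the raw, unscaled numeric features and profile each cluster. The summary variables chosen below (`Sale_Price`, `Gr_Liv_Area`) are purely illustrative.

```
# Attach the cluster labels to the unscaled numeric features and
# summarize a couple of illustrative variables by cluster
ames_numeric <- AmesHousing::make_ames() %>%
  select_if(is.numeric)

ames_numeric %>%
  mutate(cluster = sub_grp) %>%
  group_by(cluster) %>%
  summarize(
    n = n(),
    avg_sale_price = mean(Sale_Price),
    avg_liv_area = mean(Gr_Liv_Area)
  )
```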
Machine Learning
bradleyboehmke.github.io
https://bradleyboehmke.github.io/HOML/model-clustering.html
Chapter 22 Model\-based Clustering ================================== Traditional clustering algorithms such as *k*\-means (Chapter [20](kmeans.html#kmeans)) and hierarchical (Chapter [21](hierarchical.html#hierarchical)) clustering are heuristic\-based algorithms that derive clusters directly based on the data rather than incorporating a measure of probability or uncertainty to the cluster assignments. Model\-based clustering attempts to address this concern and provide *soft assignment* where observations have a probability of belonging to each cluster. Moreover, model\-based clustering provides the added benefit of automatically identifying the optimal number of clusters. This chapter covers Gaussian mixture models, which are one of the most popular model\-based clustering approaches available. 22\.1 Prerequisites ------------------- For this chapter we’ll use the following packages with the emphasis on **mclust** (Fraley, Raftery, and Scrucca [2019](#ref-R-mclust)): ``` # Helper packages library(dplyr) # for data manipulation library(ggplot2) # for data visualization # Modeling packages library(mclust) # for fitting clustering algorithms ``` To illustrate the main concepts of model\-based clustering we’ll use the `geyser` data provided by the **MASS** package along with the `my_basket` data. ``` data(geyser, package = 'MASS') url <- "https://koalaverse.github.io/homlr/data/my_basket.csv" my_basket <- readr::read_csv(url) ``` 22\.2 Measuring probability and uncertainty ------------------------------------------- The key idea behind model\-based clustering is that the data are considered as coming from a mixture of underlying probability distributions. The most popular approach is the *Gaussian mixture model* (GMM) (Banfield and Raftery [1993](#ref-banfield1993model)) where each observation is assumed to be distributed as one of \\(k\\) multivariate\-normal distributions, where \\(k\\) is the number of clusters (commonly referred to as *components* in model\-based clustering). For a comprehensive review of model\-based clustering, see Fraley and Raftery ([2002](#ref-fraley2002model)). GMMs are founded on the multivariate normal (Gaussian) distribution where \\(p\\) variables (\\(X\_1, X\_2, \\dots, X\_p\\)) are assumed to have means \\(\\mu \= \\left(\\mu\_1, \\mu\_2, \\dots, \\mu\_p\\right)\\) and a covariance matrix \\(\\Sigma\\), which describes the joint variability (i.e., covariance) between each pair of variables: \\\[\\begin{equation} \\tag{22\.1} \\Sigma \= \\left\[\\begin{array}{cccc} \\sigma\_{1}^{2} \& \\sigma\_{1, 2} \& \\cdots \& \\sigma\_{1, p} \\\\ \\sigma\_{2, 1} \& \\sigma\_{2}^{2} \& \\cdots \& \\sigma\_{2, p} \\\\ \\vdots \& \\vdots \& \\ddots \& \\vdots \\\\ \\sigma\_{p, 1} \& \\sigma\_{p, 2} \& \\cdots \& \\sigma\_{p}^{2} \\end{array}\\right]. \\end{equation}\\] Note that \\(\\Sigma\\) contains \\(p\\) variances (\\(\\sigma\_1^2, \\sigma\_2^2, \\dots, \\sigma\_p^2\\)) and \\(p\\left(p \- 1\\right) / 2\\) unique covariances \\(\\sigma\_{i, j}\\) (\\(i \\ne j\\)); note that \\(\\sigma\_{i, j} \= \\sigma\_{j, i}\\). A multivariate\-normal distribution with mean \\(\\mu\\) and covariance \\(\\Sigma\\) is notationally represented by Equation [(22\.2\)](model-clustering.html#eq:multinorm): \\\[\\begin{equation} \\tag{22\.2} \\left(X\_1, X\_2, \\dots, X\_p\\right) \\sim N\_p \\left( \\mu, \\Sigma \\right).
\\end{equation}\\] This distribution has the property that every subset of variables (say, \\(X\_1\\), \\(X\_5\\), and \\(X\_9\\)) also has a multivariate normal distribution (albeit with a different mean and covariance structure). GMMs assume the clusters can be created using \\(k\\) Gaussian distributions. For example, if there are two variables (say, \\(X\\) and \\(Y\\)), then each observation \\(\\left(X\_i, Y\_i\\right)\\) is modeled has having been sampled from one of \\(k\\) distributions (\\(N\_1\\left(\\mu\_1, \\Sigma\_1 \\right), N\_2\\left(\\mu\_2, \\Sigma\_2 \\right), \\dots, N\_p\\left(\\mu\_p, \\Sigma\_p \\right)\\)). This is illustrated in Figure [22\.1](model-clustering.html#fig:multivariate-density-plot) which suggests variables \\(X\\) and \\(Y\\) come from three multivariate distributions. However, as the data points deviate from the center of one of the three clusters, the probability that they align to a particular cluster decreases as indicated by the fading elliptical rings around each cluster center. Figure 22\.1: Data points across two features (X and Y) appear to come from three multivariate normal distributions. We can illustrate this concretely by applying a GMM model to the `geyser` data, which is the data illustrated in Figure [22\.1](model-clustering.html#fig:multivariate-density-plot). To do so we apply `Mclust()` and specify three components. Plotting the output (Figure [22\.2](model-clustering.html#fig:geyser-mc1-plot)) provides a density plot (left) just like we saw in Figure [22\.1](model-clustering.html#fig:multivariate-density-plot) and the component assignment for each observation based on the largest probability (right). In the uncertainty plot (right), you’ll notice the observations near the center of the densities are small, indicating small uncertainty (or high probability) of being from that respective component; however, the observations that are large represent observations with high uncertainty (or low probability) regarding the component they are aligned to. ``` # Apply GMM model with 3 components geyser_mc <- Mclust(geyser, G = 3) # Plot results plot(geyser_mc, what = "density") plot(geyser_mc, what = "uncertainty") ``` Figure 22\.2: Multivariate density plot (left) highlighting three clusters in the `geyser` data and an uncertainty plot (right) highlighting observations with high uncertainty of which cluster they are a member of. This idea of a probabilistic cluster assignment can be quite useful as it allows you to identify observations with high or low cluster uncertainty and, potentially, target them uniquely or provide alternative solutions. For example, the following six observations all have nearly 50% probability of being assigned to two different clusters. If this were an advertising data set and you were marketing to these observations you may want to provide them with a combination of marketing solutions for the two clusters they are nearest to. Or you may want to perform additional A/B testing on them to try gain additional confidence regarding which cluster they align to most. ``` # Observations with high uncertainty sort(geyser_mc$uncertainty, decreasing = TRUE) %>% head() ## 187 211 85 285 28 206 ## 0.4689087 0.4542588 0.4355496 0.4355496 0.4312406 0.4168440 ``` 22\.3 Covariance types ---------------------- The covariance matrix in Equation [(22\.1\)](model-clustering.html#eq:covariance) describes the geometry of the clusters; namely, the volume, shape, and orientation of the clusters. 
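To make the geometry idea concrete, the following is a small simulation sketch (the means and covariance values are illustrative assumptions, and `mvrnorm()` comes from the **MASS** package that already supplies the `geyser` data): it draws one spherical component and one elongated, rotated component so you can see how the covariance matrix controls volume, shape, and orientation.

```
set.seed(123)

# A small, spherical component: equal variances, no correlation
comp1 <- MASS::mvrnorm(100, mu = c(0, 0), Sigma = diag(2))

# A larger, elongated, rotated component: unequal variances plus
# strong positive correlation
comp2 <- MASS::mvrnorm(100, mu = c(6, 6),
                       Sigma = matrix(c(4, 2.5, 2.5, 2), nrow = 2))

sim <- as.data.frame(rbind(comp1, comp2))
names(sim) <- c("X", "Y")
ggplot(sim, aes(X, Y)) + geom_point()
```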
Looking at Figure [22\.2](model-clustering.html#fig:geyser-mc1-plot), the clusters and their densities appear approximately proportional in size and shape. However, this is not a requirement of GMMs. In fact, GMMs allow for far more flexible clustering structures. This is done by adding constraints to the covariance matrix \\(\\Sigma\\). These constraints can be one or more of the following: 1. **volume**: each cluster has approximately the same number of observations; 2. **shape**: each cluster has approximately the same variance so that the distribution is spherical; 3. **orientation**: each cluster is forced to be axis\-aligned. The various combinations of the above constraints have been classified into three main families of models: *spherical*, *diagonal*, and *general* (also referred to as *ellipsoidal*) families (Celeux and Govaert [1995](#ref-celeux1995gaussian)). These combinations are listed in Table [22\.1](model-clustering.html#tab:covariance-parameterization). See Fraley et al. ([2012](#ref-fraley2012mclust)) regarding the technical implementation of these covariance parameters. Table 22\.1: Parameterizations of the covariance matrix | Model | Family | Volume | Shape | Orientation | Identifier | | --- | --- | --- | --- | --- | --- | | 1 | Spherical | Equal | Equal | NA | EII | | 2 | Spherical | Variable | Equal | NA | VII | | 3 | Diagonal | Equal | Equal | Axes | EEI | | 4 | Diagonal | Variable | Equal | Axes | VEI | | 5 | Diagonal | Equal | Variable | Axes | EVI | | 6 | Diagonal | Variable | Variable | Axes | VVI | | 7 | General | Equal | Equal | Equal | EEE | | 8 | General | Equal | Variable | Equal | EVE | | 9 | General | Variable | Equal | Equal | VEE | | 10 | General | Variable | Variable | Equal | VVE | | 11 | General | Equal | Equal | Variable | EEV | | 12 | General | Variable | Equal | Variable | VEV | | 13 | General | Equal | Variable | Variable | EVV | | 14 | General | Variable | Variable | Variable | VVV | These various covariance parameters allow GMMs to capture unique clustering structures in data, as illustrated in Figure [22\.3](model-clustering.html#fig:visualize-different-covariance-models). Figure 22\.3: Graphical representation of how different covariance models allow GMMs to capture different cluster structures. Users can optionally specify a conjugate prior if prior knowledge of the underlying probability distributions are available. By default, `Mclust()` does not apply a prior for modeling. 22\.4 Model selection --------------------- If we assess the summary of our `geyser_mc` model we see that, behind the scenes, `Mclust()` applied the EEI model. A more detailed summary, including the estimated parameters, can be obtained using `summary(geyser_mc, parameters = TRUE)`. ``` summary(geyser_mc) ## ---------------------------------------------------- ## Gaussian finite mixture model fitted by EM algorithm ## ---------------------------------------------------- ## ## Mclust EEI (diagonal, equal volume and shape) model with 3 components: ## ## log-likelihood n df BIC ICL ## -1371.823 299 10 -2800.65 -2814.577 ## ## Clustering table: ## 1 2 3 ## 91 107 101 ``` However, `Mclust()` will apply all 14 models from Table [22\.3](model-clustering.html#fig:visualize-different-covariance-models) and identify the one that best characterizes the data. 
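If you want to restrict that search to particular covariance structures, or force a single one, `Mclust()` accepts a `modelNames` argument. A minimal sketch, using identifiers from the covariance parameterization table above (the specific subset chosen here is arbitrary):

```
# Restrict the search to the spherical and diagonal families while
# still letting BIC choose among them
geyser_constrained <- Mclust(geyser, G = 3,
                             modelNames = c("EII", "VII", "EEI", "VVI"))
summary(geyser_constrained)
```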
To select the optimal model, any model selection procedure can be applied (e.g., Akaike information criteria (AIC), likelihood ratio, etc.); however, the Bayesian information criterion (BIC) has been shown to work well in model\-based clustering (Dasgupta and Raftery [1998](#ref-dasgupta1998detecting); Fraley and Raftery [1998](#ref-fraley1998many)) and is typically the default. `Mclust()` implements BIC as represented in Equation [(22\.3\)](model-clustering.html#eq:BIC): \\\[\\begin{equation} \\tag{22\.3} BIC \= \-2\\log\\left(L\\right) \+ m \\log\\left(n\\right), \\end{equation}\\] where \\(\\log\\left(L\\right)\\) is the maximized loglikelihood for the model and data, \\(m\\) is the number of free parameters to be estimated in the given model, and \\(n\\) is the number of observations in the data. This penalizes large models with many clusters. The objective in hyperparameter tuning with `Mclust()` is to maximize BIC. Not only can we use BIC to identify the optimal covariance parameters, but we can also use it to identify the optimal number of clusters. Rather than specify `G = 3` in `Mclust()`, leaving `G = NULL` forces `Mclust()` to evaluate 1–9 clusters and select the optimal number of components based on BIC. Alternatively, you can specify certain values to evaluate for `G`. The following performs a hyperparameter search across all 14 covariance models for 1–9 clusters on the geyser data. The left plot in Figure [22\.4](model-clustering.html#fig:geyser-mc2-plot) shows that the EEI and VII models perform particularly poorly while the rest of the models perform much better and have little differentiation. The optimal model uses the VVI covariance parameters, which identified four clusters and has a BIC of \-2768\.568\. ``` geyser_optimal_mc <- Mclust(geyser) summary(geyser_optimal_mc) ## ---------------------------------------------------- ## Gaussian finite mixture model fitted by EM algorithm ## ---------------------------------------------------- ## ## Mclust VVI (diagonal, varying volume and shape) model with 4 components: ## ## log-likelihood n df BIC ICL ## -1330.13 299 19 -2768.568 -2798.746 ## ## Clustering table: ## 1 2 3 4 ## 90 17 98 94 legend_args <- list(x = "bottomright", ncol = 5) plot(geyser_optimal_mc, what = 'BIC', legendArgs = legend_args) plot(geyser_optimal_mc, what = 'classification') plot(geyser_optimal_mc, what = 'uncertainty') ``` Figure 22\.4: Identifying the optimal GMM model and number of clusters for the `geyser` data (left). The classification (center) and uncertainty (right) plots illustrate which observations are assigned to each cluster and their level of assignment uncertainty. 22\.5 My basket example ----------------------- Let’s turn to our `my_basket` data to demonstrate GMMs on a more modern\-sized data set. The following performs a search across all 14 GMM models and across 1–20 clusters. If you are following along and running the code you’ll notice that GMMs are computationally slow, especially since they are assessing 14 models for each cluster size instance. This GMM hyperparameter search took a little over a minute. Figure [22\.5](model-clustering.html#fig:my-basket-BIC) illustrates the BIC scores and we see that the optimal GMM method is EEV with six clusters. You may notice that not all models generate results for each cluster size (e.g., the VVI model only produced results for clusters 1–3\). This is because the model could not converge on optimal results for those settings. This becomes more problematic as data sets get larger.
Often, performing dimension reduction prior to a GMM can minimize this issue. ``` my_basket_mc <- Mclust(my_basket, 1:20) summary(my_basket_mc) ## ---------------------------------------------------- ## Gaussian finite mixture model fitted by EM algorithm ## ---------------------------------------------------- ## ## Mclust EEV (ellipsoidal, equal volume and shape) model with 6 components: ## ## log-likelihood n df BIC ICL ## 8308.915 2000 5465 -24921.1 -25038.38 ## ## Clustering table: ## 1 2 3 4 5 6 ## 391 403 75 315 365 451 plot(my_basket_mc, what = 'BIC', legendArgs = list(x = "bottomright", ncol = 5)) ``` Figure 22\.5: BIC scores for clusters (components) ranging from 1\-20 We can look across our six clusters and assess the probabilities of cluster membership. Figure [22\.6](model-clustering.html#fig:my-basket-probabilities) illustrates very bimodal distributions of probabilities. Observations with greater than 0\.50 probability will be aligned to a given cluster so this bimodality is preferred as it illustrates that observations have either a very high probability of the cluster they are aligned to or a very low probability, which means they would not be aligned to that cluster. Looking at cluster `C3`, we see that there are very few, if any, observations in the middle of the probability range. `C3` also has far fewer observations with high probability. This means that `C3` is the smallest of the clusters (confirmed using `summary(my_basket_mc))` above, and that `C3` is a more compact cluster. As clusters have more observations with middling levels of probability (i.e., 0\.25–0\.75\), their clusters are usually less compact. Therefore, cluster `C2` is less compact than cluster `C3`. ``` probabilities <- my_basket_mc$z colnames(probabilities) <- paste0('C', 1:6) probabilities <- probabilities %>% as.data.frame() %>% mutate(id = row_number()) %>% tidyr::gather(cluster, probability, -id) ggplot(probabilities, aes(probability)) + geom_histogram() + facet_wrap(~ cluster, nrow = 2) ``` Figure 22\.6: Distribution of probabilities for all observations aligning to each of the six clusters. We can extract the cluster membership for each observation with `my_basket_mc$classification`. In Figure [22\.7](model-clustering.html#fig:cluster-uncertainty) we find the observations that are aligned to each cluster but the uncertainty of their membership to that particular cluster is 0\.25 or greater. You may notice that cluster three is not represented. Recall from Figure [22\.6](model-clustering.html#fig:my-basket-probabilities) that cluster three’s observations all had very strong membership probabilities so they have no observations with uncertainty greater than 0\.25\. ``` uncertainty <- data.frame( id = 1:nrow(my_basket), cluster = my_basket_mc$classification, uncertainty = my_basket_mc$uncertainty ) uncertainty %>% group_by(cluster) %>% filter(uncertainty > 0.25) %>% ggplot(aes(uncertainty, reorder(id, uncertainty))) + geom_point() + facet_wrap(~ cluster, scales = 'free_y', nrow = 1) ``` Figure 22\.7: Observations that are aligned to each cluster but their uncertainty of membership is greater than 0\.25\. When doing cluster analysis, our goal is to find those observations that are most similar to others. What defines this similarity becomes difficult as our data sets become larger. Let’s take a look at cluster two. The following standardizes the count of each product across all baskets and then looks at consumption for cluster two. 
Figure [22\.8](model-clustering.html#fig:cluster2-consumption) illustrates the results and shows that cluster two baskets have above average consumption for candy bars, lottery tickets, cigarettes, and alcohol. Needless to say, this group may include our more unhealthy baskets—or maybe they’re the recreational baskets made on the weekends when people just want to sit on the deck and relax with a drink in one hand and a candy bar in the other! Regardless, this group is likely to receive marketing ads for candy bars, alcohol, and the like rather than the items we see at the bottom of Figure [22\.8](model-clustering.html#fig:cluster2-consumption), which represent the items this group consumes less than the average observations. ``` cluster2 <- my_basket %>% scale() %>% as.data.frame() %>% mutate(cluster = my_basket_mc$classification) %>% filter(cluster == 2) %>% select(-cluster) cluster2 %>% tidyr::gather(product, std_count) %>% group_by(product) %>% summarize(avg = mean(std_count)) %>% ggplot(aes(avg, reorder(product, avg))) + geom_point() + labs(x = "Average standardized consumption", y = NULL) ``` Figure 22\.8: Average standardized consumption for cluster 2 observations compared to all observations. 22\.6 Final thoughts -------------------- Model\-based clustering techniques do have their limitations. The methods require an underlying model for the data (e.g., GMMs assume multivariate normality), and the cluster results are heavily dependent on this assumption. Although there have been many advancements to limit this constraint (Lee and McLachlan [2013](#ref-lee2013model)), software implementations are still lacking. A more significant limitation is the computational demands. Classical model\-based clustering show disappointing computational performance in high\-dimensional spaces (Bouveyron and Brunet\-Saumard [2014](#ref-bouveyron2014model)). This is mainly due to the fact that model\-based clustering methods are dramatically over\-parameterized. The primary approach for dealing with this is to perform dimension reduction prior to clustering. Although this often improves computational performance, reducing the dimension without taking into consideration the clustering goal may be dangerous. Indeed, dimension reduction may yield a loss of information which could have been useful for discriminating the groups. There have been alternative solutions proposed, such as high\-dimensional GMMs (Bouveyron, Girard, and Schmid [2007](#ref-bouveyron2007high)), which has been implemented in the **HDclassif** package (Berge, Bouveyron, and Girard [2018](#ref-R-HDclassif)).
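To close the chapter with the soft\-assignment idea from Section 22\.2 in code form: `predict()` on a fitted `Mclust` object returns both hard labels and the full matrix of membership probabilities for new observations. This is a small sketch, not from the original text; the two new observations simply use plausible values of the `geyser` variables (`waiting`, `duration`).

```
# Two hypothetical observations on the geyser variables
new_obs <- data.frame(
  waiting  = c(50, 80),
  duration = c(4.5, 2.0)
)

# predict() returns the most likely component (classification) and the
# membership probabilities (z) for each new observation
pred <- predict(geyser_mc, newdata = new_obs)
pred$classification
pred$z
```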
Machine Learning
christophm.github.io
https://christophm.github.io/interpretable-ml-book/cite.html
Chapter 13 Citing this Book =========================== If you found this book useful for your blog post, research article or product, I would be grateful if you would cite this book. You can cite the book like this: ``` Molnar, C. (2022). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (2nd ed.). christophm.github.io/interpretable-ml-book/ ``` Or use the following bibtex entry: ``` @book{molnar2022, title = {Interpretable Machine Learning}, author = {Christoph Molnar}, year = {2022}, subtitle = {A Guide for Making Black Box Models Explainable}, edition = {2}, url = {https://christophm.github.io/interpretable-ml-book} } ``` I am always curious about where and how interpretation methods are used in industry and research. If you use the book as a reference, it would be great if you wrote me a line and told me what for. This is of course optional and only serves to satisfy my own curiosity and to stimulate interesting exchanges. My mail is [[email protected]](mailto:[email protected]) .
Machine Learning
e-sensing.github.io
https://e-sensing.github.io/sitsbook/setup.html
Setup ===== How to use this on\-line book ----------------------------- This book contains reproducible code that can be run on an R environment. There are three options to setup your working environment: 1. Install R and RStudio and the packages required by `sits`, with specific procedures for each type of operating systems. 2. Use a Docker image provided by the Brazil Data Cube. 3. Install `sits` and all its dependencies using `conda`. How to install sits using R and RStudio --------------------------------------- We suggest a staged installation, as follows: 1. Get and install base R from [CRAN](https://cran.r-project.org/). 2. Install RStudio from the [Posit website](https://posit.co/). ### Installing `sits` from CRAN The Comprehensive R Archive Network (CRAN), a network of servers (also known as mirrors) from around the world that store up\-to\-date versions of basic code and packages for R. In what follows, we describe how to use CRAN to `sits` on Windows, Linux and MacOS. ### Installing in Microsoft Windows and MacOS environments Windows and MacOS users are strongly encouraged to install binary packages from CRAN. The `sits` package relies on the `sf` and `terra` packages, which require the GDAL and PROJ libraries. Run RStudio and install binary packages `sf` and `terra`, in this order: ``` [install.packages](https://rdrr.io/r/utils/install.packages.html)("sf") [install.packages](https://rdrr.io/r/utils/install.packages.html)("terra") ``` After installing the binaries for `sf` and `terra`, install `sits` as follows; ``` [install.packages](https://rdrr.io/r/utils/install.packages.html)("sits", dependencies = TRUE) ``` To run the examples in the book, please also install `sitsdata` package, which is available from GitHub. It is necessary to use package `devtools` to install `sitsdata`. ``` [install.packages](https://rdrr.io/r/utils/install.packages.html)("devtools") devtools::[install_github](https://remotes.r-lib.org/reference/install_github.html)("e-sensing/sitsdata") ``` To install `sits` from source, please install [Rtools](https://cran.r-project.org/bin/windows/Rtools/) for Windows to have access to the compiling environment. For Mac, please follow the instructions available [here](https://mac.r-project.org/tools/). ### Installing in Ubuntu environments For Ubuntu, the first step should be to install the latest version of the GDAL, GEOS, and PROJ4 libraries and binaries. To do so, use the repository `ubuntugis-unstable`, which should be done as follows: ``` sudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable sudo apt-get update sudo apt-get install libudunits2-dev libgdal-dev libgeos-dev libproj-dev sudo apt-get install gdal-bin sudo apt-get install proj-bin ``` Getting an error while adding this PPA repository could be due to the absence of the package `software-properties-common`. After installing GDAL, GEOS, and PROJ4, please install packages `sf` and `terra`: ``` [install.packages](https://rdrr.io/r/utils/install.packages.html)("sf") [install.packages](https://rdrr.io/r/utils/install.packages.html)("terra") ``` Then please proceed to install `sits`, which can be installed as a regular **R** package. ``` [install.packages](https://rdrr.io/r/utils/install.packages.html)("sits", dependencies = TRUE) ``` ### Installing in Debian environments For Debian, use the [rocker geospatial](https://github.com/rocker-org/geospatial) dockerfiles. 
### Installing in Fedora environments In the case of Fedora, the following command installs all required dependencies: ``` sudo dnf install gdal-devel proj-devel geos-devel sqlite-devel udunits2-devel ``` Using Docker images ------------------- If you are familiar with Docker, there are images for `sits` available with RStudio or Jupyter notebook. Such images are provided by the Brazil Data Cube team: * [Version for R and RStudio](https://hub.docker.com/r/brazildatacube/sits-rstudio). * [Version for Jupyter Notebooks](https://hub.docker.com/r/brazildatacube/sits-jupyter). On a Windows or Mac platform, install [Docker](https://docs.docker.com/desktop/install/windows-install/) and then obtain one of the two images listed above from the Brazil Data Cube. Both images contain the full `sits` running environment. When GDAL is running in `docker` containers, please add the security flag `--security-opt seccomp=unconfined` on start. Install `sits` from CONDA ------------------------- Conda is an open\-source, cross\-platform package manager. It is a convenient way to installl Python and R packages. To use `conda`, first download the software from the [CONDA website](https://conda.io/projects/conda/en/latest/index.html). After installation, use `conda` to install sits from the terminal as follows: ``` # add conda-forge to the download channels conda config --add channels conda-forge conda config --set channel_priority strict # install sits using conda conda install conda-forge::r-sits ``` The conda installer will download all packages and libraries required to run `sits`. This is the easiest way to install `sits` on Windows. Accessing the development version --------------------------------- The source code repository of `sits` is on [GitHub](https://github.com/e-sensing/sits). There are two versions available on GitHub: `master` and `dev`. The `master` contains the current stable version, which is either the same code available in CRAN or a minor update with bug fixes. To install the `master` version, install `devtools` (if not already available) and do as follows: ``` [install.packages](https://rdrr.io/r/utils/install.packages.html)("devtools") devtools::[install_github](https://remotes.r-lib.org/reference/install_github.html)("e-sensing/sits", dependencies = TRUE) ``` To install the `dev` (development) version, which contains the latest updates but might be unstable, install `devtools` (if not already available), and then install `sits` as follows: ``` [install.packages](https://rdrr.io/r/utils/install.packages.html)("devtools") devtools::[install_github](https://remotes.r-lib.org/reference/install_github.html)("e-sensing/sits@dev", dependencies = TRUE) ``` Additional requirements ----------------------- To run the examples in the book, please also install the `sitsdata` package. We recommend installing it using `wget`. See instructions in the [GNU Wget site](https://www.gnu.org/software/wget/). ``` [options](https://rdrr.io/r/base/options.html)(download.file.method = "wget") devtools::[install_github](https://remotes.r-lib.org/reference/install_github.html)("e-sensing/sitsdata") ``` Using GPUs with `sits` ---------------------- The `torch` package automatically recognizes if a GPU is available on the machine and uses it for training and classification. There is a significant performance gain when GPUs are used instead of CPUs for deep learning models. There is no need for specific adjustments to `torch` scripts. 
To use GPUs, `torch` requires version 11.6 of the CUDA library, which is available for Ubuntu 18.04 and 20.04. Please follow the detailed instructions for setting up `torch` available [here](https://torch.mlverse.org/docs/articles/installation.html).

```
install.packages("torch")
```
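Once `torch` is installed, a quick way to confirm that a CUDA-capable GPU is visible is to query the package directly. This is only a sanity check, not a required step; on first use, `torch` may prompt to download its backend libraries before the call below succeeds.

```
# Load torch; on first use the package may download additional backend libraries
library(torch)

# Returns TRUE if torch can see a CUDA-capable GPU, FALSE otherwise
cuda_is_available()
```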
Introduction
============

Who is this book for?
---------------------

This book, tailored for land use change experts and researchers, is a practical guide that enables them to analyze big Earth observation data sets. It provides readers with the means of producing high-quality maps of land use and land cover, guiding them through all the steps to achieve good results. Given the natural world's complexity and huge variations in human-nature interactions, only local experts who know their countries and ecosystems can extract full information from big EO data.

One group of readers that we are keen to engage with is the national authorities on forest, agriculture, and statistics in developing countries. We aim to foster a collaborative environment where they can use EO data to enhance their national land use and cover estimates, supporting sustainable development policies. To achieve this goal, `sits` has strong backing from the FAO Expert Group on the Use of Earth Observation data ([FAO-EOSTAT](https://www.fao.org/in-action/eostat)). FAO-EOSTAT is at the forefront of using advanced EO data analysis methods for agricultural statistics in developing countries [[1]](references.html#ref-DeSimone2022), [[2]](references.html#ref-DeSimone2022a).

Why work with satellite image time series?
------------------------------------------

Satellite imagery provides the most extensive data on our environment. By encompassing vast areas of the Earth's surface, images enable researchers to analyze local and worldwide transformations. By observing the same location multiple times, satellites provide data on environmental changes and survey areas that are difficult to observe from the ground. Given these unique features, images offer essential information for many applications, including deforestation, crop production, food security, urban footprints, water scarcity, and land degradation. Using time series, experts improve their understanding of ecological patterns and processes. Instead of selecting individual images from specific dates and comparing them, researchers track change continuously [[3]](references.html#ref-Woodcock2020).

Time-first, space-later
-----------------------

"Time-first, space-later" is a concept in satellite image classification that takes time series analysis as the first step for analyzing remote sensing data, with spatial information being considered after all time series are classified. The *time-first* part brings a better understanding of changes in landscapes. Detecting and tracking seasonal and long-term trends becomes feasible, as well as identifying anomalous events or patterns in the data, such as wildfires, floods, or droughts. Each pixel in a data cube is treated as a time series, using the information available in its temporal instances. Time series classification is pixel-based, producing a set of labeled pixels. This result is then used as input to the *space-later* part of the method. In this phase, a smoothing algorithm improves the results of the time-first classification by considering the spatial neighborhood of each pixel. The resulting map thus combines both spatial and temporal information.

Land use and land cover
-----------------------

The UN Food and Agriculture Organization defines land cover as "the observed biophysical cover on the Earth's surface" [[4]](references.html#ref-DiGregorio2016). Land cover can be observed and mapped directly through remote sensing images.
In FAO's guidelines and reports, land use is described as "the human activities or purposes for which land is managed or exploited". Although *land cover* and *land use* denote different approaches for describing the Earth's landscape, in practice there is considerable overlap between these concepts [[5]](references.html#ref-Comber2008b). When classifying remote sensing images, natural areas are classified using land cover types (e.g., forest), while human-modified areas are described with land use classes (e.g., pasture).

One of the advantages of using image time series for land classification is its capacity to measure changes in the landscape related to agricultural practices. For example, the time series of a vegetation index in an area of crop production will show a pattern of minima (planting and sowing stages) and maxima (flowering stage). Thus, classification schemas based on image time series data can be richer and more detailed than those associated only with land cover. In what follows, we use the term "land classification" to refer to image classification representing both land cover and land use classes.

How `sits` works
----------------

The `sits` package uses satellite image time series for land classification, using a *time-first, space-later* approach. In the data preparation part, collections of big Earth observation images are organized as data cubes. Each spatial location of a data cube is associated with a time series. Locations with known labels train a machine learning algorithm, which classifies all time series of a data cube, as shown in Figure [1](introduction.html#fig:gview).

Figure 1: Using time series for land classification (source: authors).

The package provides tools for analysis, visualization, and classification of satellite image time series. Users follow a typical workflow for a pixel-based classification:

1. Select an analysis-ready data image collection from a cloud provider such as AWS, Microsoft Planetary Computer, Digital Earth Africa, or Brazil Data Cube.
2. Build a regular data cube using the chosen image collection.
3. Obtain new bands and indices with operations on data cubes.
4. Extract time series samples from the data cube to be used as training data.
5. Perform quality control and filtering on the time series samples.
6. Train a machine learning model using the time series samples.
7. Classify the data cube using the model to get class probabilities for each pixel.
8. Post-process the probability cube to remove outliers.
9. Produce a labeled map from the post-processed probability cube.
10. Evaluate the accuracy of the classification using best practices.

Each workflow step corresponds to a function of the `sits` API, as shown in the Table below and Figure [2](introduction.html#fig:api). These functions have convenient default parameters and behaviors. A single function builds machine learning (ML) models. The classification function processes big data cubes with efficient parallel processing. Since the `sits` API is simple to learn, achieving good results does not require in-depth knowledge about machine learning and parallel processing.

Table 1: The sits API workflow for land classification.
| API function | Inputs | Output |
| --- | --- | --- |
| sits_cube() | ARD image collection | Irregular data cube |
| sits_regularize() | Irregular data cube | Regular data cube |
| sits_apply() | Regular data cube | Regular data cube with new bands and indices |
| sits_get_data() | Data cube and sample locations | Time series |
| sits_train() | Time series and ML method | ML classification model |
| sits_classify() | ML classification model and regular data cube | Probability cube |
| sits_smooth() | Probability cube | Post-processed probability cube |
| sits_uncertainty() | Post-processed probability cube | Uncertainty cube |
| sits_label_classification() | Post-processed probability cube | Classified map |
| sits_accuracy() | Classified map and validation samples | Accuracy assessment |

Figure 2: Main functions of the sits API (source: authors).

Additionally, experts can perform object-based image analysis (OBIA) with `sits`. In this case, before classifying the time series, one can use `sits_segments()` to create a set of closed polygons. These polygons are classified using a subset of the time series contained inside each segment. For details, see Chapter [Object-based time series image analysis](https://e-sensing.github.io/sitsbook/object-based-time-series-image-analysis.html).

Creating a data cube
--------------------

There are two kinds of data cubes in `sits`: (a) irregular data cubes generated by selecting image collections on cloud providers such as AWS and Planetary Computer; (b) regular data cubes with images fully covering a chosen area, where each image has the same spectral bands and spatial resolution, and images follow a set of adjacent and regular time intervals. Machine learning applications need regular data cubes. Please refer to Chapter [Earth observation data cubes](https://e-sensing.github.io/sitsbook/earth-observation-data-cubes.html) for further details.

The first steps in using `sits` are: (a) select an analysis-ready data image collection available in a cloud provider or stored locally using `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)`; (b) if the collection is not regular, use `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)` to build a regular data cube.

This section shows how to build a data cube from local images already organized as a regular data cube. The data cube is composed of MODIS MOD13Q1 images for the region close to the city of Sinop in Mato Grosso, Brazil. This region is one of the world's largest producers of soybeans. All images have indexes NDVI and EVI covering a one-year period from 2013-09-14 to 2014-08-29 (we use "year-month-day" for dates). There are 23 time instances, each covering a 16-day period. This data is available in the package `sitsdata`.

To build a data cube from local files, users must provide information about the original source from which the data was obtained. In this case, `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` needs the parameters:

1. `source`, the cloud provider from where the data has been obtained (in this case, the Brazil Data Cube "BDC");
2. `collection`, the collection of the cloud provider from where the images have been extracted. In this case, data comes from the MOD13Q1 collection 6;
3. `data_dir`, the local directory where the image files are stored;
4. `parse_info`, a vector of strings stating how file names store information on "tile", "band", and "date".
In this case, local images are stored in files whose names are similar to `TERRA_MODIS_012010_EVI_2014-07-28.tif`. This file represents an image obtained by the MODIS sensor onboard the TERRA satellite, covering part of tile 012010 in the EVI band for date 2014-07-28.

```
# load package "tibble"
library(tibble)
# load packages "sits" and "sitsdata"
library(sits)
library(sitsdata)
# Create a data cube using local files
sinop_cube <- sits_cube(
  source = "BDC",
  collection = "MOD13Q1-6.1",
  bands = c("NDVI", "EVI"),
  data_dir = system.file("extdata/sinop", package = "sitsdata"),
  parse_info = c("satellite", "sensor", "tile", "band", "date")
)
# Plot the NDVI for the first date (2013-09-14)
plot(sinop_cube,
  band = "NDVI",
  dates = "2013-09-14",
  palette = "RdYlGn"
)
```

Figure 3: False color MODIS image for NDVI band in 2013-09-14 from sinop data cube (source: Brazil Data Cube).

The aim of the `parse_info` parameter is to extract `tile`, `band`, and `date` information from the file name. Given the large variation in image file names generated by different producers, it includes designators such as `X1` and `X2`; these are placeholders for parts of the file name that are not relevant to `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)`.

The R object returned by `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` contains the metadata describing the contents of the data cube. It includes data source and collection, satellite, sensor, tile in the collection, bounding box, projection, and list of files. Each file refers to one band of an image at one of the temporal instances of the cube.

```
# Show the description of the data cube
sinop_cube
```

```
#> # A tibble: 1 × 11
#>   source collection satellite sensor tile     xmin    xmax    ymin    ymax crs
#>   <chr>  <chr>      <chr>     <chr>  <chr>   <dbl>   <dbl>   <dbl>   <dbl> <chr>
#> 1 BDC    MOD13Q1-6… TERRA     MODIS  0120… -6.18e6 -5.96e6 -1.35e6 -1.23e6 "PRO…
#> # ℹ 1 more variable: file_info <list>
```

The list of image files which make up the data cube is stored as a data frame in the column `file_info`. For each file, `sits` stores information about spectral band, reference date, size, spatial resolution, coordinate reference system, bounding box, path to file location, and cloud cover information (when available).

```
# Show information on the image files which are part of the data cube
sinop_cube$file_info[[1]]
```

```
#> # A tibble: 46 × 13
#>    fid   band  date       nrows ncols  xres  yres      xmin      ymin      xmax
#>    <chr> <chr> <date>     <dbl> <dbl> <dbl> <dbl>     <dbl>     <dbl>     <dbl>
#>  1 1     EVI   2013-09-14   551   944  232.  232. -6181982. -1353336. -5963298.
#>  2 1     NDVI  2013-09-14   551   944  232.  232. -6181982. -1353336. -5963298.
#>  3 2     EVI   2013-09-30   551   944  232.  232. -6181982. -1353336. -5963298.
#>  4 2     NDVI  2013-09-30   551   944  232.  232. -6181982. -1353336. -5963298.
#>  5 3     EVI   2013-10-16   551   944  232.  232. -6181982. -1353336. -5963298.
#>  6 3     NDVI  2013-10-16   551   944  232.  232. -6181982. -1353336. -5963298.
#>  7 4     EVI   2013-11-01   551   944  232.  232. -6181982. -1353336. -5963298.
#>  8 4     NDVI  2013-11-01   551   944  232.  232. -6181982. -1353336. -5963298.
#>  9 5     EVI   2013-11-17   551   944  232.  232. -6181982. -1353336. -5963298.
#> 10 5     NDVI  2013-11-17   551   944  232.  232. -6181982. -1353336. -5963298.
#> # ℹ 36 more rows
#> # ℹ 3 more variables: ymax <dbl>, crs <chr>, path <chr>
```

A key attribute of a data cube is its timeline, as shown below. The command `[sits_timeline()](https://rdrr.io/pkg/sits/man/sits_timeline.html)` lists the temporal references associated with `sits` objects, including samples, data cubes, and models.

```
# Show the timeline of the data cube
sits_timeline(sinop_cube)
```

```
#>  [1] "2013-09-14" "2013-09-30" "2013-10-16" "2013-11-01" "2013-11-17"
#>  [6] "2013-12-03" "2013-12-19" "2014-01-01" "2014-01-17" "2014-02-02"
#> [11] "2014-02-18" "2014-03-06" "2014-03-22" "2014-04-07" "2014-04-23"
#> [16] "2014-05-09" "2014-05-25" "2014-06-10" "2014-06-26" "2014-07-12"
#> [21] "2014-07-28" "2014-08-13" "2014-08-29"
```

The timeline of the `sinop_cube` data cube has 23 intervals with a temporal difference of 16 days. The chosen dates capture the agricultural calendar in Mato Grosso, Brazil. The agricultural year starts in September-October with the sowing of the summer crop (usually soybeans), which is harvested in February-March. Then the winter crop (mostly corn, cotton, or millet) is planted in March and harvested in June-July. For LULC classification, the training samples and the data cube should share a timeline with the same number of intervals and similar start and end dates.

The time series tibble
----------------------

To handle time series information, `sits` uses a `tibble`. Tibbles are extensions of the `data.frame` tabular data structures provided by the `tidyverse` set of packages. The example below shows a tibble with 1,837 time series obtained from MODIS MOD13Q1 images. Each series has four attributes: two bands (NIR and MIR) and two indexes (NDVI and EVI). This dataset is available in the package `sitsdata`.

The time series tibble contains data and metadata. The first six columns contain the metadata: spatial and temporal information, the label assigned to the sample, and the data cube from where the data has been extracted. The `time_series` column contains the time series data for each spatiotemporal location. This data is also organized as a tibble, with a column with the dates and the other columns with the values for each spectral band.
```
# Load the MODIS samples for Mato Grosso from the "sitsdata" package
library(tibble)
library(sitsdata)
data("samples_matogrosso_mod13q1", package = "sitsdata")
samples_matogrosso_mod13q1
```

```
#> # A tibble: 1,837 × 7
#>    longitude latitude start_date end_date   label   cube     time_series
#>        <dbl>    <dbl> <date>     <date>     <chr>   <chr>    <list>
#>  1     -57.8    -9.76 2006-09-14 2007-08-29 Pasture bdc_cube <tibble [23 × 5]>
#>  2     -59.4    -9.31 2014-09-14 2015-08-29 Pasture bdc_cube <tibble [23 × 5]>
#>  3     -59.4    -9.31 2013-09-14 2014-08-29 Pasture bdc_cube <tibble [23 × 5]>
#>  4     -57.8    -9.76 2006-09-14 2007-08-29 Pasture bdc_cube <tibble [23 × 5]>
#>  5     -55.2    -10.8 2013-09-14 2014-08-29 Pasture bdc_cube <tibble [23 × 5]>
#>  6     -51.9    -13.4 2014-09-14 2015-08-29 Pasture bdc_cube <tibble [23 × 5]>
#>  7     -56.0    -10.1 2005-09-14 2006-08-29 Pasture bdc_cube <tibble [23 × 5]>
#>  8     -54.6    -10.4 2013-09-14 2014-08-29 Pasture bdc_cube <tibble [23 × 5]>
#>  9     -52.5    -11.0 2013-09-14 2014-08-29 Pasture bdc_cube <tibble [23 × 5]>
#> 10     -52.1    -14.0 2013-09-14 2014-08-29 Pasture bdc_cube <tibble [23 × 5]>
#> # ℹ 1,827 more rows
```

The timeline for all time series associated with the samples follows the same agricultural calendar, starting on September 14th and ending on August 29th. All samples contain 23 values, corresponding to the same temporal interval as that of the `sinop` data cube. Notice that although the years for the samples are different, the samples for a given year follow the same agricultural calendar.

The time series can be displayed by showing the `time_series` column.

```
# Load the time series for MODIS samples for Mato Grosso
samples_matogrosso_mod13q1[1, ]$time_series[[1]]
```

```
#> # A tibble: 23 × 5
#>    Index       NDVI   EVI   NIR    MIR
#>    <date>     <dbl> <dbl> <dbl>  <dbl>
#>  1 2006-09-14 0.500 0.263 0.230 0.139
#>  2 2006-09-30 0.485 0.330 0.359 0.161
#>  3 2006-10-16 0.716 0.397 0.264 0.0757
#>  4 2006-11-01 0.654 0.415 0.332 0.124
#>  5 2006-11-17 0.591 0.433 0.400 0.172
#>  6 2006-12-03 0.662 0.439 0.348 0.125
#>  7 2006-12-19 0.734 0.444 0.295 0.0784
#>  8 2007-01-01 0.739 0.502 0.348 0.0887
#>  9 2007-01-17 0.768 0.526 0.351 0.0761
#> 10 2007-02-02 0.797 0.550 0.355 0.0634
#> # ℹ 13 more rows
```

The distribution of samples per class can be obtained using the `[summary()](https://rdrr.io/r/base/summary.html)` command. The classification schema uses seven labels: four associated with crops (`Soy_Corn`, `Soy_Cotton`, `Soy_Fallow`, `Soy_Millet`), two with natural vegetation (`Cerrado`, `Forest`), and one with `Pasture`.

```
# Summarize the distribution of samples per label
summary(samples_matogrosso_mod13q1)
```

```
#> # A tibble: 7 × 3
#>   label      count   prop
#>   <chr>      <int>  <dbl>
#> 1 Cerrado      379 0.206
#> 2 Forest       131 0.0713
#> 3 Pasture      344 0.187
#> 4 Soy_Corn     364 0.198
#> 5 Soy_Cotton   352 0.192
#> 6 Soy_Fallow    87 0.0474
#> 7 Soy_Millet   180 0.0980
```

It is helpful to plot the dispersion of the time series. In what follows, for brevity, we will filter only one label (`Forest`) and select one index (NDVI). Note that for filtering the label we use a function from the `dplyr` package, while for selecting the index we use `[sits_select()](https://rdrr.io/pkg/sits/man/sits_select.html)`. We use two different functions because of the way metadata is stored in the samples tibble.
The labels for the samples are listed in the column `label` of the samples tibble, as shown above. In this case, one can use functions from the `dplyr` package to extract subsets. In particular, the function `[dplyr::filter](https://dplyr.tidyverse.org/reference/filter.html)` retains all rows that satisfy a given condition. In the example below, the result of `[dplyr::filter](https://dplyr.tidyverse.org/reference/filter.html)` is the set of samples associated with the "Forest" label. The second selection involves obtaining only the values for the NDVI band. This operation requires access to the `time_series` column, which is stored as a list. In this case, selection with `[dplyr::filter](https://dplyr.tidyverse.org/reference/filter.html)` will not work. To handle such cases, `sits` provides `[sits_select()](https://rdrr.io/pkg/sits/man/sits_select.html)` to select subsets inside the `time_series` list.

```
# select all samples with label "Forest"
samples_forest <- dplyr::filter(
  samples_matogrosso_mod13q1,
  label == "Forest"
)
# select the NDVI band for all samples with label "Forest"
samples_forest_ndvi <- sits_select(
  samples_forest,
  band = "NDVI"
)
plot(samples_forest_ndvi)
```

Figure 4: Joint plot of all samples in band NDVI for label Forest (source: authors).

The above figure shows all the time series associated with label `Forest` and band NDVI (in light blue), highlighting the median (shown in dark red) and the first and third quartiles (shown in brown). The spikes are noise caused by the presence of clouds.

Training a machine learning model
---------------------------------

The next step is to train a machine learning (ML) model using `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)`. It takes two inputs, `samples` (a time series tibble) and `ml_method` (a function that implements a machine learning algorithm). The result is a model that is used for classification. Each ML algorithm requires specific parameters that are user-controllable. For novice users, `sits` provides default parameters that produce good results. Please see Chapter [Machine learning for data cubes](https://e-sensing.github.io/sitsbook/machine-learning-for-data-cubes.html) for more details.

Since the time series data has four attributes (EVI, NDVI, NIR, and MIR) and the data cube images have only two, we select the NDVI and EVI values and use the resulting data for training. To build the classification model, we use a random forest model called by `[sits_rfor()](https://rdrr.io/pkg/sits/man/sits_rfor.html)`. Results from the random forest model can vary between different runs due to the stochastic nature of the algorithm. For this reason, in the code fragment below, we set the seed of R's pseudo-random number generator explicitly to ensure that the same results are produced for documentation purposes.
```
set.seed(03022024)
# Select the bands NDVI and EVI
samples_2bands <- sits_select(
  data = samples_matogrosso_mod13q1,
  bands = c("NDVI", "EVI")
)
# Train a random forest model
rf_model <- sits_train(
  samples = samples_2bands,
  ml_method = sits_rfor()
)
# Plot the most important variables of the model
plot(rf_model)
```

Figure 5: Most relevant variables of trained random forest model (source: authors).

Data cube classification
------------------------

After training the machine learning model, the next step is to classify the data cube using `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`. This function produces a set of raster probability maps, one for each class. For each of these maps, the value of a pixel is proportional to the probability that it belongs to the class. This function has two mandatory parameters: `data`, the data cube or time series tibble to be classified; and `ml_model`, the trained ML model. Optional parameters include: (a) `multicores`, number of cores to be used; (b) `memsize`, RAM used in the classification; (c) `output_dir`, the directory where the classified raster files will be written. Details of the classification process are available in "Image classification in data cubes".

```
# Classify the raster image
sinop_probs <- sits_classify(
  data = sinop_cube,
  ml_model = rf_model,
  multicores = 2,
  memsize = 8,
  output_dir = "./tempdir/chp3"
)
# Plot the probability cube for class Forest
plot(sinop_probs, labels = "Forest", palette = "BuGn")
```

Figure 6: Probability map for class Forest (source: authors).

After completing the classification, we plot the probability maps for class `Forest`. Probability maps are helpful to visualize the degree of confidence the classifier assigns to the labels for each pixel. They can be used to produce uncertainty information and support active learning, as described in Chapter [Image classification in data cubes](https://e-sensing.github.io/sitsbook/image-classification-in-data-cubes.html).

Spatial smoothing
-----------------

When working with big Earth observation data, there is much variability in each class. As a result, some pixels will be misclassified. These errors are more likely to occur in transition areas between classes. To address these problems, `[sits_smooth()](https://rdrr.io/pkg/sits/man/sits_smooth.html)` takes a probability cube as input and uses the class probabilities of each pixel's neighborhood to reduce labeling uncertainty. Plotting the smoothed probability map for class Forest shows that most outliers have been removed.

```
# Perform spatial smoothing
sinop_bayes <- sits_smooth(
  cube = sinop_probs,
  multicores = 2,
  memsize = 8,
  output_dir = "./tempdir/chp3"
)
plot(sinop_bayes, labels = "Forest", palette = "BuGn")
```

Figure 7: Smoothed probability map for class Forest (source: authors).
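Table 1 also lists `sits_uncertainty()`, which turns a post-processed probability cube into an uncertainty cube; this can help to locate areas where the classifier is least confident. The sketch below is only a suggestion and assumes the function follows the same `cube`/`output_dir` calling convention as `sits_smooth()` above; please check the package documentation for the exact arguments.

```
# Estimate per-pixel uncertainty from the smoothed probability cube
# (argument names assumed to follow the convention of sits_smooth();
#  see ?sits_uncertainty for the exact interface)
sinop_uncert <- sits_uncertainty(
  cube = sinop_bayes,
  output_dir = "./tempdir/chp3"
)
plot(sinop_uncert)
```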
Labeling a probability data cube
--------------------------------

After removing outliers using local smoothing, the final classification map can be obtained using `[sits_label_classification()](https://rdrr.io/pkg/sits/man/sits_label_classification.html)`. This function assigns each pixel to the class with the highest probability.

```
# Label the probability file
sinop_map <- sits_label_classification(
  cube = sinop_bayes,
  output_dir = "./tempdir/chp3"
)
plot(sinop_map)
```

Figure 8: Classification map for Sinop (source: authors).

The resulting classification files can be read by QGIS. Links to the associated files are available in the `sinop_map` object in the nested table `file_info`.

```
# Show the location of the classification file
sinop_map$file_info[[1]]
```

```
#> # A tibble: 1 × 12
#>   band  start_date end_date   ncols nrows  xres  yres      xmin    xmax    ymin
#>   <chr> <date>     <date>     <dbl> <dbl> <dbl> <dbl>     <dbl>   <dbl>   <dbl>
#> 1 class 2013-09-14 2014-08-29   944   551  232.  232. -6181982. -5.96e6 -1.35e6
#> # ℹ 2 more variables: ymax <dbl>, path <chr>
```
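Since the `file_info` table includes a `path` column, the location of the classified GeoTIFF can also be retrieved programmatically, for example to open it in QGIS or to pass it to another tool. A minimal sketch, using the `sinop_map` object created above:

```
# Retrieve the path to the classified raster from the nested file_info table
class_file <- sinop_map$file_info[[1]]$path
class_file
```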
Time\-first, space\-later ------------------------- “Time\-first, space\-later” is a concept in satellite image classification that takes time series analysis as the first step for analyzing remote sensing data, with spatial information being considered after all time series are classified. The *time\-first* part brings a better understanding of changes in landscapes. Detecting and tracking seasonal and long\-term trends becomes feasible, as well as identifying anomalous events or patterns in the data, such as wildfires, floods, or droughts. Each pixel in a data cube is treated as a time series, using information available in the temporal instances of the case. Time series classification is pixel\-based, producing a set of labeled pixels. This result is then used as input to the *space\-later* part of the method. In this phase, a smoothing algorithm improves the results of time\-first classification by considering the spatial neighborhood of each pixel. The resulting map thus combines both spatial and temporal information. Land use and land cover ----------------------- The UN Food and Agriculture Organization defines land cover as “the observed biophysical cover on the Earth’s surface” [\[4]](references.html#ref-DiGregorio2016). Land cover can be observed and mapped directly through remote sensing images. In FAO’s guidelines and reports, land use is described as “the human activities or purposes for which land is managed or exploited”. Although *land cover* and *land use* denote different approaches for describing the Earth’s landscape, in practice there is considerable overlap between these concepts [\[5]](references.html#ref-Comber2008b). When classifying remote sensing images, natural areas are classified using land cover types (e.g, forest), while human\-modified areas are described with land use classes (e.g., pasture). One of the advantages of using image time series for land classification is its capacity of measuring changes in the landscape related to agricultural practices. For example, the time series of a vegetation index in an area of crop production will show a pattern of minima (planting and sowing stages) and maxima (flowering stage). Thus, classification schemas based on image time series data can be richer and more detailed than those associated only with land cover. In what follows, we use the term “land classification” to refer to image classification representing both land cover and land use classes. How `sits` works ---------------- The `sits` package uses satellite image time series for land classification, using a *time\-first, space\-later* approach. In the data preparation part, collections of big Earth observation images are organized as data cubes. Each spatial location of a data cube is associated with a time series. Locations with known labels train a machine learning algorithm, which classifies all time series of a data cube, as shown in Figure [1](introduction.html#fig:gview). Figure 1: Using time series for land classification (source: authors). The package provides tools for analysis, visualization, and classification of satellite image time series. Users follow a typical workflow for a pixel\-based classification: 1. Select an analysis\-ready data image collection from a cloud provider such as AWS, Microsoft Planetary Computer, Digital Earth Africa, or Brazil Data Cube. 2. Build a regular data cube using the chosen image collection. 3. Obtain new bands and indices with operations on data cubes. 4. 
Extract time series samples from the data cube to be used as training data. 5. Perform quality control and filtering on the time series samples. 6. Train a machine learning model using the time series samples. 7. Classify the data cube using the model to get class probabilities for each pixel. 8. Post\-process the probability cube to remove outliers. 9. Produce a labeled map from the post\-processed probability cube. 10. Evaluate the accuracy of the classification using best practices. Each workflow step corresponds to a function of the `sits` API, as shown in the Table below and Figure [2](introduction.html#fig:api). These functions have convenient default parameters and behaviors. A single function builds machine learning (ML) models. The classification function processes big data cubes with efficient parallel processing. Since the `sits` API is simple to learn, achieving good results do not require in\-depth knowledge about machine learning and parallel processing. Table 1: The sits API workflow for land classification. | API\_function | Inputs | Output | | --- | --- | --- | | sits\_cube() | ARD image collection | Irregular data cube | | sits\_regularize() | Irregular data cube | Regular data cube | | sits\_apply() | Regular data cube | Regular data cube with new bands and indices | | sits\_get\_data() | Data cube and sample locations | Time series | | sits\_train() | Time series and ML method | ML classification model | | sits\_classify() | ML classification model and regular data cube | Probability cube | | sits\_smooth() | Probability cube | Post\-processed probability cube | | sits\_uncertainty() | Post\-processed probability cube | Uncertainty cube | | sits\_label\_classification() | Post\-processed probability cube | Classified map | | sits\_accuracy() | Classified map and validation samples | Accuracy assessment | Figure 2: Main functions of the sits API (source: authors). Additionally, experts can perform object\-based image analysis (OBIA) with `sits`. In this case, before classifying the time series, one can use `sits_segments()` to create a set of closed polygons. These polygons are classified using a subset of the time series contained inside each segment. For details, see Chapter [Object\-based time series image analysis](https://e-sensing.github.io/sitsbook/object-based-time-series-image-analysis.html). Creating a data cube -------------------- There are two kinds of data cubes in `sits`: (a) irregular data cubes generated by selecting image collections on cloud providers such as AWS and Planetary Computer; (b) regular data cubes with images fully covering a chosen area, where each image has the same spectral bands and spatial resolution, and images follow a set of adjacent and regular time intervals. Machine learning applications need regular data cubes. Please refer to Chapter [Earth observation data cubes](https://e-sensing.github.io/sitsbook/earth-observation-data-cubes.html) for further details. The first steps in using `sits` are: (a) select an analysis\-ready data image collection available in a cloud provider or stored locally using `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)`; (b) if the collection is not regular, use `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)` to build a regular data cube. This section shows how to build a data cube from local images already organized as a regular data cube. The data cube is composed of MODIS MOD13Q1 images for the region close to the city of Sinop in Mato Grosso, Brazil. 
This region is one of the world’s largest producers of soybeans. All images have indexes NDVI and EVI covering a one\-year period from 2013\-09\-14 to 2014\-08\-29 (we use “year\-month\-day” for dates). There are 23 time instances, each covering a 16\-day period. This data is available in the package `sitsdata`. To build a data cube from local files, users must provide information about the original source from which the data was obtained. In this case, `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` needs the parameters: 1. `source`, the cloud provider from where the data has been obtained (in this case, the Brazil Data Cube “BDC”); 2. `collection`, the collection of the cloud provider from where the images have been extracted. In this case, data comes from the MOD13Q1 collection 6; 3. `data_dir`, the local directory where the image files are stored; 4. `parse_info`, a vector of strings stating how file names store information on “tile”, “band”, and “date”. In this case, local images are stored in files whose names are similar to `TERRA_MODIS_012010_EVI_2014-07-28.tif`. This file represents an image obtained by the MODIS sensor onboard the TERRA satellite, covering part of tile 012010 in the EVI band for date 2014\-07\-28\. ``` # load package "tibble" [library](https://rdrr.io/r/base/library.html)([tibble](https://tibble.tidyverse.org/)) # load packages "sits" and "sitsdata" [library](https://rdrr.io/r/base/library.html)([sits](https://github.com/e-sensing/sits/)) [library](https://rdrr.io/r/base/library.html)([sitsdata](https://github.com/e-sensing/sitsdata/)) # Create a data cube using local files sinop_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "BDC", collection = "MOD13Q1-6.1", bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI"), data_dir = [system.file](https://rdrr.io/r/base/system.file.html)("extdata/sinop", package = "sitsdata"), parse_info = [c](https://rdrr.io/r/base/c.html)("satellite", "sensor", "tile", "band", "date") ) # Plot the NDVI for the first date (2013-09-14) [plot](https://rdrr.io/r/graphics/plot.default.html)(sinop_cube, band = "NDVI", dates = "2013-09-14", palette = "RdYlGn" ) ``` Figure 3: False color MODIS image for NDVI band in 2013\-09\-14 from sinop data cube (source: Brazil Data Cube). The aim of the `parse_info` parameter is to extract `tile`, `band`, and `date` information from the file name. Given the large variation in image file names generated by different produces, it includes designators such as `X1` and `X2`; these are place holders for parts of the file name that is not relevant to `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)`. The R object returned by `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` contains the metadata describing the contents of the data cube. It includes data source and collection, satellite, sensor, tile in the collection, bounding box, projection, and list of files. Each file refers to one band of an image at one of the temporal instances of the cube. ``` # Show the description of the data cube sinop_cube ``` ``` #> # A tibble: 1 × 11 #> source collection satellite sensor tile xmin xmax ymin ymax crs #> <chr> <chr> <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <chr> #> 1 BDC MOD13Q1-6… TERRA MODIS 0120… -6.18e6 -5.96e6 -1.35e6 -1.23e6 "PRO… #> # ℹ 1 more variable: file_info <list> ``` The list of image files which make up the data cube is stored as a data frame in the column `file_info`. 
For each file, `sits` stores information about spectral band, reference date, size, spatial resolution, coordinate reference system, bounding box, path to file location and cloud cover information (when available). ``` # Show information on the images files which are part of a data cube sinop_cube$file_info[[1]] ``` ``` #> # A tibble: 46 × 13 #> fid band date nrows ncols xres yres xmin ymin xmax #> <chr> <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> #> 1 1 EVI 2013-09-14 551 944 232. 232. -6181982. -1353336. -5963298. #> 2 1 NDVI 2013-09-14 551 944 232. 232. -6181982. -1353336. -5963298. #> 3 2 EVI 2013-09-30 551 944 232. 232. -6181982. -1353336. -5963298. #> 4 2 NDVI 2013-09-30 551 944 232. 232. -6181982. -1353336. -5963298. #> 5 3 EVI 2013-10-16 551 944 232. 232. -6181982. -1353336. -5963298. #> 6 3 NDVI 2013-10-16 551 944 232. 232. -6181982. -1353336. -5963298. #> 7 4 EVI 2013-11-01 551 944 232. 232. -6181982. -1353336. -5963298. #> 8 4 NDVI 2013-11-01 551 944 232. 232. -6181982. -1353336. -5963298. #> 9 5 EVI 2013-11-17 551 944 232. 232. -6181982. -1353336. -5963298. #> 10 5 NDVI 2013-11-17 551 944 232. 232. -6181982. -1353336. -5963298. #> # ℹ 36 more rows #> # ℹ 3 more variables: ymax <dbl>, crs <chr>, path <chr> ``` A key attribute of a data cube is its timeline, as shown below. The command `[sits_timeline()](https://rdrr.io/pkg/sits/man/sits_timeline.html)` lists the temporal references associated to `sits` objects, including samples, data cubes and models. ``` # Show the R object that describes the data cube [sits_timeline](https://rdrr.io/pkg/sits/man/sits_timeline.html)(sinop_cube) ``` ``` #> [1] "2013-09-14" "2013-09-30" "2013-10-16" "2013-11-01" "2013-11-17" #> [6] "2013-12-03" "2013-12-19" "2014-01-01" "2014-01-17" "2014-02-02" #> [11] "2014-02-18" "2014-03-06" "2014-03-22" "2014-04-07" "2014-04-23" #> [16] "2014-05-09" "2014-05-25" "2014-06-10" "2014-06-26" "2014-07-12" #> [21] "2014-07-28" "2014-08-13" "2014-08-29" ``` The timeline of the `sinop_cube` data cube has 23 intervals with a temporal difference of 16 days. The chosen dates capture the agricultural calendar in Mato Grosso, Brazil. The agricultural year starts in September\-October with the sowing of the summer crop (usually soybeans) which is harvested in February\-March. Then the winter crop (mostly Corn, Cotton or Millet) is planted in March and harvested in June\-July. For LULC classification, the training samples and the date cube should share a timeline with the same number of intervals and similar start and end dates. The time series tibble ---------------------- To handle time series information, `sits` uses a `tibble`. Tibbles are extensions of the `data.frame` tabular data structures provided by the `tidyverse` set of packages. The example below shows a tibble with 1,837 time series obtained from MODIS MOD13Q1 images. Each series has four attributes: two bands (NIR and MIR) and two indexes (NDVI and EVI). This dataset is available in package `sitsdata`. The time series tibble contains data and metadata. The first six columns contain the metadata: spatial and temporal information, the label assigned to the sample, and the data cube from where the data has been extracted. The `time_series` column contains the time series data for each spatiotemporal location. This data is also organized as a tibble, with a column with the dates and the other columns with the values for each spectral band. 
``` # Load the MODIS samples for Mato Grosso from the "sitsdata" package [library](https://rdrr.io/r/base/library.html)([tibble](https://tibble.tidyverse.org/)) [library](https://rdrr.io/r/base/library.html)([sitsdata](https://github.com/e-sensing/sitsdata/)) [data](https://rdrr.io/r/utils/data.html)("samples_matogrosso_mod13q1", package = "sitsdata") samples_matogrosso_mod13q1 ``` ``` #> # A tibble: 1,837 × 7 #> longitude latitude start_date end_date label cube time_series #> <dbl> <dbl> <date> <date> <chr> <chr> <list> #> 1 -57.8 -9.76 2006-09-14 2007-08-29 Pasture bdc_cube <tibble [23 × 5]> #> 2 -59.4 -9.31 2014-09-14 2015-08-29 Pasture bdc_cube <tibble [23 × 5]> #> 3 -59.4 -9.31 2013-09-14 2014-08-29 Pasture bdc_cube <tibble [23 × 5]> #> 4 -57.8 -9.76 2006-09-14 2007-08-29 Pasture bdc_cube <tibble [23 × 5]> #> 5 -55.2 -10.8 2013-09-14 2014-08-29 Pasture bdc_cube <tibble [23 × 5]> #> 6 -51.9 -13.4 2014-09-14 2015-08-29 Pasture bdc_cube <tibble [23 × 5]> #> 7 -56.0 -10.1 2005-09-14 2006-08-29 Pasture bdc_cube <tibble [23 × 5]> #> 8 -54.6 -10.4 2013-09-14 2014-08-29 Pasture bdc_cube <tibble [23 × 5]> #> 9 -52.5 -11.0 2013-09-14 2014-08-29 Pasture bdc_cube <tibble [23 × 5]> #> 10 -52.1 -14.0 2013-09-14 2014-08-29 Pasture bdc_cube <tibble [23 × 5]> #> # ℹ 1,827 more rows ``` The timeline for all time series associated with the samples follows the same agricultural calendar, starting in September 14th and ending in August 28th. All samples contain 23 values, corresponding to the same temporal interval as those of the `sinop` data cube. Notice that that although the years for the samples are different, the samples for a given year follow the same agricultural calendar. The time series can be displayed by showing the `time_series` column. ``` # Load the time series for MODIS samples for Mato Grosso samples_matogrosso_mod13q1[1, ]$time_series[[1]] ``` ``` #> # A tibble: 23 × 5 #> Index NDVI EVI NIR MIR #> <date> <dbl> <dbl> <dbl> <dbl> #> 1 2006-09-14 0.500 0.263 0.230 0.139 #> 2 2006-09-30 0.485 0.330 0.359 0.161 #> 3 2006-10-16 0.716 0.397 0.264 0.0757 #> 4 2006-11-01 0.654 0.415 0.332 0.124 #> 5 2006-11-17 0.591 0.433 0.400 0.172 #> 6 2006-12-03 0.662 0.439 0.348 0.125 #> 7 2006-12-19 0.734 0.444 0.295 0.0784 #> 8 2007-01-01 0.739 0.502 0.348 0.0887 #> 9 2007-01-17 0.768 0.526 0.351 0.0761 #> 10 2007-02-02 0.797 0.550 0.355 0.0634 #> # ℹ 13 more rows ``` The distribution of samples per class can be obtained using the `[summary()](https://rdrr.io/r/base/summary.html)` command. The classification schema uses nine labels, four associated to crops (`Soy_Corn`, `Soy_Cotton`, `Soy_Fallow`, `Soy_Millet`), two with natural vegetation (`Cerrado`, `Forest`) and one to `Pasture`. ``` # Load the MODIS samples for Mato Grosso from the "sitsdata" package [summary](https://rdrr.io/r/base/summary.html)(samples_matogrosso_mod13q1) ``` ``` #> # A tibble: 7 × 3 #> label count prop #> <chr> <int> <dbl> #> 1 Cerrado 379 0.206 #> 2 Forest 131 0.0713 #> 3 Pasture 344 0.187 #> 4 Soy_Corn 364 0.198 #> 5 Soy_Cotton 352 0.192 #> 6 Soy_Fallow 87 0.0474 #> 7 Soy_Millet 180 0.0980 ``` It is helpful to plot the dispersion of the time series. In what follows, for brevity, we will filter only one label (`Forest`) and select one index (NDVI). Note that for filtering the label we use a function from `dplyr` package, while for selecting the index we use `[sits_select()](https://rdrr.io/pkg/sits/man/sits_select.html)`. We use two different functions for selection because of they way metadata is stored in a samples files. 
The labels for the samples are listed in column `label` in the samples tibble, as shown above. In this case, one can use functions from the `dplyr` package to extract subsets. In particular, the function `[dplyr::filter](https://dplyr.tidyverse.org/reference/filter.html)` retains all rows that satisfy a given condition. In the above example, the result of `[dplyr::filter](https://dplyr.tidyverse.org/reference/filter.html)` is the set of samples associated with the “Forest” label. The second selection involves obtaining only the values for the NDVI band. This operation requires access to the `time_series` column, which is stored as a list. In this case, selection with `[dplyr::filter](https://dplyr.tidyverse.org/reference/filter.html)` will not work. To handle such cases, `sits` provides `[sits_select()](https://rdrr.io/pkg/sits/man/sits_select.html)` to select subsets inside the `time_series` list. ``` # select all samples with label "Forest" samples_forest <- dplyr::[filter](https://dplyr.tidyverse.org/reference/filter.html)( samples_matogrosso_mod13q1, label == "Forest" ) # select the NDVI band for all samples with label "Forest" samples_forest_ndvi <- [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)( samples_forest, band = "NDVI" ) [plot](https://rdrr.io/r/graphics/plot.default.html)(samples_forest_ndvi) ``` Figure 4: Joint plot of all samples in band NDVI for label Forest (source: authors). The above figure shows all the time series associated with label `Forest` and band NDVI (in light blue), highlighting the median (shown in dark red) and the first and third quartiles (shown in brown). The spikes are noise caused by the presence of clouds. Training a machine learning model --------------------------------- The next step is to train a machine learning (ML) model using `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)`. It takes two inputs, `samples` (a time series tibble) and `ml_method` (a function that implements a machine learning algorithm). The result is a model that is used for classification. Each ML algorithm requires specific parameters that are user\-controllable. For novice users, `sits` provides default parameters that produce good results. Please see Chapter [Machine learning for data cubes](https://e-sensing.github.io/sitsbook/machine-learning-for-data-cubes.html) for more details. Since the time series data has four attributes (EVI, NDVI, NIR, and MIR) and the data cube images have only two, we select the NDVI and EVI values and use the resulting data for training. To build the classification model, we use a random forest model called by `[sits_rfor()](https://rdrr.io/pkg/sits/man/sits_rfor.html)`. Results from the random forest model can vary between different runs, due to the stochastic nature of the algorithm. For this reason, in the code fragment below, we set the seed of R’s pseudo\-random number generator explicitly to ensure the same results are produced for documentation purposes. 
``` [set.seed](https://rdrr.io/r/base/Random.html)(03022024) # Select the bands NDVI and EVI samples_2bands <- [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)( data = samples_matogrosso_mod13q1, bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI") ) # Train a random forest model rf_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples = samples_2bands, ml_method = [sits_rfor](https://rdrr.io/pkg/sits/man/sits_rfor.html)() ) # Plot the most important variables of the model [plot](https://rdrr.io/r/graphics/plot.default.html)(rf_model) ``` Figure 5: Most relevant variables of trained random forest model (source: authors). Data cube classification ------------------------ After training the machine learning model, the next step is to classify the data cube using `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`. This function produces a set of raster probability maps, one for each class. For each of these maps, the value of a pixel is proportional to the probability that it belongs to the class. This function has two mandatory parameters: `data`, the data cube or time series tibble to be classified; and `ml_model`, the trained ML model. Optional parameters include: (a) `multicores`, number of cores to be used; (b) `memsize`, RAM used in the classification; (c) `output_dir`, the directory where the classified raster files will be written. Details of the classification process are available in “Image classification in data cubes”. ``` # Classify the raster image sinop_probs <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( data = sinop_cube, ml_model = rf_model, multicores = 2, memsize = 8, output_dir = "./tempdir/chp3" ) # Plot the probability cube for class Forest [plot](https://rdrr.io/r/graphics/plot.default.html)(sinop_probs, labels = "Forest", palette = "BuGn") ``` Figure 6: Probability map for class Forest (source: authors). After completing the classification, we plot the probability maps for class `Forest`. Probability maps are helpful to visualize the degree of confidence the classifier assigns to the labels for each pixel. They can be used to produce uncertainty information and support active learning, as described in Chapter [Image classification in data cubes](https://e-sensing.github.io/sitsbook/image-classification-in-data-cubes.html). Spatial smoothing ----------------- When working with big Earth observation data, there is much variability in each class. As a result, some pixels will be misclassified. These errors are more likely to occur in transition areas between classes. To address these problems, `[sits_smooth()](https://rdrr.io/pkg/sits/man/sits_smooth.html)` takes a probability cube as input and uses the class probabilities of each pixel’s neighborhood to reduce labeling uncertainty. Plotting the smoothed probability map for class Forest shows that most outliers have been removed. ``` # Perform spatial smoothing sinop_bayes <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)( cube = sinop_probs, multicores = 2, memsize = 8, output_dir = "./tempdir/chp3" ) [plot](https://rdrr.io/r/graphics/plot.default.html)(sinop_bayes, labels = "Forest", palette = "BuGn") ``` Figure 7: Smoothed probability map for class Forest (source: authors). 
Labeling a probability data cube -------------------------------- After removing outliers using local smoothing, the final classification map can be obtained using `[sits_label_classification()](https://rdrr.io/pkg/sits/man/sits_label_classification.html)`. This function assigns each pixel to the class with the highest probability. ``` # Label the probability file sinop_map <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)( cube = sinop_bayes, output_dir = "./tempdir/chp3" ) [plot](https://rdrr.io/r/graphics/plot.default.html)(sinop_map) ``` Figure 8: Classification map for Sinop (source: authors). The resulting classification files can be read by QGIS. Links to the associated files are available in the `sinop_map` object in the nested table `file_info`. ``` # Show the location of the classification file sinop_map$file_info[[1]] ``` ``` #> # A tibble: 1 × 12 #> band start_date end_date ncols nrows xres yres xmin xmax ymin #> <chr> <date> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> #> 1 class 2013-09-14 2014-08-29 944 551 232. 232. -6181982. -5.96e6 -1.35e6 #> # ℹ 2 more variables: ymax <dbl>, path <chr> ```
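The `path` column of the `file_info` tibble shown above gives the location of the classified GeoTIFF on disk, which is the file to open in QGIS. A minimal sketch of extracting it with base R subsetting:

```
# Retrieve the path of the classified GeoTIFF (this is the file to open in QGIS)
class_file <- sinop_map$file_info[[1]]$path
class_file
```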
Earth observation data cubes ============================ Analysis\-ready data (ARD) -------------------------- Analysis Ready Data (CEOS\-ARD) are satellite data that have been processed to meet the [ARD standards](https://ceos.org/ard/) defined by the Committee on Earth Observation Satellites (CEOS). ARD data simplify and accelerate the analysis of Earth observation data by providing consistent and high\-quality data that are standardized across different sensors and platforms. ARD image processing includes geometric corrections, radiometric corrections, and sometimes atmospheric corrections. Images are georeferenced, meaning they are accurately aligned with a coordinate system. Optical ARD images include cloud and shadow masking information. These masks indicate which pixels are affected by clouds or cloud shadows. For optical sensors, CEOS\-ARD images have to be converted to surface reflectance values, which represent the fraction of light that is reflected by the surface. This makes the data more comparable across different times and locations. For SAR images, the CEOS\-ARD specification requires images to undergo Radiometric Terrain Correction (RTC) and to be provided as GammaNought (\\(\\gamma\_0\\)) backscatter values. This value mitigates the variations from diverse observation geometries and is recommended for most land applications. ARD images are available from various satellite platforms, including Landsat, Sentinel, and commercial satellites. This provides a wide range of spatial, spectral, and temporal resolutions to suit different applications. They are organised as a collection of files, where each pixel contains a single value for each spectral band for a given date. These collections are available in cloud services such as Brazil Data Cube, Digital Earth Africa, and Microsoft’s Planetary Computer. In general, the timelines of the images of an ARD collection are different. Images still contain cloudy or missing pixels; bands for the images in the collection may have different resolutions. Figure [9](earth-observation-data-cubes.html#fig:ardt) shows an example of the Landsat ARD image collection. Figure 9: ARD image collection (source: USGS. Reproduction based on fair use doctrine). ARD image collections are organized in spatial partitions. Sentinel\-2/2A images follow the Military Grid Reference System (MGRS) tiling system, which divides the world into 60 UTM zones of 6 degrees of longitude. Each zone has blocks of 8 degrees of latitude. Blocks are split into tiles of 110 \\(\\times\\) 110 km\\(^2\\) with a 10 km overlap. Figure [10](earth-observation-data-cubes.html#fig:mgrs) shows the MGRS tiling system for a part of the Northeastern coast of Brazil, contained in UTM zone 24, block M. Figure 10: MGRS tiling system used by Sentinel\-2 images (source: US Army. Reproduction based on fair use doctrine). The Landsat\-4/5/7/8/9 satellites use the Worldwide Reference System (WRS\-2\), which breaks the coverage of Landsat satellites into images identified by path and row (see Figure [11](earth-observation-data-cubes.html#fig:wrs)). The path is the descending orbit of the satellite; the WRS\-2 system has 233 paths per orbit, and each path has 119 rows, where each row refers to a latitudinal center line of a frame of imagery. Images in WRS\-2 are geometrically corrected to the UTM projection. Figure 11: WRS\-2 tiling system used by Landsat\-5/7/8/9 images (source: INPE and ESRI. Reproduction based on fair use doctrine). 
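Since MGRS tile identifiers are used throughout this chapter to select areas of interest, it may help to see how a tile identifier maps to geographic coordinates. Below is a minimal sketch, assuming the `sits` package is loaded; it uses `sits_mgrs_to_roi()` (which also appears in later examples) to convert an MGRS identifier into a WGS 84 bounding box, here for tile 23MMU.

```
# Convert an MGRS tile identifier into a named vector with a WGS 84 bounding box
# (fields lon_min, lon_max, lat_min, lat_max)
roi_23MMU <- [sits_mgrs_to_roi](https://rdrr.io/pkg/sits/man/sits_mgrs_to_roi.html)("23MMU")
roi_23MMU
```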
Image collections handled by sits --------------------------------- In version 1\.5\.1, `sits` supports access to the following ARD image cloud providers: * Amazon Web Services (AWS): Open data Sentinel\-2/2A level 2A collections for the Earth’s land surface. * Brazil Data Cube (BDC): Open data collections of Sentinel\-2/2A, Landsat\-8, CBERS\-4/4A, and MOD13Q1 products for Brazil. These collections are organized as regular data cubes. * Copernicus Data Space Ecosystem (CDSE): Open data collections of Sentinel\-1 RTC and Sentinel\-2/2A images. * Digital Earth Africa (DEAFRICA): Open data collections of Sentinel\-1 RTC, Sentinel\-2/2A, Landsat\-5/7/8/9 for Africa. Additional products available include ALOS\_PALSAR mosaics, DEM\_COP\_30, NDVI\_ANOMALY based on Landsat data, and monthly and daily rainfall data from CHIRPS. * Digital Earth Australia (DEAUSTRALIA): Open data ARD collections of Sentinel\-2A/2B and Landsat\-5/7/8/9 images; yearly geomedian of Landsat 5/7/8 images; yearly fractional land cover from 1986 to 2024\. * Harmonized Landsat\-Sentinel (HLS): HLS, provided by NASA, is an open data collection that processes Landsat 8 and Sentinel\-2 imagery to a common standard. * Microsoft Planetary Computer (MPC): Open data collections of Sentinel\-1 GRD, Sentinel\-2/2A, Landsat\-4/5/7/8/9 images for the Earth’s land areas. Also supported are Copernicus DEM\-30 and MOD13Q1, MOD10A1 and MOD09A1 products. Sentinel\-1 RTC collections are accessible but require payment. * Swiss Data Cube (SDC): Open data collection of Sentinel\-2/2A and Landsat\-8 images for Switzerland. * Terrascope: Cloud service with EO products which includes the ESA World Cover map. * USGS: Landsat\-4/5/7/8/9 collections available in AWS, which require access payment. In addition, `sits` supports the use of Planet monthly mosaics stored as local files. For a detailed description of the providers and collections supported by `sits`, please run `[sits_list_collections()](https://rdrr.io/pkg/sits/man/sits_list_collections.html)`. Regular image data cubes ------------------------ Machine learning and deep learning (ML/DL) classification algorithms require the input data to be consistent. The dimensionality of the data used for training the model has to be the same as that of the data to be classified. There should be no gaps and no missing values. Thus, to use ML/DL algorithms for remote sensing data, ARD image collections should be converted to regular data cubes. Adapting a previous definition by Appel and Pebesma [\[6]](references.html#ref-Appel2019), we consider that a *regular data cube* has the following properties: 1. A regular data cube is a four\-dimensional data structure with dimensions x (longitude or easting), y (latitude or northing), time, and bands. The spatial, temporal, and attribute dimensions are independent and not interchangeable. 2. The spatial dimensions refer to a coordinate system, such as the grids defined by UTM (Universal Transverse Mercator) or MGRS (Military Grid Reference System). Each tile of the grid corresponds to a unique zone of the coordinate system. A data cube may span various tiles and UTM zones. 3. The temporal dimension is a set of continuous and equally\-spaced intervals. 4. For every combination of dimensions, a cell has a single value. All cells of a data cube have the same spatiotemporal extent. The spatial resolution of each cell is the same in X and Y dimensions. All temporal intervals are the same. Each cell contains a valid set of measures. 
Each pixel is associated with a unique coordinate in a zone of the coordinate system. For each position in space, the data cube should provide a set of valid time series. For each time interval, the regular data cube should provide a valid 2D image (see Figure [12](earth-observation-data-cubes.html#fig:dc)). Figure 12: Conceptual view of data cubes (source: authors). Currently, the only cloud service that provides regular data cubes by default is the Brazil Data Cube (BDC). ARD collections available in other cloud services are not regular in space and time. Bands may have different resolutions, images may not cover the entire tile, and time intervals may be irregular. For this reason, subsets of these collections need to be converted to regular data cubes before further processing. To produce data cubes for machine\-learning data analysis, users should first create an irregular data cube from an ARD collection and then use `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)`, as described below. Creating data cubes ------------------- To obtain information on ARD image collections from cloud providers, `sits` uses the [SpatioTemporal Asset Catalogue](https://stacspec.org/en) (STAC) protocol, a specification of geospatial information which many large image collection providers have adopted. A ‘spatiotemporal asset’ is any file that represents information about the Earth captured in a specific space and time. To access STAC endpoints, `sits` uses the [rstac](http://github.com/brazil-data-cube/rstac) R package. The function `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` supports access to image collections in cloud services; it has the following parameters: * `source`: Name of the provider. * `collection`: A collection available in the provider and supported by `sits`. To find out which collections are supported by `sits`, see `[sits_list_collections()](https://rdrr.io/pkg/sits/man/sits_list_collections.html)`. * `platform`: Optional parameter specifying the platform in collections with multiple satellites. * `tiles`: Set of tiles of image collection reference system. Either `tiles` or `roi` should be specified. * `roi`: A region of interest. Either: (a) a named vector (`lon_min`, `lon_max`, `lat_min`, `lat_max`) in WGS 84 coordinates; or (b) an `sf` object. All images intersecting the convex hull of the `roi` are selected. * `bands`: Optional parameter with the bands to be used. If missing, all bands from the collection are used. * `orbit`: Optional parameter required only for Sentinel\-1 images (default \= “descending”). * `start_date`: The initial date for the temporal interval containing the time series of images. * `end_date`: The final date for the temporal interval containing the time series of images. The result of `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` is a tibble with a description of the selected images required for further processing. It does not contain the actual data, but only pointers to the images. The attributes of individual image files can be accessed by listing the `file_info` column of the tibble. Amazon Web Services ------------------- Amazon Web Services (AWS) holds two kinds of collections: *open\-data* and *requester\-pays*. Open data collections can be accessed without cost. Requester\-pays collections require payment from an AWS account. Currently, `sits` supports collection `SENTINEL-2-L2A` which is open data. The bands in 10 m resolution are B02, B03, B04, and B08\. 
The 20 m bands are B05, B06, B07, B8A, B11, and B12\. Bands B01 and B09 are available at 60 m resolution. A CLOUD band is also available. The example below shows how to access one tile of the open data `SENTINEL-2-L2A` collection. The `tiles` parameter allows selecting the desired area according to the MGRS reference system. ``` # Create a data cube covering an area in Brazil s2_23MMU_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "AWS", collection = "SENTINEL-2-L2A", tiles = "23MMU", bands = [c](https://rdrr.io/r/base/c.html)("B02", "B8A", "B11", "CLOUD"), start_date = "2018-07-12", end_date = "2019-07-28" ) [plot](https://rdrr.io/r/graphics/plot.default.html)(s2_23MMU_cube, red = "B11", blue = "B02", green = "B8A", date = "2018-10-05" ) ``` Figure 13: Sentinel\-2 image in an area of the Northeastern coast of Brazil (© EU Copernicus Sentinel Programme; source: AWS). Microsoft Planetary Computer ---------------------------- `sits` supports access to three open data collections from Microsoft’s Planetary Computer (MPC): `SENTINEL-1-GRD`, `SENTINEL-2-L2A`, and `LANDSAT-C2-L2`. It also allows access to `COP-DEM-GLO-30` (Copernicus Global DEM at 30 meter resolution) and `MOD13Q1-6.1` (version 6\.1 of the MODIS MOD13Q1 product). Access to the non\-open data collection `SENTINEL-1-RTC` is available for users registered with MPC. ### SENTINEL\-2/2A images in MPC The SENTINEL\-2/2A ARD images available in MPC have the same bands and resolutions as those available in AWS (see above). The example below shows how to access the `SENTINEL-2-L2A` collection. ``` # Create a data cube covering an area in the Brazilian Amazon s2_20LKP_cube_MPC <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "MPC", collection = "SENTINEL-2-L2A", tiles = "20LKP", bands = [c](https://rdrr.io/r/base/c.html)("B02", "B8A", "B11", "CLOUD"), start_date = "2019-07-01", end_date = "2019-07-28" ) # Plot a color composite of one date of the cube [plot](https://rdrr.io/r/graphics/plot.default.html)(s2_20LKP_cube_MPC, red = "B11", blue = "B02", green = "B8A", date = "2019-07-18" ) ``` Figure 14: Sentinel\-2 image in an area of the state of Rondonia, Brazil (© EU Copernicus Sentinel Programme; source: Microsoft). ### LANDSAT\-C2\-L2 images in MPC The `LANDSAT-C2-L2` collection provides access to data from Landsat\-4/5/7/8/9 satellites. Images from these satellites have been intercalibrated to ensure data consistency. For compatibility between the different Landsat sensors, the band names are BLUE, GREEN, RED, NIR08, SWIR16, and SWIR22\. All images have 30 m resolution. For this collection, tile search is not supported; the `roi` parameter should be used. The example below shows how to retrieve data from a region of interest on the Northeastern coast of Brazil that includes the Lencois Maranhenses. 
``` # Read a ROI that covers part of the Northeastern coast of Brazil roi <- [c](https://rdrr.io/r/base/c.html)( lon_min = -43.5526, lat_min = -2.9644, lon_max = -42.5124, lat_max = -2.1671 ) # Select the cube s2_L8_cube_MPC <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "MPC", collection = "LANDSAT-C2-L2", bands = [c](https://rdrr.io/r/base/c.html)("BLUE", "RED", "GREEN", "NIR08", "SWIR16", "CLOUD"), roi = roi, start_date = "2019-06-01", end_date = "2019-09-01" ) # Plot the tile that covers the Lencois Maranhenses [plot](https://rdrr.io/r/graphics/plot.default.html)(s2_L8_cube_MPC, red = "RED", green = "GREEN", blue = "BLUE", date = "2019-06-30" ) ``` Figure 15: Landsat\-8 image in an area in Northeast Brazil (sources: USGS and Microsoft). ### SENTINEL\-1\-GRD images in MPC Sentinel\-1 GRD products consist of focused SAR data that has been detected, multi\-looked and projected to ground range using the WGS84 Earth ellipsoid model. GRD images are subject to variations in the radar signal’s intensity due to topographic effects, antenna pattern, range spreading loss, and other radiometric distortions. The most common types of distortions include foreshortening, layover and shadowing. Foreshortening occurs when the radar signal strikes a steep terrain slope facing the radar, causing the slope to appear compressed in the image. Features like mountains can appear much steeper than they are, and their true heights can be difficult to interpret. Layover happens when the radar signal reaches the top of a tall feature (like a mountain or building) before it reaches the base. As a result, the top of the feature is displaced towards the radar and appears in front of its base. This results in a reversal of the order of features along the radar line\-of\-sight, making the image interpretation challenging. Shadowing occurs when a radar signal is obstructed by a tall object, casting a shadow on the area behind it that the radar cannot illuminate. The shadowed areas appear dark in SAR images, and no information is available from these regions, similar to optical shadows. Access to Sentinel\-1 GRD images can be done either by MGRS tiles (`tiles`) or by region of interest (`roi`). We recommend using the MGRS tiling system for specifying the area of interest, since when these images are regularized, they will be re\-projected into MGRS tiles. By default, only images in descending orbit are selected. The following example shows how to create a data cube of S1 GRD images over a region in Mato Grosso, Brazil, an area of the Amazon forest that has been deforested. The resulting cube will not follow any specific projection and its coordinates will be stated as EPSG 4326 (latitude/longitude). Its geometry is derived from the SAR slant\-range perspective; thus, it will appear inclined in relation to the Earth’s longitude. ``` cube_s1_grd <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "MPC", collection = "SENTINEL-1-GRD", bands = [c](https://rdrr.io/r/base/c.html)("VV"), orbit = "descending", tiles = [c](https://rdrr.io/r/base/c.html)("21LUJ", "21LVJ"), start_date = "2021-08-01", end_date = "2021-09-30" ) [plot](https://rdrr.io/r/graphics/plot.default.html)(cube_s1_grd, band = "VV", palette = "Greys") ``` Figure 16: Sentinel\-1 image in an area in Mato Grosso, Brazil (© EU Copernicus Sentinel Programme; source: Microsoft). 
As explained earlier in this chapter, in areas with large elevation differences, Sentinel\-1 GRD images will have geometric distortions. For this reason, whenever possible, we recommend the use of RTC (radiometrically terrain corrected) images, as described in the next section. ### SENTINEL\-1\-RTC images in MPC An RTC SAR image has undergone corrections for both geometric distortions and radiometric distortions caused by the terrain. The purpose of RTC processing is to enhance the interpretability and usability of SAR images for various applications by providing a more accurate representation of the Earth’s surface. The radar backscatter values are normalized to account for these variations, ensuring that the image accurately represents the reflectivity of the surface features. The terrain correction addresses geometric distortions caused by the side\-looking geometry of SAR imaging, such as foreshortening, layover, and shadowing. It uses a Digital Elevation Model (DEM) to model the terrain and re\-project the SAR image from the slant range (radar line\-of\-sight) to the ground range (true geographic coordinates). This process aligns the SAR image with the actual topography, providing a more accurate spatial representation. In MPC, access to Sentinel\-1\-RTC images requires a Planetary Computer account. Users will receive a Shared Access Signature (SAS) token from MPC that allows access to RTC data. Once a user receives a token from Microsoft, she needs to include the environment variable `MPC_TOKEN` in her `.Rprofile`. Therefore, the following example only works for users that have an SAS token. ``` cube_s1_rtc <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "MPC", collection = "SENTINEL-1-RTC", bands = [c](https://rdrr.io/r/base/c.html)("VV", "VH"), orbit = "descending", tiles = "18NZM", start_date = "2021-08-01", end_date = "2021-09-30" ) [plot](https://rdrr.io/r/graphics/plot.default.html)(cube_s1_rtc, band = "VV", palette = "Greys") ``` Figure 17: Sentinel\-1\-RTC image of an area in Colombia (© EU Copernicus Sentinel Programme; source: Microsoft). The above image is from the central region of Colombia, a country with large variations in altitude due to the Andes mountains. Users are invited to compare this image with the one from the `SENTINEL-1-GRD` collection and see the significant geometrical distortions of the GRD image compared with the RTC one. ### Copernicus DEM 30 meter images in MPC The Copernicus digital elevation model 30\-meter global dataset (COP\-DEM\-GLO\-30\) is a high\-resolution topographic data product provided by the European Space Agency (ESA) under the Copernicus Program. The vertical accuracy of the Copernicus DEM 30\-meter dataset is typically within a few meters, but this can vary depending on the region and the original data sources. The primary data source for the Copernicus DEM is data from the TanDEM\-X mission, designed by the German Aerospace Center (DLR). TanDEM\-X provides high\-resolution radar data through interferometric synthetic aperture radar (InSAR) techniques. The Copernicus DEM 30 meter is organized in a 1\\(^\\circ\\) by 1\\(^\\circ\\) grid. In `sits`, access to COP\-DEM\-GLO\-30 images can be done either by MGRS tiles (`tiles`) or by region of interest (`roi`). In both cases, the cube is retrieved based on the parts of the grid that intersect the region of interest or the chosen tiles. 
``` cube_dem_30 <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "MPC", collection = "COP-DEM-GLO-30", tiles = "20LMR", band = "ELEVATION" ) [plot](https://rdrr.io/r/graphics/plot.default.html)(cube_dem_30, band = "ELEVATION", palette = "RdYlGn", rev = TRUE) ``` Figure 18: Copernicus 30\-meter DEM of an area in Brazil (© DLR e.V. 2010\-2014 and © Airbus Defence and Space GmbH 2014\-2018, provided under COPERNICUS by the European Union and ESA; source: Microsoft). Brazil Data Cube ---------------- The [Brazil Data Cube](http://brazildatacube.org/en) (BDC) is built by Brazil’s National Institute for Space Research (INPE), to provide regular EO data cubes from CBERS, LANDSAT, SENTINEL\-2, and TERRA/MODIS satellites for environmental applications. The collections available in the BDC are: `LANDSAT-OLI-16D` (Landsat\-8 OLI, 30 m resolution, 16\-day intervals), `SENTINEL-2-16D` (Sentinel\-2A and 2B MSI images at 10 m resolution, 16\-day intervals), `CBERS-WFI-16D` (CBERS 4 WFI, 64 m resolution, 16\-day intervals), `CBERS-WFI-8D` (CBERS 4 and 4A WFI images, 64 m resolution, 8\-day intervals), and `MOD13Q1-6.1` (MODIS MOD13Q1 product, collection 6\.1, 250 m resolution, 16\-day intervals). For more details, use `sits_list_collections(source = "BDC")`. The BDC uses three hierarchical grids based on the Albers Equal Area projection and SIRGAS 2000 datum. The large grid has tiles of 4224\.4 \\(\\times\\) 4224\.4 km\\(^2\\) and is used for CBERS\-4 AWFI collections at 64 m resolution; each CBERS\-4 AWFI tile contains images of 6600 \\(\\times\\) 6600 pixels. The medium grid is used for Landsat\-8 OLI collections at 30 m resolution; tiles have an extension of 211\.2 \\(\\times\\) 211\.2 km\\(^2\\), and each image has 7040 \\(\\times\\) 7040 pixels. The small grid covers 105\.6 \\(\\times\\) 105\.6 km\\(^2\\) and is used for Sentinel\-2 MSI collections at 10 m resolution; each image has 10560 \\(\\times\\) 10560 pixels. The data cubes in the BDC are regularly spaced in time and cloud\-corrected [\[7]](references.html#ref-Ferreira2020a). Figure 19: Hierarchical BDC tiling system showing (a) large BDC grid overlayed on Brazilian biomes, (b) one large tile, (c) four medium tiles, and (d) sixteen small tiles (Source: Ferreira et al. (2020\). Reproduction under fair use doctrine). To access the BDC, users must provide their credentials using environment variables; a minimal sketch of this setup is shown after the example below. Obtaining a BDC access key is free. Users must register at the [BDC site](https://brazildatacube.dpi.inpe.br/portal/explore) to obtain a key. In the example below, the data cube is defined as one tile (“005004”) of the `CBERS-WFI-16D` collection, which holds CBERS AWFI images with a 16\-day interval. ``` # Define a tile from the CBERS-4/4A AWFI collection cbers_tile <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "BDC", collection = "CBERS-WFI-16D", tiles = "005004", bands = [c](https://rdrr.io/r/base/c.html)("B13", "B14", "B15", "B16", "CLOUD"), start_date = "2021-05-01", end_date = "2021-09-01" ) # Plot one time instance [plot](https://rdrr.io/r/graphics/plot.default.html)(cbers_tile, red = "B15", green = "B16", blue = "B13", date = "2021-05-09" ) ``` Figure 20: CBERS\-4 WFI image in a Cerrado area in Brazil (© INPE/Brazil, licensed under CC\-BY\-SA; source: Brazil Data Cube). 
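As mentioned above, the BDC key obtained upon registration is passed to `sits` through an environment variable set before calling `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)`. A minimal sketch is shown below; the variable name `BDC_ACCESS_KEY` is an assumption here, so please check the `sits` documentation for the exact name used by your version.

```
# Provide the BDC credentials before creating the data cube
# (the variable name BDC_ACCESS_KEY is an assumption; check the sits documentation)
Sys.setenv(BDC_ACCESS_KEY = "your BDC access key")
```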
Copernicus Data Space Ecosystem (CDSE) -------------------------------------- The Copernicus Data Space Ecosystem (CDSE) is a cloud service designed to support access to Earth observation data from the Copernicus Sentinel missions and other sources. It is designed and maintained by the European Space Agency (ESA) with support from the European Commission. Configuring user access to CDSE involves several steps to ensure proper registration, access to data, and utilization of the platform’s tools and services. Visit the Copernicus Data Space Ecosystem [registration page](https://dataspace.copernicus.eu). Complete the registration form with your details, including name, email address, organization, and sector. Confirm your email address through the verification link sent to your inbox. After registration, you will need access credentials for the S3 service implemented by CDSE, which can be obtained using the [CDSE S3 credentials site](https://eodata-s3keysmanager.dataspace.copernicus.eu/panel/s3-credentials). The site will request you to add a new credential. You will receive two keys: an S3 access key and a secret access key. Take note of both and include the following lines in your `.Rprofile`. ``` Sys.setenv( AWS_ACCESS_KEY_ID = "your access key", AWS_SECRET_ACCESS_KEY = "your secret access key", AWS_S3_ENDPOINT = "eodata.dataspace.copernicus.eu", AWS_VIRTUAL_HOSTING = "FALSE" ) ``` After including these lines in your .Rprofile, restart `R` for the changes to take effect. By following these steps, users will have access to the Copernicus Data Space Ecosystem. ### SENTINEL\-2/2A images in CDSE CDSE hosts a global collection of Sentinel\-2 Level\-2A images, which are processed according to the [CEOS Analysis\-Ready Data](https://ceos.org/ard/) specifications. One example is provided below, where we present a Sentinel\-2 image of the Lena river delta in Siberia in summertime. ``` # obtain a collection of images of a tile covering part of Lena delta lena_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "CDSE", collection = "SENTINEL-2-L2A", bands = [c](https://rdrr.io/r/base/c.html)("B02", "B04", "B8A", "B11", "B12"), start_date = "2023-05-01", end_date = "2023-09-01", tiles = [c](https://rdrr.io/r/base/c.html)("52XDF") ) # plot an image from summertime [plot](https://rdrr.io/r/graphics/plot.default.html)(lena_cube, date = "2023-07-06", red = "B12", green = "B8A", blue = "B04") ``` Figure 21: Sentinel\-2 image of the Lena river delta in summertime (© EU Copernicus Sentinel Programme; source: CDSE). ### SENTINEL\-1\-RTC images in CDSE An important product under development at CDSE is the set of radiometrically terrain corrected (RTC) Sentinel\-1 images. In CDSE, this product is referred to as normalised radar backscatter (NRB). The S1\-NRB product contains radiometrically terrain corrected (RTC) gamma nought backscatter (γ0\) processed from Single Look Complex (SLC) Level\-1A data. Each acquired polarization is stored in an individual binary image file. All images are projected and gridded into the United States Military Grid Reference System (US\-MGRS). The use of the US\-MGRS tile grid ensures a very high level of interoperability with Sentinel\-2 Level\-2A ARD products, making it easy to set up complex analysis systems that exploit both SAR and optical data. While speckle is inherent in SAR acquisitions, speckle filtering is not applied to the S1\-NRB product in order to preserve spatial resolution. 
Some applications (or processing methods) may require spatial or temporal filtering for stationary backscatter estimates. For more details, please refer to the [S1\-NRB product website](https://sentinels.copernicus.eu/web/sentinel/sentinel-1-ard-normalised-radar-backscatter-nrb-product). As of July 2024, RTC images are only available for Africa. Global coverage is expected to grow as ESA expands the S1\-RTC archive. The following example shows an S1\-RTC image for the Rift Valley in Ethiopia. ``` # retrieve a S1-RTC cube and plot s1_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "CDSE", collection = "SENTINEL-1-RTC", bands = [c](https://rdrr.io/r/base/c.html)("VV", "VH"), orbit = "descending", start_date = "2023-01-01", end_date = "2023-12-31", tiles = [c](https://rdrr.io/r/base/c.html)("37NCH") ) [plot](https://rdrr.io/r/graphics/plot.default.html)(s1_cube, band = "VV", date = [c](https://rdrr.io/r/base/c.html)("2023-03-03"), palette = "Greys") ``` Figure 22: Sentinel\-1\-RTC image of the Rift Valley in Ethiopia (© EU Copernicus Sentinel Programme; source: CDSE). Digital Earth Africa -------------------- Digital Earth Africa (DEAFRICA) is a cloud service that provides open\-access Earth observation data for the African continent. The ARD image collections in `sits` are: * Sentinel\-2 level 2A (`SENTINEL-2-L2A`), organised as MGRS tiles. * Sentinel\-1 radiometrically terrain corrected (`SENTINEL-1-RTC`). * Landsat\-5 (`LS5-SR`), Landsat\-7 (`LS7-SR`), Landsat\-8 (`LS8-SR`) and Landsat\-9 (`LS9-SR`). All Landsat collections are ARD data and are organized as WRS\-2 tiles. * SAR L\-band images produced by the PALSAR sensor onboard the Japanese ALOS satellite (`ALOS-PALSAR-MOSAIC`). Data is organized in a 5\\(^\\circ\\) by 5\\(^\\circ\\) grid with a spatial resolution of 25 meters. Images are available annually from 2007 to 2010 (ALOS/PALSAR) and from 2015 to 2022 (ALOS\-2/PALSAR\-2\). * Estimates of vegetation condition using NDVI anomalies (`NDVI-ANOMALY`) compared with the long\-term baseline condition. The available measurements are “NDVI\_MEAN” (mean NDVI for a month) and “NDVI\-STD\-ANOMALY” (standardised NDVI anomaly for a month). * Rainfall information provided by Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) from the University of California, Santa Barbara. There are monthly (`RAINFALL-CHIRPS-MONTHLY`) and daily (`RAINFALL-CHIRPS-DAILY`) products over Africa. * Digital elevation model provided by the EC Copernicus program (`COP-DEM-30`) in 30 meter resolution, organized in a 1\\(^\\circ\\) by 1\\(^\\circ\\) grid. * Annual geomedian images for Landsat 8 and Landsat 9 (`GM-LS8-LS9-ANNUAL`, LANDSAT/OLI) in the WRS\-2 grid. * Annual geomedian images for Sentinel\-2 (`GM-S2-ANNUAL`) in the MGRS grid. * Rolling three\-month geomedian images for Sentinel\-2 (`GM-S2-ROLLING`) in the MGRS grid. * Semiannual geomedian images for Sentinel\-2 (`GM-S2-SEMIANNUAL`) in the MGRS grid. Access to DEAFRICA Sentinel\-2 images can be done either using the `tiles` or the `roi` parameter. In this example, the requested `roi` produces a cube that contains one MGRS tile (“35LPH”) covering an area of Madagascar that includes the Betsiboka Estuary. 
``` dea_s2_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "DEAFRICA", collection = "SENTINEL-2-L2A", roi = [c](https://rdrr.io/r/base/c.html)( lon_min = 46.1, lat_min = -16.1, lon_max = 46.6, lat_max = -15.6 ), bands = [c](https://rdrr.io/r/base/c.html)("B02", "B04", "B08"), start_date = "2019-04-01", end_date = "2019-05-30" ) [plot](https://rdrr.io/r/graphics/plot.default.html)(dea_s2_cube, red = "B04", blue = "B02", green = "B08") ``` Figure 23: Sentinel\-2 image in an area over Madagascar (© EU Copernicus Sentinel Programme; source: Digital Earth Africa). The next example retrieves a set of ARD Landsat\-9 data, covering the Serengeti plain in Tanzania. ``` dea_l9_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "DEAFRICA", collection = "LS9-SR", roi = [c](https://rdrr.io/r/base/c.html)( lon_min = 33.0, lat_min = -3.60, lon_max = 33.6, lat_max = -3.00 ), bands = [c](https://rdrr.io/r/base/c.html)("B04", "B05", "B06"), start_date = "2023-05-01", end_date = "2023-08-30" ) [plot](https://rdrr.io/r/graphics/plot.default.html)(dea_l9_cube, date = "2023-06-26", red = "B06", green = "B05", blue = "B04" ) ``` Figure 24: Landsat\-9 image in an area over the Serengeti in Tanzania (source: Digital Earth Africa). The following example shows how to retrieve a subset of the ALOS\-PALSAR mosaic for year 2020, for an area near the border between Congo and Rwanda. ``` dea_alos_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "DEAFRICA", collection = "ALOS-PALSAR-MOSAIC", roi = [c](https://rdrr.io/r/base/c.html)( lon_min = 28.69, lat_min = -2.35, lon_max = 29.35, lat_max = -1.56 ), bands = [c](https://rdrr.io/r/base/c.html)("HH", "HV"), start_date = "2020-01-01", end_date = "2020-12-31" ) [plot](https://rdrr.io/r/graphics/plot.default.html)(dea_alos_cube, band = "HH") ``` Figure 25: ALOS\-PALSAR mosaic in the Congo forest area (© JAXA EORC; source: Digital Earth Africa). Digital Earth Australia ----------------------- Digital Earth Australia (DEAUSTRALIA) is an initiative by Geoscience Australia that uses satellite data to monitor and analyze environmental changes and resources across the Australian continent. It provides many datasets that offer detailed information on phenomena such as droughts, agriculture, water availability, floods, coastal erosion, and urban development. The DEAUSTRALIA image collections in `sits` are: * GA\_LS5T\_ARD\_3: ARD images from the Landsat\-5 satellite, with bands “BLUE”, “GREEN”, “RED”, “NIR”, “SWIR\-1”, “SWIR\-2”, and “CLOUD”. * GA\_LS7E\_ARD\_3: ARD images from the Landsat\-7 satellite, with the same bands as Landsat\-5\. * GA\_LS8C\_ARD\_3: ARD images from the Landsat\-8 satellite, with bands “COASTAL\-AEROSOL”, “BLUE”, “GREEN”, “RED”, “NIR”, “SWIR\-1”, “SWIR\-2”, “PANCHROMATIC”, and “CLOUD”. * GA\_LS9C\_ARD\_3: ARD images from the Landsat\-9 satellite, with the same bands as Landsat\-8\. * GA\_S2AM\_ARD\_3: ARD images from the Sentinel\-2A satellite, with bands “COASTAL\-AEROSOL”, “BLUE”, “GREEN”, “RED”, “RED\-EDGE\-1”, “RED\-EDGE\-2”, “RED\-EDGE\-3”, “NIR\-1”, “NIR\-2”, “SWIR\-2”, “SWIR\-3”, and “CLOUD”. * GA\_S2BM\_ARD\_3: ARD images from the Sentinel\-2B satellite, with the same bands as Sentinel\-2A. * GA\_LS5T\_GM\_CYEAR\_3: Landsat\-5 geomedian images, with bands “BLUE”, “GREEN”, “RED”, “NIR”, “SWIR1”, “SWIR2”, “EDEV”, “SDEV”, “BCDEV”. * GA\_LS7E\_GM\_CYEAR\_3: Landsat\-7 geomedian images, with the same bands as the Landsat\-5 geomedian. 
* GA\_LS8CLS9C\_GM\_CYEAR\_3: Landsat\-8/9 geomedian images, with the same bands as the Landsat\-5 geomedian. * GA\_LS\_FC\_3: Landsat fractional land cover, with bands “BS”, “PV”, “NPV”. * GA\_S2LS\_INTERTIDAL\_CYEAR\_3: Landsat/Sentinel intertidal data, with bands “ELEVATION”, “ELEVATION\-UNCERTAINTY”, “EXPOSURE”, “TA\-HAT”, “TA\-HOT”, “TA\-LOT”, “TA\-LAT”, “TA\-OFFSET\-HIGH”, “TA\-OFFSET\-LOW”, “TA\-SPREAD”, “QA\-NDWI\-CORR”, and “QA\-NDWI\-FREQ”. The following code retrieves a Sentinel\-2 image for MGRS tile 56KKV. ``` # retrieve a Sentinel-2 cube for MGRS tile 56KKV s2_56KKV <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "DEAUSTRALIA", collection = "GA_S2AM_ARD_3", tiles = "56KKV", bands = [c](https://rdrr.io/r/base/c.html)("BLUE", "GREEN", "RED", "NIR-2", "SWIR-2", "CLOUD"), start_date = "2023-09-01", end_date = "2023-11-30" ) # plot a false-color composite of one date of the cube [plot](https://rdrr.io/r/graphics/plot.default.html)(s2_56KKV, green = "NIR-2", blue = "BLUE", red = "SWIR-2", date = "2023-10-14") ``` Figure 26: Plot of Sentinel\-2 image obtained from the DEAUSTRALIA collection for date 2023\-10\-14 showing MGRS tile 56KKV (© EU Copernicus Sentinel Programme; source: Digital Earth Australia). Harmonized Landsat\-Sentinel ---------------------------- Harmonized Landsat Sentinel (HLS) is a NASA initiative that processes and harmonizes Landsat 8 and Sentinel\-2 imagery to a common standard, including atmospheric correction, alignment, resampling, and corrections for BRDF (bidirectional reflectance distribution function). The purpose of the HLS project is to create a unified and consistent dataset that integrates the advantages of both systems, making it easier to work with the data. The NASA Harmonized Landsat and Sentinel (HLS) service provides two image collections: * Landsat 8 OLI Surface Reflectance HLS (HLSL30\) – The HLSL30 product includes atmospherically corrected surface reflectance from the Landsat 8 OLI sensors at 30 m resolution. The dataset includes 11 spectral bands. * Sentinel\-2 MultiSpectral Instrument Surface Reflectance HLS (HLSS30\) – The HLSS30 product includes atmospherically corrected surface reflectance from the Sentinel\-2 MSI sensors at 30 m resolution. The dataset includes 12 spectral bands. The HLS tiling system is identical to the one used for Sentinel\-2 (MGRS). The tile dimension is 109\.8 km and there is an overlap of 4,900 m on each side. To access NASA HLS, users need to register at [NASA EarthData](https://urs.earthdata.nasa.gov/), and save their login and password in a \~/.netrc plain text file in Unix (or %HOME%\_netrc in Windows). The file must contain the following fields: ``` machine urs.earthdata.nasa.gov login <username> password <password> ``` We recommend using the earthdatalogin package to create a `.netrc` file with the `earthdatalogin::edl_netrc` function. This function creates a properly configured .netrc file in the user’s home directory and an environment variable GDAL\_HTTP\_NETRC\_FILE, as shown in the example. ``` [library](https://rdrr.io/r/base/library.html)([earthdatalogin](https://boettiger-lab.github.io/earthdatalogin/)) earthdatalogin::edl_netrc( username = "<your user name>", password = "<your password>" ) ``` Access to images in NASA HLS is done by region of interest or by tiles. The following example shows an HLS Sentinel\-2 image over the Brazilian coast. 
``` # define a region of interest roi <- [c](https://rdrr.io/r/base/c.html)( lon_min = -45.6422, lat_min = -24.0335, lon_max = -45.0840, lat_max = -23.6178 ) # create a cube from the HLSS30 collection hls_cube_s2 <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "HLS", collection = "HLSS30", roi = roi, bands = [c](https://rdrr.io/r/base/c.html)("BLUE", "GREEN", "RED", "CLOUD"), start_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2020-06-01"), end_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2020-09-01"), progress = FALSE ) # plot the cube [plot](https://rdrr.io/r/graphics/plot.default.html)(hls_cube_s2, red = "RED", green = "GREEN", blue = "BLUE", date = "2020-06-20") ``` Figure 27: Plot of Sentinel\-2 image obtained from the NASA HLS collection for date 2020\-06\-20 showing the island of Ilhabela on the Brazilian coast (© EU Copernicus Sentinel Programme; source: NASA). Images from the HLS Landsat and Sentinel\-2 collections are accessed separately and can be combined with `[sits_merge()](https://rdrr.io/pkg/sits/man/sits_merge.html)`. The script below creates an HLS Landsat cube over the same area and bands as the Sentinel\-2 cube above. The two cubes are then merged. ``` # define a region of interest roi <- [c](https://rdrr.io/r/base/c.html)( lon_min = -45.6422, lat_min = -24.0335, lon_max = -45.0840, lat_max = -23.6178 ) # create a cube from the HLSL30 collection hls_cube_l8 <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "HLS", collection = "HLSL30", roi = roi, bands = [c](https://rdrr.io/r/base/c.html)("BLUE", "GREEN", "RED", "CLOUD"), start_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2020-06-01"), end_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2020-09-01"), progress = FALSE ) # merge the Sentinel-2 and Landsat-8 cubes hls_cube_merged <- [sits_merge](https://rdrr.io/pkg/sits/man/sits_merge.html)(hls_cube_s2, hls_cube_l8) ``` Comparing the timelines of the original cubes and the merged one, one can see the benefits of the merged collection for time series data analysis. ``` # Timeline of the Sentinel-2 cube [sits_timeline](https://rdrr.io/pkg/sits/man/sits_timeline.html)(hls_cube_s2) ``` ``` #> [1] "2020-06-15" "2020-06-20" "2020-06-25" "2020-06-30" "2020-07-05"
#> [6] "2020-07-10" "2020-07-20" "2020-07-25" "2020-08-04" "2020-08-09"
#> [11] "2020-08-14" "2020-08-19" "2020-08-24" "2020-08-29" ``` ``` # Timeline of the Landsat-8 cube [sits_timeline](https://rdrr.io/pkg/sits/man/sits_timeline.html)(hls_cube_l8) ``` ``` #> [1] "2020-06-09" "2020-06-25" "2020-07-11" "2020-07-27" "2020-08-12"
#> [6] "2020-08-28" ``` ``` # Timeline of the merged cube [sits_timeline](https://rdrr.io/pkg/sits/man/sits_timeline.html)(hls_cube_merged) ``` ``` #> [1] "2020-06-09" "2020-06-15" "2020-06-20" "2020-06-25" "2020-06-30"
#> [6] "2020-07-05" "2020-07-10" "2020-07-11" "2020-07-20" "2020-07-25"
#> [11] "2020-07-27" "2020-08-04" "2020-08-09" "2020-08-12" "2020-08-14"
#> [16] "2020-08-19" "2020-08-24" "2020-08-28" "2020-08-29" ``` ``` # plotting a harmonized Landsat image from the merged dataset # plot the cube [plot](https://rdrr.io/r/graphics/plot.default.html)(hls_cube_merged, red = "RED", green = "GREEN", blue = "BLUE", date = "2020-07-11" ) ``` Figure 28: Plot of a harmonized Landsat image obtained from the merged NASA HLS Landsat and Sentinel\-2 collections for date 2020\-07\-11 showing the island of Ilhabela on the Brazilian coast (© EU Copernicus Sentinel Programme; source: NASA). 
EO products from TERRASCOPE --------------------------- Terrascope is an online platform for accessing open\-source satellite images. This service, operated by VITO, offers a range of Earth observation data and processing services that are accessible free of charge. Currently, `sits` supports the World Cover 2021 maps, produced by VITO with support from the European Commission and ESA. The following code shows how to access the World Cover 2021 map covering tile “22LBL”. The first step is to use `[sits_mgrs_to_roi()](https://rdrr.io/pkg/sits/man/sits_mgrs_to_roi.html)` to get the region of interest expressed as a bounding box; this box is then entered as the `roi` parameter in the `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` function. Since the World Cover data is available as a 3\\(^\\circ\\) by 3\\(^\\circ\\) grid, it is necessary to use `[sits_cube_copy()](https://rdrr.io/pkg/sits/man/sits_cube_copy.html)` to extract the exact MGRS tile. ``` # get roi for an MGRS tile bbox_22LBL <- [sits_mgrs_to_roi](https://rdrr.io/pkg/sits/man/sits_mgrs_to_roi.html)("22LBL") # retrieve the world cover map for the chosen roi world_cover_2021 <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "TERRASCOPE", collection = "WORLD-COVER-2021", roi = bbox_22LBL ) # cut the 3 x 3 degree grid to match the MGRS tile 22LBL world_cover_2021_22LBL <- [sits_cube_copy](https://rdrr.io/pkg/sits/man/sits_cube_copy.html)( cube = world_cover_2021, roi = bbox_22LBL, multicores = 6, output_dir = "./tempdir/chp4" ) # plot the resulting map [plot](https://rdrr.io/r/graphics/plot.default.html)(world_cover_2021_22LBL) ``` Figure 29: Plot of World Cover 2021 map covering MGRS tile 22LBL (© TerraScope). Planet data as ARD local files ------------------------------ ARD images downloaded from cloud collections to a local computer are not associated with a STAC endpoint that describes them. They must be organized and named to allow `sits` to create a data cube from them. All local files have to be in the same directory and have the same spatial resolution and projection. Each file must contain a single image band for a single date. Each file name needs to include tile, date, and band information. Users must provide information about the original data source to allow `sits` to retrieve information about image attributes such as band names, missing values, etc. When working with local cubes, `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` needs the following parameters: * `source`: Name of the original data provider; for a list of providers and collections, use `[sits_list_collections()](https://rdrr.io/pkg/sits/man/sits_list_collections.html)`. * `collection`: Collection from which the data was extracted. * `data_dir`: Local directory for images. * `bands`: Optional parameter to describe the bands to be retrieved. * `parse_info`: Information to parse the file names. File names need to contain information on tile, date, and band, separated by a delimiter (usually `"_"`). * `delim`: Separator character between descriptors in the file name (default is `"_"`). To be able to read local files, they must belong to a collection registered by `sits`. All collections known to `sits` by default are shown using `[sits_list_collections()](https://rdrr.io/pkg/sits/man/sits_list_collections.html)`. To register a new collection, please see the information provided in the Technical Annex. The example below shows how to define a data cube using Planet images from the `sitsdata` package. 
The dataset contains monthly PlanetScope mosaics for tile “604\-1043” for August to October 2022, with bands B1, B2, B3, and B4\. In general, `sits` users need to match the local file names to the values provided by the `parse_info` parameter. The file names of this dataset use the format `PLANETSCOPE_MOSAIC_604-1043_B4_2022-10-01.tif`, which fits the default value of `parse_info`, namely `c("source", "collection", "tile", "band", "date")`, and the default `delim`, which is “\_”. Therefore, it is not necessary to set these values when creating a data cube from the local files. ``` # Define the directory where Planet files are stored data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Planet", package = "sitsdata") # Create a data cube from local files planet_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "PLANET", collection = "MOSAIC", data_dir = data_dir ) # Plot the first instance of the Planet data in natural colors [plot](https://rdrr.io/r/graphics/plot.default.html)(planet_cube, red = "B3", green = "B2", blue = "B1") ``` Figure 30: Planet image over an area in Colombia (© Planet \- reproduction based on fair use doctrine). Reading classified images as a local data cube ---------------------------------------------- It is also possible to create local cubes based on results that have been produced by classification or post\-classification algorithms. In this case, more parameters are required, and the parameter `parse_info` is specified differently, as follows: * `source`: Name of the original data provider. * `collection`: Name of the collection from which the data was extracted. * `data_dir`: Local directory for the classified images. * `band`: Band name associated with the type of result. Use: (a) `probs` for probability cubes produced by `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`; (b) `bayes`, for cubes produced by `[sits_smooth()](https://rdrr.io/pkg/sits/man/sits_smooth.html)`; (c) `entropy`, `least`, `ratio` or `margin`, according to the method selected when using `[sits_uncertainty()](https://rdrr.io/pkg/sits/man/sits_uncertainty.html)`; and (d) `class` for classified cubes. * `labels`: Labels associated with the names of the classes (not required for cubes produced by `[sits_uncertainty()](https://rdrr.io/pkg/sits/man/sits_uncertainty.html)`). * `version`: Version of the result (default \= `v1`). * `parse_info`: File name parsing information to allow `sits` to deduce the values of `tile`, `start_date`, `end_date`, `band`, and `version` from the file name. Unlike non\-classified image files, cubes produced by classification and post\-classification have both `start_date` and `end_date`. The following code creates a results cube based on the classification of deforestation in Brazil. This classified cube was obtained from a large data cube of Sentinel\-2 images covering the state of Rondonia, Brazil, comprising 40 tiles and 10 spectral bands, and covering the period from 2020\-06\-01 to 2021\-09\-11\. Samples of four classes were used to train a random forest classifier. Internally, classified images use integers to represent classes. Thus, labels have to be associated with the integers that represent each class name. 
``` # Create a cube based on a classified image data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Rondonia-20LLP", package = "sitsdata" ) # File name "SENTINEL-2_MSI_20LLP_2020-06-04_2021-08-26_class_v1.tif" Rondonia_class_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "AWS", collection = "SENTINEL-S2-L2A-COGS", bands = "class", labels = [c](https://rdrr.io/r/base/c.html)( "1" = "Burned_Area", "2" = "Cleared_Area", "3" = "Highly_Degraded", "4" = "Forest" ), data_dir = data_dir, parse_info = [c](https://rdrr.io/r/base/c.html)( "satellite", "sensor", "tile", "start_date", "end_date", "band", "version" ) ) # Plot the classified cube [plot](https://rdrr.io/r/graphics/plot.default.html)(Rondonia_class_cube) ``` Figure 31: Classified data cube for the year 2020/2021 in Rondonia, Brazil (© EU Copernicus Sentinel Programme; source: authors). Regularizing data cubes ----------------------- ARD collections available in AWS, MPC, USGS, and DEAFRICA are not regular in space and time. Bands may have different resolutions, images may not cover the entire tile, and time intervals are irregular. For this reason, data from these collections need to be converted to regular data cubes by calling `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)`, which uses the *gdalcubes* package [\[6]](references.html#ref-Appel2019). After obtaining a regular data cube, users can perform data analysis and classification operations, as shown in the following chapters. ### Regularizing Sentinel\-2 images In the following example, the user has created an irregular data cube from the Sentinel\-2 collection available in Microsoft’s Planetary Computer (MPC) for tiles `20LKP` and `20LLP` in the state of Rondonia, Brazil. We first build an irregular data cube using `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)`. ``` # Creating an irregular data cube from MPC s2_cube_rondonia <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "MPC", collection = "SENTINEL-2-L2A", tiles = [c](https://rdrr.io/r/base/c.html)("20LKP", "20LLP"), bands = [c](https://rdrr.io/r/base/c.html)("B02", "B8A", "B11", "CLOUD"), start_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2018-06-30"), end_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2018-08-31") ) # Show the different timelines of the cube tiles [sits_timeline](https://rdrr.io/pkg/sits/man/sits_timeline.html)(s2_cube_rondonia) ``` ``` #> $`20LKP` #> [1] "2018-07-03" "2018-07-08" "2018-07-13" "2018-07-18" "2018-07-23" #> [6] "2018-07-28" "2018-08-02" "2018-08-07" "2018-08-12" "2018-08-17" #> [11] "2018-08-22" "2018-08-27" #> #> $`20LLP` #> [1] "2018-06-30" "2018-07-03" "2018-07-05" "2018-07-08" "2018-07-10" #> [6] "2018-07-13" "2018-07-15" "2018-07-18" "2018-07-20" "2018-07-23" #> [11] "2018-07-25" "2018-07-28" "2018-07-30" "2018-08-02" "2018-08-04" #> [16] "2018-08-07" "2018-08-09" "2018-08-12" "2018-08-14" "2018-08-17" #> [21] "2018-08-19" "2018-08-22" "2018-08-24" "2018-08-27" "2018-08-29" ``` ``` # plot the first image of the irregular cube s2_cube_rondonia |> dplyr::[filter](https://dplyr.tidyverse.org/reference/filter.html)(tile == "20LLP") |> [plot](https://rdrr.io/r/graphics/plot.default.html)(red = "B11", green = "B8A", blue = "B02", date = "2018-07-03") ``` Figure 32: Sentinel\-2 tile 20LLP for date 2018\-07\-03 (© EU Copernicus Sentinel Programme; source: authors). 
Because of the different acquisition orbits of the Sentinel-2A and Sentinel-2B satellites, the two tiles also have different timelines. Tile `20LKP` has 12 instances, while tile `20LLP` has 25 instances for the chosen period. The function `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)` builds a data cube with a regular timeline and a best estimate of a valid pixel for each interval. The `period` parameter sets the time interval between two images. Values of `period` use the ISO8601 time period specification, which defines time intervals as `P[n]Y[n]M[n]D`, where "Y" stands for years, "M" for months, and "D" for days. Thus, `P1M` stands for a one-month period and `P15D` for a fifteen-day period. When joining different images to get the best image for a period, `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)` uses an aggregation method that organizes the images for the chosen interval in order of increasing cloud cover and then selects the first cloud-free pixel. In the example, we use a coarse spatial resolution for the regular cube to speed up processing; in a real application, we suggest using a 10-meter spatial resolution for the cube.

```
# Regularize the cube to 16-day intervals
reg_cube_rondonia <- sits_regularize(
  cube = s2_cube_rondonia,
  output_dir = "./tempdir/chp4",
  res = 40,
  period = "P16D",
  multicores = 6
)
# Plot the first image of tile 20LLP of the regularized cube
# The pixels of the regular data cube cover the full MGRS tile
reg_cube_rondonia |>
  dplyr::filter(tile == "20LLP") |>
  plot(red = "B11", green = "B8A", blue = "B02")
```

Figure 33: Regularized image for Sentinel-2 tile 20LLP (© EU Copernicus Sentinel Programme; source: authors).

### Regularizing Sentinel-1 images

Because of their acquisition mode, SAR images are usually stored following their geometry of acquisition, which is inclined with respect to the Earth. This is the case for the GRD and RTC collections available in the Microsoft Planetary Computer (MPC). To allow easier use of Sentinel-1 data and to merge them with Sentinel-2 images, regularization in `sits` reprojects SAR data to the MGRS grid, as shown in the following example. The example uses the "SENTINEL-1-RTC" collection from MPC. Readers who do not have a subscription can replace "SENTINEL-1-RTC" with "SENTINEL-1-GRD" in the example.

```
# Create an RTC cube from the MPC collection for a region in Mato Grosso, Brazil
cube_s1_rtc <- sits_cube(
  source = "MPC",
  collection = "SENTINEL-1-RTC",
  bands = c("VV", "VH"),
  orbit = "descending",
  tiles = c("22LBL"),
  start_date = "2021-06-01",
  end_date = "2021-10-01"
)
plot(cube_s1_rtc, band = "VH", palette = "Greys", scale = 0.7)
```

Figure 34: Original Sentinel-1 image covering tile 22LBL (© EU Copernicus Sentinel Programme; source: Microsoft).

After creating an irregular data cube from the data available in MPC, we use `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)` to produce a SAR data cube that matches MGRS tile "22LBL".
For plotting the SAR image, we select a multidate plot for the "VH" band, where the first date is displayed in red, the second in green, and the third in blue, so as to show an RGB map where changes are visually enhanced.

```
# Create a directory to store the regularized files, if it does not exist
if (!dir.exists("./tempdir/chp4/sar")) {
  dir.create("./tempdir/chp4/sar")
}
# Create a regular RTC cube from the MPC collection for tile 22LBL
cube_s1_reg <- sits_regularize(
  cube = cube_s1_rtc,
  period = "P16D",
  res = 40,
  tiles = c("22LBL"),
  memsize = 12,
  multicores = 6,
  output_dir = "./tempdir/chp4/sar"
)
plot(cube_s1_reg,
  band = "VH", palette = "Greys", scale = 0.7,
  dates = c("2021-06-06", "2021-07-24", "2021-09-26")
)
```

Figure 35: Regular Sentinel-1 image covering tile 22LBL (© EU Copernicus Sentinel Programme; source: Microsoft).

### Merging Sentinel-1 and Sentinel-2 images

To combine Sentinel-1 and Sentinel-2 data, the first step is to produce regular data cubes for the same MGRS tiles with compatible time steps. The timelines do not have to be exactly the same, but they need to be close enough for matching to be acceptable, and they must have the same number of time steps. This example uses the regular Sentinel-1 cube for tile "22LBL" produced in the previous section. The next step is to produce a Sentinel-2 data cube for the same tile and regularize it. The code below defines an irregular Sentinel-2 data cube retrieved from the Planetary Computer.

```
# Create an irregular Sentinel-2 cube for tile 22LBL from MPC
cube_s2 <- sits_cube(
  source = "MPC",
  collection = "SENTINEL-2-L2A",
  bands = c("B02", "B8A", "B11", "CLOUD"),
  tiles = c("22LBL"),
  start_date = "2021-06-01",
  end_date = "2021-09-30"
)
plot(cube_s2, red = "B11", green = "B8A", blue = "B02", date = "2021-07-07")
```

Figure 36: Sentinel-2 image covering tile 22LBL (© EU Copernicus Sentinel Programme; source: authors).

The next step is to create a regular data cube for tile "22LBL".

```
# Create a directory to store the regularized Sentinel-2 files, if needed
if (!dir.exists("./tempdir/chp4/s2_opt")) {
  dir.create("./tempdir/chp4/s2_opt")
}
# Regularize the Sentinel-2 cube
cube_s2_reg <- sits_regularize(
  cube = cube_s2,
  period = "P16D",
  res = 40,
  tiles = c("22LBL"),
  memsize = 12,
  multicores = 6,
  output_dir = "./tempdir/chp4/s2_opt"
)
```

After creating the two regular cubes, we can merge them. Before this step, one should first compare their timelines to see if they match. Timelines of regular cubes are constrained by acquisition dates, which in the case of Sentinel-1 and Sentinel-2 are different. Attentive readers will have noticed that the start and end dates of the cubes selected from the Planetary Computer (see code above) are slightly different, because of the need to ensure both regular cubes have the same number of time steps. The timelines for both cubes are shown below.
```
# Timeline of the Sentinel-2 cube
sits_timeline(cube_s2_reg)
```

```
#> [1] "2021-06-02" "2021-06-18" "2021-07-04" "2021-07-20" "2021-08-05"
#> [6] "2021-08-21" "2021-09-06" "2021-09-22"
```

```
# Timeline of the Sentinel-1 cube
sits_timeline(cube_s1_reg)
```

```
#> [1] "2021-06-06" "2021-06-22" "2021-07-08" "2021-07-24" "2021-08-09"
#> [6] "2021-08-25" "2021-09-10" "2021-09-26"
```

Considering that the timelines are close enough for the cubes to be combined, we can use the `sits_merge` function to produce a combined cube. As an example, we show a plot with both radar and optical bands.

```
# Merge Sentinel-1 and Sentinel-2 cubes
cube_s1_s2 <- sits_merge(cube_s2_reg, cube_s1_reg)

# Plot an image with both SAR and optical bands
plot(cube_s1_s2, red = "B11", green = "B8A", blue = "VH")
```

Figure 37: Sentinel-2 and Sentinel-1 RGB composite for tile 22LBL (source: authors).

Combining multitemporal data cubes with digital elevation models
----------------------------------------------------------------

In many applications, especially in regions with large topographic, soil, or climatic variations, it is useful to merge multitemporal data cubes with base information such as digital elevation models (DEM). Merging multitemporal satellite images with DEMs offers several advantages that enhance the analysis and interpretation of geospatial data. Elevation data adds a further dimension to the two-dimensional satellite images, which helps to distinguish land use and land cover classes that are affected by altitude gradients. One example is the capacity to distinguish between low-altitude and high-altitude forests. In cases where topography changes significantly, DEM information can improve the accuracy of classification algorithms.

As an example of DEM integration in a data cube, we will consider an agricultural region of Chile located in a narrow area close to the Andes. There is a steep altitude gradient, so the cube benefits from the inclusion of the DEM.

```
s2_cube_19HBA <- sits_cube(
  source = "MPC",
  collection = "SENTINEL-2-L2A",
  tiles = "19HBA",
  bands = c("B04", "B8A", "B12", "CLOUD"),
  start_date = "2021-01-01",
  end_date = "2021-03-31"
)
plot(s2_cube_19HBA, red = "B12", green = "B8A", blue = "B04")
```

Figure 38: Sentinel-2 image covering tile 19HBA (source: authors).

Then, we produce a regular data cube to use for classification. In this example, we will use a reduced resolution (30 meters) to expedite processing. In practice, a resolution of 10 meters is recommended.

```
s2_cube_19HBA_reg <- sits_regularize(
  cube = s2_cube_19HBA,
  period = "P16D",
  res = 30,
  output_dir = "./tempdir/chp4/s2_19HBA"
)
```

The next step is to recover the DEM for the area. For this purpose, we will use the Copernicus Global DEM-30 and select the area covered by the tile. As explained in the MPC access section above, the Copernicus DEM tiles are stored in a 1\(^\circ\) by 1\(^\circ\) grid. For them to match an MGRS tile, they have to be regularized in a similar way as the Sentinel-1 images, as shown below. To select a DEM, no temporal information is required.
```
# Obtain the DEM cube for tile 19HBA
dem_cube_19HBA <- sits_cube(
  source = "MPC",
  collection = "COP-DEM-GLO-30",
  bands = "ELEVATION",
  tiles = "19HBA"
)
```

After obtaining the 1\(^\circ\) by 1\(^\circ\) data cube covering the selected tile, the next step is to regularize it. This is done using the `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)` function, which produces a DEM that matches exactly the chosen tile.

```
# Regularize the DEM cube so that it matches tile 19HBA
dem_cube_19HBA_reg <- sits_regularize(
  cube = dem_cube_19HBA,
  res = 30,
  bands = "ELEVATION",
  tiles = "19HBA",
  output_dir = "./tempdir/chp4/dem_19HBA"
)
# Plot the DEM reversing the palette
plot(dem_cube_19HBA_reg, band = "ELEVATION", palette = "Spectral", rev = TRUE)
```

Figure 39: Copernicus DEM-30 covering tile 19HBA (© DLR e.V. 2010-2014 and © Airbus Defence and Space GmbH 2014-2018, provided under COPERNICUS by the European Union and ESA; source: Microsoft and authors).

After obtaining regular data cubes from satellite images and from DEMs, there are two ways to combine them. One option is to take the DEM band as multitemporal information and duplicate it for every time step, so that the DEM becomes one additional time series. The alternative is to use the DEM as a base cube and take it as a single additional band. These options are discussed in what follows.

Merging multitemporal data cubes with DEM
-----------------------------------------

There are two ways to combine multitemporal data cubes with DEM data. The first method takes the DEM as base information, which is used in combination with the multispectral time series. For example, consider a data cube with 10 bands and 23 time steps, which has a 230-dimensional attribute space. Adding the DEM as a base cube adds one dimension to the attribute space. This combination is supported by the function `sits_add_base_cube`. In the resulting cube, the information on the image time series and that of the DEM are stored separately. The data cube metadata will now include a column called `base_info`.

```
merged_cube_base <- sits_add_base_cube(s2_cube_19HBA_reg, dem_cube_19HBA_reg)
merged_cube_base$base_info[[1]]
```

```
#> # A tibble: 1 × 11
#>   source collection     satellite sensor  tile    xmin   xmax   ymin  ymax crs
#>   <chr>  <chr>          <chr>     <chr>   <chr>  <dbl>  <dbl>  <dbl> <dbl> <chr>
#> 1 MPC    COP-DEM-GLO-30 TANDEM-X  X-band… 19HBA 199980 309780 5.99e6 6.1e6 EPSG…
#> # ℹ 1 more variable: file_info <list>
```

Although this combination is conceptually simple, it has drawbacks. Since the attribute space now mixes time series with fixed-time information, the only applicable classification method is random forest. Because of the way random forest works, not all attributes are used by every decision tree. During the training of each tree, at each node, a random subset of features is selected, and the best split is chosen based on this subset rather than on all features. Thus, there may be a significant number of decision trees that do not use the DEM attribute. As a result, the effect of the DEM information may be underestimated.

The alternative is to combine the image data cube and the DEM using `sits_merge`. In this case, the DEM becomes another band.
Although it may look peculiar to replicate the DEM many times to build an artificial time series, there are many advantages in doing so. All classification algorithms available in `sits` (including the deep learning ones) can be used to classify the resulting cube. For cases where the DEM information is particularly important, this organisation places the DEM data on a par with the other spectral bands. Users are encouraged to compare the results obtained by directly merging the DEM with the spectral bands with those of the method where the DEM is taken as a base cube.

```
merged_cube <- sits_merge(s2_cube_19HBA_reg, dem_cube_19HBA_reg)
merged_cube$file_info[[1]]
```

```
#> # A tibble: 24 × 13
#>    fid   band      date       nrows ncols  xres  yres   xmin   ymin   xmax  ymax
#>    <chr> <chr>     <date>     <dbl> <dbl> <dbl> <dbl>  <dbl>  <dbl>  <dbl> <dbl>
#>  1 1     B04       2021-01-03  3660  3660    30    30 199980 5.99e6 309780 6.1e6
#>  2 1     B12       2021-01-03  3660  3660    30    30 199980 5.99e6 309780 6.1e6
#>  3 1     B8A       2021-01-03  3660  3660    30    30 199980 5.99e6 309780 6.1e6
#>  4 1     ELEVATION 2021-01-03  3660  3660    30    30 199980 5.99e6 309780 6.1e6
#>  5 2     B04       2021-01-19  3660  3660    30    30 199980 5.99e6 309780 6.1e6
#>  6 2     B12       2021-01-19  3660  3660    30    30 199980 5.99e6 309780 6.1e6
#>  7 2     B8A       2021-01-19  3660  3660    30    30 199980 5.99e6 309780 6.1e6
#>  8 1     ELEVATION 2021-01-19  3660  3660    30    30 199980 5.99e6 309780 6.1e6
#>  9 3     B04       2021-02-04  3660  3660    30    30 199980 5.99e6 309780 6.1e6
#> 10 3     B12       2021-02-04  3660  3660    30    30 199980 5.99e6 309780 6.1e6
#> # ℹ 14 more rows
#> # ℹ 2 more variables: crs <chr>, path <chr>
```

Analysis-ready data (ARD)
-------------------------

Analysis Ready Data (CEOS-ARD) are satellite data that have been processed to meet the [ARD standards](https://ceos.org/ard/) defined by the Committee on Earth Observation Satellites (CEOS). ARD data simplify and accelerate the analysis of Earth observation data by providing consistent and high-quality data that are standardized across different sensors and platforms.

ARD image processing includes geometric corrections, radiometric corrections, and sometimes atmospheric corrections. Images are georeferenced, meaning they are accurately aligned with a coordinate system. Optical ARD images include cloud and shadow masking information. These masks indicate which pixels are affected by clouds or cloud shadows. For optical sensors, CEOS-ARD images have to be converted to surface reflectance values, which represent the fraction of light that is reflected by the surface. This makes the data more comparable across different times and locations.

For SAR images, the CEOS-ARD specification requires images to undergo Radiometric Terrain Correction (RTC) and to be provided as Gamma Nought (\(\gamma_0\)) backscatter values. This measure mitigates the variations arising from diverse observation geometries and is recommended for most land applications.

ARD images are available from various satellite platforms, including Landsat, Sentinel, and commercial satellites. This provides a wide range of spatial, spectral, and temporal resolutions to suit different applications. They are organised as collections of files, where each pixel contains a single value for each spectral band for a given date. These collections are available in cloud services such as the Brazil Data Cube, Digital Earth Africa, and Microsoft's Planetary Computer. In general, the timelines of the images of an ARD collection are different.
Images still contain cloudy or missing pixels, and bands for the images in the collection may have different resolutions. Figure [9](earth-observation-data-cubes.html#fig:ardt) shows an example of the Landsat ARD image collection.

Figure 9: ARD image collection (source: USGS. Reproduction based on fair use doctrine).

ARD image collections are organized in spatial partitions. Sentinel-2/2A images follow the Military Grid Reference System (MGRS) tiling system, which divides the world into 60 UTM zones of 6 degrees of longitude; each zone is split into latitude bands of 8 degrees. These blocks are split into tiles of 110 \(\times\) 110 km\(^2\) with a 10 km overlap. Figure [10](earth-observation-data-cubes.html#fig:mgrs) shows the MGRS tiling system for a part of the Northeastern coast of Brazil, contained in UTM zone 24, block M.

Figure 10: MGRS tiling system used by Sentinel-2 images (source: US Army. Reproduction based on fair use doctrine).

The Landsat-4/5/7/8/9 satellites use the Worldwide Reference System (WRS-2), which breaks the coverage of Landsat satellites into images identified by path and row (see Figure [11](earth-observation-data-cubes.html#fig:wrs)). The path is the descending orbit of the satellite; the WRS-2 system has 233 paths per orbit, and each path has 119 rows, where each row refers to a latitudinal center line of a frame of imagery. Images in WRS-2 are geometrically corrected to the UTM projection.

Figure 11: WRS-2 tiling system used by Landsat-5/7/8/9 images (source: INPE and ESRI. Reproduction based on fair use doctrine).

Image collections handled by sits
---------------------------------

In version 1.5.1, `sits` supports access to the following ARD image cloud providers:

* Amazon Web Services (AWS): Open data Sentinel-2/2A level 2A collections for the Earth's land surface.
* Brazil Data Cube (BDC): Open data collections of Sentinel-2/2A, Landsat-8, CBERS-4/4A, and MOD13Q1 products for Brazil. These collections are organized as regular data cubes.
* Copernicus Data Space Ecosystem (CDSE): Open data collections of Sentinel-1 RTC and Sentinel-2/2A images.
* Digital Earth Africa (DEAFRICA): Open data collections of Sentinel-1 RTC, Sentinel-2/2A, and Landsat-5/7/8/9 for Africa. Additional products available include ALOS_PALSAR mosaics, DEM_COP_30, NDVI_ANOMALY based on Landsat data, and monthly and daily rainfall data from CHIRPS.
* Digital Earth Australia (DEAUSTRALIA): Open data ARD collections of Sentinel-2A/2B and Landsat-5/7/8/9 images; yearly geomedian of Landsat 5/7/8 images; yearly fractional land cover from 1986 to 2024.
* Harmonized Landsat-Sentinel (HLS): HLS, provided by NASA, is an open data collection that processes Landsat 8 and Sentinel-2 imagery to a common standard.
* Microsoft Planetary Computer (MPC): Open data collections of Sentinel-1 GRD, Sentinel-2/2A, and Landsat-4/5/7/8/9 images for the Earth's land areas. Also supported are the Copernicus DEM-30 and the MOD13Q1, MOD10A1, and MOD09A1 products. Sentinel-1 RTC collections are accessible but require payment.
* Swiss Data Cube (SDC): Open data collection of Sentinel-2/2A and Landsat-8 images for Switzerland.
* Terrascope: Cloud service with EO products, which includes the ESA World Cover map.
* USGS: Landsat-4/5/7/8/9 collections available in AWS, which require access payment.

In addition, `sits` supports the use of Planet monthly mosaics stored as local files.
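The provider names in the list above are the values accepted by the `source` parameter of the `sits` functions that query collections. A minimal usage sketch of the listing function referenced in the next paragraph is shown below; the choice of "MPC" is only an illustration, and the (long) console output is omitted.

```
# List all collections known to sits (output omitted)
sits_list_collections()

# Restrict the listing to a single provider, e.g. Microsoft Planetary Computer
sits_list_collections(source = "MPC")
```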
For a detailed description of the providers and collections supported by `sits`, including the available bands, please run `[sits_list_collections()](https://rdrr.io/pkg/sits/man/sits_list_collections.html)`.

Regular image data cubes
------------------------

Machine learning and deep learning (ML/DL) classification algorithms require the input data to be consistent. The dimensionality of the data used for training the model has to be the same as that of the data to be classified. There should be no gaps and no missing values. Thus, to use ML/DL algorithms for remote sensing data, ARD image collections should be converted to regular data cubes. Adapting a previous definition by Appel and Pebesma [\[6]](references.html#ref-Appel2019), we consider that a *regular data cube* has the following properties:

1. A regular data cube is a four-dimensional data structure with dimensions x (longitude or easting), y (latitude or northing), time, and bands. The spatial, temporal, and attribute dimensions are independent and not interchangeable.
2. The spatial dimensions refer to a coordinate system, such as the grids defined by UTM (Universal Transverse Mercator) or MGRS (Military Grid Reference System). A tile of the grid corresponds to a unique zone of the coordinate system. A data cube may span various tiles and UTM zones.
3. The temporal dimension is a set of continuous and equally-spaced intervals.
4. For every combination of dimensions, a cell has a single value. All cells of a data cube have the same spatiotemporal extent. The spatial resolution of each cell is the same in the X and Y dimensions. All temporal intervals are the same. Each cell contains a valid set of measures. Each pixel is associated with a unique coordinate in a zone of the coordinate system. For each position in space, the data cube should provide a set of valid time series. For each time interval, the regular data cube should provide a valid 2D image (see Figure [12](earth-observation-data-cubes.html#fig:dc)).

Figure 12: Conceptual view of data cubes (source: authors).

Currently, the only cloud service that provides regular data cubes by default is the Brazil Data Cube (BDC). ARD collections available in other cloud services are not regular in space and time. Bands may have different resolutions, images may not cover the entire tile, and time intervals may be irregular. For this reason, subsets of these collections need to be converted to regular data cubes before further processing. To produce data cubes for machine-learning data analysis, users should first create an irregular data cube from an ARD collection and then use `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)`, as described below.

Creating data cubes
-------------------

To obtain information on ARD image collections from cloud providers, `sits` uses the [SpatioTemporal Asset Catalogue](https://stacspec.org/en) (STAC) protocol, a specification of geospatial information which many large image collection providers have adopted. A 'spatiotemporal asset' is any file that represents information about the Earth captured in a specific space and time. To access STAC endpoints, `sits` uses the [rstac](http://github.com/brazil-data-cube/rstac) R package.

The function `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` supports access to image collections in cloud services; it has the following parameters:

* `source`: Name of the provider.
* `collection`: A collection available in the provider and supported by `sits`. To find out which collections are supported by `sits`, see `[sits_list_collections()](https://rdrr.io/pkg/sits/man/sits_list_collections.html)`.
* `platform`: Optional parameter specifying the platform, used in collections with multiple satellites.
* `tiles`: Set of tiles of the image collection reference system. Either `tiles` or `roi` should be specified.
* `roi`: A region of interest. Either: (a) a named vector (`lon_min`, `lon_max`, `lat_min`, `lat_max`) in WGS 84 coordinates; or (b) an `sf` object. All images intersecting the convex hull of the `roi` are selected.
* `bands`: Optional parameter with the bands to be used. If missing, all bands from the collection are used.
* `orbit`: Optional parameter required only for Sentinel-1 images (default = "descending").
* `start_date`: The initial date for the temporal interval containing the time series of images.
* `end_date`: The final date for the temporal interval containing the time series of images.

The result of `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` is a tibble with a description of the selected images required for further processing. It does not contain the actual data, but only pointers to the images. The attributes of individual image files can be accessed by listing the `file_info` column of the tibble.

Amazon Web Services
-------------------

Amazon Web Services (AWS) holds two kinds of collections: *open-data* and *requester-pays*. Open data collections can be accessed without cost. Requester-pays collections require payment from an AWS account. Currently, `sits` supports the collection `SENTINEL-2-L2A`, which is open data. The bands in 10 m resolution are B02, B03, B04, and B08. The 20 m bands are B05, B06, B07, B8A, B11, and B12. Bands B01 and B09 are available at 60 m resolution. A CLOUD band is also available.

The example below shows how to access one tile of the open data `SENTINEL-2-L2A` collection. The `tiles` parameter allows selecting the desired area according to the MGRS reference system.

```
# Create a data cube covering an area in Brazil
s2_23MMU_cube <- sits_cube(
  source = "AWS",
  collection = "SENTINEL-2-L2A",
  tiles = "23MMU",
  bands = c("B02", "B8A", "B11", "CLOUD"),
  start_date = "2018-07-12",
  end_date = "2019-07-28"
)
plot(s2_23MMU_cube,
  red = "B11", blue = "B02", green = "B8A",
  date = "2018-10-05"
)
```

Figure 13: Sentinel-2 image in an area of the Northeastern coast of Brazil (© EU Copernicus Sentinel Programme; source: AWS).

Microsoft Planetary Computer
----------------------------

`sits` supports access to three open data collections from Microsoft's Planetary Computer (MPC): `SENTINEL-1-GRD`, `SENTINEL-2-L2A`, and `LANDSAT-C2-L2`. It also allows access to `COP-DEM-GLO-30` (Copernicus Global DEM at 30 meter resolution) and `MOD13Q1-6.1` (version 6.1 of the MODIS MOD13Q1 product). Access to the non-open data collection `SENTINEL-1-RTC` is available for users who have a registration in MPC.

### SENTINEL-2/2A images in MPC

The SENTINEL-2/2A ARD images available in MPC have the same bands and resolutions as those available in AWS (see above). The example below shows how to access the `SENTINEL-2-L2A` collection.
```
# Create a data cube covering an area in the Brazilian Amazon
s2_20LKP_cube_MPC <- sits_cube(
  source = "MPC",
  collection = "SENTINEL-2-L2A",
  tiles = "20LKP",
  bands = c("B02", "B8A", "B11", "CLOUD"),
  start_date = "2019-07-01",
  end_date = "2019-07-28"
)
# Plot a color composite of one date of the cube
plot(s2_20LKP_cube_MPC,
  red = "B11", blue = "B02", green = "B8A",
  date = "2019-07-18"
)
```

Figure 14: Sentinel-2 image in an area of the state of Rondonia, Brazil (© EU Copernicus Sentinel Programme; source: Microsoft).

### LANDSAT-C2-L2 images in MPC

The `LANDSAT-C2-L2` collection provides access to data from the Landsat-4/5/7/8/9 satellites. Images from these satellites have been intercalibrated to ensure data consistency. For compatibility between the different Landsat sensors, the band names are BLUE, GREEN, RED, NIR08, SWIR16, and SWIR22. All images have 30 m resolution. For this collection, tile search is not supported; the `roi` parameter should be used. The example below shows how to retrieve data from a region of interest covering part of the Northeastern coast of Brazil, including the Lençóis Maranhenses.

```
# Read a ROI that covers part of the Northeastern coast of Brazil
roi <- c(
  lon_min = -43.5526, lat_min = -2.9644,
  lon_max = -42.5124, lat_max = -2.1671
)
# Select the cube
s2_L8_cube_MPC <- sits_cube(
  source = "MPC",
  collection = "LANDSAT-C2-L2",
  bands = c("BLUE", "RED", "GREEN", "NIR08", "SWIR16", "CLOUD"),
  roi = roi,
  start_date = "2019-06-01",
  end_date = "2019-09-01"
)
# Plot the tile that covers the Lencois Maranhenses
plot(s2_L8_cube_MPC,
  red = "RED", green = "GREEN", blue = "BLUE",
  date = "2019-06-30"
)
```

Figure 15: Landsat-8 image in an area in Northeast Brazil (sources: USGS and Microsoft).

### SENTINEL-1-GRD images in MPC

Sentinel-1 GRD products consist of focused SAR data that has been detected, multi-looked, and projected to ground range using the WGS84 Earth ellipsoid model. GRD images are subject to variations in the radar signal's intensity due to topographic effects, antenna pattern, range spreading loss, and other radiometric distortions. The most common types of distortion include foreshortening, layover, and shadowing.

Foreshortening occurs when the radar signal strikes a steep terrain slope facing the radar, causing the slope to appear compressed in the image. Features like mountains can appear much steeper than they are, and their true heights can be difficult to interpret. Layover happens when the radar signal reaches the top of a tall feature (like a mountain or building) before it reaches the base. As a result, the top of the feature is displaced towards the radar and appears in front of its base. This results in a reversal of the order of features along the radar line-of-sight, making image interpretation challenging. Shadowing occurs when a radar signal is obstructed by a tall object, casting a shadow on the area behind it that the radar cannot illuminate. The shadowed areas appear dark in SAR images, and no information is available from these regions, similar to optical shadows.

Access to Sentinel-1 GRD images can be done either by MGRS tiles (`tiles`) or by region of interest (`roi`).
We recommend using the MGRS tiling system for specifying the area of interest, since when these images are regularized, they will be reprojected into MGRS tiles. By default, only images in descending orbit are selected. The following example shows how to create a data cube of S1 GRD images over a region in Mato Grosso, Brazil, in an area of the Amazon forest that has been deforested. The resulting cube will not follow any specific projection and its coordinates will be stated as EPSG 4326 (latitude/longitude). Its geometry is derived from the SAR slant-range perspective; thus, it will appear inclined in relation to the Earth's longitude.

```
cube_s1_grd <- sits_cube(
  source = "MPC",
  collection = "SENTINEL-1-GRD",
  bands = c("VV"),
  orbit = "descending",
  tiles = c("21LUJ", "21LVJ"),
  start_date = "2021-08-01",
  end_date = "2021-09-30"
)
plot(cube_s1_grd, band = "VV", palette = "Greys")
```

Figure 16: Sentinel-1 image in an area in Mato Grosso, Brazil (© EU Copernicus Sentinel Programme; source: Microsoft).

As explained earlier in this chapter, in areas with large elevation differences, Sentinel-1 GRD images will have geometric distortions. For this reason, whenever possible, we recommend the use of RTC (radiometrically terrain corrected) images, as described in the next section.

### SENTINEL-1-RTC images in MPC

An RTC SAR image has undergone corrections for both geometric distortions and radiometric distortions caused by the terrain. The purpose of RTC processing is to enhance the interpretability and usability of SAR images for various applications by providing a more accurate representation of the Earth's surface. The radar backscatter values are normalized to account for these variations, ensuring that the image accurately represents the reflectivity of the surface features.

The terrain correction addresses geometric distortions caused by the side-looking geometry of SAR imaging, such as foreshortening, layover, and shadowing. It uses a Digital Elevation Model (DEM) to model the terrain and reproject the SAR image from the slant range (radar line-of-sight) to the ground range (true geographic coordinates). This process aligns the SAR image with the actual topography, providing a more accurate spatial representation.

In MPC, access to Sentinel-1-RTC images requires a Planetary Computer account. Users will receive a Shared Access Signature (SAS) Token from MPC that allows access to RTC data. Once a user receives a token from Microsoft, she needs to include the environment variable `MPC_TOKEN` in her `.Rprofile`. Therefore, the following example only works for users that have an SAS token.

```
cube_s1_rtc <- sits_cube(
  source = "MPC",
  collection = "SENTINEL-1-RTC",
  bands = c("VV", "VH"),
  orbit = "descending",
  tiles = "18NZM",
  start_date = "2021-08-01",
  end_date = "2021-09-30"
)
plot(cube_s1_rtc, band = "VV", palette = "Greys")
```

Figure 17: Sentinel-1-RTC image of an area in Colombia (© EU Copernicus Sentinel Programme; source: Microsoft).

The above image is from the central region of Colombia, a country with large variations in altitude due to the Andes mountains.
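To make the comparison suggested below concrete, a minimal sketch retrieves the GRD counterpart of the same MGRS tile and period, so both images can be plotted with the same settings. The object name `cube_s1_grd_18NZM` is illustrative; the tile and dates simply mirror the RTC request above.

```
# Sketch: retrieve the GRD counterpart of tile 18NZM for the same period
cube_s1_grd_18NZM <- sits_cube(
  source = "MPC",
  collection = "SENTINEL-1-GRD",
  bands = c("VV", "VH"),
  orbit = "descending",
  tiles = "18NZM",
  start_date = "2021-08-01",
  end_date = "2021-09-30"
)
# Plot the GRD image for visual comparison with the RTC image above
plot(cube_s1_grd_18NZM, band = "VV", palette = "Greys")
```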
Users are invited to compare this image with the one from the `SENTINEL-1-GRD` collection and observe the significant geometric distortions of the GRD image compared with the RTC one.

### Copernicus DEM 30 meter images in MPC

The Copernicus digital elevation model 30-meter global dataset (COP-DEM-GLO-30) is a high-resolution topographic data product provided by the European Space Agency (ESA) under the Copernicus Program. The vertical accuracy of the Copernicus DEM 30-meter dataset is typically within a few meters, but this can vary depending on the region and the original data sources. The primary data source for the Copernicus DEM is data from the TanDEM-X mission, designed by the German Aerospace Center (DLR). TanDEM-X provides high-resolution radar data through interferometric synthetic aperture radar (InSAR) techniques.

The Copernicus DEM 30 meter is organized in a 1\(^\circ\) by 1\(^\circ\) grid. In `sits`, access to COP-DEM-GLO-30 images can be done either by MGRS tiles (`tiles`) or by region of interest (`roi`). In both cases, the cube is retrieved based on the parts of the grid that intersect the region of interest or the chosen tiles.

```
cube_dem_30 <- sits_cube(
  source = "MPC",
  collection = "COP-DEM-GLO-30",
  tiles = "20LMR",
  band = "ELEVATION"
)
plot(cube_dem_30, band = "ELEVATION", palette = "RdYlGn", rev = TRUE)
```

Figure 18: Copernicus 30-meter DEM of an area in Brazil (© DLR e.V. 2010-2014 and © Airbus Defence and Space GmbH 2014-2018, provided under COPERNICUS by the European Union and ESA; source: Microsoft).
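Before moving on to other providers, a minimal sketch of the token setup mentioned in the SENTINEL-1-RTC subsection above. The variable name `MPC_TOKEN` comes from that subsection; the placeholder value is user-specific, and the line would normally be placed in the user's `.Rprofile`.

```
# Sketch: make the MPC SAS token available to sits (normally placed in .Rprofile)
Sys.setenv(MPC_TOKEN = "<your SAS token>")
```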
Brazil Data Cube
----------------

The [Brazil Data Cube](http://brazildatacube.org/en) (BDC) is built by Brazil's National Institute for Space Research (INPE) to provide regular EO data cubes from CBERS, LANDSAT, SENTINEL-2, and TERRA/MODIS satellites for environmental applications. The collections available in the BDC are: `LANDSAT-OLI-16D` (Landsat-8 OLI, 30 m resolution, 16-day intervals), `SENTINEL-2-16D` (Sentinel-2A and 2B MSI images at 10 m resolution, 16-day intervals), `CBERS-WFI-16D` (CBERS-4 WFI, 64 m resolution, 16-day intervals), `CBERS-WFI-8D` (CBERS-4 and 4A WFI images, 64 m resolution, 8-day intervals), and `MOD13Q1-6.1` (MODIS MOD13Q1 product, collection 6, 250 m resolution, 16-day intervals). For more details, use `sits_list_collections(source = "BDC")`.

The BDC uses three hierarchical grids based on the Albers Equal Area projection and SIRGAS 2000 datum. The large grid has tiles of 4224.4 \(\times\) 4224.4 km\(^2\) and is used for CBERS-4 AWFI collections at 64 m resolution; each CBERS-4 AWFI tile contains images of 6600 \(\times\) 6600 pixels. The medium grid is used for Landsat-8 OLI collections at 30 m resolution; tiles have an extension of 211.2 \(\times\) 211.2 km\(^2\), and each image has 7040 \(\times\) 7040 pixels. The small grid covers 105.6 \(\times\) 105.6 km\(^2\) and is used for Sentinel-2 MSI collections at 10 m resolution; each image has 10560 \(\times\) 10560 pixels. The data cubes in the BDC are regularly spaced in time and cloud-corrected [\[7]](references.html#ref-Ferreira2020a).

Figure 19: Hierarchical BDC tiling system showing (a) large BDC grid overlayed on Brazilian biomes, (b) one large tile, (c) four medium tiles, and (d) sixteen small tiles (Source: Ferreira et al. (2020). Reproduction under fair use doctrine).

To access the BDC, users must provide their credentials by setting an environment variable (a sketch is given after the example below). Obtaining a BDC access key is free. Users must register at the [BDC site](https://brazildatacube.dpi.inpe.br/portal/explore) to obtain a key. In the example below, the data cube is defined as one tile ("005004") of the `CBERS-WFI-16D` collection, which holds CBERS AWFI images at 16-day intervals.

```
# Define a tile from the CBERS-4/4A AWFI collection
cbers_tile <- sits_cube(
  source = "BDC",
  collection = "CBERS-WFI-16D",
  tiles = "005004",
  bands = c("B13", "B14", "B15", "B16", "CLOUD"),
  start_date = "2021-05-01",
  end_date = "2021-09-01"
)
# Plot one time instance
plot(cbers_tile,
  red = "B15", green = "B16", blue = "B13",
  date = "2021-05-09"
)
```

Figure 20: CBERS-4 WFI image in a Cerrado area in Brazil (© INPE/Brazil licensed under CC-BY-SA. source: Brazil Data Cube).
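As a minimal sketch of the credential setup mentioned above: `BDC_ACCESS_KEY` is the environment variable read by `sits` for BDC access, and the placeholder value must be replaced with the key obtained at the BDC site. The line would normally be placed in the user's `.Rprofile`.

```
# Sketch: set the BDC access key before creating BDC data cubes
# (replace the placeholder with the key obtained at the BDC site)
Sys.setenv(BDC_ACCESS_KEY = "<your BDC access key>")
```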
Copernicus Data Space Ecosystem (CDSE)
--------------------------------------

The Copernicus Data Space Ecosystem (CDSE) is a cloud service designed to support access to Earth observation data from the Copernicus Sentinel missions and other sources. It is designed and maintained by the European Space Agency (ESA) with support from the European Commission.

Configuring user access to CDSE involves several steps to ensure proper registration, access to data, and utilization of the platform's tools and services. Visit the Copernicus Data Space Ecosystem [registration page](https://dataspace.copernicus.eu). Complete the registration form with your details, including name, email address, organization, and sector. Confirm your email address through the verification link sent to your inbox.

After registration, you will need to obtain access credentials to the S3 service implemented by CDSE, which can be obtained using the [CDSE S3 credentials site](https://eodata-s3keysmanager.dataspace.copernicus.eu/panel/s3-credentials). The site will request you to add a new credential. You will receive two keys: an S3 access key and a secret access key. Take note of both and include the following lines in your `.Rprofile`.

```
Sys.setenv(
  AWS_ACCESS_KEY_ID = "your access key",
  AWS_SECRET_ACCESS_KEY = "your secret access key",
  AWS_S3_ENDPOINT = "eodata.dataspace.copernicus.eu",
  AWS_VIRTUAL_HOSTING = "FALSE"
)
```

After including these lines in your `.Rprofile`, restart `R` for the changes to take effect. By following these steps, users will have access to the Copernicus Data Space Ecosystem.

### SENTINEL-2/2A images in CDSE

CDSE hosts a global collection of Sentinel-2 Level-2A images, which are processed according to the [CEOS Analysis-Ready Data](https://ceos.org/ard/) specifications. One example is provided below, where we present a Sentinel-2 image of the Lena river delta in Siberia in summertime.

```
# Obtain a collection of images of a tile covering part of the Lena delta
lena_cube <- sits_cube(
  source = "CDSE",
  collection = "SENTINEL-2-L2A",
  bands = c("B02", "B04", "B8A", "B11", "B12"),
  start_date = "2023-05-01",
  end_date = "2023-09-01",
  tiles = c("52XDF")
)
# Plot an image from summertime
plot(lena_cube, date = "2023-07-06", red = "B12", green = "B8A", blue = "B04")
```

Figure 21: Sentinel-2 image of the Lena river delta in summertime (© EU Copernicus Sentinel Programme; source: CDSE).

### SENTINEL-1-RTC images in CDSE

An important product under development at CDSE is the radiometrically terrain corrected (RTC) Sentinel-1 imagery. In CDSE, this product is referred to as normalised radar backscatter (NRB). The S1-NRB product contains radiometrically terrain corrected (RTC) gamma nought backscatter (γ0) processed from Single Look Complex (SLC) Level-1A data. Each acquired polarization is stored in an individual binary image file.

All images are projected and gridded into the United States Military Grid Reference System (US-MGRS). The use of the US-MGRS tile grid ensures a very high level of interoperability with Sentinel-2 Level-2A ARD products, making it easy to also set up complex analysis systems that exploit both SAR and optical data. While speckle is inherent in SAR acquisitions, speckle filtering is not applied to the S1-NRB product in order to preserve spatial resolution.
Some applications (or processing methods) may require spatial or temporal filtering for stationary backscatter estimates. For more details, please refer to the [S1-NRB product website](https://sentinels.copernicus.eu/web/sentinel/sentinel-1-ard-normalised-radar-backscatter-nrb-product). As of July 2024, RTC images are only available for Africa. Global coverage is expected to grow as ESA expands the S1-RTC archive. The following example shows an S1-RTC image for the Rift Valley in Ethiopia.

```
# Retrieve an S1-RTC cube and plot it
s1_cube <- sits_cube(
  source = "CDSE",
  collection = "SENTINEL-1-RTC",
  bands = c("VV", "VH"),
  orbit = "descending",
  start_date = "2023-01-01",
  end_date = "2023-12-31",
  tiles = c("37NCH")
)
plot(s1_cube, band = "VV", date = c("2023-03-03"), palette = "Greys")
```

Figure 22: Sentinel-1-RTC image of the Rift Valley in Ethiopia (© EU Copernicus Sentinel Programme; source: CDSE).
Digital Earth Africa
--------------------

Digital Earth Africa (DEAFRICA) is a cloud service that provides open-access Earth observation data for the African continent. The ARD image collections in `sits` are:

* Sentinel-2 level 2A (`SENTINEL-2-L2A`), organised as MGRS tiles.
* Sentinel-1 radiometrically terrain corrected (`SENTINEL-1-RTC`).
* Landsat-5 (`LS5-SR`), Landsat-7 (`LS7-SR`), Landsat-8 (`LS8-SR`), and Landsat-9 (`LS9-SR`). All Landsat collections are ARD data and are organized as WRS-2 tiles.
* SAR L-band images produced by the PALSAR sensor onboard the Japanese ALOS satellite (`ALOS-PALSAR-MOSAIC`). Data is organized in a 5\(^\circ\) by 5\(^\circ\) grid with a spatial resolution of 25 meters. Images are available annually from 2007 to 2010 (ALOS/PALSAR) and from 2015 to 2022 (ALOS-2/PALSAR-2).
* Estimates of vegetation condition using NDVI anomalies (`NDVI-ANOMALY`) compared with the long-term baseline condition. The available measurements are "NDVI_MEAN" (mean NDVI for a month) and "NDVI-STD-ANOMALY" (standardised NDVI anomaly for a month).
* Rainfall information provided by Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) from the University of California in Santa Barbara. There are monthly (`RAINFALL-CHIRPS-MONTHLY`) and daily (`RAINFALL-CHIRPS-DAILY`) products over Africa.
* Digital elevation model provided by the EC Copernicus program (`COP-DEM-30`) in 30 meter resolution, organized in a 1\(^\circ\) by 1\(^\circ\) grid.
* Annual geomedian images for Landsat 8 and Landsat 9 (`GM-LS8-LS9-ANNUAL`, LANDSAT/OLI) in grid system WRS-2.
* Annual geomedian images for Sentinel-2 (`GM-S2-ANNUAL`) in the MGRS grid.
* Rolling three-month geomedian images for Sentinel-2 (`GM-S2-ROLLING`) in the MGRS grid.
* Semestral geomedian images for Sentinel-2 (`GM-S2-SEMIANNUAL`) in the MGRS grid.

Access to DEAFRICA Sentinel-2 images can be done using either the `tiles` or the `roi` parameter. In this example, the requested `roi` produces a cube that contains one MGRS tile ("35LPH") covering an area of Madagascar that includes the Betsiboka Estuary.

```
dea_s2_cube <- sits_cube(
  source = "DEAFRICA",
  collection = "SENTINEL-2-L2A",
  roi = c(
    lon_min = 46.1, lat_min = -16.1,
    lon_max = 46.6, lat_max = -15.6
  ),
  bands = c("B02", "B04", "B08"),
  start_date = "2019-04-01",
  end_date = "2019-05-30"
)
plot(dea_s2_cube, red = "B04", blue = "B02", green = "B08")
```

Figure 23: Sentinel-2 image in an area over Madagascar (© EU Copernicus Sentinel Programme; source: Digital Earth Africa).

The next example retrieves a set of ARD Landsat-9 data, covering the Serengeti plain in Tanzania.
```
dea_l9_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
  source = "DEAFRICA",
  collection = "LS9-SR",
  roi = [c](https://rdrr.io/r/base/c.html)(
    lon_min = 33.0, lat_min = -3.60,
    lon_max = 33.6, lat_max = -3.00
  ),
  bands = [c](https://rdrr.io/r/base/c.html)("B04", "B05", "B06"),
  start_date = "2023-05-01",
  end_date = "2023-08-30"
)
[plot](https://rdrr.io/r/graphics/plot.default.html)(dea_l9_cube,
  date = "2023-06-26",
  red = "B06", green = "B05", blue = "B04"
)
```

Figure 24: Landsat\-9 image in an area over the Serengeti in Tanzania (source: Digital Earth Africa).

The following example shows how to retrieve a subset of the ALOS\-PALSAR mosaic for year 2020, for an area near the border between Congo and Rwanda.

```
dea_alos_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
  source = "DEAFRICA",
  collection = "ALOS-PALSAR-MOSAIC",
  roi = [c](https://rdrr.io/r/base/c.html)(
    lon_min = 28.69, lat_min = -2.35,
    lon_max = 29.35, lat_max = -1.56
  ),
  bands = [c](https://rdrr.io/r/base/c.html)("HH", "HV"),
  start_date = "2020-01-01",
  end_date = "2020-12-31"
)
[plot](https://rdrr.io/r/graphics/plot.default.html)(dea_alos_cube, band = "HH")
```

Figure 25: ALOS\-PALSAR mosaic in the Congo forest area (© JAXA EORC; source: Digital Earth Africa).

Digital Earth Australia
-----------------------

Digital Earth Australia (DEAUSTRALIA) is an initiative by Geoscience Australia that uses satellite data to monitor and analyze environmental changes and resources across the Australian continent. It provides many datasets that offer detailed information on phenomena such as droughts, agriculture, water availability, floods, coastal erosion, and urban development. The DEAUSTRALIA image collections in `sits` are listed below; they can also be queried programmatically, as shown after the list.

* GA\_LS5T\_ARD\_3: ARD images from the Landsat\-5 satellite, with bands "BLUE", "GREEN", "RED", "NIR", "SWIR\-1", "SWIR\-2", and "CLOUD".
* GA\_LS7E\_ARD\_3: ARD images from the Landsat\-7 satellite, with the same bands as Landsat\-5\.
* GA\_LS8C\_ARD\_3: ARD images from the Landsat\-8 satellite, with bands "COASTAL\-AEROSOL", "BLUE", "GREEN", "RED", "NIR", "SWIR\-1", "SWIR\-2", "PANCHROMATIC", and "CLOUD".
* GA\_LS9C\_ARD\_3: ARD images from the Landsat\-9 satellite, with the same bands as Landsat\-8\.
* GA\_S2AM\_ARD\_3: ARD images from the Sentinel\-2A satellite, with bands "COASTAL\-AEROSOL", "BLUE", "GREEN", "RED", "RED\-EDGE\-1", "RED\-EDGE\-2", "RED\-EDGE\-3", "NIR\-1", "NIR\-2", "SWIR\-2", "SWIR\-3", and "CLOUD".
* GA\_S2BM\_ARD\_3: ARD images from the Sentinel\-2B satellite, with the same bands as Sentinel\-2A.
* GA\_LS5T\_GM\_CYEAR\_3: Landsat\-5 geomedian images, with bands "BLUE", "GREEN", "RED", "NIR", "SWIR1", "SWIR2", "EDEV", "SDEV", "BCDEV".
* GA\_LS7E\_GM\_CYEAR\_3: Landsat\-7 geomedian images, with the same bands as the Landsat\-5 geomedian.
* GA\_LS8CLS9C\_GM\_CYEAR\_3: Landsat\-8/9 geomedian images, with the same bands as the Landsat\-5 geomedian.
* GA\_LS\_FC\_3: Landsat fractional land cover, with bands "BS", "PV", "NPV".
* GA\_S2LS\_INTERTIDAL\_CYEAR\_3: Landsat/Sentinel intertidal data, with bands "ELEVATION", "ELEVATION\-UNCERTAINTY", "EXPOSURE", "TA\-HAT", "TA\-HOT", "TA\-LOT", "TA\-LAT", "TA\-OFFSET\-HIGH", "TA\-OFFSET\-LOW", "TA\-SPREAD", "QA\-NDWI\-CORR", and "QA\-NDWI\-FREQ".
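The full set of collections and their bands can also be retrieved programmatically. The sketch below assumes that `[sits_list_collections()](https://rdrr.io/pkg/sits/man/sits_list_collections.html)` accepts a `source` argument to filter by provider; the exact output format depends on the installed `sits` version.

```
# list the collections and bands known to sits for Digital Earth Australia
sits_list_collections(source = "DEAUSTRALIA")
```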
The following code retrieves a Sentinel\-2 image from the DEAUSTRALIA collection for MGRS tile 56KKV.

```
# retrieve a Sentinel-2A ARD cube for tile 56KKV
s2_56KKV <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
  source = "DEAUSTRALIA",
  collection = "GA_S2AM_ARD_3",
  tiles = "56KKV",
  bands = [c](https://rdrr.io/r/base/c.html)("BLUE", "GREEN", "RED", "NIR-2", "SWIR-2", "CLOUD"),
  start_date = "2023-09-01",
  end_date = "2023-11-30"
)
# plot a false-color composite of the resulting cube
[plot](https://rdrr.io/r/graphics/plot.default.html)(s2_56KKV, green = "NIR-2", blue = "BLUE", red = "SWIR-2", date = "2023-10-14")
```

Figure 26: Plot of Sentinel\-2 image obtained from the DEAUSTRALIA collection for date 2023\-10\-14 showing MGRS tile 56KKV (© EU Copernicus Sentinel Programme; source: Digital Earth Australia).

Harmonized Landsat\-Sentinel
----------------------------

Harmonized Landsat Sentinel (HLS) is a NASA initiative that processes and harmonizes Landsat 8 and Sentinel\-2 imagery to a common standard, including atmospheric correction, alignment, resampling, and corrections for BRDF (bidirectional reflectance distribution function). The purpose of the HLS project is to create a unified and consistent dataset that integrates the advantages of both systems, making it easier to work with the data.

The NASA Harmonized Landsat and Sentinel (HLS) service provides two image collections:

* Landsat 8 OLI Surface Reflectance HLS (HLSL30\) – The HLSL30 product includes atmospherically corrected surface reflectance from the Landsat 8 OLI sensors at 30 m resolution. The dataset includes 11 spectral bands.
* Sentinel\-2 MultiSpectral Instrument Surface Reflectance HLS (HLSS30\) – The HLSS30 product includes atmospherically corrected surface reflectance from the Sentinel\-2 MSI sensors at 30 m resolution. The dataset includes 12 spectral bands.

The HLS tiling system is identical to the one used for Sentinel\-2 (MGRS). Each tile spans 109\.8 km, with an overlap of 4,900 m on each side.

To access NASA HLS, users need to register at [NASA EarthData](https://urs.earthdata.nasa.gov/) and save their login and password in a \~/.netrc plain text file on Unix (or %HOME%\_netrc on Windows). The file must contain the following fields:

```
machine urs.earthdata.nasa.gov
login <username>
password <password>
```

We recommend using the `earthdatalogin` package to create the `.netrc` file with the `earthdatalogin::edl_netrc()` function. This function creates a properly configured `.netrc` file in the user's home directory and sets the environment variable `GDAL_HTTP_NETRC_FILE`, as shown in the example below.

```
[library](https://rdrr.io/r/base/library.html)([earthdatalogin](https://boettiger-lab.github.io/earthdatalogin/))
earthdatalogin::edl_netrc(
  username = "<your user name>",
  password = "<your password>"
)
```

Access to images in NASA HLS is done by region of interest or by tiles. The following example shows an HLS Sentinel\-2 image over the Brazilian coast.
```
# define a region of interest
roi <- [c](https://rdrr.io/r/base/c.html)(
  lon_min = -45.6422, lat_min = -24.0335,
  lon_max = -45.0840, lat_max = -23.6178
)
# create a cube from the HLSS30 collection
hls_cube_s2 <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
  source = "HLS",
  collection = "HLSS30",
  roi = roi,
  bands = [c](https://rdrr.io/r/base/c.html)("BLUE", "GREEN", "RED", "CLOUD"),
  start_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2020-06-01"),
  end_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2020-09-01"),
  progress = FALSE
)
# plot the cube
[plot](https://rdrr.io/r/graphics/plot.default.html)(hls_cube_s2, red = "RED", green = "GREEN", blue = "BLUE", date = "2020-06-20")
```

Figure 27: Plot of Sentinel\-2 image obtained from the NASA HLS collection for date 2020\-06\-20 showing the island of Ilhabela on the Brazilian coast (© EU Copernicus Sentinel Programme; source: NASA).

Images from the HLS Landsat and Sentinel\-2 collections are accessed separately and can be combined with `[sits_merge()](https://rdrr.io/pkg/sits/man/sits_merge.html)`. The script below creates an HLS Landsat cube over the same area and with the same bands as the Sentinel\-2 cube above. The two cubes are then merged.

```
# define a region of interest
roi <- [c](https://rdrr.io/r/base/c.html)(
  lon_min = -45.6422, lat_min = -24.0335,
  lon_max = -45.0840, lat_max = -23.6178
)
# create a cube from the HLSL30 collection
hls_cube_l8 <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
  source = "HLS",
  collection = "HLSL30",
  roi = roi,
  bands = [c](https://rdrr.io/r/base/c.html)("BLUE", "GREEN", "RED", "CLOUD"),
  start_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2020-06-01"),
  end_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2020-09-01"),
  progress = FALSE
)
# merge the Sentinel-2 and Landsat-8 cubes
hls_cube_merged <- [sits_merge](https://rdrr.io/pkg/sits/man/sits_merge.html)(hls_cube_s2, hls_cube_l8)
```

Comparing the timelines of the original cubes and the merged one, one can see the benefits of the merged collection for time series data analysis.

```
# Timeline of the Sentinel-2 cube
[sits_timeline](https://rdrr.io/pkg/sits/man/sits_timeline.html)(hls_cube_s2)
```

```
#> [1] "2020-06-15" "2020-06-20" "2020-06-25" "2020-06-30" "2020-07-05"
#> [6] "2020-07-10" "2020-07-20" "2020-07-25" "2020-08-04" "2020-08-09"
#> [11] "2020-08-14" "2020-08-19" "2020-08-24" "2020-08-29"
```

```
# Timeline of the Landsat-8 cube
[sits_timeline](https://rdrr.io/pkg/sits/man/sits_timeline.html)(hls_cube_l8)
```

```
#> [1] "2020-06-09" "2020-06-25" "2020-07-11" "2020-07-27" "2020-08-12"
#> [6] "2020-08-28"
```

```
# Timeline of the merged cube
[sits_timeline](https://rdrr.io/pkg/sits/man/sits_timeline.html)(hls_cube_merged)
```

```
#> [1] "2020-06-09" "2020-06-15" "2020-06-20" "2020-06-25" "2020-06-30"
#> [6] "2020-07-05" "2020-07-10" "2020-07-11" "2020-07-20" "2020-07-25"
#> [11] "2020-07-27" "2020-08-04" "2020-08-09" "2020-08-12" "2020-08-14"
#> [16] "2020-08-19" "2020-08-24" "2020-08-28" "2020-08-29"
```

```
# plot a harmonized Landsat image from the merged dataset
[plot](https://rdrr.io/r/graphics/plot.default.html)(hls_cube_merged,
  red = "RED", green = "GREEN", blue = "BLUE",
  date = "2020-07-11"
)
```

Figure 28: Plot of a harmonized Landsat\-8 image from the merged HLS dataset for date 2020\-07\-11 showing the island of Ilhabela on the Brazilian coast (source: NASA).
EO products from TERRASCOPE
---------------------------

Terrascope is an online platform for accessing open satellite data and derived products. This service, operated by VITO, offers a range of Earth observation data and processing services that are accessible free of charge. Currently, `sits` supports the World Cover 2021 maps, produced by VITO with support from the European Commission and ESA.

The following code shows how to access the World Cover 2021 map covering tile "22LBL". The first step is to use `[sits_mgrs_to_roi()](https://rdrr.io/pkg/sits/man/sits_mgrs_to_roi.html)` to get the region of interest expressed as a bounding box; this box is then entered as the `roi` parameter in the `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` function. Since the World Cover data is available as a 3\\(^\\circ\\) by 3\\(^\\circ\\) grid, it is necessary to use `[sits_cube_copy()](https://rdrr.io/pkg/sits/man/sits_cube_copy.html)` to extract the exact MGRS tile.

```
# get roi for an MGRS tile
bbox_22LBL <- [sits_mgrs_to_roi](https://rdrr.io/pkg/sits/man/sits_mgrs_to_roi.html)("22LBL")
# retrieve the world cover map for the chosen roi
world_cover_2021 <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
  source = "TERRASCOPE",
  collection = "WORLD-COVER-2021",
  roi = bbox_22LBL
)
# cut the 3 x 3 degree grid to match the MGRS tile 22LBL
world_cover_2021_22LBL <- [sits_cube_copy](https://rdrr.io/pkg/sits/man/sits_cube_copy.html)(
  cube = world_cover_2021,
  roi = bbox_22LBL,
  multicores = 6,
  output_dir = "./tempdir/chp4"
)
# plot the resulting map
[plot](https://rdrr.io/r/graphics/plot.default.html)(world_cover_2021_22LBL)
```

Figure 29: Plot of World Cover 2021 map covering MGRS tile 22LBL (© TerraScope).

Planet data as ARD local files
------------------------------

ARD images downloaded from cloud collections to a local computer are not associated with a STAC endpoint that describes them. They must be organized and named to allow `sits` to create a data cube from them. All local files have to be in the same directory and have the same spatial resolution and projection. Each file must contain a single image band for a single date. Each file name needs to include tile, date, and band information. Users must provide information about the original data source to allow `sits` to retrieve information about image attributes such as band names, missing values, etc. When working with local cubes, `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` needs the following parameters:

* `source`: Name of the original data provider; for a list of providers and collections, use `[sits_list_collections()](https://rdrr.io/pkg/sits/man/sits_list_collections.html)`.
* `collection`: Collection from where the data was extracted.
* `data_dir`: Local directory for images.
* `bands`: Optional parameter to describe the bands to be retrieved.
* `parse_info`: Information to parse the file names. File names need to contain information on tile, date, and band, separated by a delimiter (usually `"_"`).
* `delim`: Separator character between descriptors in the file name (default is `"_"`).

To be able to read local files, they must belong to a collection registered by `sits`. All collections known to `sits` by default are shown using `[sits_list_collections()](https://rdrr.io/pkg/sits/man/sits_list_collections.html)`. To register a new collection, please see the information provided in the Technical Annex. The example below shows how to define a data cube using Planet images from the `sitsdata` package.
The dataset contains monthly PlanetScope mosaics for tile "604\-1043" from August to October 2022, with bands B1, B2, B3, and B4\. In general, `sits` users need to match the local file names to the values provided by the `parse_info` parameter. The file names of this dataset follow the format `PLANETSCOPE_MOSAIC_604-1043_B4_2022-10-01.tif`, which fits the default value of `parse_info` (`c("source", "collection", "tile", "band", "date")`) and the default `delim` ("\_"). Therefore, it is not necessary to set these parameters when creating a data cube from these local files.

```
# Define the directory where Planet files are stored
data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Planet", package = "sitsdata")

# Create a data cube from local files
planet_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
  source = "PLANET",
  collection = "MOSAIC",
  data_dir = data_dir
)

# Plot the first instance of the Planet data in natural colors
[plot](https://rdrr.io/r/graphics/plot.default.html)(planet_cube, red = "B3", green = "B2", blue = "B1")
```

Figure 30: Planet image over an area in Colombia (© Planet \- reproduction based on fair use doctrine).

Reading classified images as local data cube
--------------------------------------------

It is also possible to create local cubes based on results that have been produced by classification or post\-classification algorithms. In this case, more parameters are required, and the parameter `parse_info` is specified differently, as follows:

* `source`: Name of the original data provider.
* `collection`: Name of the collection from where the data was extracted.
* `data_dir`: Local directory for the classified images.
* `band`: Band name associated with the type of result. Use: (a) `probs` for probability cubes produced by `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`; (b) `bayes`, for cubes produced by `[sits_smooth()](https://rdrr.io/pkg/sits/man/sits_smooth.html)`; (c) `entropy`, `least`, `ratio` or `margin`, according to the method selected when using `[sits_uncertainty()](https://rdrr.io/pkg/sits/man/sits_uncertainty.html)`; and (d) `class` for classified cubes.
* `labels`: Labels associated with the names of the classes (not required for cubes produced by `[sits_uncertainty()](https://rdrr.io/pkg/sits/man/sits_uncertainty.html)`).
* `version`: Version of the result (default \= `v1`).
* `parse_info`: File name parsing information to allow `sits` to deduce the values of `tile`, `start_date`, `end_date`, `band`, and `version` from the file name. Unlike non\-classified image files, cubes produced by classification and post\-classification have both `start_date` and `end_date`.

The following code creates a results cube based on the classification of deforestation in Brazil. This classified cube was obtained by classifying a large data cube of Sentinel\-2 images covering the state of Rondonia, Brazil, comprising 40 tiles and 10 spectral bands over the period from 2020\-06\-01 to 2021\-09\-11\. A random forest classifier was trained on samples of four classes. Internally, classified images use integers to represent classes. Thus, labels have to be associated with the integers that represent each class name.
``` # Create a cube based on a classified image data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Rondonia-20LLP", package = "sitsdata" ) # File name "SENTINEL-2_MSI_20LLP_2020-06-04_2021-08-26_class_v1.tif" Rondonia_class_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "AWS", collection = "SENTINEL-S2-L2A-COGS", bands = "class", labels = [c](https://rdrr.io/r/base/c.html)( "1" = "Burned_Area", "2" = "Cleared_Area", "3" = "Highly_Degraded", "4" = "Forest" ), data_dir = data_dir, parse_info = [c](https://rdrr.io/r/base/c.html)( "satellite", "sensor", "tile", "start_date", "end_date", "band", "version" ) ) # Plot the classified cube [plot](https://rdrr.io/r/graphics/plot.default.html)(Rondonia_class_cube) ``` Figure 31: Classified data cube for the year 2020/2021 in Rondonia, Brazil (© EU Copernicus Sentinel Programme; source: authors). Regularizing data cubes ----------------------- ARD collections available in AWS, MPC, USGS, and DEAFRICA are not regular in space and time. Bands may have different resolutions, images may not cover the entire tile, and time intervals are irregular. For this reason, data from these collections need to be converted to regular data cubes by calling `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)`, which uses the *gdalcubes* package [\[6]](references.html#ref-Appel2019). After obtaining a regular data cube, users can perform data analysis and classification operations, as shown in the following chapters. ### Regularizing Sentinel\-2 images In the following example, the user has created an irregular data cube from the Sentinel\-2 collection available in Microsoft’s Planetary Computer (MPC) for tiles `20LKP` and `20LLP` in the state of Rondonia, Brazil. We first build an irregular data cube using `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)`. ``` # Creating an irregular data cube from MPC s2_cube_rondonia <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "MPC", collection = "SENTINEL-2-L2A", tiles = [c](https://rdrr.io/r/base/c.html)("20LKP", "20LLP"), bands = [c](https://rdrr.io/r/base/c.html)("B02", "B8A", "B11", "CLOUD"), start_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2018-06-30"), end_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2018-08-31") ) # Show the different timelines of the cube tiles [sits_timeline](https://rdrr.io/pkg/sits/man/sits_timeline.html)(s2_cube_rondonia) ``` ``` #> $`20LKP` #> [1] "2018-07-03" "2018-07-08" "2018-07-13" "2018-07-18" "2018-07-23" #> [6] "2018-07-28" "2018-08-02" "2018-08-07" "2018-08-12" "2018-08-17" #> [11] "2018-08-22" "2018-08-27" #> #> $`20LLP` #> [1] "2018-06-30" "2018-07-03" "2018-07-05" "2018-07-08" "2018-07-10" #> [6] "2018-07-13" "2018-07-15" "2018-07-18" "2018-07-20" "2018-07-23" #> [11] "2018-07-25" "2018-07-28" "2018-07-30" "2018-08-02" "2018-08-04" #> [16] "2018-08-07" "2018-08-09" "2018-08-12" "2018-08-14" "2018-08-17" #> [21] "2018-08-19" "2018-08-22" "2018-08-24" "2018-08-27" "2018-08-29" ``` ``` # plot the first image of the irregular cube s2_cube_rondonia |> dplyr::[filter](https://dplyr.tidyverse.org/reference/filter.html)(tile == "20LLP") |> [plot](https://rdrr.io/r/graphics/plot.default.html)(red = "B11", green = "B8A", blue = "B02", date = "2018-07-03") ``` Figure 32: Sentinel\-2 tile 20LLP for date 2018\-07\-03 (© EU Copernicus Sentinel Programme; source: authors). 
Because of the different acquisition orbits of the Sentinel\-2A and Sentinel\-2B satellites, the two tiles also have different timelines. Tile `20LKP` has 12 instances, while tile `20LLP` has 25 instances for the chosen period. The function `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)` builds a data cube with a regular timeline and a best estimate of a valid pixel for each interval. The `period` parameter sets the time interval between two images. Values of `period` use the ISO8601 time period specification, which defines time intervals as `P[n]Y[n]M[n]D`, where "Y" stands for years, "M" for months, and "D" for days. Thus, `P1M` stands for a one\-month period and `P15D` for a fifteen\-day period. When joining different images to get the best image for a period, `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)` uses an aggregation method that organizes the images for the chosen interval in order of increasing cloud cover and then selects the first cloud\-free pixel. In the example, we use a coarse spatial resolution for the regular cube to speed up processing; in actual applications, we suggest using a 10\-meter spatial resolution.

```
# Regularize the cube to 16-day intervals
reg_cube_rondonia <- [sits_regularize](https://rdrr.io/pkg/sits/man/sits_regularize.html)(
  cube = s2_cube_rondonia,
  output_dir = "./tempdir/chp4",
  res = 40,
  period = "P16D",
  multicores = 6
)

# Plot the first image of the tile 20LLP of the regularized cube
# The pixels of the regular data cube cover the full MGRS tile
reg_cube_rondonia |>
  dplyr::[filter](https://dplyr.tidyverse.org/reference/filter.html)(tile == "20LLP") |>
  [plot](https://rdrr.io/r/graphics/plot.default.html)(red = "B11", green = "B8A", blue = "B02")
```

Figure 33: Regularized image for Sentinel\-2 tile 20LLP (© EU Copernicus Sentinel Programme; source: authors).

### Regularizing Sentinel\-1 images

Because of their acquisition mode, SAR images are usually stored following their geometry of acquisition, which is inclined with respect to the Earth. This is the case of the GRD and RTC collections available in Microsoft Planetary Computer (MPC). To allow easier use of Sentinel\-1 data and to merge them with Sentinel\-2 images, regularization in `sits` reprojects SAR data to the MGRS grid, as shown in the following example. The example uses the "SENTINEL\-1\-RTC" collection from MPC. Readers who do not have a subscription can replace "SENTINEL\-1\-RTC" with "SENTINEL\-1\-GRD" in the example.

```
# create an RTC cube from the MPC collection for a region in Mato Grosso, Brazil
cube_s1_rtc <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
  source = "MPC",
  collection = "SENTINEL-1-RTC",
  bands = [c](https://rdrr.io/r/base/c.html)("VV", "VH"),
  orbit = "descending",
  tiles = [c](https://rdrr.io/r/base/c.html)("22LBL"),
  start_date = "2021-06-01",
  end_date = "2021-10-01"
)
[plot](https://rdrr.io/r/graphics/plot.default.html)(cube_s1_rtc, band = "VH", palette = "Greys", scale = 0.7)
```

Figure 34: Original Sentinel\-1 image covering tile 22LBL (© EU Copernicus Sentinel Programme; source: Microsoft).

After creating an irregular data cube from the data available in MPC, we use `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)` to produce a SAR data cube that matches MGRS tile "22LBL".
For plotting the SAR image, we select a multidate plot for the "VH" band, where the first date is displayed in red, the second in green, and the third in blue, so as to show an RGB map where changes are visually enhanced.

```
# create a directory to store the regularized files, if it does not exist
if (![dir.exists](https://rdrr.io/r/base/files2.html)("./tempdir/chp4/sar")) {
  [dir.create](https://rdrr.io/r/base/files2.html)("./tempdir/chp4/sar", recursive = TRUE)
}
# create a regular RTC cube from the MPC collection for tile 22LBL
cube_s1_reg <- [sits_regularize](https://rdrr.io/pkg/sits/man/sits_regularize.html)(
  cube = cube_s1_rtc,
  period = "P16D",
  res = 40,
  tiles = [c](https://rdrr.io/r/base/c.html)("22LBL"),
  memsize = 12,
  multicores = 6,
  output_dir = "./tempdir/chp4/sar"
)
[plot](https://rdrr.io/r/graphics/plot.default.html)(cube_s1_reg,
  band = "VH", palette = "Greys", scale = 0.7,
  dates = [c](https://rdrr.io/r/base/c.html)("2021-06-06", "2021-07-24", "2021-09-26")
)
```

Figure 35: Regular Sentinel\-1 image covering tile 22LBL (© EU Copernicus Sentinel Programme; source: Microsoft).

### Merging Sentinel\-1 and Sentinel\-2 images

To combine Sentinel\-1 and Sentinel\-2 data, the first step is to produce regular data cubes for the same MGRS tiles with compatible time steps. The timelines do not have to be exactly the same, but they need to be close enough so that matching is acceptable, and they must have the same number of time steps. This example uses the regular Sentinel\-1 cube for tile "22LBL" produced in the previous section. The next step is to produce a regular Sentinel\-2 data cube for the same tile. The code below defines an irregular data cube retrieved from the Planetary Computer.

```
# create an irregular Sentinel-2 cube for tile 22LBL
cube_s2 <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
  source = "MPC",
  collection = "SENTINEL-2-L2A",
  bands = [c](https://rdrr.io/r/base/c.html)("B02", "B8A", "B11", "CLOUD"),
  tiles = [c](https://rdrr.io/r/base/c.html)("22LBL"),
  start_date = "2021-06-01",
  end_date = "2021-09-30"
)
[plot](https://rdrr.io/r/graphics/plot.default.html)(cube_s2, red = "B11", green = "B8A", blue = "B02", date = "2021-07-07")
```

Figure 36: Sentinel\-2 image covering tile 22LBL (© EU Copernicus Sentinel Programme; source: authors).

The next step is to create a regular data cube for tile "22LBL".

```
# create a directory to store the regularized files, if it does not exist
if (![dir.exists](https://rdrr.io/r/base/files2.html)("./tempdir/chp4/s2_opt")) {
  [dir.create](https://rdrr.io/r/base/files2.html)("./tempdir/chp4/s2_opt", recursive = TRUE)
}
# regularize the Sentinel-2 cube
cube_s2_reg <- [sits_regularize](https://rdrr.io/pkg/sits/man/sits_regularize.html)(
  cube = cube_s2,
  period = "P16D",
  res = 40,
  tiles = [c](https://rdrr.io/r/base/c.html)("22LBL"),
  memsize = 12,
  multicores = 6,
  output_dir = "./tempdir/chp4/s2_opt"
)
```

After creating the two regular cubes, we can merge them. Before this step, one should first compare their timelines to see if they match. Timelines of regular cubes are constrained by acquisition dates, which in the case of Sentinel\-1 and Sentinel\-2 are different. Attentive readers will have noticed that the start and end dates of the cubes selected from the Planetary Computer (see the code above) are slightly different, because of the need to ensure that both regular cubes have the same number of time steps. The timelines for both cubes are shown below.
```
# timeline of the Sentinel-2 cube
[sits_timeline](https://rdrr.io/pkg/sits/man/sits_timeline.html)(cube_s2_reg)
```

```
#> [1] "2021-06-02" "2021-06-18" "2021-07-04" "2021-07-20" "2021-08-05"
#> [6] "2021-08-21" "2021-09-06" "2021-09-22"
```

```
# timeline of the Sentinel-1 cube
[sits_timeline](https://rdrr.io/pkg/sits/man/sits_timeline.html)(cube_s1_reg)
```

```
#> [1] "2021-06-06" "2021-06-22" "2021-07-08" "2021-07-24" "2021-08-09"
#> [6] "2021-08-25" "2021-09-10" "2021-09-26"
```

Considering that the timelines are close enough so that the cubes can be combined, we use the `sits_merge` function to produce a combined cube. As an example, we show a plot with both radar and optical bands.

```
# merge Sentinel-1 and Sentinel-2 cubes
cube_s1_s2 <- [sits_merge](https://rdrr.io/pkg/sits/man/sits_merge.html)(cube_s2_reg, cube_s1_reg)
# plot an image with both SAR and optical bands
[plot](https://rdrr.io/r/graphics/plot.default.html)(cube_s1_s2, red = "B11", green = "B8A", blue = "VH")
```

Figure 37: Sentinel\-2 and Sentinel\-1 RGB composite for tile 22LBL (source: authors).
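As a quick check on the result (a sketch only: `sits_bands()` lists the band names of a cube, and the exact set depends on the bands requested above), one can confirm that the combined cube now carries both the optical and the SAR bands:

```
# list the bands of the merged cube; the result should include
# the optical bands (B02, B8A, B11) and the SAR bands (VV, VH)
sits_bands(cube_s1_s2)
```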
Combining multitemporal data cubes with digital elevation models
----------------------------------------------------------------

In many applications, especially in regions with large topographical, soil or climatic variations, it is useful to merge multitemporal data cubes with base information such as digital elevation models (DEM). Merging multitemporal satellite images with digital elevation models (DEMs) offers several advantages that enhance the analysis and interpretation of geospatial data. Elevation data provides an additional dimension to the two\-dimensional satellite images, which helps to distinguish land use and land cover classes that are affected by altitude gradients. One example is the capacity to distinguish between low\-altitude and high\-altitude forests. In cases where topography changes significantly, DEM information can improve the accuracy of classification algorithms.

As an example of DEM integration in a data cube, we will consider an agricultural region of Chile located in a narrow area close to the Andes. There is a steep altitude gradient, so the cube benefits from the inclusion of the DEM.

```
s2_cube_19HBA <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
  source = "MPC",
  collection = "SENTINEL-2-L2A",
  tiles = "19HBA",
  bands = [c](https://rdrr.io/r/base/c.html)("B04", "B8A", "B12", "CLOUD"),
  start_date = "2021-01-01",
  end_date = "2021-03-31"
)
[plot](https://rdrr.io/r/graphics/plot.default.html)(s2_cube_19HBA, red = "B12", green = "B8A", blue = "B04")
```

Figure 38: Sentinel\-2 image covering tile 19HBA (source: authors).

Then, we produce a regular data cube to use for classification. In this example, we will use a reduced resolution (30 meters) to expedite processing. In practice, a resolution of 10 meters is recommended.

```
s2_cube_19HBA_reg <- [sits_regularize](https://rdrr.io/pkg/sits/man/sits_regularize.html)(
  cube = s2_cube_19HBA,
  period = "P16D",
  res = 30,
  output_dir = "./tempdir/chp4/s2_19HBA"
)
```

The next step is to recover the DEM for the area. For this purpose, we will use the Copernicus Global DEM\-30 and select the area covered by the tile. As explained in the MPC access section above, the Copernicus DEM tiles are stored as a 1\\(^\\circ\\) by 1\\(^\\circ\\) grid. For them to match an MGRS tile, they have to be regularized in a similar way to the Sentinel\-1 images, as shown below. To select a DEM, no temporal information is required.

```
# obtain the DEM cube for tile 19HBA
dem_cube_19HBA <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
  source = "MPC",
  collection = "COP-DEM-GLO-30",
  bands = "ELEVATION",
  tiles = "19HBA"
)
```

After obtaining the 1\\(^\\circ\\) by 1\\(^\\circ\\) data cube covering the selected tile, the next step is to regularize it. This is done using the `[sits_regularize()](https://rdrr.io/pkg/sits/man/sits_regularize.html)` function, which will produce a DEM that matches exactly the chosen tile.
```
# regularize the DEM cube to match MGRS tile 19HBA
dem_cube_19HBA_reg <- [sits_regularize](https://rdrr.io/pkg/sits/man/sits_regularize.html)(
  cube = dem_cube_19HBA,
  res = 30,
  bands = "ELEVATION",
  tiles = "19HBA",
  output_dir = "./tempdir/chp4/dem_19HBA"
)
# plot the DEM reversing the palette
[plot](https://rdrr.io/r/graphics/plot.default.html)(dem_cube_19HBA_reg, band = "ELEVATION", palette = "Spectral", rev = TRUE)
```

Figure 39: Copernicus DEM\-30 covering tile 19HBA (© DLR e.V. 2010\-2014 and © Airbus Defence and Space GmbH 2014\-2018 provided under COPERNICUS by the European Union and ESA; source: Microsoft and authors).

After obtaining regular data cubes from satellite images and from DEMs, there are two ways to combine them. One option is to take the DEM band as multitemporal information, duplicating it for every time step so that the DEM becomes one additional time series. The alternative is to use the DEM as a base cube and take it as a single additional band. These options are discussed in what follows.

Merging multitemporal data cubes with DEM
-----------------------------------------

There are two ways to combine multitemporal data cubes with DEM data. The first method takes the DEM as base information, used in combination with the multispectral time series. For example, consider a data cube with 10 bands and 23 time steps, which has a 230\-dimensional attribute space. Adding the DEM as a base cube adds one dimension to the attribute space. This combination is supported by the function `sits_add_base_cube`. In the resulting cube, the information on the image time series and that of the DEM are stored separately. The data cube metadata will now include a column called `base_info`.

```
merged_cube_base <- [sits_add_base_cube](https://rdrr.io/pkg/sits/man/sits_add_base_cube.html)(s2_cube_19HBA_reg, dem_cube_19HBA_reg)
merged_cube_base$base_info[[1]]
```

```
#> # A tibble: 1 × 11
#>   source collection     satellite sensor  tile    xmin   xmax   ymin  ymax crs  
#>   <chr>  <chr>          <chr>     <chr>   <chr>  <dbl>  <dbl>  <dbl> <dbl> <chr>
#> 1 MPC    COP-DEM-GLO-30 TANDEM-X  X-band… 19HBA 199980 309780 5.99e6 6.1e6 EPSG…
#> # ℹ 1 more variable: file_info <list>
```

Although this combination is conceptually simple, it has drawbacks. Since the attribute space now mixes time series with fixed\-time information, the only applicable classification method is random forests. Because of the way a random forest works, not all attributes are used by every decision tree. During the training of each tree, at each node, a random subset of features is selected, and the best split is chosen based on this subset rather than all features. Thus, there may be a significant number of decision trees that do not use the DEM attribute. As a result, the effect of the DEM information may be underestimated.

The alternative is to combine the image data cube and the DEM using `sits_merge`. In this case, the DEM becomes another band. Although it may look peculiar to replicate the DEM many times to build an artificial time series, there are many advantages in doing so. All classification algorithms available in `sits` (including the deep learning ones) can be used to classify the resulting cube. For cases where the DEM information is particularly important, this organisation places the DEM data on a par with the other spectral bands. Users are encouraged to compare the results obtained by direct merging of the DEM with spectral bands with the method where the DEM is taken as a base cube.
``` merged_cube <- [sits_merge](https://rdrr.io/pkg/sits/man/sits_merge.html)(s2_cube_19HBA_reg, dem_cube_19HBA_reg) merged_cube$file_info[[1]] ``` ``` #> # A tibble: 24 × 13 #> fid band date nrows ncols xres yres xmin ymin xmax ymax #> <chr> <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> #> 1 1 B04 2021-01-03 3660 3660 30 30 199980 5.99e6 309780 6.1e6 #> 2 1 B12 2021-01-03 3660 3660 30 30 199980 5.99e6 309780 6.1e6 #> 3 1 B8A 2021-01-03 3660 3660 30 30 199980 5.99e6 309780 6.1e6 #> 4 1 ELEVATION 2021-01-03 3660 3660 30 30 199980 5.99e6 309780 6.1e6 #> 5 2 B04 2021-01-19 3660 3660 30 30 199980 5.99e6 309780 6.1e6 #> 6 2 B12 2021-01-19 3660 3660 30 30 199980 5.99e6 309780 6.1e6 #> 7 2 B8A 2021-01-19 3660 3660 30 30 199980 5.99e6 309780 6.1e6 #> 8 1 ELEVATION 2021-01-19 3660 3660 30 30 199980 5.99e6 309780 6.1e6 #> 9 3 B04 2021-02-04 3660 3660 30 30 199980 5.99e6 309780 6.1e6 #> 10 3 B12 2021-02-04 3660 3660 30 30 199980 5.99e6 309780 6.1e6 #> # ℹ 14 more rows #> # ℹ 2 more variables: crs <chr>, path <chr> ```
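Since the `ELEVATION` band is now replicated for every time step, it can be treated like any other band of the merged cube. As a quick visual check (a sketch that reuses the plotting conventions shown above; the result should match the DEM in Figure 39), the replicated band can be plotted directly:

```
# plot the ELEVATION band of the merged cube for the first date
plot(merged_cube, band = "ELEVATION", palette = "Spectral", rev = TRUE)
```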
Operations on data cubes
========================

Pixel\-based and neighborhood\-based operations
-----------------------------------------------

Pixel\-based operations in remote sensing images refer to image processing techniques that operate on individual pixels or cells in an image without considering their spatial relationships with neighboring pixels. These operations are typically applied to each pixel in the image independently and can be used to extract information on spectral, radiometric, or spatial properties. Pixel\-based operations produce spectral indexes which combine data from multiple bands.

Neighborhood\-based operations are applied to groups of pixels in an image. The neighborhood is typically defined as a rectangular or circular region centered on a given pixel. These operations can be used for removing noise, detecting edges, and sharpening, among other uses.

The `[sits_apply()](https://rdrr.io/pkg/sits/man/sits_apply.html)` function computes new indices from a desired mathematical operation as a function of the bands available on the cube using any valid R expression. It applies the operation for all tiles and all temporal intervals. There are two types of operations in `[sits_apply()](https://rdrr.io/pkg/sits/man/sits_apply.html)`:

* Pixel\-based operations that produce an index based on individual pixels of existing bands. The input bands and indexes should be part of the input data cube and have the same names used in the cube. The new index will be computed for every pixel of all images in the time series. Besides arithmetic operators, the function also accepts vectorized R functions that can be applied to matrices (e.g., `[sqrt()](https://rdrr.io/r/base/MathFun.html)`, `[log()](https://rdrr.io/r/base/Log.html)`, and `[sin()](https://rdrr.io/r/base/Trig.html)`).
* Neighborhood\-based operations that produce a derived value based on a window centered around each individual pixel. The available functions are `w_median()`, `w_sum()`, `w_mean()`, `w_min()`, `w_max()`, `w_sd()` (standard deviation), and `w_var()` (variance). Users set the window size (only odd values are allowed).

The following examples show how to use `[sits_apply()](https://rdrr.io/pkg/sits/man/sits_apply.html)`.

Computing vegetation indexes
----------------------------

Using vegetation indexes is an established practice in remote sensing. These indexes aim to improve the discrimination of vegetation structure by combining two wavebands: one where leaf pigments reflect incoming light and another where leaves absorb incoming radiation. Green leaves from natural vegetation such as forests have strong reflectance in the near\-infrared bands and low reflectance in the red bands of the electromagnetic spectrum. These spectral properties are used to calculate the Normalized Difference Vegetation Index (NDVI), a widely used index that is computed as the normalized difference between the values of the infra\-red and red bands. Including red\-edge bands in Sentinel\-2 images has broadened the scope of the bands used to calculate these indices [\[8]](references.html#ref-Xie2019), [\[9]](references.html#ref-Sun2020a). In what follows, we show examples of vegetation index calculation using a Sentinel\-2 data cube. First, we define a data cube for a tile in the state of Rondonia, Brazil, including bands used to compute different vegetation indexes. We regularize the cube using a target resolution of 60 m to reduce processing time.
```
# Create a directory to store files, if it does not exist
if (![dir.exists](https://rdrr.io/r/base/files2.html)("./tempdir/chp5")) {
  [dir.create](https://rdrr.io/r/base/files2.html)("./tempdir/chp5", recursive = TRUE)
}
```

```
# Create an irregular data cube from AWS
s2_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
  source = "AWS",
  collection = "SENTINEL-S2-L2A-COGS",
  tiles = "20LKP",
  bands = [c](https://rdrr.io/r/base/c.html)(
    "B02", "B03", "B04",
    "B05", "B06", "B07",
    "B08", "B8A", "B11",
    "B12", "CLOUD"
  ),
  start_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2018-07-01"),
  end_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2018-08-31")
)
```

```
# Regularize the cube to 15 day intervals
reg_cube <- [sits_regularize](https://rdrr.io/pkg/sits/man/sits_regularize.html)(
  cube = s2_cube,
  output_dir = "./tempdir/chp5",
  res = 60,
  period = "P15D",
  multicores = 4
)
```

There are many options for calculating vegetation indexes using Sentinel\-2 bands. The most widely used method combines band B08 (785\-899 nm) and band B04 (650\-680 nm). Recent works in the literature propose using the red\-edge bands B05 (698\-713 nm), B06 (733\-748 nm), and B07 (773\-793 nm) for capturing subtle variations in chlorophyll absorption, producing indexes called Normalized Difference Vegetation Red\-edge indexes (NDRE) [\[8]](references.html#ref-Xie2019). In a recent review, Chaves et al. argue that red\-edge bands are important for distinguishing leaf structure and chlorophyll content of different vegetation species [\[10]](references.html#ref-Chaves2020). In the example below, we show how to include indexes in the regular data cube with the Sentinel\-2 spectral bands. We first calculate the NDVI in the usual way, using bands B08 and B04\.

```
# Calculate NDVI index using bands B08 and B04
reg_cube <- [sits_apply](https://rdrr.io/pkg/sits/man/sits_apply.html)(reg_cube,
  NDVI = (B08 - B04) / (B08 + B04),
  output_dir = "./tempdir/chp5"
)
```

```
[plot](https://rdrr.io/r/graphics/plot.default.html)(reg_cube, band = "NDVI", palette = "RdYlGn")
```

Figure 40: NDVI using bands B08 and B04 of Sentinel\-2 (© EU Copernicus Programme modified by authors).

We now compare the traditional NDVI with another vegetation index computed using red\-edge bands. The example below shows the NDRE1 index, obtained using bands B06 and B05\. Sun et al. argue that a vegetation index built using bands B06 and B07 provides a better approximation to leaf area index estimates than NDVI [\[9]](references.html#ref-Sun2020a). Notice that the contrast between forests and deforested areas is more robust in the NDRE1 index than in the NDVI.

```
# Calculate NDRE1 index using bands B06 and B05
reg_cube <- [sits_apply](https://rdrr.io/pkg/sits/man/sits_apply.html)(reg_cube,
  NDRE1 = (B06 - B05) / (B06 + B05),
  output_dir = "./tempdir/chp5"
)
```

```
# Plot NDRE1 index
[plot](https://rdrr.io/r/graphics/plot.default.html)(reg_cube, band = "NDRE1", palette = "RdYlGn")
```

Figure 41: NDRE1 using bands B06 and B05 of Sentinel\-2 (© EU Copernicus Programme modified by authors).

Spectral indexes for identifying burned areas
---------------------------------------------

Band combinations can also generate spectral indexes for detecting degradation by fires, which are an important element in environmental degradation. Forest fires significantly impact emissions and impoverish natural ecosystems [\[11]](references.html#ref-Nepstad1999). Fires open the canopy, making the microclimate drier and increasing the amount of dry fuel [\[12]](references.html#ref-Gao2020).
One well\-established technique for detecting burned areas with remote sensing images is the normalized burn ratio (NBR), a normalized difference between the near\-infrared and the short\-wave infrared bands, calculated here using bands B8A and B12\.

```
# Calculate the NBR index
reg_cube <- [sits_apply](https://rdrr.io/pkg/sits/man/sits_apply.html)(reg_cube,
  NBR = (B12 - B8A) / (B12 + B8A),
  output_dir = "./tempdir/chp5"
)
```

```
# Plot the NBR for the first date
[plot](https://rdrr.io/r/graphics/plot.default.html)(reg_cube, band = "NBR", palette = "Reds")
```

Figure 42: NBR index using Sentinel\-2 bands B12 and B8A (© EU Copernicus Programme modified by authors).

Support for non\-normalized indexes
-----------------------------------

All data cube operations discussed so far produce normalized indexes. By default, the indexes generated by the `sits_apply()` function are normalized between \-1 and 1, scaled by a factor of 0\.0001\. Normalized indexes are saved as INT2S (integer with sign). If the `normalized` parameter is FALSE, no scaling factor will be applied and the index will be saved as FLT4S (float with sign). The code below shows an example of a non\-normalized index, the Chlorophyll Vegetation Index (CVI). CVI is a spectral index used to estimate the chlorophyll content and overall health of vegetation. It combines bands in the visible and near\-infrared (NIR) regions to assess vegetation characteristics. Since CVI is not normalized, we have to set the parameter `normalized` to `FALSE` to inform `[sits_apply()](https://rdrr.io/pkg/sits/man/sits_apply.html)` to generate an FLT4S image.

```
# Calculate the CVI index
reg_cube <- [sits_apply](https://rdrr.io/pkg/sits/man/sits_apply.html)(reg_cube,
  CVI = (B8A / B03) * (B05 / B03),
  normalized = FALSE,
  output_dir = "./tempdir/chp5"
)
[plot](https://rdrr.io/r/graphics/plot.default.html)(reg_cube, band = "CVI", palette = "Greens")
```

Figure 43: CVI index using bands B03, B05, and B8A (© EU Copernicus Programme modified by authors).

Temporal combination operations
-------------------------------

There are cases where users want to produce results that combine the values of the time series associated with each pixel of a data cube using reduction operators. In the context of time series analysis, a reduction operator is a function that reduces a sequence of data points into a single value or a smaller set of values. This process involves summarizing or aggregating the information from the time series in a meaningful way. Reduction operators are often used to extract key statistics or features from the data, making it easier to analyze and interpret. To produce temporal combinations, `sits` provides `sits_reduce`, with the following associated functions:

* `t_max()`: maximum value of the series.
* `t_min()`: minimum value of the series.
* `t_mean()`: mean of the series.
* `t_median()`: median of the series.
* `t_sum()`: sum of all the points in the series.
* `t_std()`: standard deviation of the series.
* `t_skewness()`: skewness of the series.
* `t_kurtosis()`: kurtosis of the series.
* `t_amplitude()`: difference between the maximum and minimum values of the cycle. A small amplitude means a stable cycle.
* `t_fslope()`: maximum value of the first slope of the cycle. It indicates when the cycle presents an abrupt change in the curve. The slope between two values relates to the speed of the growth or senescence phases.
* `t_mse()`: average spectral energy density. The energy of the time series is distributed by frequency.
* `t_fqr()`: value of the first quartile of the series (0\.25\).
* `t_tqr()`: value of the third quartile of the series (0\.75\).
* `t_iqr()`: interquartile range (difference between the third and first quartiles).

The functions `t_sum()`, `t_std()`, `t_skewness()`, `t_kurtosis()`, and `t_mse()` can produce values greater than the limit of a two\-byte integer. Therefore, the images generated by these functions are saved in floating\-point format. The following example shows a temporal reduction operation.

```
# Calculate the maximum NDVI over the time series
max_cube <- [sits_reduce](https://rdrr.io/pkg/sits/man/sits_reduce.html)(reg_cube,
  NDVIMAX = t_max(NDVI),
  output_dir = "./tempdir/chp5/reduce"
)
[plot](https://rdrr.io/r/graphics/plot.default.html)(max_cube, band = "NDVIMAX", palette = "Greens")
```

Figure 44: Maximum NDVI for the Sentinel\-2 cube (© EU Copernicus Programme modified by authors).

Spectral mixture analysis
-------------------------

Many pixels in images of medium\-resolution satellites such as Landsat or Sentinel\-2 contain a mixture of spectral responses of different land cover types inside a resolution element [\[13]](references.html#ref-Roberts1993). In many applications, it is desirable to obtain the proportion of a given class inside a mixed pixel. For this purpose, the literature proposes mixture models; these models represent pixel values as a combination of multiple pure land cover types [\[14]](references.html#ref-Shimabukuro2019). Assuming that the spectral response of pure land cover classes (called endmembers) is known, spectral mixture analysis derives new bands containing the proportion of each endmember inside a pixel.

The most used method for spectral mixture analysis is the linear model [\[14]](references.html#ref-Shimabukuro2019). The main idea behind the linear mixture model is that the observed pixel spectrum can be expressed as a linear combination of the spectra of the pure endmembers, weighted by their respective proportions (or abundances) within the pixel. Mathematically, the model can be represented as:

\\\[
R\_i \= \\sum\_{j\=1}^{N} a\_{i,j}\\,x\_j \+ \\epsilon\_i, \\quad i \\in \\{1, \\ldots, M\\}, \\; M \> N,
\\]

where \\(i\=1,\\ldots,M\\) indexes the spectral bands and \\(j\=1,\\ldots,N\\) indexes the land classes. For each pixel, \\(R\_i\\) is the reflectance in the i\-th spectral band, \\(a\_{i,j}\\) is the reflectance of the j\-th endmember in the i\-th spectral band, and \\(x\_j\\) is the proportion (abundance) of the j\-th endmember within the pixel. To solve this system of equations and obtain the proportion of each endmember, `sits` uses a non\-negative least squares (NNLS) regression algorithm, which is available in the R package `RStoolbox` and was developed by Jakob Schwalb\-Willmann, based on the sequential coordinate\-wise algorithm (SCA) proposed by Franc et al. [\[15]](references.html#ref-Franc2005).

To run the mixture model in `sits`, it is necessary to provide the values of pixels that represent the spectral response of a single class. These are the so\-called "pure" pixels. Because the quality of the resulting endmember images depends on the quality of the pure pixels, they should be chosen carefully and based on expert knowledge of the area. Since `sits` supports multiple endmember spectral mixture analysis [\[16]](references.html#ref-Roberts1998), users can specify more than one pure pixel per endmember to account for natural variability.
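To make the algebra concrete, the sketch below unmixes a single hypothetical pixel with non\-negative least squares using the `nnls` package (an assumption: this package is not required by `sits`, which performs the equivalent computation internally for every pixel of the cube). The endmember spectra are the same ones used in the `sits` example further below; the observed pixel spectrum is invented for illustration.

```
# illustration only: solve R = A x + e subject to x >= 0 for one pixel
library(nnls)

# endmember spectra: rows are bands (B02, B03, B04, B8A, B11, B12),
# columns are classes; reflectance scaled by 10,000
A <- cbind(
  forest = c(200, 352, 189, 2800, 1340, 546),
  soil   = c(400, 650, 700, 3600, 3500, 1800),
  water  = c(700, 1100, 1400, 850, 40, 26)
)
# hypothetical observed reflectance of a mixed pixel in the same bands
r <- c(320, 510, 420, 3050, 2250, 1050)

# non-negative least squares estimate of the endmember contributions
fit <- nnls(A, r)
fractions <- fit$x / sum(fit$x) # normalize to proportions that sum to one
names(fractions) <- colnames(A)
round(fractions, 3)
```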
In `sits`, spectral mixture analysis is done by `[sits_mixture_model()](https://rdrr.io/pkg/sits/man/sits_mixture_model.html)`, which has two mandatory parameters: `cube` (a data cube) and `endmembers`, a named table (or equivalent) that defines the pure pixels. The `endmembers` table must have the following named columns: (a) `type`, which defines the class associated with an endmember; (b) names, the names of the bands. Each line of the table must contain the value of each endmember for all bands (see example). To improve readability, we suggest that the `endmembers` parameters be defined as a `tribble`. A `tribble` is a `tibble` with an easier to read row\-by\-row layout. In the example below, we define three endmembers for classes `Forest`, `Soil`, and `Water`. Note that the values for each band are expressed as integers ranging from 0 to 10,000\. ``` # Define the endmembers for three classes and six bands em <- tibble::[tribble](https://tibble.tidyverse.org/reference/tribble.html)( ~class, ~B02, ~B03, ~B04, ~B8A, ~B11, ~B12, "forest", 200, 352, 189, 2800, 1340, 546, "soil", 400, 650, 700, 3600, 3500, 1800, "water", 700, 1100, 1400, 850, 40, 26 ) # Generate the mixture model reg_cube <- [sits_mixture_model](https://rdrr.io/pkg/sits/man/sits_mixture_model.html)( data = reg_cube, endmembers = em, multicores = 4, memsize = 12, output_dir = "./tempdir/chp5" ) ``` ``` # Plot the FOREST for the first date using the Greens palette [plot](https://rdrr.io/r/graphics/plot.default.html)(reg_cube, band = "FOREST", palette = "Greens") ``` Figure 45: Percentage of forest per pixel estimated by mixture model ((© EU Copernicus Programme modified by authors). ``` # Plot the water endmember for the first date using the Blues palette [plot](https://rdrr.io/r/graphics/plot.default.html)(reg_cube, band = "WATER", palette = "Blues") ``` Figure 46: Percentage of water per pixel estimated by mixture model (source: authors). ``` # Plot the SOIL endmember for the first date using the orange red (OrRd) palette [plot](https://rdrr.io/r/graphics/plot.default.html)(reg_cube, band = "SOIL", palette = "OrRd") ``` Figure 47: Percentage of soil per pixel estimated by mixture model (source: authors). Linear mixture models (LMM) improve the interpretation of remote sensing images by accounting for mixed pixels and providing a more accurate representation of the Earth’s surface. LMMs provide a more accurate representation of mixed pixels by considering the contributions of multiple land classes within a single pixel. This can lead to improved land cover classification accuracy compared to conventional per\-pixel classification methods, which may struggle to accurately classify mixed pixels. LMMs also allow for the estimation of the abundances of each land class within a pixel, providing valuable sub\-pixel information. This can be especially useful in applications where the spatial resolution of the sensor is not fine enough to resolve individual land cover types, such as monitoring urban growth or studying vegetation dynamics. By considering the sub\-pixel composition of land classes, LMMs can provide a more sensitive measure of changes in land cover over time. This can lead to more accurate and precise change detection, particularly in areas with complex land cover patterns or where subtle changes in land cover may occur. 
Pixel-based and neighborhood-based operations
-----------------------------------------------

Pixel-based operations in remote sensing images refer to image processing techniques that operate on individual pixels or cells in an image without considering their spatial relationships with neighboring pixels. These operations are applied to each pixel independently and can be used to extract information on spectral, radiometric, or spatial properties. Pixel-based operations produce spectral indexes which combine data from multiple bands.

Neighborhood-based operations are applied to groups of pixels in an image. The neighborhood is typically defined as a rectangular or circular region centered on a given pixel. These operations can be used for removing noise, detecting edges, and sharpening, among other uses.

The `[sits_apply()](https://rdrr.io/pkg/sits/man/sits_apply.html)` function computes new indices by applying a mathematical operation, written as any valid R expression, to the bands available in the cube. It applies the operation to all tiles and all temporal intervals. There are two types of operations in `[sits_apply()](https://rdrr.io/pkg/sits/man/sits_apply.html)`:

* Pixel-based operations that produce an index based on individual pixels of existing bands. The input bands and indexes should be part of the input data cube and have the same names used in the cube. The new index will be computed for every pixel of all images in the time series. Besides arithmetic operators, the function also accepts vectorized R functions that can be applied to matrices (e.g., `[sqrt()](https://rdrr.io/r/base/MathFun.html)`, `[log()](https://rdrr.io/r/base/Log.html)`, and `[sin()](https://rdrr.io/r/base/Trig.html)`).
* Neighborhood-based operations that produce a derived value based on a window centered around each individual pixel. The available functions are `w_median()`, `w_sum()`, `w_mean()`, `w_min()`, `w_max()`, `w_sd()` (standard deviation), and `w_var()` (variance). Users set the window size (only odd values are allowed). A short sketch of this mode is given at the end of this chapter.

The following examples show how to use `[sits_apply()](https://rdrr.io/pkg/sits/man/sits_apply.html)`.

Computing vegetation indexes
----------------------------

Using vegetation indexes is an established practice in remote sensing. These indexes aim to improve the discrimination of vegetation structure by combining two wavebands: one where leaf pigments reflect incoming light and another where leaves absorb incoming radiation. Green leaves from natural vegetation such as forests have a strong reflectance in the near-infrared bands and a low reflectance in the red bands of the electromagnetic spectrum. These spectral properties are used to calculate the Normalized Difference Vegetation Index (NDVI), a widely used index that is computed as the normalized difference between the values of the infrared and red bands. The inclusion of red-edge bands in Sentinel-2 images has broadened the scope of the bands used to calculate such indices [\[8]](references.html#ref-Xie2019), [\[9]](references.html#ref-Sun2020a).
In what follows, we show examples of vegetation index calculation using a Sentinel-2 data cube. First, we define a data cube for a tile in the state of Rondônia, Brazil, including the bands used to compute different vegetation indexes. We regularize the cube using a target resolution of 60 m to reduce processing time.

```
# Create a directory to store files
if (!file.exists("./tempdir/chp5")) {
  dir.create("./tempdir/chp5")
}
```

```
# Create an irregular data cube from AWS
s2_cube <- sits_cube(
  source = "AWS",
  collection = "SENTINEL-S2-L2A-COGS",
  tiles = "20LKP",
  bands = c(
    "B02", "B03", "B04", "B05", "B06", "B07",
    "B08", "B8A", "B11", "B12", "CLOUD"
  ),
  start_date = as.Date("2018-07-01"),
  end_date = as.Date("2018-08-31")
)
```

```
# Regularize the cube to 15 day intervals
reg_cube <- sits_regularize(
  cube = s2_cube,
  output_dir = "./tempdir/chp5",
  res = 60,
  period = "P15D",
  multicores = 4
)
```

There are many options for calculating vegetation indexes using Sentinel-2 bands. The most widely used method combines band B08 (785-899 nm) and band B04 (650-680 nm). Recent works in the literature propose using the red-edge bands B05 (698-713 nm), B06 (733-748 nm), and B07 (773-793 nm) to capture subtle variations in chlorophyll absorption, producing the so-called Normalized Difference Vegetation Red-edge indexes (NDRE) [\[8]](references.html#ref-Xie2019). In a recent review, Chaves et al. argue that red-edge bands are important for distinguishing leaf structure and chlorophyll content of different vegetation species [\[10]](references.html#ref-Chaves2020). In the example below, we show how to include indexes in the regular data cube with the Sentinel-2 spectral bands. We first calculate the NDVI in the usual way, using bands B08 and B04.

```
# Calculate NDVI index using bands B08 and B04
reg_cube <- sits_apply(reg_cube,
  NDVI = (B08 - B04) / (B08 + B04),
  output_dir = "./tempdir/chp5"
)
```

```
plot(reg_cube, band = "NDVI", palette = "RdYlGn")
```

Figure 40: NDVI using bands B08 and B04 of Sentinel-2 (© EU Copernicus Programme modified by authors).

We now compare the traditional NDVI with another vegetation index computed using red-edge bands. The example below shows the NDRE1 index, obtained using bands B06 and B05. Sun et al. argue that a vegetation index built using bands B06 and B07 provides a better approximation to leaf area index estimates than NDVI [\[9]](references.html#ref-Sun2020a). Notice that the contrast between forests and deforested areas is more pronounced in the NDRE1 index than in NDVI.

```
# Calculate NDRE1 index using bands B06 and B05
reg_cube <- sits_apply(reg_cube,
  NDRE1 = (B06 - B05) / (B06 + B05),
  output_dir = "./tempdir/chp5"
)
```

```
# Plot NDRE1 index
plot(reg_cube, band = "NDRE1", palette = "RdYlGn")
```

Figure 41: NDRE1 using bands B06 and B05 of Sentinel-2 (© EU Copernicus Programme modified by authors).

Spectral indexes for identifying burned areas
---------------------------------------------

Band combination can also generate spectral indices for detecting degradation by fires, which are an important element in environmental degradation.
Forest fires significantly impact emissions and impoverish natural ecosystems [\[11]](references.html#ref-Nepstad1999). Fires open the canopy, making the microclimate drier and increasing the amount of dry fuel [\[12]](references.html#ref-Gao2020). One well-established technique for detecting burned areas with remote sensing images is the normalized burn ratio (NBR), the difference between the near-infrared and the short-wave infrared bands, calculated here using bands B8A and B12.

```
# Calculate the NBR index
reg_cube <- sits_apply(reg_cube,
  NBR = (B12 - B8A) / (B12 + B8A),
  output_dir = "./tempdir/chp5"
)
```

```
# Plot the NBR for the first date
plot(reg_cube, band = "NBR", palette = "Reds")
```

Figure 42: NBR index using Sentinel-2 bands B12 and B8A (© EU Copernicus Programme modified by authors).

Support for non-normalized indexes
-----------------------------------

All data cube operations discussed so far produce normalized indexes. By default, the indexes generated by the `sits_apply()` function are normalized between -1 and 1, scaled by a factor of 0.0001. Normalized indexes are saved as INT2S (Integer with sign). If the `normalized` parameter is FALSE, no scaling factor will be applied and the index will be saved as FLT4S (Float with sign). The code below shows an example of a non-normalized index, the chlorophyll vegetation index (CVI). CVI is a spectral index used to estimate the chlorophyll content and overall health of vegetation. It combines bands in the visible and near-infrared (NIR) regions to assess vegetation characteristics. Since CVI is not normalized, we have to set the parameter `normalized` to `FALSE` to inform `[sits_apply()](https://rdrr.io/pkg/sits/man/sits_apply.html)` to generate an FLT4S image.

```
# Calculate the CVI index
reg_cube <- sits_apply(reg_cube,
  CVI = (B8A / B03) * (B05 / B03),
  normalized = FALSE,
  output_dir = "./tempdir/chp5"
)
plot(reg_cube, band = "CVI", palette = "Greens")
```

Figure 43: CVI index using bands B03, B05, and B8A (© EU Copernicus Programme modified by authors).

Temporal combination operations
-------------------------------

There are cases when users want to produce results that combine the values of the time series associated with each pixel of a data cube using reduction operators. In the context of time series analysis, a reduction operator is a function that reduces a sequence of data points into a single value or a smaller set of values. This process involves summarizing or aggregating the information from the time series in a meaningful way. Reduction operators are often used to extract key statistics or features from the data, making it easier to analyze and interpret.

To produce temporal combinations, `sits` provides `sits_reduce`, with the associated functions:

* `t_max()`: maximum value of the series.
* `t_min()`: minimum value of the series.
* `t_mean()`: mean of the series.
* `t_median()`: median of the series.
* `t_sum()`: sum of all the points in the series.
* `t_std()`: standard deviation of the series.
* `t_skewness()`: skewness of the series.
* `t_kurtosis()`: kurtosis of the series.
* `t_amplitude()`: difference between the maximum and minimum values of the cycle. A small amplitude means a stable cycle.
* `t_fslope()`: maximum value of the first slope of the cycle. It indicates when the cycle presents an abrupt change in the curve; the slope between two values relates to the speed of the growth or senescence phases.
* `t_mse()`: average spectral energy density. The energy of the time series is distributed by frequency.
* `t_fqr()`: value of the first quartile of the series (0.25).
* `t_tqr()`: value of the third quartile of the series (0.75).
* `t_iqr()`: interquartile range (difference between the third and first quartiles).

The functions `t_sum()`, `t_std()`, `t_skewness()`, `t_kurtosis()`, and `t_mse()` produce values greater than the limit of a two-byte integer. Therefore, the images generated by these functions are saved in floating point format. The following example shows a temporal reduction operation.

```
# Reduce the NDVI time series to its maximum value
ave_cube <- sits_reduce(reg_cube,
  NDVIMAX = t_max(NDVI),
  output_dir = "./tempdir/chp5/reduce"
)
plot(ave_cube, band = "NDVIMAX", palette = "Greens")
```

Figure 44: Maximum NDVI for the Sentinel-2 cube (© EU Copernicus Programme modified by authors).

Spectral mixture analysis
-------------------------

Many pixels in images of medium-resolution satellites such as Landsat or Sentinel-2 contain a mixture of spectral responses of different land cover types inside a resolution element [\[13]](references.html#ref-Roberts1993). In many applications, it is desirable to obtain the proportion of a given class inside a mixed pixel. For this purpose, the literature proposes mixture models; these models represent pixel values as a combination of multiple pure land cover types [\[14]](references.html#ref-Shimabukuro2019). Assuming that the spectral response of pure land cover classes (called endmembers) is known, spectral mixture analysis derives new bands containing the proportion of each endmember inside a pixel.

The most used method for spectral mixture analysis is the linear model [\[14]](references.html#ref-Shimabukuro2019). The main idea behind the linear mixture model is that the observed pixel spectrum can be expressed as a linear combination of the spectra of the pure endmembers, weighted by their respective proportions (or abundances) within the pixel. Mathematically, the model can be represented as

\[
R_i = \sum_{j=1}^{N} a_{i,j} \, x_j + \epsilon_i, \quad i \in \{1, \dots, M\}, \; M > N,
\]

where \(i = 1, \dots, M\) indexes the spectral bands and \(j = 1, \dots, N\) indexes the endmembers (land classes). For each pixel, \(R_i\) is the reflectance observed in the i-th spectral band, \(a_{i,j}\) is the reflectance of the j-th endmember in the i-th band, and \(x_j\) is the proportion (abundance) of the j-th endmember within the pixel. To solve this system of equations and obtain the proportion of each endmember, `sits` uses a non-negative least squares (NNLS) regression algorithm, which is available in the R package `RStoolbox` and was developed by Jakob Schwalb-Willmann, based on the sequential coordinate-wise algorithm (SCA) proposed by Franc et al. [\[15]](references.html#ref-Franc2005).

To run the mixture model in `sits`, it is necessary to provide the values of pixels that represent the spectral response of a unique class. These are the so-called “pure” pixels. Because the quality of the resulting endmember images depends on the quality of the pure pixels, they should be chosen carefully and based on expert knowledge of the area.
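To make the linear mixture model more concrete, the sketch below solves the system for a single synthetic pixel. This is an illustration only and rests on stated assumptions: it uses the CRAN package `nnls` rather than the NNLS solver mentioned above, and it borrows the endmember values from the example further below.

```
# Toy illustration of the linear mixture model for one pixel.
# Columns of E are endmember spectra (forest, soil, water); rows are bands.
library(nnls)
E <- cbind(
  forest = c(200, 352, 189, 2800, 1340, 546),
  soil   = c(400, 650, 700, 3600, 3500, 1800),
  water  = c(700, 1100, 1400, 850, 40, 26)
)
# Simulate a pixel that is 70% forest and 30% soil, plus a little noise
set.seed(42)
r <- 0.7 * E[, "forest"] + 0.3 * E[, "soil"] + rnorm(nrow(E), sd = 10)
# Non-negative least squares estimate of the endmember proportions
frac <- nnls(E, r)$x
names(frac) <- colnames(E)
# Normalize so the proportions sum to one (NNLS does not enforce this)
round(frac / sum(frac), 2)
```

The recovered proportions are close to the simulated 0.7/0.3 mixture; this per-pixel estimate is what the mixture model function described below computes for every pixel of the data cube.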
Since `sits` supports multiple endmember spectral mixture analysis [\[16]](references.html#ref-Roberts1998), users can specify more than one pure pixel per endmember to account for natural variability.

In `sits`, spectral mixture analysis is done by `[sits_mixture_model()](https://rdrr.io/pkg/sits/man/sits_mixture_model.html)`, which has two mandatory parameters: `cube` (a data cube) and `endmembers`, a named table (or equivalent) that defines the pure pixels. The `endmembers` table must have the following named columns: (a) `type` (the example below uses `class`), which defines the class associated with an endmember; (b) one column per band, named after the band, with the value of that endmember in that band. Each line of the table must contain the value of each endmember for all bands (see example). To improve readability, we suggest that the `endmembers` parameter be defined as a `tribble`. A `tribble` is a `tibble` with an easier-to-read row-by-row layout. In the example below, we define three endmembers for the classes `Forest`, `Soil`, and `Water`. Note that the values for each band are expressed as integers ranging from 0 to 10,000.

```
# Define the endmembers for three classes and six bands
em <- tibble::tribble(
  ~class,   ~B02, ~B03, ~B04, ~B8A, ~B11, ~B12,
  "forest",  200,  352,  189, 2800, 1340,  546,
  "soil",    400,  650,  700, 3600, 3500, 1800,
  "water",   700, 1100, 1400,  850,   40,   26
)
# Generate the mixture model
reg_cube <- sits_mixture_model(
  data = reg_cube,
  endmembers = em,
  multicores = 4,
  memsize = 12,
  output_dir = "./tempdir/chp5"
)
```

```
# Plot the FOREST band for the first date using the Greens palette
plot(reg_cube, band = "FOREST", palette = "Greens")
```

Figure 45: Percentage of forest per pixel estimated by mixture model (© EU Copernicus Programme modified by authors).

```
# Plot the WATER band for the first date using the Blues palette
plot(reg_cube, band = "WATER", palette = "Blues")
```

Figure 46: Percentage of water per pixel estimated by mixture model (source: authors).

```
# Plot the SOIL band for the first date using the orange-red (OrRd) palette
plot(reg_cube, band = "SOIL", palette = "OrRd")
```

Figure 47: Percentage of soil per pixel estimated by mixture model (source: authors).

Linear mixture models (LMM) improve the interpretation of remote sensing images by accounting for the contributions of multiple land classes within a single pixel. This can lead to better land cover classification accuracy than conventional per-pixel classification methods, which may struggle to classify mixed pixels. LMMs also allow for the estimation of the abundance of each land class within a pixel, providing valuable sub-pixel information. This is especially useful in applications where the spatial resolution of the sensor is not fine enough to resolve individual land cover types, such as monitoring urban growth or studying vegetation dynamics. By considering the sub-pixel composition of land classes, LMMs provide a more sensitive measure of changes in land cover over time, leading to more accurate and precise change detection, particularly in areas with complex land cover patterns or where subtle changes in land cover may occur.
Applications of spectral mixture analysis in remote sensing include forest degradation [\[20]](references.html#ref-Chen2021), wetland surface dynamics [\[21]](references.html#ref-Halabisky2016), and urban area characterization [\[22]](references.html#ref-Wu2003). These models provide valuable information for a wide range of applications, from land mapping and change detection to resource management and environmental monitoring.
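The examples in this chapter used `sits_apply()` only in its pixel-based mode. To close the chapter, the sketch below illustrates the neighborhood-based mode described earlier by computing a median-smoothed NDVI band. It is a sketch under assumptions: `w_median()` is taken from the list of kernel functions given above, and we assume the window is set through a `window_size` parameter; check the `sits_apply()` documentation for the exact argument name and its default.

```
# Neighborhood-based operation: median of NDVI over a 5 x 5 window
# (assumes the NDVI band computed earlier is present in reg_cube)
reg_cube <- sits_apply(reg_cube,
  NDVIMED = w_median(NDVI),
  window_size = 5,
  output_dir = "./tempdir/chp5"
)
plot(reg_cube, band = "NDVIMED", palette = "RdYlGn")
```

Because the median is computed over a spatial window, the resulting band is smoother than the original NDVI and less affected by isolated noisy pixels, at the cost of some spatial detail.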
Working with time series
========================

Data structures for satellite time series
-----------------------------------------

The `sits` package uses sets of time series data describing properties in spatiotemporal locations of interest. For land classification, these sets consist of samples labeled by experts. The package can also be used for any type of classification, provided that the timeline and bands of the time series used for training match those of the data cubes. In `sits`, time series are stored in a `tibble` data structure. The following code shows the first four rows of a time series tibble containing 1,837 labeled samples of land classes in Mato Grosso state of Brazil. The samples have time series extracted from the MODIS MOD13Q1 product from 2000 to 2016, provided every 16 days at 250 m resolution in the Sinusoidal projection. Based on ground surveys and high-resolution imagery, the dataset includes samples of seven classes: `Forest`, `Cerrado`, `Pasture`, `Soy_Fallow`, `Soy_Cotton`, `Soy_Corn`, and `Soy_Millet`.

```
# Samples
data("samples_matogrosso_mod13q1")
samples_matogrosso_mod13q1[1:4, ]
```

```
#> # A tibble: 4 × 7
#>   longitude latitude start_date end_date   label   cube     time_series      
#>       <dbl>    <dbl> <date>     <date>     <chr>   <chr>    <list>           
#> 1     -57.8    -9.76 2006-09-14 2007-08-29 Pasture bdc_cube <tibble [23 × 5]>
#> 2     -59.4    -9.31 2014-09-14 2015-08-29 Pasture bdc_cube <tibble [23 × 5]>
#> 3     -59.4    -9.31 2013-09-14 2014-08-29 Pasture bdc_cube <tibble [23 × 5]>
#> 4     -57.8    -9.76 2006-09-14 2007-08-29 Pasture bdc_cube <tibble [23 × 5]>
```

The time series tibble contains data and metadata. The first six columns contain spatial and temporal information, the label assigned to the sample, and the data cube from which the data has been extracted. The first sample is labeled `Pasture`, located at approximately (-57.8, -9.76), and is valid for the period (2006-09-14, 2007-08-29). Informing the dates for which the label is valid is crucial for correct classification. In this case, the researchers labeling the samples used the agricultural calendar in Brazil. The relevant dates for other applications and other countries will likely differ from those used in the example. The `time_series` column contains the time series data for each spatiotemporal location. This data is also organized as a tibble, with one column holding the dates and the other columns holding the values of each spectral band.

Utilities for handling time series
----------------------------------

The package provides functions for data manipulation and for displaying information about time series tibbles. For example, `[summary()](https://rdrr.io/r/base/summary.html)` shows the labels of the sample set and their frequencies.

```
summary(samples_matogrosso_mod13q1)
```

```
#> # A tibble: 7 × 3
#>   label      count   prop
#>   <chr>      <int>  <dbl>
#> 1 Cerrado      379 0.206 
#> 2 Forest       131 0.0713
#> 3 Pasture      344 0.187 
#> 4 Soy_Corn     364 0.198 
#> 5 Soy_Cotton   352 0.192 
#> 6 Soy_Fallow    87 0.0474
#> 7 Soy_Millet   180 0.0980
```

In many cases, it is helpful to relabel the dataset. For example, there may be situations where using a smaller set of labels is desirable because samples with one label in the original set may not be distinguishable from samples with other labels. We can then use `sits_labels()<-` to assign new labels. The example below shows how to relabel the time series set shown above; all samples associated with crops are grouped in a single `Cropland` label.
```
# Copy the sample set for Mato Grosso
samples_new_labels <- samples_matogrosso_mod13q1
# Show the current labels
sits_labels(samples_new_labels)
```

```
#> [1] "Cerrado"    "Forest"     "Pasture"    "Soy_Corn"   "Soy_Cotton"
#> [6] "Soy_Fallow" "Soy_Millet"
```

```
# Update the labels
sits_labels(samples_new_labels) <- c(
  "Cerrado", "Forest", "Pasture", "Cropland",
  "Cropland", "Cropland", "Cropland"
)
summary(samples_new_labels)
```

```
#> # A tibble: 4 × 3
#>   label    count   prop
#>   <chr>    <int>  <dbl>
#> 1 Cerrado    379 0.206 
#> 2 Cropland   983 0.535 
#> 3 Forest     131 0.0713
#> 4 Pasture    344 0.187 
```

Since the metadata and the embedded time series use the tibble data format, the functions from the `dplyr`, `tidyr`, and `purrr` packages of the `tidyverse` [\[23]](references.html#ref-Wickham2017) can be used to process the data. For example, the following code uses `[sits_select()](https://rdrr.io/pkg/sits/man/sits_select.html)` to select the NDVI band of the sample dataset and then uses `[dplyr::filter()](https://dplyr.tidyverse.org/reference/filter.html)` to select the samples labeled as `Cerrado`.

```
# Select NDVI band
samples_ndvi <- sits_select(samples_matogrosso_mod13q1,
  bands = "NDVI"
)
# Select only samples with Cerrado label
samples_cerrado <- dplyr::filter(
  samples_ndvi,
  label == "Cerrado"
)
```

Time series visualisation
-------------------------

Given a few samples to display, `[plot()](https://rdrr.io/r/graphics/plot.default.html)` tries to group samples from the same spatial location together. In the following example, the first 12 samples labeled as `Cerrado` refer to the same spatial location in consecutive time periods. For this reason, these samples are plotted together.

```
# Plot the first 12 samples
plot(samples_cerrado[1:12, ])
```

Figure 48: Plot of the first ‘Cerrado’ samples (source: authors).

For many samples, the default visualization combines all samples together in a single temporal interval, even if they belong to different years. This plot shows the spread of values for the time series of each band. The strong red line in the plot indicates the median of the values, while the two orange lines show the first and third quartiles. See `[?sits::plot](https://rdrr.io/pkg/sits/man/plot.html)` for more details on data visualization in `sits`.

```
# Plot all cerrado samples together
plot(samples_cerrado)
```

Figure 49: Plot of all Cerrado samples (source: authors).

To see the spatial distribution of the samples, use `[sits_view()](https://rdrr.io/pkg/sits/man/sits_view.html)` to create an interactive plot. The spatial visualisation is useful for showing where the data has been collected.

```
sits_view(samples_matogrosso_mod13q1)
```

Visualizing sample patterns
---------------------------

When dealing with a large set of time series, it is useful to obtain a single plot that captures the essential temporal variability of each class. Following the work on the `dtwSat` R package [\[24]](references.html#ref-Maus2019), we use a generalized additive model (GAM) to obtain a single time series based on statistical approximation.
In a GAM, the expected value of the response variable depends on a smooth function of the predictor variables:

\[
y = \beta_0 + f(x) + \epsilon, \qquad \epsilon \sim N(0, \sigma^2).
\]

The function `[sits_patterns()](https://rdrr.io/pkg/sits/man/sits_patterns.html)` uses a GAM to predict an idealized approximation to the time series associated with each class for all bands. The resulting patterns can be viewed using `[plot()](https://rdrr.io/r/graphics/plot.default.html)`.

```
# Estimate the patterns for each class and plot them
samples_matogrosso_mod13q1 |>
  sits_patterns() |>
  plot()
```

Figure 50: Patterns for the samples for Mato Grosso (source: authors).

The resulting patterns provide some insight into the time series behaviour of each class. The response of the Forest class is quite distinctive. The patterns also show that it should be possible to separate the single-cropping from the double-cropping classes. There are similarities between the double-cropping classes (`Soy_Corn` and `Soy_Millet`) and between the `Cerrado` and `Pasture` classes. The subtle differences between class signatures provide hints at possible ways by which machine learning algorithms might distinguish between classes. One example is the difference in the middle-infrared response during the dry season (May to September), which may help to differentiate between `Cerrado` and `Pasture`.

Geographical variability of training samples
--------------------------------------------

When working with machine learning classification of Earth observation data, it is important to evaluate whether the training samples are well distributed in the study area. Training data often comes from ground surveys made at chosen locations. In large areas, representative samples should ideally capture the spatial variability of the study area. In practice, however, ground surveys or other means of data collection are limited to selected areas. In many cases, the geographical distribution of the training data does not cover the study area equally. Such a mismatch can be a problem for achieving a good-quality classification. As stated by Meyer and Pebesma [\[25]](references.html#ref-Meyer2022): “large gaps in geographic space do not always imply large gaps in feature space”.

Meyer and Pebesma propose using a spatial distance distribution plot, which displays two distributions of nearest-neighbor distances: sample-to-sample and prediction-location-to-sample [\[25]](references.html#ref-Meyer2022). The difference between the two distributions reflects the degree of spatial clustering in the reference data. Ideally, the two distributions should be similar. Cases where the sample-to-sample distance distribution does not match the prediction-location-to-sample distribution indicate possible problems in training data collection.

`sits` implements spatial distance distribution plots with the `[sits_geo_dist()](https://rdrr.io/pkg/sits/man/sits_geo_dist.html)` function. This function takes the training data in the `samples` parameter and the study area in the `roi` parameter, expressed as an `sf` object. Additional parameters are `n` (maximum number of samples for each distribution) and `crs` (coordinate reference system of the samples). By default, `n` is 1000, and `crs` is “EPSG:4326”. The example below shows how to use `[sits_geo_dist()](https://rdrr.io/pkg/sits/man/sits_geo_dist.html)`.
```
# Read a shapefile for the state of Mato Grosso, Brazil
mt_shp <- system.file("extdata/shapefiles/mato_grosso/mt.shp",
  package = "sits"
)
# Convert to an sf object
mt_sf <- sf::read_sf(mt_shp)
# Calculate sample-to-sample and sample-to-prediction distances
distances <- sits_geo_dist(
  samples = samples_modis_ndvi,
  roi = mt_sf
)
# Plot sample-to-sample and sample-to-prediction distances
plot(distances)
```

Figure 51: Distribution of sample-to-sample and sample-to-prediction distances (source: authors).

The plot shows a mismatch between the sample-to-sample and the sample-to-prediction distributions. Most samples are closer to each other than they are to the locations where values need to be predicted. In this case, there are many areas where few or no samples have been collected and where the prediction uncertainty will be higher. In this and similar cases, improving the distribution of training samples is always welcome. If that is not possible, areas with insufficient samples could have lower accuracy. This information must be reported to potential users of the classification results.

Obtaining time series data from data cubes
------------------------------------------

To get a set of time series in `sits`, first create a regular data cube and then request one or more time series from the cube using `[sits_get_data()](https://rdrr.io/pkg/sits/man/sits_get_data.html)`. This function has two mandatory parameters: `cube` and `samples`. The `cube` indicates the data cube from which the time series will be extracted. The `samples` parameter accepts the following data types:

* A data.frame with information on `latitude` and `longitude` (mandatory), `start_date`, `end_date`, and `label` for each sample point.
* A CSV file with columns `latitude`, `longitude`, `start_date`, `end_date`, and `label`.
* A shapefile containing either `POINT` or `POLYGON` geometries. See details below.
* An `sf` object (from the `sf` package) with `POINT` or `POLYGON` geometry information. See details below.

In the example below, given a data cube, the user provides the latitude and longitude of the desired location. Since the bands, start date, and end date of the time series are missing, `sits` obtains them from the data cube. The result is a tibble with one time series that can be visualized using `[plot()](https://rdrr.io/r/graphics/plot.default.html)`.

```
# Obtain a raster cube based on local files
data_dir <- system.file("extdata/sinop", package = "sitsdata")
raster_cube <- sits_cube(
  source = "BDC",
  collection = "MOD13Q1-6.1",
  data_dir = data_dir,
  parse_info = c("satellite", "sensor", "tile", "band", "date")
)
# Obtain a time series from the raster cube from a point
sample_latlong <- tibble::tibble(
  longitude = -55.57320,
  latitude = -11.50566
)
series <- sits_get_data(
  cube = raster_cube,
  samples = sample_latlong
)
plot(series)
```

Figure 52: NDVI and EVI time series fetched from local raster cube (source: authors).

A useful case is when a set of labeled samples can be used as a training dataset.
In this case, trusted observations are usually labeled and commonly stored in plain text files in comma-separated values (csv) or using shapefiles (shp).

```
# Retrieve a list of samples described by a csv file
samples_csv_file <- system.file("extdata/samples/samples_sinop_crop.csv",
  package = "sits"
)
# Read the csv file into an R object
samples_csv <- read.csv(samples_csv_file)
# Print the first three samples
samples_csv[1:3, ]
```

```
#> # A tibble: 3 × 6
#>      id longitude latitude start_date end_date   label  
#>   <int>     <dbl>    <dbl> <chr>      <chr>      <chr>  
#> 1     1     -55.7    -11.8 2013-09-14 2014-08-29 Pasture
#> 2     2     -55.6    -11.8 2013-09-14 2014-08-29 Pasture
#> 3     3     -55.7    -11.8 2013-09-14 2014-08-29 Forest 
```

To retrieve training samples for time series analysis, users must provide the temporal information (`start_date` and `end_date`). In the simplest case, all samples share the same dates. That is not a strict requirement. It is possible to specify different dates as long as they have a compatible duration. For example, the dataset `samples_matogrosso_mod13q1` provided with the `sitsdata` package contains samples from different years covering the same duration. These samples are from the MOD13Q1 product, which contains the same number of images per year. Thus, all time series in the dataset `samples_matogrosso_mod13q1` have the same number of dates.

Given a suitably built csv sample file, `[sits_get_data()](https://rdrr.io/pkg/sits/man/sits_get_data.html)` requires two parameters: (a) `cube`, the name of the R object that describes the data cube; (b) `samples`, the name of the CSV file.

```
# Get the points from a data cube in raster brick format
points <- sits_get_data(
  cube = raster_cube,
  samples = samples_csv_file
)
# Show the tibble with the first three points
points[1:3, ]
```

```
#> # A tibble: 3 × 7
#>   longitude latitude start_date end_date   label    cube        time_series
#>       <dbl>    <dbl> <date>     <date>     <chr>    <chr>       <list>     
#> 1     -55.8    -11.7 2013-09-14 2014-08-29 Cerrado  MOD13Q1-6.1 <tibble>   
#> 2     -55.8    -11.7 2013-09-14 2014-08-29 Cerrado  MOD13Q1-6.1 <tibble>   
#> 3     -55.7    -11.7 2013-09-14 2014-08-29 Soy_Corn MOD13Q1-6.1 <tibble>   
```

Users can also specify samples by providing shapefiles or `sf` objects containing `POINT` or `POLYGON` geometries. The geographical location is inferred from the geometries associated with the shapefile or `sf` object. For files containing points, the geographical location is obtained directly. For polygon geometries, the parameter `n_sam_pol` (defaults to 20) determines the number of samples to be extracted from each polygon. The temporal information can be provided explicitly by the user; if absent, it is inferred from the data cube. If label information is available in the shapefile or `sf` object, the parameter `label_attr` is compulsory to indicate which column contains the label associated with each time series.
```
# Obtain a set of points inside the state of Mato Grosso, Brazil
shp_file <- system.file("extdata/shapefiles/mato_grosso/mt.shp",
  package = "sits"
)
# Read the shapefile into an "sf" object
sf_shape <- sf::st_read(shp_file)
```

```
#> Reading layer `mt' from data source
#>   `/Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/library/sits/extdata/shapefiles/mato_grosso/mt.shp'
#>   using driver `ESRI Shapefile'
#> Simple feature collection with 1 feature and 3 fields
#> Geometry type: MULTIPOLYGON
#> Dimension:     XY
#> Bounding box:  xmin: -61.63284 ymin: -18.03993 xmax: -50.22481 ymax: -7.349034
#> Geodetic CRS:  SIRGAS 2000
```

```
# Create a data cube based on MOD13Q1 collection from BDC
modis_cube <- sits_cube(
  source = "BDC",
  collection = "MOD13Q1-6.1",
  bands = c("NDVI", "EVI"),
  roi = sf_shape,
  start_date = "2020-06-01",
  end_date = "2021-08-29"
)
# Read the points from the cube and produce a tibble with time series
samples_mt <- sits_get_data(
  cube = modis_cube,
  samples = shp_file,
  start_date = "2020-06-01",
  end_date = "2021-08-29",
  n_sam_pol = 20,
  multicores = 4
)
```

Filtering time series
---------------------

Satellite image time series are generally contaminated by atmospheric influence, geolocation error, and directional effects [\[26]](references.html#ref-Lambin2006). Atmospheric noise, sun angle, interference in observations or differences in equipment specifications, and the nature of climate-land dynamics can all be sources of variability [\[27]](references.html#ref-Atkinson2012). Inter-annual climate variability also changes the phenological cycles of the vegetation, resulting in time series whose periods and intensities do not match on a year-to-year basis. To make the best use of available satellite data archives, methods for satellite image time series analysis need to deal with *noisy* and *non-homogeneous* datasets.

The literature on satellite image time series has several applications of filtering to correct or smooth vegetation index data. The package supports the well-known Savitzky–Golay (`[sits_sgolay()](https://rdrr.io/pkg/sits/man/sits_sgolay.html)`) and Whittaker (`[sits_whittaker()](https://rdrr.io/pkg/sits/man/sits_whittaker.html)`) filters. In an evaluation of NDVI time series filtering for estimating phenological parameters in India, Atkinson et al. found that the Whittaker filter provides good results [\[27]](references.html#ref-Atkinson2012). Zhou et al. found that the Savitzky-Golay filter is suitable for reconstructing tropical evergreen broadleaf forests [\[28]](references.html#ref-Zhou2016).

### Savitzky–Golay filter

The Savitzky-Golay filter fits successive arrays of \(2n+1\) adjacent data points with a \(d\)-degree polynomial through linear least squares. The main parameters of the filter are the polynomial degree (\(d\)) and the half-width of the window (\(n\)). It generally produces smoother results for a larger value of \(n\) and/or a smaller value of \(d\) [\[29]](references.html#ref-Chen2004). The optimal values of these two parameters can vary from case to case.
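As a quick aside, the effect of the window length can be seen on a synthetic series. The sketch below is an illustration outside `sits`, using the `sgolayfilt()` function from the CRAN package `signal` purely because it is a convenient standalone implementation; the `sits` interface is shown right after.

```
# Effect of the window length: longer windows smooth more aggressively
library(signal)
set.seed(42)
x <- seq(0, 4 * pi, length.out = 120)
noisy <- sin(x) + rnorm(length(x), sd = 0.25)      # synthetic noisy series
smooth_short <- sgolayfilt(noisy, p = 3, n = 7)    # degree 3, 7-point window
smooth_long  <- sgolayfilt(noisy, p = 3, n = 21)   # degree 3, 21-point window
plot(x, noisy, type = "l", col = "grey60", xlab = "time", ylab = "value")
lines(x, smooth_short, col = "blue")
lines(x, smooth_long, col = "red")
```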
In `sits`, the parameter `order` sets the order of the polynomial (default = 3), the parameter `length` sets the size of the temporal window (default = 5), and the parameter `scaling` sets the temporal expansion (default = 1). The following example shows the effect of the Savitzky-Golay filter on a point extracted from the MOD13Q1 product, ranging from 2000-02-18 to 2018-01-01.

```
# Take NDVI band of the first sample dataset
point_ndvi <- sits_select(point_mt_6bands, band = "NDVI")
# Apply Savitzky-Golay filter
point_sg <- sits_sgolay(point_ndvi, length = 11)
# Merge the point and plot the series
sits_merge(point_sg, point_ndvi) |>
  plot()
```

Figure 53: Savitzky-Golay filter applied on a multi-year NDVI time series (source: authors).

The resulting smoothed curve has both desirable and unwanted properties. From 2000 to 2008, the Savitzky-Golay filter removes noise from clouds. However, after 2010, when the region was converted to agriculture, the filter removes an important part of the natural variability of the crop cycle. Therefore, the `length` parameter is arguably too large, resulting in oversmoothing.

### Whittaker filter

The Whittaker smoother attempts to fit a curve that represents the raw data, but is penalized if subsequent points vary too much [\[30]](references.html#ref-Atzberger2011). The Whittaker filter balances the fidelity to the original data against the smoothness of the fitted curve. The filter has a single smoothing parameter, \(\lambda\). The following example shows the effect of the Whittaker filter on a point extracted from the MOD13Q1 product, ranging from 2000-02-18 to 2018-01-01. The `lambda` parameter controls the smoothing of the filter. By default, it is set to 0.5, a small value. The example shows the effect of a larger smoothing parameter.

```
# Take NDVI band of the first sample dataset
point_ndvi <- sits_select(point_mt_6bands, band = "NDVI")
# Apply Whittaker filter
point_whit <- sits_whittaker(point_ndvi, lambda = 8)
# Merge the point and plot the series
sits_merge(point_whit, point_ndvi) |>
  plot()
```

Figure 54: Whittaker filter applied on a multi-year NDVI time series (source: authors).

Similar to what is observed with the Savitzky-Golay filter, high values of the smoothing parameter `lambda` produce an over-smoothed time series that reduces the capacity of the series to represent natural variations in crop growth. For this reason, low smoothing values are recommended when using `[sits_whittaker()](https://rdrr.io/pkg/sits/man/sits_whittaker.html)`.
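The idea of penalizing variation between subsequent points can be written in a few lines of base R. The sketch below is a minimal toy version of the penalized least squares formulation of the Whittaker smoother; it is not the implementation behind `sits_whittaker()` and serves only to show how larger values of \(\lambda\) flatten the series.

```
# Minimal Whittaker smoother: minimize ||y - z||^2 + lambda * ||D z||^2,
# where D is the second-order difference operator, so the solution solves
# (I + lambda * t(D) %*% D) z = y.
whittaker_toy <- function(y, lambda = 1) {
  n <- length(y)
  D <- diff(diag(n), differences = 2)
  solve(diag(n) + lambda * crossprod(D), y)
}
set.seed(1)
y <- sin(seq(0, 4 * pi, length.out = 100)) + rnorm(100, sd = 0.3)
z_light <- whittaker_toy(y, lambda = 1)     # light smoothing follows the cycles
z_heavy <- whittaker_toy(y, lambda = 1000)  # heavy smoothing flattens them
```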
The following code shows the first three lines of a time series tibble containing 1,882 labeled samples of land classes in Mato Grosso state of Brazil. The samples have time series extracted from the MODIS MOD13Q1 product from 2000 to 2016, provided every 16 days at 250 m resolution in the Sinusoidal projection. Based on ground surveys and high\-resolution imagery, it includes samples of seven classes: `Forest`, `Cerrado`, `Pasture`, `Soy_Fallow`, `Soy_Cotton`, `Soy_Corn`, and `Soy_Millet`. ``` # Samples [data](https://rdrr.io/r/utils/data.html)("samples_matogrosso_mod13q1") samples_matogrosso_mod13q1[1:4, ] ``` ``` #> # A tibble: 4 × 7 #> longitude latitude start_date end_date label cube time_series #> <dbl> <dbl> <date> <date> <chr> <chr> <list> #> 1 -57.8 -9.76 2006-09-14 2007-08-29 Pasture bdc_cube <tibble [23 × 5]> #> 2 -59.4 -9.31 2014-09-14 2015-08-29 Pasture bdc_cube <tibble [23 × 5]> #> 3 -59.4 -9.31 2013-09-14 2014-08-29 Pasture bdc_cube <tibble [23 × 5]> #> 4 -57.8 -9.76 2006-09-14 2007-08-29 Pasture bdc_cube <tibble [23 × 5]> ``` The time series tibble contains data and metadata. The first six columns contain spatial and temporal information, the label assigned to the sample, and the data cube from where the data has been extracted. The first sample has been labeled `Pasture` at location (\-58\.5631, \-13\.8844\), being valid for the period (2006\-09\-14, 2007\-08\-29\). Informing the dates where the label is valid is crucial for correct classification. In this case, the researchers labeling the samples used the agricultural calendar in Brazil. The relevant dates for other applications and other countries will likely differ from those used in the example. The `time_series` column contains the time series data for each spatiotemporal location. This data is also organized as a tibble, with a column with the dates and the other columns with the values for each spectral band. Utilities for handling time series ---------------------------------- The package provides functions for data manipulation and displaying information for time series tibbles. For example, `[summary()](https://rdrr.io/r/base/summary.html)` shows the labels of the sample set and their frequencies. ``` [summary](https://rdrr.io/r/base/summary.html)(samples_matogrosso_mod13q1) ``` ``` #> # A tibble: 7 × 3 #> label count prop #> <chr> <int> <dbl> #> 1 Cerrado 379 0.206 #> 2 Forest 131 0.0713 #> 3 Pasture 344 0.187 #> 4 Soy_Corn 364 0.198 #> 5 Soy_Cotton 352 0.192 #> 6 Soy_Fallow 87 0.0474 #> 7 Soy_Millet 180 0.0980 ``` In many cases, it is helpful to relabel the dataset. For example, there may be situations where using a smaller set of labels is desirable because samples in one label on the original set may not be distinguishable from samples with other labels. We then could use `sits_labels()<-` to assign new labels. The example below shows how to do relabeling on a time series set shown above; all samples associated with crops are grouped in a single `Croplands` label. 
``` # Copy the sample set for Mato Grosso samples_new_labels <- samples_matogrosso_mod13q1 # Show the current labels [sits_labels](https://rdrr.io/pkg/sits/man/sits_labels.html)(samples_new_labels) ``` ``` #> [1] "Cerrado" "Forest" "Pasture" "Soy_Corn" "Soy_Cotton" #> [6] "Soy_Fallow" "Soy_Millet" ``` ``` # Update the labels [sits_labels](https://rdrr.io/pkg/sits/man/sits_labels.html)(samples_new_labels) <- [c](https://rdrr.io/r/base/c.html)( "Cerrado", "Forest", "Pasture", "Croplands", "Cropland", "Cropland", "Cropland" ) [summary](https://rdrr.io/r/base/summary.html)(samples_new_labels) ``` ``` #> # A tibble: 5 × 3 #> label count prop #> <chr> <int> <dbl> #> 1 Cerrado 379 0.206 #> 2 Cropland 619 0.337 #> 3 Croplands 364 0.198 #> 4 Forest 131 0.0713 #> 5 Pasture 344 0.187 ``` Since metadata and the embedded time series use the tibble data format, the functions from `dplyr`, `tidyr`, and `purrr` packages of the `tidyverse` [\[23]](references.html#ref-Wickham2017) can be used to process the data. For example, the following code uses `[sits_select()](https://rdrr.io/pkg/sits/man/sits_select.html)` to get a subset of the sample dataset with two bands (NDVI and EVI) and then uses the `[dplyr::filter()](https://dplyr.tidyverse.org/reference/filter.html)` to select the samples labeled as `Cerrado`. ``` # Select NDVI band samples_ndvi <- [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)(samples_matogrosso_mod13q1, bands = "NDVI" ) # Select only samples with Cerrado label samples_cerrado <- dplyr::[filter](https://dplyr.tidyverse.org/reference/filter.html)( samples_ndvi, label == "Cerrado" ) ``` Time series visualisation ------------------------- Given a few samples to display, `[plot()](https://rdrr.io/r/graphics/plot.default.html)` tries to group as many spatial locations together. In the following example, the first 12 samples labeled as `Cerrado` refer to the same spatial location in consecutive time periods. For this reason, these samples are plotted together. ``` # Plot the first 12 samples [plot](https://rdrr.io/r/graphics/plot.default.html)(samples_cerrado[1:12, ]) ``` Figure 48: Plot of the first ‘Cerrado’ samples (source: authors). For many samples, the default visualization combines all samples together in a single temporal interval, even if they belong to different years. This plot shows the spread of values for the time series of each band. The strong red line in the plot indicates the median of the values, while the two orange lines are the first and third interquartile ranges. See `[?sits::plot](https://rdrr.io/pkg/sits/man/plot.html)` for more details on data visualization in `sits`. ``` # Plot all cerrado samples together [plot](https://rdrr.io/r/graphics/plot.default.html)(samples_cerrado) ``` Figure 49: Plot of all Cerrado samples (source: authors). To see the spatial distribution of the samples, use `[sits_view()](https://rdrr.io/pkg/sits/man/sits_view.html)` to create an interactive plot. The spatial visulisation is useful to show where the data has been collected. ``` [sits_view](https://rdrr.io/pkg/sits/man/sits_view.html)(samples_matogrosso_mod13q1) ``` Visualizing sample patterns --------------------------- When dealing with large time series, its is useful to obtain a single plot that captures the essential temporal variability of each class. Following the work on the `dtwSat` R package [\[24]](references.html#ref-Maus2019), we use a generalized additive model (GAM) to obtain a single time series based on statistical approximation. 
In a GAM, the predictor depends linearly on a smooth function of the predictor variables. \\\[ y \= \\beta\_{i} \+ f(x) \+ \\epsilon, \\epsilon \\sim N(0, \\sigma^2\). \\] The function `[sits_patterns()](https://rdrr.io/pkg/sits/man/sits_patterns.html)` uses a GAM to predict an idealized approximation to the time series associated with each class for all bands. The resulting patterns can be viewed using `[plot()](https://rdrr.io/r/graphics/plot.default.html)`. ``` # Estimate the patterns for each class and plot them samples_matogrosso_mod13q1 |> [sits_patterns](https://rdrr.io/pkg/sits/man/sits_patterns.html)() |> [plot](https://rdrr.io/r/graphics/plot.default.html)() ``` Figure 50: Patterns for the samples for Mato Grosso (source: authors). The resulting patterns provide some insights over the time series behaviour of each class. The response of the Forest class is quite distinctive. They also show that it should be possible to separate between the single and double cropping classes. There are similarities between the double\-cropping classes (`Soy_Corn` and `Soy_Millet`) and between the `Cerrado` and `Pasture` classes. The subtle differences between class signatures provide hints at possible ways by which machine learning algorithms might distinguish between classes. One example is the difference between the middle\-infrared response during the dry season (May to September) to differentiate between `Cerrado` and `Pasture`. Geographical variability of training samples -------------------------------------------- When working with machine learning classification of Earth observation data, it is important to evaluate if the training samples are well distributed in the study area. Training data often comes from ground surveys made at chosen locations. In large areas, ideally representative samples need to capture spatial variability. In practice, however, ground surveys or other means of data collection are limited to selected areas. In many cases, the geographical distribution of the training data does not cover the study area equally. Such mismatch can be a problem for achieving a good quality classification. As stated by Meyer and Pebesma [\[25]](references.html#ref-Meyer2022): “large gaps in geographic space do not always imply large gaps in feature space”. Meyer and Pebesma propose using a spatial distance distribution plot, which displays two distributions of nearest\-neighbor distances: sample\-to\-sample and prediction\-location\-to\-sample [\[25]](references.html#ref-Meyer2022). The difference between the two distributions reflects the degree of spatial clustering in the reference data. Ideally, the two distributions should be similar. Cases where the sample\-to\-sample distance distribution does not match prediction\-location\-to\-sample distribution indicate possible problems in training data collection. `sits` implements spatial distance distribution plots with the `[sits_geo_dist()](https://rdrr.io/pkg/sits/man/sits_geo_dist.html)` function. This function gets a training data in the `samples` parameter, and the study area in the `roi` parameter expressed as an `sf` object. Additional parameters are `n` (maximum number of samples for each distribution) and `crs` (coordinate reference system for the samples). By default, `n` is 1000, and `crs` is “EPSG:4326”. The example below shows how to use `[sits_geo_dist()](https://rdrr.io/pkg/sits/man/sits_geo_dist.html)`. 
``` # Read a shapefile for the state of Mato Grosso, Brazil mt_shp <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/shapefiles/mato_grosso/mt.shp", package = "sits" ) # Convert to an sf object mt_sf <- sf::[read_sf](https://r-spatial.github.io/sf/reference/st_read.html)(mt_shp) # Calculate sample-to-sample and sample-to-prediction distances distances <- [sits_geo_dist](https://rdrr.io/pkg/sits/man/sits_geo_dist.html)( samples = samples_modis_ndvi, roi = mt_sf ) # Plot sample-to-sample and sample-to-prediction distances [plot](https://rdrr.io/r/graphics/plot.default.html)(distances) ``` Figure 51: Distribution of sample\-to\-sample and sample\-to\-prediction distances (source: authors). The plot shows a mismatch between the sample\-to\-sample and the sample\-to\-prediction distributions. Most samples are closer to each other than they are close to the location where values need to be predicted. In this case, there are many areas where few or no samples have been collected and where the prediction uncertainty will be higher. In this and similar cases, improving the distribution of training samples is always welcome. If that is not possible, areas with insufficient samples could have lower accuracy. This information must be reported to potential users of classification results. Obtaining time series data from data cubes ------------------------------------------ To get a set of time series in `sits`, first create a regular data cube and then request one or more time series from the cube using `[sits_get_data()](https://rdrr.io/pkg/sits/man/sits_get_data.html)`. This function uses two mandatory parameters: `cube` and `samples`. The `cube` indicates the data cube from which the time series will be extracted. The `samples` parameter accepts the following data types: * A data.frame with information on `latitude` and `longitude` (mandatory), `start_date`, `end_date`, and `label` for each sample point. * A csv file with columns `latitude`, `longitude`, `start_date`, `end_date`, and `label`. * A shapefile containing either `POINT`or `POLYGON` geometries. See details below. * An `sf` object (from the `sf` package) with `POINT` or `POLYGON` geometry information. See details below. In the example below, given a data cube, the user provides the latitude and longitude of the desired location. Since the bands, start date, and end date of the time series are missing, `sits` obtains them from the data cube. The result is a tibble with one time series that can be visualized using `[plot()](https://rdrr.io/r/graphics/plot.default.html)`. ``` # Obtain a raster cube based on local files data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/sinop", package = "sitsdata") raster_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "BDC", collection = "MOD13Q1-6.1", data_dir = data_dir, parse_info = [c](https://rdrr.io/r/base/c.html)("satellite", "sensor", "tile", "band", "date") ) # Obtain a time series from the raster cube from a point sample_latlong <- tibble::[tibble](https://tibble.tidyverse.org/reference/tibble.html)( longitude = -55.57320, latitude = -11.50566 ) series <- [sits_get_data](https://rdrr.io/pkg/sits/man/sits_get_data.html)( cube = raster_cube, samples = sample_latlong ) [plot](https://rdrr.io/r/graphics/plot.default.html)(series) ``` Figure 52: NDVI and EVI time series fetched from local raster cube (source: authors). A useful case is when a set of labeled samples can be used as a training dataset. 
In this case, trusted observations are usually labeled and commonly stored in plain text files in comma\-separated values (csv) or using shapefiles (shp). ``` # Retrieve a list of samples described by a csv file samples_csv_file <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/samples/samples_sinop_crop.csv", package = "sits" ) # Read the csv file into an R object samples_csv <- [read.csv](https://rdrr.io/r/utils/read.table.html)(samples_csv_file) # Print the first three samples samples_csv[1:3, ] ``` ``` #> # A tibble: 3 × 6 #> id longitude latitude start_date end_date label #> <int> <dbl> <dbl> <chr> <chr> <chr> #> 1 1 -55.7 -11.8 2013-09-14 2014-08-29 Pasture #> 2 2 -55.6 -11.8 2013-09-14 2014-08-29 Pasture #> 3 3 -55.7 -11.8 2013-09-14 2014-08-29 Forest ``` To retrieve training samples for time series analysis, users must provide the temporal information (`start_date` and `end_date`). In the simplest case, all samples share the same dates. That is not a strict requirement. It is possible to specify different dates as long as they have a compatible duration. For example, the dataset `samples_matogrosso_mod13q1` provided with the `sitsdata` package contains samples from different years covering the same duration. These samples are from the MOD13Q1 product, which contains the same number of images per year. Thus, all time series in the dataset `samples_matogrosso_mod13q1` have the same number of dates. Given a suitably built csv sample file, `[sits_get_data()](https://rdrr.io/pkg/sits/man/sits_get_data.html)` requires two parameters: (a) `cube`, the name of the R object that describes the data cube; (b) `samples`, the name of the CSV file. ``` # Get the points from a data cube in raster brick format points <- [sits_get_data](https://rdrr.io/pkg/sits/man/sits_get_data.html)( cube = raster_cube, samples = samples_csv_file ) # Show the tibble with the first three points points[1:3, ] ``` ``` #> # A tibble: 3 × 7 #> longitude latitude start_date end_date label cube time_series #> <dbl> <dbl> <date> <date> <chr> <chr> <list> #> 1 -55.8 -11.7 2013-09-14 2014-08-29 Cerrado MOD13Q1-6.1 <tibble> #> 2 -55.8 -11.7 2013-09-14 2014-08-29 Cerrado MOD13Q1-6.1 <tibble> #> 3 -55.7 -11.7 2013-09-14 2014-08-29 Soy_Corn MOD13Q1-6.1 <tibble> ``` Users can also specify samples by providing shapefiles or `sf` objects containing `POINT` or `POLYGON` geometries. The geographical location is inferred from the geometries associated with the shapefile or `sf` object. For files containing points, the geographical location is obtained directly. For polygon geometries, the parameter `n_sam_pol` (defaults to 20\) determines the number of samples to be extracted from each polygon. The temporal information can be provided explicitly by the user; if absent, it is inferred from the data cube. If label information is available in the shapefile or `sf` object, the parameter `label_attr` is compulsory to indicate which column contains the label associated with each time series. 
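As an illustration, the sketch below passes an `sf` object with labeled polygons directly to `[sits_get_data()](https://rdrr.io/pkg/sits/man/sits_get_data.html)`. The file name, the object `fields_sf`, and the `"class"` column are hypothetical; only the parameters shown (`label_attr`, `n_sam_pol`, `start_date`, `end_date`) are part of the interface described above.

```
# Hypothetical sf object with POLYGON geometries and a "class" column
# holding the label of each polygon (object, file, and column names are
# illustrative only)
fields_sf <- sf::read_sf("fields_with_labels.gpkg")
# Extract time series for 10 points per polygon, taking labels from "class"
points_from_sf <- sits_get_data(
  cube       = raster_cube,   # regular data cube created earlier
  samples    = fields_sf,
  label_attr = "class",
  n_sam_pol  = 10,
  start_date = "2013-09-14",
  end_date   = "2014-08-29"
)
```

The example that follows applies the same idea using a shapefile of the state of Mato Grosso as the source of polygon geometries.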
``` # Obtain a set of points inside the state of Mato Grosso, Brazil shp_file <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/shapefiles/mato_grosso/mt.shp", package = "sits" ) # Read the shapefile into an "sf" object sf_shape <- sf::[st_read](https://r-spatial.github.io/sf/reference/st_read.html)(shp_file) ``` ``` #> Reading layer `mt' from data source #> `/Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/library/sits/extdata/shapefiles/mato_grosso/mt.shp' #> using driver `ESRI Shapefile' #> Simple feature collection with 1 feature and 3 fields #> Geometry type: MULTIPOLYGON #> Dimension: XY #> Bounding box: xmin: -61.63284 ymin: -18.03993 xmax: -50.22481 ymax: -7.349034 #> Geodetic CRS: SIRGAS 2000 ``` ``` # Create a data cube based on MOD13Q1 collection from BDC modis_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "BDC", collection = "MOD13Q1-6.1", bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI"), roi = sf_shape, start_date = "2020-06-01", end_date = "2021-08-29" ) # Read the points from the cube and produce a tibble with time series samples_mt <- [sits_get_data](https://rdrr.io/pkg/sits/man/sits_get_data.html)( cube = modis_cube, samples = shp_file, start_date = "2020-06-01", end_date = "2021-08-29", n_sam_pol = 20, multicores = 4 ) ``` Filtering time series --------------------- Satellite image time series are generally contaminated by atmospheric influence, geolocation error, and directional effects [\[26]](references.html#ref-Lambin2006). Atmospheric noise, sun angle, interference in observations, differences in equipment specifications, and the nature of climate\-land dynamics can all be sources of variability [\[27]](references.html#ref-Atkinson2012). Inter\-annual climate variability also changes the phenological cycles of the vegetation, resulting in time series whose periods and intensities do not match on a year\-to\-year basis. To make the best use of available satellite data archives, methods for satellite image time series analysis need to deal with *noisy* and *non\-homogeneous* datasets. The literature on satellite image time series has several applications of filtering to correct or smooth vegetation index data. The package supports the well\-known Savitzky–Golay (`[sits_sgolay()](https://rdrr.io/pkg/sits/man/sits_sgolay.html)`) and Whittaker (`[sits_whittaker()](https://rdrr.io/pkg/sits/man/sits_whittaker.html)`) filters. In an evaluation of NDVI time series filtering for estimating phenological parameters in India, Atkinson et al. found that the Whittaker filter provides good results [\[27]](references.html#ref-Atkinson2012). Zhou et al. found that the Savitzky\-Golay filter is suitable for reconstructing vegetation index time series of tropical evergreen broadleaf forests [\[28]](references.html#ref-Zhou2016). ### Savitzky–Golay filter The Savitzky\-Golay filter fits a successive array of \\(2n\+1\\) adjacent data points with a \\(d\\)\-degree polynomial through linear least squares. The main parameters for the filter are the polynomial degree (\\(d\\)) and the length of the window data points (\\(n\\)). It generally produces smoother results for a larger value of \\(n\\) and/or a smaller value of \\(d\\) [\[29]](references.html#ref-Chen2004). The optimal value for these two parameters can vary from case to case. 
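To make the idea concrete, the sketch below applies the same local polynomial fit in plain R, outside of `sits`: it slides a window of \\(2n\+1\\) points over a simulated noisy series and replaces each central value by the value fitted by a \\(d\\)\-degree polynomial. The function name and the simulated data are illustrative only; this is not the `sits` implementation.

```
# Minimal sketch of the Savitzky-Golay idea (illustration only)
sgolay_sketch <- function(y, n = 5, d = 3) {
  smoothed <- y
  idx <- -n:n  # positions within the window of 2n + 1 points
  for (i in seq(n + 1, length(y) - n)) {
    window <- y[(i - n):(i + n)]
    fit <- lm(window ~ poly(idx, degree = d, raw = TRUE))
    smoothed[i] <- fitted(fit)[n + 1]  # fitted value at the window centre
  }
  smoothed
}
# Apply it to a simulated noisy vegetation index signal
doy <- seq(0, 4 * pi, length.out = 92)
vi_noisy <- 0.5 + 0.3 * sin(doy) + rnorm(92, sd = 0.05)
vi_smooth <- sgolay_sketch(vi_noisy, n = 5, d = 3)
```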
In `sits`, the parameter `order` sets the order of the polynomial (default \= 3\), the parameter `length` sets the size of the temporal window (default \= 5\), and the parameter `scaling` sets the temporal expansion (default \= 1\). The following example shows the effect of the Savitzky\-Golay filter on a point extracted from the MOD13Q1 product, ranging from 2000\-02\-18 to 2018\-01\-01\. ``` # Take NDVI band of the first sample dataset point_ndvi <- [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)(point_mt_6bands, band = "NDVI") # Apply the Savitzky-Golay filter point_sg <- [sits_sgolay](https://rdrr.io/pkg/sits/man/sits_sgolay.html)(point_ndvi, length = 11) # Merge the point and plot the series [sits_merge](https://rdrr.io/pkg/sits/man/sits_merge.html)(point_sg, point_ndvi) |> [plot](https://rdrr.io/r/graphics/plot.default.html)() ``` Figure 53: Savitzky\-Golay filter applied on a multi\-year NDVI time series (source: authors). The resulting smoothed curve has both desirable and unwanted properties. From 2000 to 2008, the Savitzky\-Golay filter removes noise from clouds. However, after 2010, when the region was converted to agriculture, the filter removes an important part of the natural variability from the crop cycle. Therefore, the `length` parameter is arguably too large, resulting in oversmoothing. ### Whittaker filter The Whittaker smoother attempts to fit a curve representing the raw data, but is penalized if subsequent points vary too much [\[30]](references.html#ref-Atzberger2011). The Whittaker filter balances fidelity to the original data against the smoothness of the fitted curve. The filter has one parameter, \\(\\lambda\\), which works as a smoothing weight. The following example shows the effect of the Whittaker filter on a point extracted from the MOD13Q1 product, ranging from 2000\-02\-18 to 2018\-01\-01\. The `lambda` parameter controls the smoothing of the filter. By default, it is set to 0\.5, a small value. The example shows the effect of a larger smoothing parameter. ``` # Take NDVI band of the first sample dataset point_ndvi <- [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)(point_mt_6bands, band = "NDVI") # Apply the Whittaker filter point_whit <- [sits_whittaker](https://rdrr.io/pkg/sits/man/sits_whittaker.html)(point_ndvi, lambda = 8) # Merge the point and plot the series [sits_merge](https://rdrr.io/pkg/sits/man/sits_merge.html)(point_whit, point_ndvi) |> [plot](https://rdrr.io/r/graphics/plot.default.html)() ``` Figure 54: Whittaker filter applied on a multi\-year NDVI time series (source: authors). Similar to what is observed with the Savitzky\-Golay filter, high values of the smoothing parameter `lambda` produce an over\-smoothed time series that reduces the capacity of the time series to represent natural variations in crop growth. For this reason, low smoothing values are recommended when using `[sits_whittaker()](https://rdrr.io/pkg/sits/man/sits_whittaker.html)`.
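To illustrate the principle behind the filter, the sketch below implements a basic Whittaker smoother in plain R: it finds the series \\(z\\) that minimizes the squared residuals to the data plus a penalty, weighted by \\(\\lambda\\), on the squared second differences of \\(z\\). This is a generic sketch of the penalized least\-squares formulation, not the `sits` implementation, and the function name and simulated data are illustrative only.

```
# Minimal sketch of the Whittaker smoother (illustration only)
whittaker_sketch <- function(y, lambda = 8) {
  n <- length(y)
  D <- diff(diag(n), differences = 2)  # (n - 2) x n second-difference operator
  # Solve (I + lambda * t(D) %*% D) z = y for the smoothed series z
  solve(diag(n) + lambda * crossprod(D), y)
}
# Compare a small and a large smoothing weight on a simulated noisy signal
doy <- seq(0, 4 * pi, length.out = 92)
vi_noisy <- 0.5 + 0.3 * sin(doy) + rnorm(92, sd = 0.05)
vi_smooth_low  <- whittaker_sketch(vi_noisy, lambda = 0.5)
vi_smooth_high <- whittaker_sketch(vi_noisy, lambda = 8)
```

Larger values of the smoothing weight put more emphasis on the penalty term and therefore produce smoother, and possibly over\-smoothed, curves, which is the behaviour discussed above.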
Improving the quality of training samples ========================================= Selecting good training samples for machine learning classification of satellite images is critical to achieving accurate results. Experience with machine learning methods has shown that the number and quality of training samples are crucial factors in obtaining accurate results [\[31]](references.html#ref-Maxwell2018). Large and accurate datasets are preferable, regardless of the algorithm used, while noisy training samples can negatively impact classification performance [\[32]](references.html#ref-Frenay2014). Thus, it is beneficial to use pre\-processing methods to improve the quality of samples and eliminate those that may have been incorrectly labeled or possess low discriminatory power. It is necessary to distinguish between wrongly labeled samples and differences resulting from the natural variability of class signatures. When working in a large geographic region, the variability of vegetation phenology leads to different patterns being assigned to the same label. A related issue is the limitation of crisp boundaries to describe the natural world. Class definitions use idealized descriptions (e.g., “a savanna woodland has tree cover of 50% to 90% ranging from 8 to 15 m in height”). Class boundaries are fuzzy and sometimes overlap, making it hard to distinguish between them. To improve sample quality, `sits` provides methods for evaluating the training data. Given a set of training samples, experts should first cross\-validate the training set to assess its inherent prediction error. The results show whether the data is internally consistent. Since cross\-validation does not predict actual model performance, this chapter provides additional tools for improving the quality of training sets. More detailed information is available in the chapter [Validation and accuracy measurements](https://e-sensing.github.io/sitsbook/validation-and-accuracy-measurements.html). Datasets used in this chapter ----------------------------- The examples of this chapter use two datasets: * `cerrado_2classes`: a set of time series for the Cerrado region of Brazil, the second largest biome in South America with an area of more than 2 million km^2\. The data contains 746 samples divided into 2 classes (`Cerrado` and `Pasture`). Each time series covers 12 months (23 data points) from the MOD13Q1 product and has 2 bands (EVI and NDVI). * `samples_cerrado_mod13q1`: a set of time series from the Cerrado region of Brazil. The data ranges from 2000 to 2017 and includes 50,160 samples divided into 12 classes (`Dense_Woodland`, `Dunes`, `Fallow_Cotton`, `Millet_Cotton`, `Pasture`, `Rocky_Savanna`, `Savanna`, `Savanna_Parkland`, `Silviculture`, `Soy_Corn`, `Soy_Cotton`, and `Soy_Fallow`). Each time series covers 12 months (23 data points) from the MOD13Q1 product and has 4 bands (EVI, NDVI, MIR, and NIR). We use bands NDVI and EVI for faster processing. 
``` [library](https://rdrr.io/r/base/library.html)([sits](https://github.com/e-sensing/sits/)) [library](https://rdrr.io/r/base/library.html)([sitsdata](https://github.com/e-sensing/sitsdata/)) # Take only the NDVI and EVI bands samples_cerrado_mod13q1_2bands <- [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)( data = samples_cerrado_mod13q1, bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI") ) # Show the summary of the samples [summary](https://rdrr.io/r/base/summary.html)(samples_cerrado_mod13q1_2bands) ``` ``` #> # A tibble: 12 × 3 #> label count prop #> <chr> <int> <dbl> #> 1 Dense_Woodland 9966 0.199 #> 2 Dunes 550 0.0110 #> 3 Fallow_Cotton 630 0.0126 #> 4 Millet_Cotton 316 0.00630 #> 5 Pasture 7206 0.144 #> 6 Rocky_Savanna 8005 0.160 #> 7 Savanna 9172 0.183 #> 8 Savanna_Parkland 2699 0.0538 #> 9 Silviculture 423 0.00843 #> 10 Soy_Corn 4971 0.0991 #> 11 Soy_Cotton 4124 0.0822 #> 12 Soy_Fallow 2098 0.0418 ``` Cross\-validation of training sets ---------------------------------- Cross\-validation is a technique to estimate the inherent prediction error of a model [\[33]](references.html#ref-Hastie2009). Since cross\-validation uses only the training samples, its results are not accuracy measures unless the samples have been carefully collected to represent the diversity of possible occurrences of classes in the study area [\[34]](references.html#ref-Wadoux2021). In practice, when working in large areas, it is hard to obtain random stratified samples which cover the different variations in land classes associated with the ecosystems of the study area. Thus, cross\-validation should be taken as a measure of model performance on the training data and not an estimate of overall map accuracy. Cross\-validation uses part of the available samples to fit the classification model and a different part to test it. The k\-fold validation method splits the data into \\(k\\) partitions with approximately the same size and proceeds by fitting the model and testing it \\(k\\) times. At each step, we take one distinct partition for the test and the remaining \\({k\-1}\\) for training the model and calculate its prediction error for classifying the test partition. A simple average gives us an estimation of the expected prediction error. The recommended choices of \\(k\\) are \\(5\\) or \\(10\\) [\[33]](references.html#ref-Hastie2009). `[sits_kfold_validate()](https://rdrr.io/pkg/sits/man/sits_kfold_validate.html)` supports k\-fold validation in `sits`. The result is the confusion matrix and the accuracy statistics (overall and by class). In the examples below, we use multiprocessing to speed up the results. The parameters of `sits_kfold_validate` are: 1. `samples`: training samples organized as a time series tibble; 2. `folds`: number of folds, or how many times to split the data (default \= 5\); 3. `ml_method`: ML/DL method to be used for the validation (default \= random forest); 4. `multicores`: number of cores to be used for parallel processing (default \= 2\). Below we show an example of cross\-validation on the `samples_cerrado_mod13q1` dataset. 
``` rfor_validate <- [sits_kfold_validate](https://rdrr.io/pkg/sits/man/sits_kfold_validate.html)( samples = samples_cerrado_mod13q1_2bands, folds = 5, ml_method = [sits_rfor](https://rdrr.io/pkg/sits/man/sits_rfor.html)(), multicores = 5 ) rfor_validate ``` ``` #> Confusion Matrix and Statistics #> #> Reference #> Prediction Pasture Dense_Woodland Rocky_Savanna Savanna_Parkland #> Pasture 6618 23 9 5 #> Dense_Woodland 496 9674 604 0 #> Rocky_Savanna 8 62 7309 27 #> Savanna_Parkland 4 0 50 2641 #> Savanna 56 200 33 26 #> Dunes 0 0 0 0 #> Soy_Corn 9 0 0 0 #> Soy_Cotton 1 0 0 0 #> Soy_Fallow 11 0 0 0 #> Fallow_Cotton 3 0 0 0 #> Silviculture 0 7 0 0 #> Millet_Cotton 0 0 0 0 #> Reference #> Prediction Savanna Dunes Soy_Corn Soy_Cotton Soy_Fallow Fallow_Cotton #> Pasture 114 0 36 12 25 41 #> Dense_Woodland 138 0 3 2 1 0 #> Rocky_Savanna 9 0 0 0 0 0 #> Savanna_Parkland 15 0 1 0 1 1 #> Savanna 8896 0 9 0 1 0 #> Dunes 0 550 0 0 0 0 #> Soy_Corn 0 0 4851 58 355 8 #> Soy_Cotton 0 0 40 4041 0 19 #> Soy_Fallow 0 0 29 0 1710 1 #> Fallow_Cotton 0 0 2 3 5 555 #> Silviculture 0 0 0 0 0 0 #> Millet_Cotton 0 0 0 8 0 5 #> Reference #> Prediction Silviculture Millet_Cotton #> Pasture 1 1 #> Dense_Woodland 102 0 #> Rocky_Savanna 0 0 #> Savanna_Parkland 0 0 #> Savanna 8 0 #> Dunes 0 0 #> Soy_Corn 0 3 #> Soy_Cotton 0 21 #> Soy_Fallow 0 0 #> Fallow_Cotton 0 20 #> Silviculture 312 0 #> Millet_Cotton 0 271 #> #> Overall Statistics #> #> Accuracy : 0.9455 #> 95% CI : (0.9435, 0.9475) #> #> Kappa : 0.9365 #> #> Statistics by Class: #> #> Class: Pasture Class: Dense_Woodland #> Prod Acc (Sensitivity) 0.9184 0.9707 #> Specificity 0.9938 0.9665 #> User Acc (Pos Pred Value) 0.9612 0.8779 #> Neg Pred Value 0.9864 0.9925 #> F1 score 0.9393 0.9219 #> Class: Rocky_Savanna Class: Savanna_Parkland #> Prod Acc (Sensitivity) 0.9131 0.9785 #> Specificity 0.9975 0.9985 #> User Acc (Pos Pred Value) 0.9857 0.9735 #> Neg Pred Value 0.9837 0.9988 #> F1 score 0.9480 0.9760 #> Class: Savanna Class: Dunes Class: Soy_Corn #> Prod Acc (Sensitivity) 0.9699 1 0.9759 #> Specificity 0.9919 1 0.9904 #> User Acc (Pos Pred Value) 0.9639 1 0.9181 #> Neg Pred Value 0.9933 1 0.9973 #> F1 score 0.9669 1 0.9461 #> Class: Soy_Cotton Class: Soy_Fallow #> Prod Acc (Sensitivity) 0.9799 0.8151 #> Specificity 0.9982 0.9991 #> User Acc (Pos Pred Value) 0.9803 0.9766 #> Neg Pred Value 0.9982 0.9920 #> F1 score 0.9801 0.8885 #> Class: Fallow_Cotton Class: Silviculture #> Prod Acc (Sensitivity) 0.8810 0.7376 #> Specificity 0.9993 0.9999 #> User Acc (Pos Pred Value) 0.9439 0.9781 #> Neg Pred Value 0.9985 0.9978 #> F1 score 0.9113 0.8410 #> Class: Millet_Cotton #> Prod Acc (Sensitivity) 0.8576 #> Specificity 0.9997 #> User Acc (Pos Pred Value) 0.9542 #> Neg Pred Value 0.9991 #> F1 score 0.9033 ``` The results show a good validation, reaching 94% accuracy. However, this accuracy does not guarantee a good classification result. It only shows if the training data is internally consistent. In what follows, we present additional methods for improving sample quality. Cross\-validation measures how well the model fits the training data. Using these results to measure classification accuracy is only valid if the training data is a good sample of the entire dataset. Training data is subject to various sources of bias. In land classification, some classes are much more frequent than others, so the training dataset will be imbalanced. Regional differences in soil and climate conditions for large areas will lead the same classes to have different spectral responses. 
Field analysts may be restricted to places they can access (e.g., along roads) when collecting samples. An additional problem is mixed pixels. Expert interpreters select samples that stand out in fieldwork or reference images. Border pixels are unlikely to be chosen as part of the training data. For all these reasons, cross\-validation results do not measure classification accuracy for the entire dataset. Hierarchical clustering for sample quality control -------------------------------------------------- The package provides two clustering methods to assess sample quality: Agglomerative Hierarchical Clustering (AHC) and Self\-organizing Maps (SOM). These methods have different computational complexities. AHC has a computational complexity of \\(\\mathcal{O}(n^2\)\\), given the number of time series \\(n\\), whereas SOM complexity is linear. For large data, AHC requires substantial memory and running time; in these cases, SOM is recommended. This section describes how to run AHC in `sits`. The SOM\-based technique is presented in the next section. AHC computes the dissimilarity between any two elements from a dataset. Depending on the distance functions and linkage criteria, the algorithm decides which two clusters are merged at each iteration. This approach is helpful for exploring samples due to its visualization power and ease of use [\[35]](references.html#ref-Keogh2003). In `sits`, AHC is implemented using `[sits_cluster_dendro()](https://rdrr.io/pkg/sits/man/sits_cluster_dendro.html)`. ``` # Take a set of patterns for 2 classes # Create a dendrogram, plot, and get the optimal cluster based on ARI index clusters <- [sits_cluster_dendro](https://rdrr.io/pkg/sits/man/sits_cluster_dendro.html)( samples = cerrado_2classes, bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI"), dist_method = "dtw_basic", linkage = "ward.D2" ) ``` Figure 55: Example of hierarchical clustering for a two\-class set of time series (source: authors). The `[sits_cluster_dendro()](https://rdrr.io/pkg/sits/man/sits_cluster_dendro.html)` function has one mandatory parameter (`samples`), with the samples to be evaluated. Optional parameters include `bands`, `dist_method`, and `linkage`. The `dist_method` parameter specifies how to calculate the distance between two time series. We recommend a metric that uses dynamic time warping (DTW) [\[36]](references.html#ref-Petitjean2012), as DTW is a reliable method for measuring differences between satellite image time series [\[37]](references.html#ref-Maus2016). The options available in `sits` are based on those provided by package `dtwclust`, which include `dtw_basic`, `dtw_lb`, and `dtw2`. Please check `[?dtwclust::tsclust](https://rdrr.io/pkg/dtwclust/man/tsclust.html)` for more information on DTW distances. The `linkage` parameter defines the distance metric between clusters. The recommended linkage criteria are `complete` or `ward.D2`. Complete linkage prioritizes within\-cluster dissimilarities, producing compact clusters, but the results are sensitive to outliers. As an alternative, Ward proposes to use the sum\-of\-squares error to minimize data variance [\[38]](references.html#ref-Ward1963); his method is available as the `ward.D2` option of the `linkage` parameter. To cut the dendrogram, the `[sits_cluster_dendro()](https://rdrr.io/pkg/sits/man/sits_cluster_dendro.html)` function computes the adjusted Rand index (ARI) [\[39]](references.html#ref-Rand1971), returning the height where the cut of the dendrogram maximizes the index. 
In the example, the ARI index indicates that there are six clusters. The result of `[sits_cluster_dendro()](https://rdrr.io/pkg/sits/man/sits_cluster_dendro.html)` is a time series tibble with one additional column called “cluster”. The function `[sits_cluster_frequency()](https://rdrr.io/pkg/sits/man/sits_cluster_frequency.html)` provides information on the composition of each cluster. ``` # Show clusters samples frequency [sits_cluster_frequency](https://rdrr.io/pkg/sits/man/sits_cluster_frequency.html)(clusters) ``` ``` #> #> 1 2 3 4 5 6 Total #> Cerrado 203 13 23 80 1 80 400 #> Pasture 2 176 28 0 140 0 346 #> Total 205 189 51 80 141 80 746 ``` The cluster frequency table shows that each cluster has a predominance of either `Cerrado` or `Pasture` labels, except for cluster 3, which has a mix of samples from both labels. Such confusion may have resulted from incorrect labeling, inadequacy of selected bands and spatial resolution, or even a natural confusion due to the variability of the land classes. To remove cluster 3, use `[dplyr::filter()](https://dplyr.tidyverse.org/reference/filter.html)`. The resulting clusters still contain mixed labels, possibly resulting from outliers. In this case, `[sits_cluster_clean()](https://rdrr.io/pkg/sits/man/sits_cluster_clean.html)` removes the outliers, leaving only the most frequent label. After cleaning, the resulting set of samples is likely to produce better classification results. ``` # Remove cluster 3 from the samples clusters_new <- dplyr::[filter](https://dplyr.tidyverse.org/reference/filter.html)(clusters, cluster != 3) # Clear clusters, leaving only the majority label clean <- [sits_cluster_clean](https://rdrr.io/pkg/sits/man/sits_cluster_clean.html)(clusters_new) # Show clusters samples frequency [sits_cluster_frequency](https://rdrr.io/pkg/sits/man/sits_cluster_frequency.html)(clean) ``` ``` #> #> 1 2 4 5 6 Total #> Cerrado 203 0 80 0 80 363 #> Pasture 0 176 0 140 0 316 #> Total 203 176 80 140 80 679 ``` Using SOM for sample quality control ------------------------------------ `sits` provides a clustering technique based on self\-organizing maps (SOM) as an alternative to hierarchical clustering for quality control of training samples. SOM is a dimensionality reduction technique [\[40]](references.html#ref-Kohonen1990), where high\-dimensional data is mapped into a two\-dimensional map, keeping the topological relations between data patterns. As shown in Figure [56](improving-the-quality-of-training-samples.html#fig:som2d), the SOM 2D map is composed of units called neurons. Each neuron has a weight vector, with the same dimension as the training samples. At the start, neurons are assigned small random values and then trained by competitive learning. The algorithm computes the distances of each member of the training set to all neurons and finds the neuron closest to the input, called the best matching unit. Figure 56: SOM 2D map creation (Source: Santos et al. (2021\). Reproduction under fair use doctrine). The input data for quality assessment is a set of training samples, which are high\-dimensional data; for example, a time series with 25 instances of 4 spectral bands has 100 dimensions. When projecting a high\-dimensional dataset into a 2D SOM map, the units of the map (called neurons) compete for each sample. Each time series will be mapped to one of the neurons. Since the number of neurons is smaller than the number of samples, each neuron will be associated with many time series. 
The resulting 2D map will be a set of clusters. Given that SOM preserves the topological structure of neighborhoods in multiple dimensions, clusters that contain training samples with a given label will usually be neighbors in 2D space. The neighbors of each neuron of a SOM map provide information on intraclass and interclass variability, which is used to detect noisy samples. The methodology of using SOM for sample quality assessment is discussed in detail in the reference paper [\[41]](references.html#ref-Santos2021a). Figure 57: Using SOM for class noise reduction (Source: Santos et al. (2021\). Reproduction under fair use doctrine). Creating the SOM map -------------------- To perform the SOM\-based quality assessment, the first step is to run `[sits_som_map()](https://rdrr.io/pkg/sits/man/sits_som.html)`, which uses the `kohonen` R package to compute a SOM grid [\[42]](references.html#ref-Wehrens2018), controlled by five parameters. The grid size is given by `grid_xdim` and `grid_ydim`. The starting learning rate is `alpha`, which decreases during the iterations. To measure the separation between samples, use `distance` (either “dtw” or “euclidean”). The number of iterations is set by `rlen`. When using `[sits_som_map()](https://rdrr.io/pkg/sits/man/sits_som.html)` on machines which have multiprocessing support for the OpenMP protocol, setting the learning mode parameter `mode` to “pbatch” improves processing time. On macOS and Windows, please use “online”. We suggest using the Dynamic Time Warping (“dtw”) metric as the distance measure. It is a technique used to measure the similarity between two temporal sequences that may vary in speed or timing [\[43]](references.html#ref-Berndt1994). The core idea of DTW is to find the optimal alignment between two sequences by allowing non\-linear mapping of one sequence onto another. In time series analysis, DTW can match two series that are slightly out of sync. This property is useful in land use studies for matching time series of agricultural areas [\[44]](references.html#ref-Maus2015). ``` # Clustering time series using SOM som_cluster <- [sits_som_map](https://rdrr.io/pkg/sits/man/sits_som.html)(samples_cerrado_mod13q1_2bands, grid_xdim = 15, grid_ydim = 15, alpha = 1.0, distance = "dtw", rlen = 20 ) ``` ``` # Plot the SOM map [plot](https://rdrr.io/r/graphics/plot.default.html)(som_cluster) ``` Figure 58: SOM map for the Cerrado samples (source: authors). The output of `[sits_som_map()](https://rdrr.io/pkg/sits/man/sits_som.html)` is a list with three elements: (a) `data`, the original set of time series with two additional columns for each time series: `id_sample` (the original id of each sample) and `id_neuron` (the id of the neuron to which it belongs); (b) `labelled_neurons`, a tibble with information on the neurons. For each neuron, it gives the prior and posterior probabilities of all labels which occur in the samples assigned to it; and (c) the SOM grid. To plot the SOM grid, use `[plot()](https://rdrr.io/r/graphics/plot.default.html)`. The neurons are labelled using majority voting. The SOM grid shows that most classes are associated with neurons close to each other, although there are exceptions. Some Pasture neurons are far from the main cluster because the transition between open savanna and pasture areas is not always well defined and depends on climate and latitude. Also, the neurons associated with Soy\_Fallow are dispersed in the map, indicating possible problems in distinguishing this class from the other agricultural classes. 
The SOM map can be used to remove outliers, as shown below. Measuring confusion between labels using SOM -------------------------------------------- The second step in SOM\-based quality assessment is understanding the confusion between labels. The function `[sits_som_evaluate_cluster()](https://rdrr.io/pkg/sits/man/sits_som_evaluate_cluster.html)` groups neurons by their majority label and produces a tibble. Neurons are grouped into clusters, and there will be as many clusters as there are labels. The results show the percentage of samples of each label in each cluster. Ideally, all samples of each cluster would have the same label. In practice, clusters contain samples with different labels. This information helps measure the confusion between samples. ``` # Produce a tibble with a summary of the mixed labels som_eval <- [sits_som_evaluate_cluster](https://rdrr.io/pkg/sits/man/sits_som_evaluate_cluster.html)(som_cluster) # Show the result som_eval ``` ``` #> # A tibble: 66 × 4 #> id_cluster cluster class mixture_percentage #> <int> <chr> <chr> <dbl> #> 1 1 Dense_Woodland Dense_Woodland 78.1 #> 2 1 Dense_Woodland Pasture 5.56 #> 3 1 Dense_Woodland Rocky_Savanna 8.95 #> 4 1 Dense_Woodland Savanna 3.88 #> 5 1 Dense_Woodland Silviculture 3.48 #> 6 1 Dense_Woodland Soy_Corn 0.0249 #> 7 2 Dunes Dunes 100 #> 8 3 Fallow_Cotton Dense_Woodland 0.169 #> 9 3 Fallow_Cotton Fallow_Cotton 49.5 #> 10 3 Fallow_Cotton Millet_Cotton 13.9 #> # ℹ 56 more rows ``` Many labels are associated with clusters where there are some samples with a different label. Such confusion between labels arises because sample labeling is subjective and can be biased. In many cases, interpreters use high\-resolution data to identify samples. However, the actual images to be classified are captured by satellites with lower resolution. In our case study, a MOD13Q1 image has pixels with 250 m resolution. As such, the correspondence between labeled locations in high\-resolution images and mid to low\-resolution images is not direct. The confusion by sample label can be visualized in a bar plot using `[plot()](https://rdrr.io/r/graphics/plot.default.html)`, as shown below. The bar plot shows some confusion between the labels associated with the natural vegetation typical of the Brazilian Cerrado (`Savanna`, `Savanna_Parkland`, `Rocky_Savanna`). This mixture is due to the large variability of the natural vegetation of the Cerrado biome, which makes it difficult to draw sharp boundaries between classes. Some confusion is also visible between the agricultural classes. The `Millet_Cotton` class is a particularly difficult one since many of the samples assigned to this class are confused with `Soy_Cotton` and `Fallow_Cotton`. ``` # Plot the confusion between clusters [plot](https://rdrr.io/r/graphics/plot.default.html)(som_eval) ``` Figure 59: Confusion between classes as measured by SOM (source: authors). Detecting noisy samples using SOM --------------------------------- The third step in the quality assessment uses the discrete probability distribution associated with each neuron, which is included in the `labelled_neurons` tibble produced by `[sits_som_map()](https://rdrr.io/pkg/sits/man/sits_som.html)`. This approach associates probabilities with frequency of occurrence. More homogeneous neurons (those where one label has a high frequency) are assumed to be composed of good quality samples. Heterogeneous neurons (those with two or more classes with significant frequencies) are likely to contain noisy samples. 
The algorithm computes two values for each sample: * *prior probability*: the probability that the label assigned to the sample is correct, considering the frequency of samples in the same neuron. For example, if a neuron has 20 samples, of which 15 are labeled as `Pasture` and 5 as `Forest`, all samples labeled `Forest` are assigned a prior probability of 25%. This indicates that `Forest` samples in this neuron may not be of good quality. * *posterior probability*: the probability that the label assigned to the sample is correct, considering the neighboring neurons. Take the case of the above\-mentioned neuron whose samples labeled `Pasture` have a prior probability of 75%. What happens if all the neighboring neurons have `Forest` as a majority label? To answer this question, we use Bayesian inference to estimate whether these samples are noisy based on the surrounding neurons [\[45]](references.html#ref-Santos2021). To identify noisy samples, we take the result of the `[sits_som_map()](https://rdrr.io/pkg/sits/man/sits_som.html)` function as the first argument to the function `[sits_som_clean_samples()](https://rdrr.io/pkg/sits/man/sits_som_clean_samples.html)`. This function finds out which samples are noisy, which are clean, and which need to be further examined by the user. It requires the `prior_threshold` and `posterior_threshold` parameters according to the following rules: * If the prior probability of a sample is less than `prior_threshold`, the sample is assumed to be noisy and tagged as “remove”; * If the prior probability is greater than or equal to `prior_threshold` and the posterior probability calculated by Bayesian inference is greater than or equal to `posterior_threshold`, the sample is assumed not to be noisy and thus is tagged as “clean”; * If the prior probability is greater than or equal to `prior_threshold` and the posterior probability is less than `posterior_threshold`, we have a situation where the sample is part of the majority label of those assigned to its neuron, but its label is not consistent with most of its neighbors. This is an anomalous condition and is tagged as “analyze”. Users are encouraged to inspect such samples to find out whether they are in fact noisy or not. The default value for both `prior_threshold` and `posterior_threshold` is 60%. The `[sits_som_clean_samples()](https://rdrr.io/pkg/sits/man/sits_som_clean_samples.html)` function has an additional parameter (`keep`), which indicates which samples should be kept in the set based on their prior and posterior probabilities. The default for `keep` is `c("clean", "analyze")`. As a result of the cleaning, about 11,000 samples (roughly 23% of the original set) have been considered to be noisy and thus removed. ``` new_samples <- [sits_som_clean_samples](https://rdrr.io/pkg/sits/man/sits_som_clean_samples.html)( som_map = som_cluster, prior_threshold = 0.6, posterior_threshold = 0.6, keep = [c](https://rdrr.io/r/base/c.html)("clean", "analyze") ) # Print the new sample distribution [summary](https://rdrr.io/r/base/summary.html)(new_samples) ``` ``` #> # A tibble: 9 × 3 #> label count prop #> <chr> <int> <dbl> #> 1 Dense_Woodland 8519 0.220 #> 2 Dunes 550 0.0142 #> 3 Pasture 5509 0.142 #> 4 Rocky_Savanna 5508 0.142 #> 5 Savanna 7651 0.197 #> 6 Savanna_Parkland 1619 0.0418 #> 7 Soy_Corn 4595 0.119 #> 8 Soy_Cotton 3515 0.0907 #> 9 Soy_Fallow 1309 0.0338 ``` All samples of the class which had the highest confusion with others (`Millet_Cotton`) have been removed. 
Most samples of class `Silviculture` (planted forests) have also been removed since they have been confused with natural forests and woodlands in the SOM map. Further analysis includes calculating the SOM map and confusion matrix for the new set, as shown in the following example. ``` # Evaluate the mixture in the SOM clusters of new samples new_cluster <- [sits_som_map](https://rdrr.io/pkg/sits/man/sits_som.html)( data = new_samples, grid_xdim = 15, grid_ydim = 15, alpha = 1.0, rlen = 20, distance = "dtw" ) ``` ``` new_cluster_mixture <- [sits_som_evaluate_cluster](https://rdrr.io/pkg/sits/man/sits_som_evaluate_cluster.html)(new_cluster) # Plot the mixture information. [plot](https://rdrr.io/r/graphics/plot.default.html)(new_cluster_mixture) ``` Figure 60: Cluster confusion plot for samples cleaned by SOM (source: authors). As expected, the new confusion map shows a significant improvement over the previous one. This result should be interpreted carefully since it may be due to different effects. The most direct interpretation is that `Millet_Cotton` and `Silviculture` cannot be easily separated from the other classes, given the current attributes (a time series of NDVI and EVI indices from MODIS images). In such situations, users should consider improving the number of samples from the less represented classes, including more MODIS bands, or working with higher resolution satellites. The results of the SOM method should be interpreted based on the users’ understanding of the ecosystems and agricultural practices of the study region. The SOM\-based analysis discards samples that can be confused with samples of other classes. After removing noisy samples or uncertain classes, the dataset obtains a better validation score since there is less confusion between classes. Users should analyse the results with care. Not all discarded samples are low\-quality ones. Confusion between samples of different classes can result from inconsistent labeling or from the lack of capacity of satellite data to distinguish between chosen classes. When many samples are discarded, as in the current example, revising the whole classification schema is advisable. The aim of selecting training data should always be to match the reality on the ground to the power of remote sensing data to identify differences. No analysis procedure can replace actual user experience and knowledge of the study region. Reducing sample imbalance ------------------------- Many training samples for Earth observation data analysis are imbalanced. This situation arises when the distribution of samples associated with each label is uneven. One example is the Cerrado dataset used in this Chapter. The three most frequent labels (`Dense Woodland`, `Savanna`, and `Pasture`) include 53% of all samples, while the three least frequent labels (`Millet-Cotton`, `Silviculture`, and `Dunes`) comprise only 2\.5% of the dataset. Sample imbalance is an undesirable property of a training set since machine learning algorithms tend to be more accurate for classes with many samples. The instances belonging to the minority group are misclassified more often than those belonging to the majority group. Thus, reducing sample imbalance can positively affect classification accuracy [\[46]](references.html#ref-Johnson2019). 
The function `[sits_reduce_imbalance()](https://rdrr.io/pkg/sits/man/sits_reduce_imbalance.html)` deals with training set imbalance; it increases the number of samples of the least frequent labels and reduces the number of samples of the most frequent labels. Oversampling requires generating synthetic samples. The package uses the SMOTE method that estimates new samples by considering the cluster formed by the nearest neighbors of each minority label. SMOTE takes two samples from this cluster and produces a new one by randomly interpolating them [\[47]](references.html#ref-Chawla2002). To perform undersampling, `[sits_reduce_imbalance()](https://rdrr.io/pkg/sits/man/sits_reduce_imbalance.html)` builds a SOM map for each majority label based on the required number of samples to be selected. Each dimension of the SOM is set to `ceiling(sqrt(new_number_samples/4))` to allow a reasonable number of neurons to group similar samples. After calculating the SOM map, the algorithm extracts four samples per neuron to generate a reduced set of samples that approximates the variation of the original one. The `[sits_reduce_imbalance()](https://rdrr.io/pkg/sits/man/sits_reduce_imbalance.html)` algorithm has two parameters: `n_samples_over` and `n_samples_under`. The first parameter indicates the minimum number of samples per class. All classes with fewer samples than this value are oversampled. The second parameter controls the maximum number of samples per class; all classes with more samples than this value are undersampled. The following example uses `[sits_reduce_imbalance()](https://rdrr.io/pkg/sits/man/sits_reduce_imbalance.html)` with the Cerrado samples. We generate a balanced dataset where all classes have a minimum of 1000 and a maximum of 1500 samples. We use `[sits_som_evaluate_cluster()](https://rdrr.io/pkg/sits/man/sits_som_evaluate_cluster.html)` to estimate the confusion between classes of the balanced dataset. ``` # Reducing imbalances in the Cerrado dataset balanced_samples <- [sits_reduce_imbalance](https://rdrr.io/pkg/sits/man/sits_reduce_imbalance.html)( samples = samples_cerrado_mod13q1_2bands, n_samples_over = 1000, n_samples_under = 1500, multicores = 4 ) ``` ``` # Print the balanced samples # Some classes have more than 1500 samples due to the SOM map # Each label has between 6% and 10% of the full set [summary](https://rdrr.io/r/base/summary.html)(balanced_samples) ``` ``` #> # A tibble: 12 × 3 #> label count prop #> <chr> <int> <dbl> #> 1 Dense_Woodland 1596 0.0974 #> 2 Dunes 1000 0.0610 #> 3 Fallow_Cotton 1000 0.0610 #> 4 Millet_Cotton 1000 0.0610 #> 5 Pasture 1592 0.0971 #> 6 Rocky_Savanna 1476 0.0901 #> 7 Savanna 1600 0.0976 #> 8 Savanna_Parkland 1564 0.0954 #> 9 Silviculture 1000 0.0610 #> 10 Soy_Corn 1588 0.0969 #> 11 Soy_Cotton 1568 0.0957 #> 12 Soy_Fallow 1404 0.0857 ``` ``` # Clustering time series using SOM som_cluster_bal <- [sits_som_map](https://rdrr.io/pkg/sits/man/sits_som.html)( data = balanced_samples, grid_xdim = 15, grid_ydim = 15, alpha = 1.0, distance = "dtw", rlen = 20, mode = "pbatch" ) ``` ``` # Produce a tibble with a summary of the mixed labels som_eval <- [sits_som_evaluate_cluster](https://rdrr.io/pkg/sits/man/sits_som_evaluate_cluster.html)(som_cluster_bal) ``` ``` # Show the result [plot](https://rdrr.io/r/graphics/plot.default.html)(som_eval) ``` Figure 61: Confusion by cluster for the balanced dataset (source: authors). 
As shown in Figure [61](improving-the-quality-of-training-samples.html#fig:seval), the balanced dataset shows less confusion per label than the unbalanced one. In this case, many classes that were confused with others in the original confusion map are now better represented. Reducing sample imbalance should be tried as an alternative to reducing the number of samples of the classes using SOM. In general, users should balance their training data for better performance. Conclusion ---------- The quality of training data is critical to improving the accuracy of maps resulting from machine learning classification methods. To address this challenge, the `sits` package provides three methods for improving training samples. For large datasets, we recommend using both imbalance\-reducing and SOM\-based algorithms. The SOM\-based method identifies potential mislabeled samples and outliers that require further investigation. The results demonstrate a positive impact on the overall classification accuracy. The complexity and diversity of our planet defy simple label names with hard boundaries. Due to representational and data handling issues, all classification systems have a limited number of categories, which inevitably fail to adequately describe the nuances of the planet’s landscapes. All representation systems are thus limited and application\-dependent. As stated by Janowicz [\[48]](references.html#ref-Janowicz2012): “geographical concepts are situated and context\-dependent and can be described from different, equally valid, points of view; thus, ontological commitments are arbitrary to a large extent”. The availability of big data and satellite image time series is a further challenge. In principle, image time series can capture more subtle changes for land classification. Experts must conceive classification systems and training data collections by understanding how time series information relates to actual land change. Methods for quality analysis, such as those presented in this Chapter, cannot replace user understanding and informed choices.
``` [library](https://rdrr.io/r/base/library.html)([sits](https://github.com/e-sensing/sits/)) [library](https://rdrr.io/r/base/library.html)([sitsdata](https://github.com/e-sensing/sitsdata/)) # Take only the NDVI and EVI bands samples_cerrado_mod13q1_2bands <- [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)( data = samples_cerrado_mod13q1, bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI") ) # Show the summary of the samples [summary](https://rdrr.io/r/base/summary.html)(samples_cerrado_mod13q1_2bands) ``` ``` #> # A tibble: 12 × 3 #> label count prop #> <chr> <int> <dbl> #> 1 Dense_Woodland 9966 0.199 #> 2 Dunes 550 0.0110 #> 3 Fallow_Cotton 630 0.0126 #> 4 Millet_Cotton 316 0.00630 #> 5 Pasture 7206 0.144 #> 6 Rocky_Savanna 8005 0.160 #> 7 Savanna 9172 0.183 #> 8 Savanna_Parkland 2699 0.0538 #> 9 Silviculture 423 0.00843 #> 10 Soy_Corn 4971 0.0991 #> 11 Soy_Cotton 4124 0.0822 #> 12 Soy_Fallow 2098 0.0418 ``` Cross\-validation of training sets ---------------------------------- Cross\-validation is a technique to estimate the inherent prediction error of a model [\[33]](references.html#ref-Hastie2009). Since cross\-validation uses only the training samples, its results are not accuracy measures unless the samples have been carefully collected to represent the diversity of possible occurrences of classes in the study area [\[34]](references.html#ref-Wadoux2021). In practice, when working in large areas, it is hard to obtain random stratified samples which cover the different variations in land classes associated with the ecosystems of the study area. Thus, cross\-validation should be taken as a measure of model performance on the training data and not an estimate of overall map accuracy. Cross\-validation uses part of the available samples to fit the classification model and a different part to test it. The k\-fold validation method splits the data into \\(k\\) partitions with approximately the same size and proceeds by fitting the model and testing it \\(k\\) times. At each step, we take one distinct partition for the test and the remaining \\({k\-1}\\) for training the model and calculate its prediction error for classifying the test partition. A simple average gives us an estimation of the expected prediction error. The recommended choices of \\(k\\) are \\(5\\) or \\(10\\) [\[33]](references.html#ref-Hastie2009). `[sits_kfold_validate()](https://rdrr.io/pkg/sits/man/sits_kfold_validate.html)` supports k\-fold validation in `sits`. The result is the confusion matrix and the accuracy statistics (overall and by class). In the examples below, we use multiprocessing to speed up the results. The parameters of `sits_kfold_validate` are: 1. `samples`: training samples organized as a time series tibble; 2. `folds`: number of folds, or how many times to split the data (default \= 5\); 3. `ml_method`: ML/DL method to be used for the validation (default \= random forest); 4. `multicores`: number of cores to be used for parallel processing (default \= 2\). Below we show an example of cross\-validation on the `samples_cerrado_mod13q1` dataset. 
``` rfor_validate <- [sits_kfold_validate](https://rdrr.io/pkg/sits/man/sits_kfold_validate.html)( samples = samples_cerrado_mod13q1_2bands, folds = 5, ml_method = [sits_rfor](https://rdrr.io/pkg/sits/man/sits_rfor.html)(), multicores = 5 ) rfor_validate ``` ``` #> Confusion Matrix and Statistics #> #> Reference #> Prediction Pasture Dense_Woodland Rocky_Savanna Savanna_Parkland #> Pasture 6618 23 9 5 #> Dense_Woodland 496 9674 604 0 #> Rocky_Savanna 8 62 7309 27 #> Savanna_Parkland 4 0 50 2641 #> Savanna 56 200 33 26 #> Dunes 0 0 0 0 #> Soy_Corn 9 0 0 0 #> Soy_Cotton 1 0 0 0 #> Soy_Fallow 11 0 0 0 #> Fallow_Cotton 3 0 0 0 #> Silviculture 0 7 0 0 #> Millet_Cotton 0 0 0 0 #> Reference #> Prediction Savanna Dunes Soy_Corn Soy_Cotton Soy_Fallow Fallow_Cotton #> Pasture 114 0 36 12 25 41 #> Dense_Woodland 138 0 3 2 1 0 #> Rocky_Savanna 9 0 0 0 0 0 #> Savanna_Parkland 15 0 1 0 1 1 #> Savanna 8896 0 9 0 1 0 #> Dunes 0 550 0 0 0 0 #> Soy_Corn 0 0 4851 58 355 8 #> Soy_Cotton 0 0 40 4041 0 19 #> Soy_Fallow 0 0 29 0 1710 1 #> Fallow_Cotton 0 0 2 3 5 555 #> Silviculture 0 0 0 0 0 0 #> Millet_Cotton 0 0 0 8 0 5 #> Reference #> Prediction Silviculture Millet_Cotton #> Pasture 1 1 #> Dense_Woodland 102 0 #> Rocky_Savanna 0 0 #> Savanna_Parkland 0 0 #> Savanna 8 0 #> Dunes 0 0 #> Soy_Corn 0 3 #> Soy_Cotton 0 21 #> Soy_Fallow 0 0 #> Fallow_Cotton 0 20 #> Silviculture 312 0 #> Millet_Cotton 0 271 #> #> Overall Statistics #> #> Accuracy : 0.9455 #> 95% CI : (0.9435, 0.9475) #> #> Kappa : 0.9365 #> #> Statistics by Class: #> #> Class: Pasture Class: Dense_Woodland #> Prod Acc (Sensitivity) 0.9184 0.9707 #> Specificity 0.9938 0.9665 #> User Acc (Pos Pred Value) 0.9612 0.8779 #> Neg Pred Value 0.9864 0.9925 #> F1 score 0.9393 0.9219 #> Class: Rocky_Savanna Class: Savanna_Parkland #> Prod Acc (Sensitivity) 0.9131 0.9785 #> Specificity 0.9975 0.9985 #> User Acc (Pos Pred Value) 0.9857 0.9735 #> Neg Pred Value 0.9837 0.9988 #> F1 score 0.9480 0.9760 #> Class: Savanna Class: Dunes Class: Soy_Corn #> Prod Acc (Sensitivity) 0.9699 1 0.9759 #> Specificity 0.9919 1 0.9904 #> User Acc (Pos Pred Value) 0.9639 1 0.9181 #> Neg Pred Value 0.9933 1 0.9973 #> F1 score 0.9669 1 0.9461 #> Class: Soy_Cotton Class: Soy_Fallow #> Prod Acc (Sensitivity) 0.9799 0.8151 #> Specificity 0.9982 0.9991 #> User Acc (Pos Pred Value) 0.9803 0.9766 #> Neg Pred Value 0.9982 0.9920 #> F1 score 0.9801 0.8885 #> Class: Fallow_Cotton Class: Silviculture #> Prod Acc (Sensitivity) 0.8810 0.7376 #> Specificity 0.9993 0.9999 #> User Acc (Pos Pred Value) 0.9439 0.9781 #> Neg Pred Value 0.9985 0.9978 #> F1 score 0.9113 0.8410 #> Class: Millet_Cotton #> Prod Acc (Sensitivity) 0.8576 #> Specificity 0.9997 #> User Acc (Pos Pred Value) 0.9542 #> Neg Pred Value 0.9991 #> F1 score 0.9033 ``` The results show a good validation, reaching 94% accuracy. However, this accuracy does not guarantee a good classification result. It only shows if the training data is internally consistent. In what follows, we present additional methods for improving sample quality. Cross\-validation measures how well the model fits the training data. Using these results to measure classification accuracy is only valid if the training data is a good sample of the entire dataset. Training data is subject to various sources of bias. In land classification, some classes are much more frequent than others, so the training dataset will be imbalanced. Regional differences in soil and climate conditions for large areas will lead the same classes to have different spectral responses. 
Field analysts may be restricted to places they have access (e.g., along roads) when collecting samples. An additional problem is mixed pixels. Expert interpreters select samples that stand out in fieldwork or reference images. Border pixels are unlikely to be chosen as part of the training data. For all these reasons, cross\-validation results do not measure classification accuracy for the entire dataset. Hierarchical clustering for sample quality control -------------------------------------------------- The package provides two clustering methods to assess sample quality: Agglomerative Hierarchical Clustering (AHC) and Self\-organizing Maps (SOM). These methods have different computational complexities. AHC has a computational complexity of \\(\\mathcal{O}(n^2\)\\), given the number of time series \\(n\\), whereas SOM complexity is linear. For large data, AHC requires substantial memory and running time; in these cases, SOM is recommended. This section describes how to run AHC in `sits`. The SOM\-based technique is presented in the next section. AHC computes the dissimilarity between any two elements from a dataset. Depending on the distance functions and linkage criteria, the algorithm decides which two clusters are merged at each iteration. This approach is helpful for exploring samples due to its visualization power and ease of use [\[35]](references.html#ref-Keogh2003). In `sits`, AHC is implemented using `[sits_cluster_dendro()](https://rdrr.io/pkg/sits/man/sits_cluster_dendro.html)`. ``` # Take a set of patterns for 2 classes # Create a dendrogram, plot, and get the optimal cluster based on ARI index clusters <- [sits_cluster_dendro](https://rdrr.io/pkg/sits/man/sits_cluster_dendro.html)( samples = cerrado_2classes, bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI"), dist_method = "dtw_basic", linkage = "ward.D2" ) ``` Figure 55: Example of hierarchical clustering for a two class set of time series (source: authors). The `[sits_cluster_dendro()](https://rdrr.io/pkg/sits/man/sits_cluster_dendro.html)` function has one mandatory parameter (`samples`), with the samples to be evaluated. Optional parameters include `bands`, `dist_method`, and `linkage`. The `dist_method` parameter specifies how to calculate the distance between two time series. We recommend a metric that uses dynamic time warping (DTW) [\[36]](references.html#ref-Petitjean2012), as DTW is a reliable method for measuring differences between satellite image time series [\[37]](references.html#ref-Maus2016). The options available in `sits` are based on those provided by package `dtwclust`, which include `dtw_basic`, `dtw_lb`, and `dtw2`. Please check `[?dtwclust::tsclust](https://rdrr.io/pkg/dtwclust/man/tsclust.html)` for more information on DTW distances. The `linkage` parameter defines the distance metric between clusters. The recommended linkage criteria are: `complete` or `ward.D2`. Complete linkage prioritizes the within\-cluster dissimilarities, producing clusters with shorter distance samples, but results are sensitive to outliers. As an alternative, Ward proposes to use the sum\-of\-squares error to minimize data variance [\[38]](references.html#ref-Ward1963); his method is available as `ward.D2` option to the `linkage` parameter. To cut the dendrogram, the `[sits_cluster_dendro()](https://rdrr.io/pkg/sits/man/sits_cluster_dendro.html)` function computes the adjusted rand index (ARI) [\[39]](references.html#ref-Rand1971), returning the height where the cut of the dendrogram maximizes the index. 
In the example using `ward.D2` linkage, the ARI index indicates that there are six clusters. The result of `[sits_cluster_dendro()](https://rdrr.io/pkg/sits/man/sits_cluster_dendro.html)` is a time series tibble with one additional column called “cluster”. The function `[sits_cluster_frequency()](https://rdrr.io/pkg/sits/man/sits_cluster_frequency.html)` provides information on the composition of each cluster. ``` # Show clusters samples frequency [sits_cluster_frequency](https://rdrr.io/pkg/sits/man/sits_cluster_frequency.html)(clusters) ``` ``` #> #> 1 2 3 4 5 6 Total #> Cerrado 203 13 23 80 1 80 400 #> Pasture 2 176 28 0 140 0 346 #> Total 205 189 51 80 141 80 746 ``` The cluster frequency table shows that each cluster has a predominance of either `Cerrado` or `Pasture` labels, except for cluster 3, which has a mix of samples from both labels. Such confusion may have resulted from incorrect labeling, inadequacy of selected bands and spatial resolution, or even a natural confusion due to the variability of the land classes. To remove cluster 3, use `[dplyr::filter()](https://dplyr.tidyverse.org/reference/filter.html)`. The resulting clusters still contain mixed labels, possibly resulting from outliers. In this case, `[sits_cluster_clean()](https://rdrr.io/pkg/sits/man/sits_cluster_clean.html)` removes the outliers, leaving only the most frequent label. After cleaning the samples, the resulting set of samples is likely to improve the classification results. ``` # Remove cluster 3 from the samples clusters_new <- dplyr::[filter](https://dplyr.tidyverse.org/reference/filter.html)(clusters, cluster != 3) # Clean the clusters, leaving only the majority label clean <- [sits_cluster_clean](https://rdrr.io/pkg/sits/man/sits_cluster_clean.html)(clusters_new) # Show clusters samples frequency [sits_cluster_frequency](https://rdrr.io/pkg/sits/man/sits_cluster_frequency.html)(clean) ``` ``` #> #> 1 2 4 5 6 Total #> Cerrado 203 0 80 0 80 363 #> Pasture 0 176 0 140 0 316 #> Total 203 176 80 140 80 679 ``` Using SOM for sample quality control ------------------------------------ `sits` provides a clustering technique based on self\-organizing maps (SOM) as an alternative to hierarchical clustering for quality control of training samples. SOM is a dimensionality reduction technique [\[40]](references.html#ref-Kohonen1990), where high\-dimensional data is mapped into a two\-dimensional map, keeping the topological relations between data patterns. As shown in Figure [56](improving-the-quality-of-training-samples.html#fig:som2d), the SOM 2D map is composed of units called neurons. Each neuron has a weight vector, with the same dimension as the training samples. At the start, neurons are assigned a small random value and then trained by competitive learning. The algorithm computes the distances of each member of the training set to all neurons and finds the neuron closest to the input, called the best matching unit. Figure 56: SOM 2D map creation (Source: Santos et al. (2021\). Reproduction under fair use doctrine). The input data for quality assessment is a set of training samples, which are high\-dimensional data; for example, a time series with 25 instances of 4 spectral bands has 100 dimensions. When projecting a high\-dimensional dataset into a 2D SOM map, the units of the map (called neurons) compete for each sample. Each time series will be mapped to one of the neurons. Since the number of neurons is smaller than the number of time series, each neuron will be associated with many time series. 
The resulting 2D map will be a set of clusters. Given that SOM preserves the topological structure of neighborhoods in multiple dimensions, clusters that contain training samples with a given label will usually be neighbors in 2D space. The neighbors of each neuron of a SOM map provide information on intraclass and interclass variability, which is used to detect noisy samples. The methodology of using SOM for sample quality assessment is discussed in detail in the reference paper [\[41]](references.html#ref-Santos2021a). Figure 57: Using SOM for class noise reduction (Source: Santos et al. (2021\). Reproduction under fair use doctrine). Creating the SOM map -------------------- To perform the SOM\-based quality assessment, the first step is to run `[sits_som_map()](https://rdrr.io/pkg/sits/man/sits_som.html)`, which uses the `kohonen` R package to compute a SOM grid [\[42]](references.html#ref-Wehrens2018), controlled by five parameters. The grid size is given by `grid_xdim` and `grid_ydim`. The starting learning rate is `alpha`, which decreases during the iterations. To measure the separation between samples, use `distance` (either “dtw” or “euclidean”). The number of iterations is set by `rlen`. When using `[sits_som_map()](https://rdrr.io/pkg/sits/man/sits_som.html)` on machines with multiprocessing support for the OpenMP protocol, setting the learning mode parameter `mode` to “pbatch” improves processing time. On MacOS and Windows, please use “online”. We suggest using the Dynamic Time Warping (“dtw”) metric as the distance measure. It is a technique used to measure the similarity between two temporal sequences that may vary in speed or timing [\[43]](references.html#ref-Berndt1994). The core idea of DTW is to find the optimal alignment between two sequences by allowing non\-linear mapping of one sequence onto another. In time series analysis, DTW matches two series that are slightly out of sync. This property is useful in land use studies for matching time series of agricultural areas [\[44]](references.html#ref-Maus2015). ``` # Clustering time series using SOM som_cluster <- [sits_som_map](https://rdrr.io/pkg/sits/man/sits_som.html)(samples_cerrado_mod13q1_2bands, grid_xdim = 15, grid_ydim = 15, alpha = 1.0, distance = "dtw", rlen = 20 ) ``` ``` # Plot the SOM map [plot](https://rdrr.io/r/graphics/plot.default.html)(som_cluster) ``` Figure 58: SOM map for the Cerrado samples (source: authors). The output of `[sits_som_map()](https://rdrr.io/pkg/sits/man/sits_som.html)` is a list with three elements: (a) `data`, the original set of time series with two additional columns for each time series: `id_sample` (the original id of each sample) and `id_neuron` (the id of the neuron to which it belongs); (b) `labelled_neurons`, a tibble with information on the neurons. For each neuron, it gives the prior and posterior probabilities of all labels which occur in the samples assigned to it; and (c) the SOM grid. To plot the SOM grid, use `[plot()](https://rdrr.io/r/graphics/plot.default.html)`. The neurons are labelled using majority voting. The SOM grid shows that most classes are associated with neurons close to each other, although there are exceptions. Some Pasture neurons are far from the main cluster because the transition between open savanna and pasture areas is not always well defined and depends on climate and latitude. Also, the neurons associated with Soy\_Fallow are dispersed in the map, indicating possible problems in distinguishing this class from the other agricultural classes. 
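The list structure described above can be checked directly in R. The snippet below is a minimal sketch that relies only on the element names given in the text (`data` and `labelled_neurons`); it simply prints the components for inspection.

```
# Inspect the structure of the sits_som_map() output
names(som_cluster)
# Time series with the added id_sample and id_neuron columns
head(som_cluster$data)
# Neuron-level prior and posterior probabilities for each label
head(som_cluster$labelled_neurons)
```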
The SOM map can be used to remove outliers, as shown below. Measuring confusion between labels using SOM -------------------------------------------- The second step in SOM\-based quality assessment is understanding the confusion between labels. The function `[sits_som_evaluate_cluster()](https://rdrr.io/pkg/sits/man/sits_som_evaluate_cluster.html)` groups neurons by their majority label and produces a tibble. Neurons are grouped into clusters, and there will be as many clusters as there are labels. The result shows the percentage of samples of each label in each cluster. Ideally, all samples of each cluster would have the same label. In practice, clusters contain samples with different labels. This information helps to measure the confusion between samples. ``` # Produce a tibble with a summary of the mixed labels som_eval <- [sits_som_evaluate_cluster](https://rdrr.io/pkg/sits/man/sits_som_evaluate_cluster.html)(som_cluster) # Show the result som_eval ``` ``` #> # A tibble: 66 × 4 #> id_cluster cluster class mixture_percentage #> <int> <chr> <chr> <dbl> #> 1 1 Dense_Woodland Dense_Woodland 78.1 #> 2 1 Dense_Woodland Pasture 5.56 #> 3 1 Dense_Woodland Rocky_Savanna 8.95 #> 4 1 Dense_Woodland Savanna 3.88 #> 5 1 Dense_Woodland Silviculture 3.48 #> 6 1 Dense_Woodland Soy_Corn 0.0249 #> 7 2 Dunes Dunes 100 #> 8 3 Fallow_Cotton Dense_Woodland 0.169 #> 9 3 Fallow_Cotton Fallow_Cotton 49.5 #> 10 3 Fallow_Cotton Millet_Cotton 13.9 #> # ℹ 56 more rows ``` Many labels are associated with clusters where there are some samples with a different label. Such confusion between labels arises because sample labeling is subjective and can be biased. In many cases, interpreters use high\-resolution data to identify samples. However, the actual images to be classified are captured by satellites with lower resolution. In our case study, a MOD13Q1 image has pixels with 250 m resolution. As such, the correspondence between labeled locations in high\-resolution images and mid to low\-resolution images is not direct. The confusion by sample label can be visualized in a bar plot using `[plot()](https://rdrr.io/r/graphics/plot.default.html)`, as shown below. The bar plot shows some confusion between the labels associated with the natural vegetation typical of the Brazilian Cerrado (`Savanna`, `Savanna_Parkland`, `Rocky_Savanna`). This mixture is due to the large variability of the natural vegetation of the Cerrado biome, which makes it difficult to draw sharp boundaries between classes. Some confusion is also visible between the agricultural classes. The `Millet_Cotton` class is a particularly difficult one since many of the samples assigned to this class are confused with `Soy_Cotton` and `Fallow_Cotton`. ``` # Plot the confusion between clusters [plot](https://rdrr.io/r/graphics/plot.default.html)(som_eval) ``` Figure 59: Confusion between classes as measured by SOM (source: authors). Detecting noisy samples using SOM --------------------------------- The third step in the quality assessment uses the discrete probability distribution associated with each neuron, which is included in the `labelled_neurons` tibble produced by `[sits_som_map()](https://rdrr.io/pkg/sits/man/sits_som.html)`. This approach associates probabilities with frequency of occurrence. More homogeneous neurons (those where one label has a high frequency) are assumed to be composed of good quality samples. Heterogeneous neurons (those with two or more classes with significant frequencies) are likely to contain noisy samples. 
The algorithm computes two values for each sample: * *prior probability*: the probability that the label assigned to the sample is correct, considering the frequency of samples in the same neuron. For example, if a neuron has 20 samples, of which 15 are labeled as `Pasture` and 5 as `Forest`, all samples labeled Forest are assigned a prior probability of 25%. This indicates that Forest samples in this neuron may not be of good quality. * *posterior probability*: the probability that the label assigned to the sample is correct, considering the neighboring neurons. Take the case of the above\-mentioned neuron whose samples labeled `Pasture` have a prior probability of 75%. What happens if all the neighboring neurons have `Forest` as a majority label? To answer this question, we use Bayesian inference to estimate if these samples are noisy based on the surrounding neurons [\[45]](references.html#ref-Santos2021). To identify noisy samples, we take the result of the `[sits_som_map()](https://rdrr.io/pkg/sits/man/sits_som.html)` function as the first argument to the function `[sits_som_clean_samples()](https://rdrr.io/pkg/sits/man/sits_som_clean_samples.html)`. This function finds out which samples are noisy, which are clean, and which need to be further examined by the user. It requires the `prior_threshold` and `posterior_threshold` parameters according to the following rules: * If the prior probability of a sample is less than `prior_threshold`, the sample is assumed to be noisy and tagged as “remove”; * If the prior probability is greater or equal to `prior_threshold` and the posterior probability calculated by Bayesian inference is greater or equal to `posterior_threshold`, the sample is assumed not to be noisy and thus is tagged as “clean”; * If the prior probability is greater or equal to `prior_threshold` and the posterior probability is less than `posterior_threshold`, we have a situation when the sample is part of the majority level of those assigned to its neuron, but its label is not consistent with most of its neighbors. This is an anomalous condition and is tagged as “analyze”. Users are encouraged to inspect such samples to find out whether they are in fact noisy or not. The default value for both `prior_threshold` and `posterior_threshold` is 60%. The `[sits_som_clean_samples()](https://rdrr.io/pkg/sits/man/sits_som_clean_samples.html)` has an additional parameter (`keep`), which indicates which samples should be kept in the set based on their prior and posterior probabilities. The default for `keep` is `c("clean", "analyze")`. As a result of the cleaning, about 900 samples have been considered to be noisy and thus removed. ``` new_samples <- [sits_som_clean_samples](https://rdrr.io/pkg/sits/man/sits_som_clean_samples.html)( som_map = som_cluster, prior_threshold = 0.6, posterior_threshold = 0.6, keep = [c](https://rdrr.io/r/base/c.html)("clean", "analyze") ) # Print the new sample distribution [summary](https://rdrr.io/r/base/summary.html)(new_samples) ``` ``` #> # A tibble: 9 × 3 #> label count prop #> <chr> <int> <dbl> #> 1 Dense_Woodland 8519 0.220 #> 2 Dunes 550 0.0142 #> 3 Pasture 5509 0.142 #> 4 Rocky_Savanna 5508 0.142 #> 5 Savanna 7651 0.197 #> 6 Savanna_Parkland 1619 0.0418 #> 7 Soy_Corn 4595 0.119 #> 8 Soy_Cotton 3515 0.0907 #> 9 Soy_Fallow 1309 0.0338 ``` All samples of the class which had the highest confusion with others(`Millet_Cotton`) have been removed. 
Most samples of class `Silviculture` (planted forests) have also been removed since they have been confused with natural forests and woodlands in the SOM map. Further analysis includes calculating the SOM map and confusion matrix for the new set, as shown in the following example. ``` # Evaluate the mixture in the SOM clusters of new samples new_cluster <- [sits_som_map](https://rdrr.io/pkg/sits/man/sits_som.html)( data = new_samples, grid_xdim = 15, grid_ydim = 15, alpha = 1.0, rlen = 20, distance = "dtw" ) ``` ``` new_cluster_mixture <- [sits_som_evaluate_cluster](https://rdrr.io/pkg/sits/man/sits_som_evaluate_cluster.html)(new_cluster) # Plot the mixture information. [plot](https://rdrr.io/r/graphics/plot.default.html)(new_cluster_mixture) ``` Figure 60: Cluster confusion plot for samples cleaned by SOM (source: authors). As expected, the new confusion map shows a significant improvement over the previous one. This result should be interpreted carefully since it may be due to different effects. The most direct interpretation is that `Millet_Cotton` and `Silviculture` cannot be easily separated from the other classes, given the current attributes (a time series of NDVI and EVI indices from MODIS images). In such situations, users should consider improving the number of samples from the less represented classes, including more MODIS bands, or working with higher resolution satellites. The results of the SOM method should be interpreted based on the users’ understanding of the ecosystems and agricultural practices of the study region. The SOM\-based analysis discards samples that can be confused with samples of other classes. After removing noisy samples or uncertain classes, the dataset obtains a better validation score since there is less confusion between classes. Users should analyse the results with care. Not all discarded samples are low\-quality ones. Confusion between samples of different classes can result from inconsistent labeling or from the lack of capacity of satellite data to distinguish between chosen classes. When many samples are discarded, as in the current example, revising the whole classification schema is advisable. The aim of selecting training data should always be to match the reality on the ground to the power of remote sensing data to identify differences. No analysis procedure can replace actual user experience and knowledge of the study region. Reducing sample imbalance ------------------------- Many training samples for Earth observation data analysis are imbalanced. This situation arises when the distribution of samples associated with each label is uneven. One example is the Cerrado dataset used in this Chapter. The three most frequent labels (`Dense Woodland`, `Savanna`, and `Pasture`) include 53% of all samples, while the three least frequent labels (`Millet-Cotton`, `Silviculture`, and `Dunes`) comprise only 2\.5% of the dataset. Sample imbalance is an undesirable property of a training set since machine learning algorithms tend to be more accurate for classes with many samples. The instances belonging to the minority group are misclassified more often than those belonging to the majority group. Thus, reducing sample imbalance can positively affect classification accuracy [\[46]](references.html#ref-Johnson2019). 
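Before correcting the imbalance, it is worth checking the label counts and proportions of the original training set. The snippet below reuses the `summary()` call applied to sample tibbles elsewhere in this chapter; it only inspects the data and changes nothing.

```
# Check label counts and proportions of the original Cerrado samples
summary(samples_cerrado_mod13q1_2bands)
```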
The function `[sits_reduce_imbalance()](https://rdrr.io/pkg/sits/man/sits_reduce_imbalance.html)` deals with training set imbalance; it increases the number of samples of the least frequent labels and reduces the number of samples of the most frequent labels. Oversampling requires generating synthetic samples. The package uses the SMOTE method that estimates new samples by considering the cluster formed by the nearest neighbors of each minority label. SMOTE takes two samples from this cluster and produces a new one by randomly interpolating them [\[47]](references.html#ref-Chawla2002). To perform undersampling, `[sits_reduce_imbalance()](https://rdrr.io/pkg/sits/man/sits_reduce_imbalance.html)` builds a SOM map for each majority label based on the required number of samples to be selected. Each dimension of the SOM is set to `ceiling(sqrt(new_number_samples/4))` to allow a reasonable number of neurons to group similar samples. After calculating the SOM map, the algorithm extracts four samples per neuron to generate a reduced set of samples that approximates the variation of the original one. The `[sits_reduce_imbalance()](https://rdrr.io/pkg/sits/man/sits_reduce_imbalance.html)` algorithm has two parameters: `n_samples_over` and `n_samples_under`. The first parameter indicates the minimum number of samples per class; all classes with fewer samples than this value are oversampled. The second parameter controls the maximum number of samples per class; all classes with more samples than this value are undersampled. The following example uses `[sits_reduce_imbalance()](https://rdrr.io/pkg/sits/man/sits_reduce_imbalance.html)` with the Cerrado samples. We generate a balanced dataset where all classes have a minimum of 1000 and a maximum of 1500 samples. We use `[sits_som_evaluate_cluster()](https://rdrr.io/pkg/sits/man/sits_som_evaluate_cluster.html)` to estimate the confusion between classes of the balanced dataset. ``` # Reducing imbalances in the Cerrado dataset balanced_samples <- [sits_reduce_imbalance](https://rdrr.io/pkg/sits/man/sits_reduce_imbalance.html)( samples = samples_cerrado_mod13q1_2bands, n_samples_over = 1000, n_samples_under = 1500, multicores = 4 ) ``` ``` # Print the balanced samples # Some classes have more than 1500 samples due to the SOM map # Each label has between 6% and 10% of the full set [summary](https://rdrr.io/r/base/summary.html)(balanced_samples) ``` ``` #> # A tibble: 12 × 3 #> label count prop #> <chr> <int> <dbl> #> 1 Dense_Woodland 1596 0.0974 #> 2 Dunes 1000 0.0610 #> 3 Fallow_Cotton 1000 0.0610 #> 4 Millet_Cotton 1000 0.0610 #> 5 Pasture 1592 0.0971 #> 6 Rocky_Savanna 1476 0.0901 #> 7 Savanna 1600 0.0976 #> 8 Savanna_Parkland 1564 0.0954 #> 9 Silviculture 1000 0.0610 #> 10 Soy_Corn 1588 0.0969 #> 11 Soy_Cotton 1568 0.0957 #> 12 Soy_Fallow 1404 0.0857 ``` ``` # Clustering time series using SOM som_cluster_bal <- [sits_som_map](https://rdrr.io/pkg/sits/man/sits_som.html)( data = balanced_samples, grid_xdim = 15, grid_ydim = 15, alpha = 1.0, distance = "dtw", rlen = 20, mode = "pbatch" ) ``` ``` # Produce a tibble with a summary of the mixed labels som_eval <- [sits_som_evaluate_cluster](https://rdrr.io/pkg/sits/man/sits_som_evaluate_cluster.html)(som_cluster_bal) ``` ``` # Show the result [plot](https://rdrr.io/r/graphics/plot.default.html)(som_eval) ``` Figure 61: Confusion by cluster for the balanced dataset (source: authors). 
As shown in Figure [61](improving-the-quality-of-training-samples.html#fig:seval), the balanced dataset shows less confusion per label than the unbalanced one. In this case, many classes that were confused with others in the original confusion map are now better represented. Reducing sample imbalance should be tried as an alternative to reducing the number of samples of the classes using SOM. In general, users should balance their training data for better performance. Conclusion ---------- The quality of training data is critical to improving the accuracy of maps resulting from machine learning classification methods. To address this challenge, the `sits` package provides three methods for improving training samples. For large datasets, we recommend using both imbalance\-reducing and SOM\-based algorithms. The SOM\-based method identifies potential mislabeled samples and outliers that require further investigation. The results demonstrate a positive impact on the overall classification accuracy. The complexity and diversity of our planet defy simple label names with hard boundaries. Due to representational and data handling issues, all classification systems have a limited number of categories, which inevitably fail to adequately describe the nuances of the planet’s landscapes. All representation systems are thus limited and application\-dependent. As stated by Janowicz [\[48]](references.html#ref-Janowicz2012): “geographical concepts are situated and context\-dependent and can be described from different, equally valid, points of view; thus, ontological commitments are arbitrary to a large extent”. The availability of big data and satellite image time series is a further challenge. In principle, image time series can capture more subtle changes for land classification. Experts must conceive classification systems and training data collections by understanding how time series information relates to actual land change. Methods for quality analysis, such as those presented in this Chapter, cannot replace user understanding and informed choices.
Machine learning for data cubes =============================== [ Machine learning classification ------------------------------- Machine learning classification is a type of supervised learning in which an algorithm is trained to predict which class an input data point belongs to. The goal of machine learning models is to approximate a function \\(y \= f(x)\\) that maps an input \\(x\\) to a class \\(y\\). A model defines a mapping \\(y \= f(x;\\theta)\\) and learns the value of the parameters \\(\\theta\\) that result in the best function approximation [\[49]](references.html#ref-Goodfellow2016). The difference between the different algorithms is their approach to building the mapping that classifies the input data. In `sits`, machine learning is used to classify individual time series using the `time-first` approach. The package includes two kinds of methods for time series classification: * Machine learning algorithms that do not explicitly consider the temporal structure of the time series. They treat time series as a vector in a high\-dimensional feature space, taking each time series instance as independent from the others. They include random forest (`[sits_rfor()](https://rdrr.io/pkg/sits/man/sits_rfor.html)`), support vector machine (`[sits_svm()](https://rdrr.io/pkg/sits/man/sits_svm.html)`), extreme gradient boosting (`[sits_xgboost()](https://rdrr.io/pkg/sits/man/sits_xgboost.html)`), and multilayer perceptron (`[sits_mlp()](https://rdrr.io/pkg/sits/man/sits_mlp.html)`). * Deep learning methods where temporal relations between observed values in a time series are taken into account. These models are specifically designed for time series. The temporal order of values in a time series is relevant for the classification model. From this class of models, `sits` supports 1D convolution neural networks (`[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)`) and temporal attention\-based encoders (`[sits_tae()](https://rdrr.io/pkg/sits/man/sits_tae.html)` and `[sits_lighttae()](https://rdrr.io/pkg/sits/man/sits_lighttae.html)`). Based on experience with `sits`, random forest, extreme gradient boosting, and temporal deep learning models outperform SVM and multilayer perceptron models. The reason is that some dates provide more information than others in the temporal behavior of land classes. For instance, when monitoring deforestation, dates corresponding to forest removal actions are more informative than earlier or later dates. Similarly, a few dates may capture a large portion of the variation in crop mapping. Therefore, classification methods that consider the temporal order of samples are more likely to capture the seasonal behavior of image time series. Random forest and extreme gradient boosting methods that use individual measures as nodes in decision trees can also capture specific events such as deforestation. The following examples show how to train machine learning methods and apply them to classify a single time series. We use the set `samples_matogrosso_mod13q1`, containing time series samples from the Brazilian Mato Grosso state obtained from the MODIS MOD13Q1 product. It has 1,892 samples and nine classes (`Cerrado`, `Forest`, `Pasture`, `Soy_Corn`, `Soy_Cotton`, `Soy_Fallow`, `Soy_Millet`). Each time series covers 12 months (23 data points) with six bands (NDVI, EVI, BLUE, RED, NIR, MIR). The samples are arranged along an agricultural year, starting in September and ending in August. 
The dataset was used in the paper “Big Earth observation time series analysis for monitoring Brazilian agriculture” [\[50]](references.html#ref-Picoli2018), being available in the R package `sitsdata`. Common interface to machine learning and deep learning models ------------------------------------------------------------- The `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` function provides a standard interface to all machine learning models. This function takes two mandatory parameters: the training data (`samples`) and the ML algorithm (`ml_method`). After the model is estimated, it can classify individual time series or data cubes with `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`. In what follows, we show how to apply each method to classify a single time series. Then, in Chapter [Image classification in data cubes](https://e-sensing.github.io/sitsbook/image-classification-in-data-cubes.html), we discuss how to classify data cubes. Since `sits` is aimed at remote sensing users who are not machine learning experts, it provides a set of default values for all classification models. These settings have been chosen based on testing by the authors. Nevertheless, users can control all parameters for each model. Novice users can rely on the default values, while experienced ones can fine\-tune model parameters to meet their needs. Model tuning is discussed at the end of this Chapter. When a set of time series organized as tibble is taken as input to the classifier, the result is the same tibble with one additional column (`predicted`), which contains the information on the labels assigned for each interval. The results can be shown in text format using the function `[sits_show_prediction()](https://rdrr.io/pkg/sits/man/sits_show_prediction.html)` or graphically using `[plot()](https://rdrr.io/r/graphics/plot.default.html)`. Random forest ------------- Random forest is a machine learning algorithm that uses an ensemble learning method for classification tasks. The algorithm consists of multiple decision trees, each trained on a different subset of the training data and with a different subset of features. To make a prediction, each decision tree in the forest independently classifies the input data. The final prediction is made based on the majority vote of all the decision trees. The randomness in the algorithm comes from the random subsets of data and features used to train each decision tree, which helps to reduce overfitting and improve the accuracy of the model. This classifier measures the importance of each feature in the classification task, which can be helpful in feature selection and data visualization. For an in\-depth discussion of the robustness of random forest method for satellite image time series classification, please see Pelletier et al [\[51]](references.html#ref-Pelletier2016). Figure 62: Random forest algorithm (Source: Venkata Jagannath in Wikipedia \- licenced as CC\-BY\-SA 4\.0\). `sits` provides `[sits_rfor()](https://rdrr.io/pkg/sits/man/sits_rfor.html)`, which uses the R `randomForest` package [\[52]](references.html#ref-Wright2017); its main parameter is `num_trees`, which is the number of trees to grow with a default value of 100\. The model can be visualized using `[plot()](https://rdrr.io/r/graphics/plot.default.html)`. 
``` # Train the Mato Grosso samples with random forest model rfor_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples = samples_matogrosso_mod13q1, ml_method = [sits_rfor](https://rdrr.io/pkg/sits/man/sits_rfor.html)(num_trees = 100) ) # Plot the most important variables of the model [plot](https://rdrr.io/r/graphics/plot.default.html)(rfor_model) ``` Figure 63: Most important variables in random forest model (source: authors). The most important explanatory variables are the NIR (near infrared) band on date 17 (2007\-05\-25\) and the MIR (middle infrared) band on date 22 (2007\-08\-13\). The NIR value at the end of May captures the growth of the second crop for double cropping classes. Values of the MIR band at the end of the period (late July to late August) capture bare soil signatures to distinguish between agricultural and natural classes. This corresponds to summertime when the ground is drier after harvesting crops. ``` # Classify using random forest model and plot the result point_class <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( data = point_mt_mod13q1, ml_model = rfor_model ) [plot](https://rdrr.io/r/graphics/plot.default.html)(point_class, bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI")) ``` ``` knitr::[include_graphics](https://rdrr.io/pkg/knitr/man/include_graphics.html)("./images/mlrforplot.png") ``` Figure 64: Classification of time series using random forest (source: authors). The result shows that the area started as a forest in 2000, was deforested from 2004 to 2005, used as pasture from 2006 to 2007, and for double\-cropping agriculture from 2009 onwards. This behavior is consistent with expert evaluation of land change process in this region of Amazonia. Random forest is robust to outliers and can deal with irrelevant inputs [\[33]](references.html#ref-Hastie2009). The method tends to overemphasize some variables because its performance tends to stabilize after part of the trees is grown [\[33]](references.html#ref-Hastie2009). In cases where abrupt change occurs, such as deforestation mapping, random forest (if properly trained) will emphasize the temporal instances and bands that capture such quick change. Support vector machine ---------------------- The support vector machine (SVM) classifier is a generalization of a linear classifier that finds an optimal separation hyperplane that minimizes misclassification [\[53]](references.html#ref-Cortes1995). Since a set of samples with \\(n\\) features defines an n\-dimensional feature space, hyperplanes are linear \\({(n\-1\)}\\)\-dimensional boundaries that define linear partitions in that space. If the classes are linearly separable on the feature space, there will be an optimal solution defined by the maximal margin hyperplane, which is the separating hyperplane that is farthest from the training observations [\[54]](references.html#ref-James2013). The maximal margin is computed as the smallest distance from the observations to the hyperplane. The solution for the hyperplane coefficients depends only on the samples that define the maximum margin criteria, the so\-called support vectors. Figure 65: Maximum\-margin hyperplane and margins for an SVM trained with samples from two classes. Samples on the margin are called the support vectors. (Source: Larhmam in Wikipedia \- licensed as CC\-BY\-SA\-4\.0\). 
For data that is not linearly separable, SVM includes kernel functions that map the original feature space into a higher dimensional space, providing nonlinear boundaries to the original feature space. Despite having a linear boundary on the enlarged feature space, the new classification model generally translates its hyperplane to a nonlinear boundary in the original attribute space. Kernels are an efficient computational strategy to produce nonlinear boundaries in the input attribute space; thus, they improve training\-class separation. SVM is one of the most widely used algorithms in machine learning applications and has been applied to classify remote sensing data [\[55]](references.html#ref-Mountrakis2011). In `sits`, SVM is implemented as a wrapper of `e1071` R package that uses the `LIBSVM` implementation [\[56]](references.html#ref-Chang2011). The `sits` package adopts the *one\-against\-one* method for multiclass classification. For a \\(q\\) class problem, this method creates \\({q(q\-1\)/2}\\) SVM binary models, one for each class pair combination, testing any unknown input vectors throughout all those models. A voting scheme computes the overall result. The example below shows how to apply SVM to classify time series using default values. The main parameters are `kernel`, which controls whether to use a nonlinear transformation (default is `radial`), `cost`, which measures the punishment for wrongly\-classified samples (default is 10\), and `cross`, which sets the value of the k\-fold cross validation (default is 10\). ``` # Train an SVM model svm_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples = samples_matogrosso_mod13q1, ml_method = [sits_svm](https://rdrr.io/pkg/sits/man/sits_svm.html)() ) # Classify using the SVM model and plot the result point_class <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( data = point_mt_mod13q1, ml_model = svm_model ) # Plot the result [plot](https://rdrr.io/r/graphics/plot.default.html)(point_class, bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI")) ``` Figure 66: Classification of time series using SVM (source: authors). The SVM classifier is less stable and less robust to outliers than the random forest method. In this example, it tends to misclassify some of the data. In 2008, it is likely that the correct land class was still `Pasture` rather than `Soy_Millet` as produced by the algorithm, while the `Soy_Cotton` class in 2012 is also inconsistent with the previous and latter classification of `Soy_Corn`. Extreme gradient boosting ------------------------- XGBoost (eXtreme Gradient Boosting) [\[57]](references.html#ref-Chen2016) is an implementation of gradient boosted decision trees designed for speed and performance. It is an ensemble learning method, meaning it combines the predictions from multiple models to produce a final prediction. XGBoost builds trees one at a time, where each new tree helps to correct errors made by previously trained tree. Each tree builds a new model to correct the errors made by previous models. Using gradient descent, the algorithm iteratively adjusts the predictions of each tree by focusing on instances where previous trees made errors. Models are added sequentially until no further improvements can be made. Although random forest and boosting use trees for classification, there are significant differences. 
While random forest builds multiple decision trees in parallel and merges them together for a more accurate and stable prediction, XGBoost builds trees one at a time, where each new tree helps to correct errors made by previously trained trees. XGBoost is often preferred for its speed and performance, particularly on large datasets, and is well\-suited for problems where precision is paramount. Random forest, on the other hand, is simpler to implement, more interpretable, and can be more robust to overfitting, making it a good choice for general\-purpose applications. Figure 67: Flow chart of XGBoost algorithm (Source: Guo et al., Applied Sciences, 2020\. \- licenced as CC\-BY\-SA 4\.0\). The boosting method starts from a weak predictor and then improves performance sequentially by fitting a better model at each iteration. It fits a simple classifier to the training data and uses the residuals of the fit to build a predictor. Typically, the base classifier is a regression tree. The performance of random forest generally increases with the number of trees until it becomes stable, whereas boosting trees apply finer divisions over previous results to improve performance [\[33]](references.html#ref-Hastie2009). Some recent papers show that XGBoost outperforms random forest for remote sensing image classification [\[58]](references.html#ref-Jafarzadeh2021). However, this result is not generalizable since the quality of the training dataset controls actual performance. In `sits`, the XGBoost method is implemented by the `sits_xgboost()` function, based on the `XGBoost` R package, and has five hyperparameters that require tuning. The `sits_xgboost()` function takes the user choices as input to a cross\-validation to determine suitable values for the predictor. The learning rate `eta` varies from 0\.0 to 1\.0 and should be kept small (default is 0\.3\) to avoid overfitting. The minimum loss value `gamma` specifies the minimum reduction required to make a split. Its default is 0; increasing it makes the algorithm more conservative. The `max_depth` value controls the maximum depth of the trees. Increasing this value will make the model more complex and likely to overfit (default is 6\). The `subsample` parameter controls the percentage of samples supplied to a tree. Its default is 1 (maximum). Setting it to lower values means that xgboost randomly collects only part of the data instances to grow trees, thus preventing overfitting. The `nrounds` parameter controls the maximum number of boosting iterations; its default is 100, which has proven to be enough in most cases. To follow the convergence of the algorithm, users can turn the `verbose` parameter on. In general, the results using the extreme gradient boosting algorithm are similar to those of the random forest method. ``` # Train using XGBoost xgb_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples = samples_matogrosso_mod13q1, ml_method = [sits_xgboost](https://rdrr.io/pkg/sits/man/sits_xgboost.html)(verbose = 0) ) # Classify using the XGBoost model and plot the result point_class_xgb <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( data = point_mt_mod13q1, ml_model = xgb_model ) [plot](https://rdrr.io/r/graphics/plot.default.html)(point_class_xgb, bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI")) ``` Figure 68: Classification of time series using XGBoost (source: authors). 
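Besides the plot, the predicted labels can be listed in text form with `sits_show_prediction()`, which was mentioned earlier as the textual alternative to `plot()`. The call below is a minimal sketch applied to the XGBoost result obtained above.

```
# Show the predicted labels of the classified time series in text form
sits_show_prediction(point_class_xgb)
```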
Deep learning using multilayer perceptron ----------------------------------------- To support deep learning methods, `sits` uses the `torch` R package, which takes the Facebook `torch` C\+\+ library as a back\-end. Machine learning algorithms that use the R `torch` package are similar to those developed using `PyTorch`. The simplest deep learning method is multilayer perceptron (MLP), which are feedforward artificial neural networks. An MLP consists of three kinds of nodes: an input layer, a set of hidden layers, and an output layer. The input layer has the same dimension as the number of features in the dataset. The hidden layers attempt to approximate the best classification function. The output layer decides which class should be assigned to the input. In `sits`, MLP models can be built using `[sits_mlp()](https://rdrr.io/pkg/sits/man/sits_mlp.html)`. Since there is no established model for generic classification of satellite image time series, designing MLP models requires parameter customization. The most important decisions are the number of layers in the model and the number of neurons per layer. These values are set by the `layers` parameter, which is a list of integer values. The size of the list is the number of layers, and each element indicates the number of nodes per layer. The choice of the number of layers depends on the inherent separability of the dataset to be classified. For datasets where the classes have different signatures, a shallow model (with three layers) may provide appropriate responses. More complex situations require models of deeper hierarchy. Models with many hidden layers may take a long time to train and may not converge. We suggest to start with three layers and test different options for the number of neurons per layer before increasing the number of layers. In our experience, using three to five layers is a reasonable compromise if the training data has a good quality. Further increase in the number of layers will not improve the model. MLP models also need to include the activation function. The activation function of a node defines the output of that node given an input or set of inputs. Following standard practices [\[49]](references.html#ref-Goodfellow2016), we use the `relu` activation function. The optimization method (`optimizer`) represents the gradient descent algorithm to be used. These methods aim to maximize an objective function by updating the parameters in the opposite direction of the gradient of the objective function [\[59]](references.html#ref-Ruder2016). Since gradient descent plays a key role in deep learning model fitting, developing optimizers is an important topic of research [\[60]](references.html#ref-Bottou2018). Many optimizers have been proposed in the literature, and recent results are reviewed by Schmidt et al. [\[61]](references.html#ref-Schmidt2021). The Adamw optimizer provides a good baseline and reliable performance for general deep learning applications [\[62]](references.html#ref-Kingma2017). By default, all deep learning algorithms in `sits` use Adamw. Another relevant parameter is the list of dropout rates (`dropout`). Dropout is a technique for randomly dropping units from the neural network during training [\[63]](references.html#ref-Srivastava2014). By randomly discarding some neurons, dropout reduces overfitting. Since a cascade of neural nets aims to improve learning as more data is acquired, discarding some neurons may seem like a waste of resources. 
In practice, dropout prevents an early convergence to a local minimum [\[49]](references.html#ref-Goodfellow2016). We suggest users experiment with different dropout rates, starting from small values (10\-30%) and increasing as required. The following example shows how to use `[sits_mlp()](https://rdrr.io/pkg/sits/man/sits_mlp.html)`. The default parameters have been chosen based on a modified version of [\[64]](references.html#ref-Wang2017), which proposes using multilayer perceptron as a baseline for time series classification. These parameters are: (a) three layers with 512 neurons each, specified by the parameter `layers`; (b) the “relu” activation function; (c) dropout rates of 40%, 30%, and 20% for the layers; (d) “optimizer\_adamw” as the optimizer (default value); (e) a number of training steps (`epochs`) of 80; (f) a `batch_size` of 64, which indicates how many time series are used for input at a given step; and (g) a validation percentage of 20%, which means 20% of the samples will be randomly set aside for validation. To simplify the output, the `verbose` option has been turned off. After the model has been generated, we plot its training history. ``` # Train using an MLP model # This is an example of how to set parameters # First-time users should test default options first mlp_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples = samples_matogrosso_mod13q1, ml_method = [sits_mlp](https://rdrr.io/pkg/sits/man/sits_mlp.html)( optimizer = torch::[optim_adamw](https://rdrr.io/pkg/torch/man/optim_adamw.html), layers = [c](https://rdrr.io/r/base/c.html)(512, 512, 512), dropout_rates = [c](https://rdrr.io/r/base/c.html)(0.40, 0.30, 0.20), epochs = 80, batch_size = 64, verbose = FALSE, validation_split = 0.2 ) ) # Show training evolution [plot](https://rdrr.io/r/graphics/plot.default.html)(mlp_model) ``` Figure 69: Evolution of training accuracy of MLP model (source: authors). Then, we classify a 16\-year time series using the multilayer perceptron model. ``` # Classify using MLP model and plot the result point_mt_mod13q1 |> [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(mlp_model) |> [plot](https://rdrr.io/r/graphics/plot.default.html)(bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI")) ``` Figure 70: Classification of time series using MLP (source: authors). In theory, a multilayer perceptron model can capture more subtle changes than random forest and XGBoost. In this specific case, the result is similar to theirs. Although the model mixes the `Soy_Corn` and `Soy_Millet` classes, the distinction between their temporal signatures is quite subtle. This confusion also suggests the need to increase the number of samples. In this example, the MLP model shows an increase in sensitivity compared to previous models. We recommend comparing different configurations since the MLP model is sensitive to changes in its parameters. Temporal Convolutional Neural Network (TempCNN) ----------------------------------------------- Convolutional neural networks (CNN) are deep learning methods that apply convolution filters (sliding windows) to the input data sequentially. The Temporal Convolutional Neural Network (TempCNN) is a neural network architecture specifically designed to process sequential data such as time series. In the case of time series, a 1D CNN applies a moving temporal window to the time series to produce another time series as the result of the convolution. 
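To make the idea of a moving temporal window concrete, the toy sketch below applies a short weight kernel along a numeric series using base R. It is an illustration of 1D convolution only, not part of `sits` or of the TempCNN model, and the series and kernel values are made up for the example.

```
# Toy 1D convolution: slide a 3-element kernel along a short series
# (TempCNN learns its kernel weights during training)
ts_values <- c(0.2, 0.3, 0.5, 0.7, 0.6, 0.4, 0.3)
kernel <- c(0.25, 0.5, 0.25)
stats::filter(ts_values, kernel, method = "convolution", sides = 2)
```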
The TempCNN architecture for satellite image time series classification is proposed by Pelletier et al. [\[65]](references.html#ref-Pelletier2019). It has three 1D convolutional layers and a final softmax layer for classification (see Figure [71](machine-learning-for-data-cubes.html#fig:mltcnnfig)). The authors combine different methods to avoid overfitting and reduce the vanishing gradient effect, including dropout, regularization, and batch normalization. In the TempCNN reference paper [\[65]](references.html#ref-Pelletier2019), the authors favourably compare their model with the Recurrent Neural Network proposed by Russwurm and Körner [\[66]](references.html#ref-Russwurm2018). Figure [71](machine-learning-for-data-cubes.html#fig:mltcnnfig) shows the architecture of the TempCNN model. TempCNN applies one\-dimensional convolutions on the input sequence to capture temporal dependencies, allowing the network to learn long\-term dependencies in the input sequence. Each layer of the model captures temporal dependencies at a different scale. Due to its multi\-scale approach, TempCNN can capture complex temporal patterns in the data and produce accurate predictions. Figure 71: Structure of tempCNN architecture (Source: Pelletier et al. (2019\). Reproduction under fair use doctrine). The function `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` implements the model. The first parameter is the `optimizer` used in the backpropagation phase for gradient descent. The default is `adamw` which is considered as a stable and reliable optimization function. The parameter `cnn_layers` controls the number of 1D\-CNN layers and the size of the filters applied at each layer; the default values are three CNNs with 128 units. The parameter `cnn_kernels` indicates the size of the convolution kernels; the default is kernels of size 7\. Activation for all 1D\-CNN layers uses the “relu” function. The dropout rates for each 1D\-CNN layer are controlled individually by the parameter `cnn_dropout_rates`. The `validation_split` controls the size of the test set relative to the full dataset. We recommend setting aside at least 20% of the samples for validation. ``` [library](https://rdrr.io/r/base/library.html)([torchopt](https://github.com/e-sensing/torchopt/)) # Train using tempCNN tempcnn_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples_matogrosso_mod13q1, [sits_tempcnn](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)( optimizer = torch::[optim_adamw](https://rdrr.io/pkg/torch/man/optim_adamw.html), cnn_layers = [c](https://rdrr.io/r/base/c.html)(256, 256, 256), cnn_kernels = [c](https://rdrr.io/r/base/c.html)(7, 7, 7), cnn_dropout_rates = [c](https://rdrr.io/r/base/c.html)(0.2, 0.2, 0.2), epochs = 80, batch_size = 64, validation_split = 0.2, verbose = FALSE ) ) # Show training evolution [plot](https://rdrr.io/r/graphics/plot.default.html)(tempcnn_model) ``` Figure 72: Training evolution of TempCNN model (source: authors). Using the TempCNN model, we classify a 16\-year time series. ``` # Classify using TempCNN model and plot the result point_mt_mod13q1 |> [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(tempcnn_model) |> [plot](https://rdrr.io/r/graphics/plot.default.html)(bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI")) ``` Figure 73: Classification of time series using TempCNN (source: authors). 
Machine learning classification
-------------------------------

Machine learning classification is a type of supervised learning in which an algorithm is trained to predict which class an input data point belongs to. The goal of machine learning models is to approximate a function \\(y \= f(x)\\) that maps an input \\(x\\) to a class \\(y\\). A model defines a mapping \\(y \= f(x;\\theta)\\) and learns the value of the parameters \\(\\theta\\) that result in the best function approximation [\[49]](references.html#ref-Goodfellow2016). The difference between the different algorithms is their approach to building the mapping that classifies the input data. In `sits`, machine learning is used to classify individual time series using the `time-first` approach. The package includes two kinds of methods for time series classification:

* Machine learning algorithms that do not explicitly consider the temporal structure of the time series. They treat time series as a vector in a high\-dimensional feature space, taking each time series instance as independent from the others. They include random forest (`[sits_rfor()](https://rdrr.io/pkg/sits/man/sits_rfor.html)`), support vector machine (`[sits_svm()](https://rdrr.io/pkg/sits/man/sits_svm.html)`), extreme gradient boosting (`[sits_xgboost()](https://rdrr.io/pkg/sits/man/sits_xgboost.html)`), and multilayer perceptron (`[sits_mlp()](https://rdrr.io/pkg/sits/man/sits_mlp.html)`).
* Deep learning methods where temporal relations between observed values in a time series are taken into account. These models are specifically designed for time series. The temporal order of values in a time series is relevant for the classification model. From this class of models, `sits` supports 1D convolution neural networks (`[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)`) and temporal attention\-based encoders (`[sits_tae()](https://rdrr.io/pkg/sits/man/sits_tae.html)` and `[sits_lighttae()](https://rdrr.io/pkg/sits/man/sits_lighttae.html)`).

Based on experience with `sits`, random forest, extreme gradient boosting, and temporal deep learning models outperform SVM and multilayer perceptron models. The reason is that some dates provide more information than others in the temporal behavior of land classes. For instance, when monitoring deforestation, dates corresponding to forest removal actions are more informative than earlier or later dates. Similarly, a few dates may capture a large portion of the variation in crop mapping. Therefore, classification methods that consider the temporal order of samples are more likely to capture the seasonal behavior of image time series. Random forest and extreme gradient boosting methods that use individual measures as nodes in decision trees can also capture specific events such as deforestation.

The following examples show how to train machine learning methods and apply them to classify a single time series. We use the set `samples_matogrosso_mod13q1`, containing time series samples from the Brazilian Mato Grosso state obtained from the MODIS MOD13Q1 product.
It has 1,892 samples and nine classes, including `Cerrado`, `Forest`, `Pasture`, `Soy_Corn`, `Soy_Cotton`, `Soy_Fallow`, and `Soy_Millet`. Each time series covers 12 months (23 data points) with six bands (NDVI, EVI, BLUE, RED, NIR, MIR). The samples are arranged along an agricultural year, starting in September and ending in August. The dataset was used in the paper “Big Earth observation time series analysis for monitoring Brazilian agriculture” [\[50]](references.html#ref-Picoli2018) and is available in the R package `sitsdata`.

Common interface to machine learning and deep learning models
--------------------------------------------------------------

The `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` function provides a standard interface to all machine learning models. This function takes two mandatory parameters: the training data (`samples`) and the ML algorithm (`ml_method`). After the model is estimated, it can classify individual time series or data cubes with `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`. In what follows, we show how to apply each method to classify a single time series. Then, in Chapter [Image classification in data cubes](https://e-sensing.github.io/sitsbook/image-classification-in-data-cubes.html), we discuss how to classify data cubes.

Since `sits` is aimed at remote sensing users who are not machine learning experts, it provides a set of default values for all classification models. These settings have been chosen based on testing by the authors. Nevertheless, users can control all parameters for each model. Novice users can rely on the default values, while experienced ones can fine\-tune model parameters to meet their needs. Model tuning is discussed at the end of this Chapter.

When a set of time series organized as a tibble is taken as input to the classifier, the result is the same tibble with one additional column (`predicted`), which contains the information on the labels assigned for each interval. The results can be shown in text format using the function `[sits_show_prediction()](https://rdrr.io/pkg/sits/man/sits_show_prediction.html)` or graphically using `[plot()](https://rdrr.io/r/graphics/plot.default.html)`.

Random forest
-------------

Random forest is a machine learning algorithm that uses an ensemble learning method for classification tasks. The algorithm consists of multiple decision trees, each trained on a different subset of the training data and with a different subset of features. To make a prediction, each decision tree in the forest independently classifies the input data. The final prediction is made based on the majority vote of all the decision trees. The randomness in the algorithm comes from the random subsets of data and features used to train each decision tree, which helps to reduce overfitting and improve the accuracy of the model. This classifier measures the importance of each feature in the classification task, which can be helpful in feature selection and data visualization. For an in\-depth discussion of the robustness of the random forest method for satellite image time series classification, please see Pelletier et al. [\[51]](references.html#ref-Pelletier2016).

Figure 62: Random forest algorithm (Source: Venkata Jagannath in Wikipedia \- licensed as CC\-BY\-SA 4\.0\).
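Before examining each algorithm in detail, the sketch below puts the pieces of this common interface together. It assumes the `samples_matogrosso_mod13q1` and `point_mt_mod13q1` datasets used throughout this chapter and the random forest method described next; any of the other methods could be passed to `ml_method` in the same way.

```
# Sketch of the common interface: train a model, classify a time series,
# and inspect the predictions as text or as a plot
rfor_model <- sits_train(
    samples = samples_matogrosso_mod13q1,
    ml_method = sits_rfor()
)
point_class <- sits_classify(
    data = point_mt_mod13q1,
    ml_model = rfor_model
)
sits_show_prediction(point_class)            # predicted labels in text format
plot(point_class, bands = c("NDVI", "EVI"))  # predicted labels as a plot
```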
`sits` provides `[sits_rfor()](https://rdrr.io/pkg/sits/man/sits_rfor.html)`, which uses the R `randomForest` package [\[52]](references.html#ref-Wright2017); its main parameter is `num_trees`, which is the number of trees to grow with a default value of 100\. The model can be visualized using `[plot()](https://rdrr.io/r/graphics/plot.default.html)`.

```
# Train the Mato Grosso samples with random forest model
rfor_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(
    samples = samples_matogrosso_mod13q1,
    ml_method = [sits_rfor](https://rdrr.io/pkg/sits/man/sits_rfor.html)(num_trees = 100)
)
# Plot the most important variables of the model
[plot](https://rdrr.io/r/graphics/plot.default.html)(rfor_model)
```

Figure 63: Most important variables in random forest model (source: authors).

The most important explanatory variables are the NIR (near infrared) band on date 17 (2007\-05\-25\) and the MIR (middle infrared) band on date 22 (2007\-08\-13\). The NIR value at the end of May captures the growth of the second crop for double cropping classes. Values of the MIR band at the end of the period (late July to late August) capture bare soil signatures to distinguish between agricultural and natural classes. This corresponds to the dry season, when the ground is drier after harvesting crops.

```
# Classify using random forest model and plot the result
point_class <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(
    data = point_mt_mod13q1,
    ml_model = rfor_model
)
[plot](https://rdrr.io/r/graphics/plot.default.html)(point_class, bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI"))
```

```
knitr::[include_graphics](https://rdrr.io/pkg/knitr/man/include_graphics.html)("./images/mlrforplot.png")
```

Figure 64: Classification of time series using random forest (source: authors).

The result shows that the area started as a forest in 2000, was deforested from 2004 to 2005, used as pasture from 2006 to 2007, and for double\-cropping agriculture from 2009 onwards. This behavior is consistent with expert evaluation of land change processes in this region of Amazonia.

Random forest is robust to outliers and can deal with irrelevant inputs [\[33]](references.html#ref-Hastie2009). However, the method tends to overemphasize some variables, and its performance tends to stabilize after only part of the trees is grown [\[33]](references.html#ref-Hastie2009). In cases where abrupt change occurs, such as deforestation mapping, random forest (if properly trained) will emphasize the temporal instances and bands that capture such quick change.

Support vector machine
----------------------

The support vector machine (SVM) classifier is a generalization of a linear classifier that finds an optimal separation hyperplane that minimizes misclassification [\[53]](references.html#ref-Cortes1995). Since a set of samples with \\(n\\) features defines an n\-dimensional feature space, hyperplanes are linear \\({(n\-1\)}\\)\-dimensional boundaries that define linear partitions in that space. If the classes are linearly separable on the feature space, there will be an optimal solution defined by the maximal margin hyperplane, which is the separating hyperplane that is farthest from the training observations [\[54]](references.html#ref-James2013). The maximal margin is computed as the smallest distance from the observations to the hyperplane. The solution for the hyperplane coefficients depends only on the samples that define the maximum margin criteria, the so\-called support vectors.
Figure 65: Maximum\-margin hyperplane and margins for an SVM trained with samples from two classes. Samples on the margin are called the support vectors. (Source: Larhmam in Wikipedia \- licensed as CC\-BY\-SA\-4\.0\).

For data that is not linearly separable, SVM includes kernel functions that map the original feature space into a higher dimensional space, providing nonlinear boundaries to the original feature space. Despite having a linear boundary on the enlarged feature space, the new classification model generally translates its hyperplane to a nonlinear boundary in the original attribute space. Kernels are an efficient computational strategy to produce nonlinear boundaries in the input attribute space; thus, they improve training\-class separation. SVM is one of the most widely used algorithms in machine learning applications and has been applied to classify remote sensing data [\[55]](references.html#ref-Mountrakis2011).

In `sits`, SVM is implemented as a wrapper of the `e1071` R package that uses the `LIBSVM` implementation [\[56]](references.html#ref-Chang2011). The `sits` package adopts the *one\-against\-one* method for multiclass classification. For a \\(q\\) class problem, this method creates \\({q(q\-1\)/2}\\) SVM binary models, one for each class pair combination, testing any unknown input vectors throughout all those models. A voting scheme computes the overall result.

The example below shows how to apply SVM to classify time series using default values. The main parameters are `kernel`, which controls whether to use a nonlinear transformation (default is `radial`), `cost`, which measures the punishment for wrongly\-classified samples (default is 10\), and `cross`, which sets the value of the k\-fold cross\-validation (default is 10\).

```
# Train an SVM model
svm_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(
    samples = samples_matogrosso_mod13q1,
    ml_method = [sits_svm](https://rdrr.io/pkg/sits/man/sits_svm.html)()
)
# Classify using the SVM model and plot the result
point_class <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(
    data = point_mt_mod13q1,
    ml_model = svm_model
)
# Plot the result
[plot](https://rdrr.io/r/graphics/plot.default.html)(point_class, bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI"))
```

Figure 66: Classification of time series using SVM (source: authors).

The SVM classifier is less stable and less robust to outliers than the random forest method. In this example, it tends to misclassify some of the data. In 2008, it is likely that the correct land class was still `Pasture` rather than `Soy_Millet` as produced by the algorithm, while the `Soy_Cotton` class in 2012 is also inconsistent with the previous and later classification of `Soy_Corn`.

Extreme gradient boosting
-------------------------

XGBoost (eXtreme Gradient Boosting) [\[57]](references.html#ref-Chen2016) is an implementation of gradient boosted decision trees designed for speed and performance. It is an ensemble learning method, meaning it combines the predictions from multiple models to produce a final prediction. XGBoost builds trees one at a time, where each new tree helps to correct errors made by previously trained trees. Using gradient descent, the algorithm iteratively adjusts the predictions of each tree by focusing on instances where previous trees made errors. Models are added sequentially until no further improvements can be made.
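To make the sequential idea concrete, the toy sketch below implements this boosting loop for a one\-dimensional regression problem using `rpart` trees. It only illustrates the principle of fitting each new tree to the residuals of the current ensemble; it is not how `sits_xgboost()` works internally, and the data are simulated.

```
library(rpart)
# Toy gradient boosting with squared-error loss: each small tree is fit to the
# residuals of the current ensemble, and its prediction is added with shrinkage
set.seed(42)
train <- data.frame(x = runif(200))
train$y <- sin(2 * pi * train$x) + rnorm(200, sd = 0.1)

eta  <- 0.1                                  # learning rate (shrinkage)
pred <- rep(mean(train$y), nrow(train))      # start from a constant predictor
for (m in 1:100) {
    train$resid <- train$y - pred            # errors of the current ensemble
    tree <- rpart(resid ~ x, data = train, maxdepth = 2)
    pred <- pred + eta * predict(tree, train)  # sequential, shrunken update
}
# pred now approximates sin(2 * pi * x), built one small tree at a time
```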
Although random forest and boosting use trees for classification, there are significant differences. While random forest builds multiple decision trees in parallel and merges them together for a more accurate and stable prediction, XGBoost builds trees one at a time, where each new tree helps to correct errors made by previously trained trees. XGBoost is often preferred for its speed and performance, particularly on large datasets, and is well\-suited for problems where precision is paramount. Random forest, on the other hand, is simpler to implement, more interpretable, and can be more robust to overfitting, making it a good choice for general\-purpose applications.

Figure 67: Flow chart of XGBoost algorithm (Source: Guo et al., Applied Sciences, 2020\. \- licensed as CC\-BY\-SA 4\.0\).

The boosting method starts from a weak predictor and then improves performance sequentially by fitting a better model at each iteration. It fits a simple classifier to the training data and uses the residuals of the fit to build a predictor. Typically, the base classifier is a regression tree. The performance of random forest generally increases with the number of trees until it becomes stable, while boosting trees apply finer divisions over previous results to improve performance [\[33]](references.html#ref-Hastie2009). Some recent papers show that boosting outperforms random forest for remote sensing image classification [\[58]](references.html#ref-Jafarzadeh2021). However, this result is not generalizable since the quality of the training dataset controls actual performance.

In `sits`, the XGBoost method is implemented by the `sits_xgboost()` function, based on the `XGBoost` R package, and has five hyperparameters that require tuning. The `sits_xgboost()` function takes the user choices as input to a cross\-validation to determine suitable values for the predictor. The learning rate `eta` varies from 0\.0 to 1\.0 and should be kept small (default is 0\.3\) to avoid overfitting. The minimum loss value `gamma` specifies the minimum reduction required to make a split. Its default is 0; increasing it makes the algorithm more conservative. The `max_depth` value controls the maximum depth of the trees. Increasing this value will make the model more complex and likely to overfit (default is 6\). The `subsample` parameter controls the percentage of samples supplied to a tree. Its default is 1 (maximum). Setting it to lower values means that xgboost randomly collects only part of the data instances to grow trees, thus preventing overfitting. The `nrounds` parameter controls the maximum number of boosting iterations; its default is 100, which has proven to be enough in most cases. To follow the convergence of the algorithm, users can turn the `verbose` parameter on. In general, the results using the extreme gradient boosting algorithm are similar to the random forest method.
```
# Train using XGBoost
xgb_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(
    samples = samples_matogrosso_mod13q1,
    ml_method = [sits_xgboost](https://rdrr.io/pkg/sits/man/sits_xgboost.html)(verbose = 0)
)
# Classify using the XGBoost model and plot the result
point_class_xgb <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(
    data = point_mt_mod13q1,
    ml_model = xgb_model
)
[plot](https://rdrr.io/r/graphics/plot.default.html)(point_class_xgb, bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI"))
```

Figure 68: Classification of time series using XGBoost (source: authors).

Deep learning using multilayer perceptron
-----------------------------------------

To support deep learning methods, `sits` uses the `torch` R package, which takes the Facebook `torch` C\+\+ library as a back\-end. Machine learning algorithms that use the R `torch` package are similar to those developed using `PyTorch`. The simplest deep learning method is the multilayer perceptron (MLP), which is a feedforward artificial neural network. An MLP consists of three kinds of nodes: an input layer, a set of hidden layers, and an output layer. The input layer has the same dimension as the number of features in the dataset. The hidden layers attempt to approximate the best classification function. The output layer decides which class should be assigned to the input.

In `sits`, MLP models can be built using `[sits_mlp()](https://rdrr.io/pkg/sits/man/sits_mlp.html)`. Since there is no established model for generic classification of satellite image time series, designing MLP models requires parameter customization. The most important decisions are the number of layers in the model and the number of neurons per layer. These values are set by the `layers` parameter, which is a list of integer values. The size of the list is the number of layers, and each element indicates the number of nodes per layer.

The choice of the number of layers depends on the inherent separability of the dataset to be classified. For datasets where the classes have different signatures, a shallow model (with three layers) may provide appropriate responses. More complex situations require models of deeper hierarchy. Models with many hidden layers may take a long time to train and may not converge. We suggest starting with three layers and testing different options for the number of neurons per layer before increasing the number of layers. In our experience, using three to five layers is a reasonable compromise if the training data is of good quality. Further increase in the number of layers will not improve the model.
The AdamW optimizer provides a good baseline and reliable performance for general deep learning applications [\[62]](references.html#ref-Kingma2017). By default, all deep learning algorithms in `sits` use AdamW.

Another relevant parameter is the list of dropout rates (`dropout`). Dropout is a technique for randomly dropping units from the neural network during training [\[63]](references.html#ref-Srivastava2014). By randomly discarding some neurons, dropout reduces overfitting. Since a cascade of neural nets aims to improve learning as more data is acquired, discarding some neurons may seem like a waste of resources. In practice, dropout prevents an early convergence to a local minimum [\[49]](references.html#ref-Goodfellow2016). We suggest users experiment with different dropout rates, starting from small values (10\-30%) and increasing as required.

The following example shows how to use `[sits_mlp()](https://rdrr.io/pkg/sits/man/sits_mlp.html)`. The default parameters have been chosen based on a modified version of [\[64]](references.html#ref-Wang2017), which proposes using multilayer perceptron as a baseline for time series classification. These parameters are: (a) three layers with 512 neurons each, specified by the parameter `layers`; (b) the “relu” activation function; (c) dropout rates of 40%, 30%, and 20% for the layers; (d) `optim_adamw` as the optimizer (default value); (e) a number of training steps (`epochs`) of 100; (f) a `batch_size` of 64, which indicates how many time series are used for input at a given step; and (g) a validation percentage of 20%, which means 20% of the samples will be randomly set aside for validation. To simplify the output, the `verbose` option has been turned off. After the model has been generated, we plot its training history.

```
# Train using an MLP model
# This is an example of how to set parameters
# First-time users should test default options first
mlp_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(
    samples = samples_matogrosso_mod13q1,
    ml_method = [sits_mlp](https://rdrr.io/pkg/sits/man/sits_mlp.html)(
        optimizer = torch::[optim_adamw](https://rdrr.io/pkg/torch/man/optim_adamw.html),
        layers = [c](https://rdrr.io/r/base/c.html)(512, 512, 512),
        dropout_rates = [c](https://rdrr.io/r/base/c.html)(0.40, 0.30, 0.20),
        epochs = 80,
        batch_size = 64,
        verbose = FALSE,
        validation_split = 0.2
    )
)
# Show training evolution
[plot](https://rdrr.io/r/graphics/plot.default.html)(mlp_model)
```

Figure 69: Evolution of training accuracy of MLP model (source: authors).

Then, we classify a 16\-year time series using the multilayer perceptron model.

```
# Classify using MLP model and plot the result
point_mt_mod13q1 |>
    [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(mlp_model) |>
    [plot](https://rdrr.io/r/graphics/plot.default.html)(bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI"))
```

Figure 70: Classification of time series using MLP (source: authors).

In theory, the multilayer perceptron model can capture more subtle changes than random forest and XGBoost. In this specific case, the result is similar to that of those methods. Although the model mixes the `Soy_Corn` and `Soy_Millet` classes, the distinction between their temporal signatures is quite subtle. This also suggests the need for more training samples. In this example, the MLP model shows an increase in sensitivity compared to previous models.
We recommend comparing different configurations since the MLP model is sensitive to changes in its parameters.

Temporal Convolutional Neural Network (TempCNN)
-----------------------------------------------

Convolutional neural networks (CNN) are deep learning methods that apply convolution filters (sliding windows) to the input data sequentially. The Temporal Convolutional Neural Network (TempCNN) is a neural network architecture specifically designed to process sequential data such as time series. In the case of time series, a 1D CNN applies a moving temporal window to the time series to produce another time series as the result of the convolution.

The TempCNN architecture for satellite image time series classification is proposed by Pelletier et al. [\[65]](references.html#ref-Pelletier2019). It has three 1D convolutional layers and a final softmax layer for classification (see Figure [71](machine-learning-for-data-cubes.html#fig:mltcnnfig)). The authors combine different methods to avoid overfitting and reduce the vanishing gradient effect, including dropout, regularization, and batch normalization. In the TempCNN reference paper [\[65]](references.html#ref-Pelletier2019), the authors favourably compare their model with the Recurrent Neural Network proposed by Russwurm and Körner [\[66]](references.html#ref-Russwurm2018). Figure [71](machine-learning-for-data-cubes.html#fig:mltcnnfig) shows the architecture of the TempCNN model.

TempCNN applies one\-dimensional convolutions on the input sequence to capture temporal dependencies, allowing the network to learn long\-term dependencies in the input sequence. Each layer of the model captures temporal dependencies at a different scale. Due to its multi\-scale approach, TempCNN can capture complex temporal patterns in the data and produce accurate predictions.

Figure 71: Structure of tempCNN architecture (Source: Pelletier et al. (2019\). Reproduction under fair use doctrine).

The function `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` implements the model. The first parameter is the `optimizer` used in the backpropagation phase for gradient descent. The default is `adamw`, which is considered a stable and reliable optimizer. The parameter `cnn_layers` controls the number of 1D\-CNN layers and the size of the filters applied at each layer; the default values are three CNN layers with 128 units each. The parameter `cnn_kernels` indicates the size of the convolution kernels; the default is kernels of size 7\. Activation for all 1D\-CNN layers uses the “relu” function. The dropout rates for each 1D\-CNN layer are controlled individually by the parameter `cnn_dropout_rates`. The `validation_split` parameter controls the size of the validation set relative to the full dataset. We recommend setting aside at least 20% of the samples for validation.
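To see what a single 1D convolution does, the short example below slides a fixed kernel of size 3 over a toy NDVI series in base R. In TempCNN, the kernel weights are learned during training rather than fixed in advance, and many kernels are applied in parallel at each layer.

```
# Toy 1D convolution: a kernel of size 3 slides over a short NDVI series
ndvi   <- c(0.30, 0.35, 0.50, 0.70, 0.65, 0.40, 0.30)
kernel <- c(0.25, 0.50, 0.25)  # fixed smoothing weights; TempCNN learns its kernels
conv1d <- sapply(
    seq_len(length(ndvi) - length(kernel) + 1),
    function(i) sum(ndvi[i:(i + length(kernel) - 1)] * kernel)
)
conv1d  # a shorter, filtered time series (no padding applied)
```

The following example trains a TempCNN model on the Mato Grosso samples.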
```
[library](https://rdrr.io/r/base/library.html)([torchopt](https://github.com/e-sensing/torchopt/))
# Train using tempCNN
tempcnn_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(
    samples_matogrosso_mod13q1,
    [sits_tempcnn](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)(
        optimizer = torch::[optim_adamw](https://rdrr.io/pkg/torch/man/optim_adamw.html),
        cnn_layers = [c](https://rdrr.io/r/base/c.html)(256, 256, 256),
        cnn_kernels = [c](https://rdrr.io/r/base/c.html)(7, 7, 7),
        cnn_dropout_rates = [c](https://rdrr.io/r/base/c.html)(0.2, 0.2, 0.2),
        epochs = 80,
        batch_size = 64,
        validation_split = 0.2,
        verbose = FALSE
    )
)
# Show training evolution
[plot](https://rdrr.io/r/graphics/plot.default.html)(tempcnn_model)
```

Figure 72: Training evolution of TempCNN model (source: authors).

Using the TempCNN model, we classify a 16\-year time series.

```
# Classify using TempCNN model and plot the result
point_mt_mod13q1 |>
    [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(tempcnn_model) |>
    [plot](https://rdrr.io/r/graphics/plot.default.html)(bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI"))
```

```
knitr::[include_graphics](https://rdrr.io/pkg/knitr/man/include_graphics.html)("./images/mltcnnplot.png")
```

Figure 74: Classification of time series using TempCNN (source: authors).

The result has important differences from the previous ones. The TempCNN model indicates the `Soy_Cotton` class as the most likely one in 2004\. While this result is possibly wrong, it shows that the time series for 2004 is different from those of the Forest and Pasture classes. One possible explanation is that there was forest degradation in 2004, leading to a signature that is a mix of forest and bare soil. In this case, including forest degradation samples could improve the training data. In our experience, TempCNN models are a reliable way of classifying image time series [\[67]](references.html#ref-Simoes2021). Recent work comparing different models also provides evidence that TempCNN models have satisfactory behavior, especially in the case of crop classes [\[68]](references.html#ref-Russwurm2020).

Attention\-based models
-----------------------

Attention\-based deep learning models are a class of models that use a mechanism inspired by human attention to focus on specific parts of input during processing. These models have been shown to be effective for various tasks such as machine translation, image captioning, and speech recognition. The basic idea behind attention\-based models is to allow the model to selectively focus on different input parts at different times. This can be done by introducing a mechanism that assigns weights to each element of the input, indicating the relative importance of that element to the current processing step. The model can then use these weights to compute a weighted sum of the input, which captures the model’s attention on specific parts of the input.

Attention\-based models have become one of the most used deep learning architectures for problems that involve sequential data inputs, e.g., text recognition and automatic translation. The general idea is that not all inputs are alike in applications such as language translation. Consider the English sentence “Look at all the lonely people”. A sound translation system needs to relate the words “look” and “people” as the key parts of this sentence to ensure that this link is captured in the translation.
A specific type of attention model, called the transformer, enables the recognition of such complex relationships between input and output sequences [\[69]](references.html#ref-Vaswani2017). The basic structure of transformers is the same as other neural network algorithms. They have an encoder that transforms textual input values into numerical vectors and a decoder that processes these vectors to provide suitable answers. The difference is how the values are handled internally. In an MLP, all inputs are treated equally at first; based on iterative matching of training and test data, the backpropagation technique feeds information back to the initial layers to identify the most suitable combination of inputs that produces the best output. Convolutional nets (CNN) combine input values that are close in time (1D) or space (2D) to produce higher\-level information that helps to distinguish the different components of the input data. For text recognition, the initial choice of deep learning studies was to use recurrent neural networks (RNN) that handle input sequences. However, none of MLPs, CNNs, or RNNs has been able to capture the structure of complex inputs such as natural language.

The success of transformer\-based solutions accounts for substantial improvements in natural language processing. The two main differences between transformer models and other algorithms are positional encoding and self\-attention. Positional encoding assigns an index to each input value, ensuring that the relative locations of the inputs are maintained throughout the learning and processing phases. Self\-attention compares every word in a sentence to every other word in the same sentence, including itself. In this way, it learns contextual information about the relation between the words. This design has been validated in large language models such as BERT [\[70]](references.html#ref-Devlin2019) and GPT\-3 [\[71]](references.html#ref-Brown2020).

The application of attention\-based models for satellite image time series analysis was proposed by Garnot et al. [\[72]](references.html#ref-Garnot2020a) and Russwurm and Körner [\[68]](references.html#ref-Russwurm2020). A self\-attention network can learn to focus on specific time steps and image features most relevant for distinguishing between different classes. The algorithm tries to identify which combination of individual temporal observations is most relevant for identifying each class. For example, crop identification will use observations that capture the onset of the growing season, the date of maximum growth, and the end of the growing season. In the case of deforestation, the algorithm tries to identify the dates when the forest is being cut. Attention\-based models are a means to identify events that characterize each land class.

The first model proposed by Garnot et al. is a full transformer\-based model [\[72]](references.html#ref-Garnot2020a). Considering that image time series classification is easier than natural language processing, Garnot et al. also propose a simplified version of the full transformer model [\[73]](references.html#ref-Garnot2020b). This simpler model uses a reduced scheme to compute the attention matrix, cutting training and classification time without loss of quality of the result.
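To make the self\-attention idea concrete, the sketch below computes scaled dot\-product attention over the time steps of a small matrix in base R. The projection matrices are random placeholders here, whereas a trained model learns them; the `sits` attention models implement operations of this kind as `torch` layers combined with positional encoding.

```
# Scaled dot-product self-attention over time steps (illustration only)
set.seed(1)
n_times <- 6    # number of time steps
n_feat  <- 4    # number of features per time step
X  <- matrix(rnorm(n_times * n_feat), nrow = n_times)  # toy input series
Wq <- matrix(rnorm(n_feat * n_feat), nrow = n_feat)    # random projections;
Wk <- matrix(rnorm(n_feat * n_feat), nrow = n_feat)    # learned during training
Wv <- matrix(rnorm(n_feat * n_feat), nrow = n_feat)    # in a real model

Q <- X %*% Wq
K <- X %*% Wk
V <- X %*% Wv
scores  <- Q %*% t(K) / sqrt(ncol(K))          # similarity between time steps
weights <- exp(scores) / rowSums(exp(scores))  # row-wise softmax
output  <- weights %*% V                       # attention-weighted combination
round(weights, 2)  # each row shows how much one time step attends to the others
```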
In `sits`, the full transformer\-based model proposed by Garnot et al. [\[72]](references.html#ref-Garnot2020a) is implemented using `[sits_tae()](https://rdrr.io/pkg/sits/man/sits_tae.html)`. The default parameters are those proposed by the authors, and the default optimizer is `optim_adamw`, as also used in the `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` function.

```
# Train a machine learning model using TAE
tae_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(
    samples_matogrosso_mod13q1,
    [sits_tae](https://rdrr.io/pkg/sits/man/sits_tae.html)(
        epochs = 80,
        batch_size = 64,
        optimizer = torch::[optim_adamw](https://rdrr.io/pkg/torch/man/optim_adamw.html),
        validation_split = 0.2,
        verbose = FALSE
    )
)
# Show training evolution
[plot](https://rdrr.io/r/graphics/plot.default.html)(tae_model)
```

Figure 75: Training evolution of Temporal Self\-Attention model (source: authors).

Then, we classify a 16\-year time series using the TAE model.

```
# Classify using DL model and plot the result
point_mt_mod13q1 |>
    [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(tae_model) |>
    [plot](https://rdrr.io/r/graphics/plot.default.html)(bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI"))
```

Figure 76: Classification of time series using TAE (source: authors).

Garnot and co\-authors also proposed the Lightweight Temporal Self\-Attention Encoder (LTAE) [\[73]](references.html#ref-Garnot2020b), which the authors claim can achieve high classification accuracy with fewer parameters compared to other neural network models. It is a good choice for applications where computational resources are limited. The `[sits_lighttae()](https://rdrr.io/pkg/sits/man/sits_lighttae.html)` function implements this algorithm. The most important parameter to be set is the learning rate `lr`. Values ranging from 0\.001 to 0\.005 should produce good results. See also the section below on model tuning.

```
# Train a machine learning model using LightTAE
ltae_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(
    samples_matogrosso_mod13q1,
    [sits_lighttae](https://rdrr.io/pkg/sits/man/sits_lighttae.html)(
        epochs = 80,
        batch_size = 64,
        optimizer = torch::[optim_adamw](https://rdrr.io/pkg/torch/man/optim_adamw.html),
        opt_hparams = [list](https://rdrr.io/r/base/list.html)(lr = 0.001),
        validation_split = 0.2
    )
)
# Show training evolution
[plot](https://rdrr.io/r/graphics/plot.default.html)(ltae_model)
```

Figure 77: Training evolution of Lightweight Temporal Self\-Attention model (source: authors).

Then, we classify a 16\-year time series using the LightTAE model.

```
# Classify using DL model and plot the result
point_mt_mod13q1 |>
    [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(ltae_model) |>
    [plot](https://rdrr.io/r/graphics/plot.default.html)(bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI"))
```

Figure 78: Classification of time series using LightTAE (source: authors).

The behaviour of both `[sits_tae()](https://rdrr.io/pkg/sits/man/sits_tae.html)` and `[sits_lighttae()](https://rdrr.io/pkg/sits/man/sits_lighttae.html)` is similar to that of `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)`. Their results point out the possible need for more classes and training data to better represent the transition period between 2004 and 2010\. One possibility is that the training data associated with the Pasture class is only consistent with the time series between 2005 and 2008\. However, the transition from Forest to Pasture in 2004 and from Pasture to Agriculture in 2009\-2010 is subject to uncertainty since the classifiers do not agree on the resulting classes.
In general, deep learning temporal\-aware models are more sensitive to class variability than random forest and extreme gradient boosting.

Deep learning model tuning
--------------------------

Model tuning is the process of selecting the best set of hyperparameters for a specific application. When using deep learning models for image classification, it is a highly recommended step to enable a better fit of the algorithm to the training data. Hyperparameters are parameters of the model that are not learned during training but instead are set prior to training and affect the behavior of the model during training. Examples include the learning rate, batch size, number of epochs, number of hidden layers, number of neurons in each layer, activation functions, regularization parameters, and optimization algorithms. Deep learning model tuning involves selecting the best combination of hyperparameters that results in the optimal performance of the model on a given task. This is done by training and evaluating the model with different sets of hyperparameters to select the set that gives the best performance.

Deep learning algorithms try to find the optimal point representing the best value of the prediction function that, given an input \\(X\\) of data points, predicts the result \\(Y\\). In our case, \\(X\\) is a multidimensional time series, and \\(Y\\) is a vector of probabilities for the possible output classes. For complex situations, the best prediction function is time\-consuming to estimate. For this reason, deep learning methods rely on gradient descent methods to speed up the search and converge faster than an exhaustive search [\[74]](references.html#ref-Bengio2012). All gradient descent methods use an optimization algorithm adjusted with hyperparameters such as the learning and regularization rates [\[61]](references.html#ref-Schmidt2021). The learning rate controls the numerical step of the gradient descent function, and the regularization rate controls model overfitting. Adjusting these values to an optimal setting requires using model tuning methods.

To reduce the learning curve, `sits` provides default values for all machine learning and deep learning methods, ensuring a reasonable baseline performance. However, refining model hyperparameters might be necessary, especially for more complex models such as `[sits_lighttae()](https://rdrr.io/pkg/sits/man/sits_lighttae.html)` or `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)`. To that end, the package provides the `[sits_tuning()](https://rdrr.io/pkg/sits/man/sits_tuning.html)` function.

The most straightforward approach to model tuning is to run a grid search; this involves defining a range for each hyperparameter and then testing all possible combinations. This approach leads to a combinatorial explosion and thus is not recommended. Instead, Bergstra and Bengio propose randomly chosen trials [\[75]](references.html#ref-Bergstra2012). Their paper shows that randomized trials are more efficient than grid search trials, selecting adequate hyperparameters at a fraction of the computational cost. The `[sits_tuning()](https://rdrr.io/pkg/sits/man/sits_tuning.html)` function follows Bergstra and Bengio by using a random search on the chosen hyperparameters. Experiments with image time series show that other optimizers may have better performance for the specific problem of land classification.
For this reason, the authors developed the `torchopt` R package, which includes several recently proposed optimizers, including Madgrad [\[76]](references.html#ref-Defazio2021) and Yogi [\[77]](references.html#ref-Zaheer2018). Using the `[sits_tuning()](https://rdrr.io/pkg/sits/man/sits_tuning.html)` function allows testing these and other optimizers available in the `torch` and `torchopt` packages.

The `[sits_tuning()](https://rdrr.io/pkg/sits/man/sits_tuning.html)` function takes the following parameters:

* `samples`: Training dataset to be used by the model.
* `samples_validation`: Optional dataset containing time series to be used for validation. If missing, the next parameter will be used.
* `validation_split`: If `samples_validation` is not used, this parameter defines the proportion of time series in the training dataset to be used for validation (default is 20%).
* `ml_method()`: Deep learning method (either `[sits_mlp()](https://rdrr.io/pkg/sits/man/sits_mlp.html)`, `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)`, `[sits_tae()](https://rdrr.io/pkg/sits/man/sits_tae.html)` or `[sits_lighttae()](https://rdrr.io/pkg/sits/man/sits_lighttae.html)`).
* `params`: Defines the optimizer and its hyperparameters by calling `[sits_tuning_hparams()](https://rdrr.io/pkg/sits/man/sits_tuning_hparams.html)`, as shown in the example below.
* `trials`: Number of trials to run the random search.
* `multicores`: Number of cores to be used for the procedure.
* `progress`: Show a progress bar?

The `[sits_tuning_hparams()](https://rdrr.io/pkg/sits/man/sits_tuning_hparams.html)` function inside `[sits_tuning()](https://rdrr.io/pkg/sits/man/sits_tuning.html)` allows defining optimizers and their hyperparameters, including `lr` (learning rate), `eps` (controls numerical stability), and `weight_decay` (controls overfitting). The default values for `eps` and `weight_decay` in all `sits` deep learning functions are 1e\-08 and 1e\-06, respectively. The default `lr` for `[sits_lighttae()](https://rdrr.io/pkg/sits/man/sits_lighttae.html)` and `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` is 0\.005\.

Users have different ways to randomize the hyperparameters, including:

* `choice()` (a list of options);
* `uniform` (a uniform distribution);
* `randint` (random integers from a uniform distribution);
* `normal(mean, sd)` (normal distribution);
* `beta(shape1, shape2)` (beta distribution);
* `loguniform(max, min)` (loguniform distribution).

We suggest using the log\-uniform distribution to search over a wide range of values that span several orders of magnitude. This is common for hyperparameters like learning rates, which can vary from very small values (e.g., 0\.0001\) to larger values (e.g., 1\.0\) in a logarithmic manner. By default, `[sits_tuning()](https://rdrr.io/pkg/sits/man/sits_tuning.html)` uses a loguniform distribution between \\(10^{-2}\\) and \\(10^{-4}\\) for the learning rate and the same distribution between \\(10^{-2}\\) and \\(10^{-8}\\) for the weight decay.
``` tuned <- [sits_tuning](https://rdrr.io/pkg/sits/man/sits_tuning.html)( samples = samples_matogrosso_mod13q1, ml_method = [sits_lighttae](https://rdrr.io/pkg/sits/man/sits_lighttae.html)(), params = [sits_tuning_hparams](https://rdrr.io/pkg/sits/man/sits_tuning_hparams.html)( optimizer = torch::[optim_adamw](https://rdrr.io/pkg/torch/man/optim_adamw.html), opt_hparams = [list](https://rdrr.io/r/base/list.html)( lr = loguniform(10^-2, 10^-4), weight_decay = loguniform(10^-2, 10^-8) ) ), trials = 40, multicores = 6, progress = FALSE ) ``` The result is a tibble with different values of accuracy, kappa, decision matrix, and hyperparameters. The best results reach accuracy values between 0\.970 and 0\.978, as shown below. The best result is obtained by a learning rate of 0\.0013 and a weight decay of 3\.73e\-07\. The worst result has an accuracy of 0\.891, which shows the importance of the tuning procedure. ``` # Obtain accuracy, kappa, lr, and weight decay for the 5 best results # Hyperparameters are organized as a list hparams_5 <- tuned[1:5, ]$opt_hparams # Extract learning rate and weight decay from the list lr_5 <- purrr::[map_dbl](https://purrr.tidyverse.org/reference/map.html)(hparams_5, function(h) h$lr) wd_5 <- purrr::[map_dbl](https://purrr.tidyverse.org/reference/map.html)(hparams_5, function(h) h$weight_decay) # Create a tibble to display the results best_5 <- tibble::[tibble](https://tibble.tidyverse.org/reference/tibble.html)( accuracy = tuned[1:5, ]$accuracy, kappa = tuned[1:5, ]$kappa, lr = lr_5, weight_decay = wd_5 ) # Print the best five combinations of hyperparameters best_5 ``` ``` #> # A tibble: 5 × 4 #> accuracy kappa lr weight_decay #> <dbl> <dbl> <dbl> <dbl> #> 1 0.978 0.974 0.00136 0.000000373 #> 2 0.975 0.970 0.00269 0.0000000861 #> 3 0.973 0.967 0.00162 0.00218 #> 4 0.970 0.964 0.000378 0.00000868 #> 5 0.970 0.964 0.00198 0.00000275 ``` For large datasets, the tuning process is time\-consuming. Despite this cost, it is recommended whenever the best possible performance is needed. In general, tuning hyperparameters for models such as `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` and `[sits_lighttae()](https://rdrr.io/pkg/sits/man/sits_lighttae.html)` will result in a slight improvement in overall accuracy over the default parameters. The performance gain will be stronger in the less well\-represented classes, where significant gains in producer’s and user’s accuracies are possible. When detecting change in less frequent classes, tuning can make a substantial difference in the results. Considerations on model choice ------------------------------ The results should not be taken as an indication of which method performs better. The most crucial factor for achieving a good result is the quality of the training data [\[31]](references.html#ref-Maxwell2018). Experience shows that classification quality depends on the training samples and how well the model matches these samples. For examples of ML for classifying large areas, please see the papers by the authors [\[7]](references.html#ref-Ferreira2020a), [\[50]](references.html#ref-Picoli2018), [\[78]](references.html#ref-Picoli2020a), [\[79]](references.html#ref-Simoes2020). In the specific case of satellite image time series, Russwurm et al. present a comparative study of seven deep neural networks for the classification of agricultural crops, using random forest as a baseline [\[68]](references.html#ref-Russwurm2020). The data consists of Sentinel\-2 images over Brittany, France. 
Their results indicate only a slight advantage of the best model (an attention\-based transformer) over TempCNN and random forest. Attention\-based models obtain accuracy ranging from 80\-81%, TempCNN gets 78\-80%, and random forest obtains 78%. Based on this result and also on the authors’ experience, we make the following recommendations: * Random forest provides a good baseline for image time series classification and should be included in users’ assessments. * XGBoost is a worthy alternative to random forest. In principle, XGBoost is more sensitive to data variations at the cost of possible overfitting. * TempCNN is a reliable model with reasonable training time, which is close to the state\-of\-the\-art in deep learning classifiers for image time series. * Attention\-based models (TAE and LightTAE) can achieve the best overall performance with well\-designed and balanced training sets and hyperparameter tuning. The best means of improving classification performance is to provide an accurate and reliable training dataset. Accuracy improvements from using deep learning methods instead of random forest or XGBoost are on the order of 3\-5%, while improvements in training data quality can raise results by 10\-30%. As a basic rule, make sure you have good\-quality samples before training and classification. A minimal sketch of how such a comparison can be set up in `sits` is shown below. 
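The sketch assumes the `sits` functions `sits_kfold_validate()` and `sits_xgboost()` are available in addition to the methods used in this chapter, and it reuses the `samples_matogrosso_mod13q1` training set from the tuning example above. All methods run with their default hyperparameters, so treat it as a template for a baseline comparison rather than a benchmark. 

```
# Packages assumed: sits (methods and validation) and sitsdata (training samples)
library(sits)
library(sitsdata)

# Cross-validate candidate methods on the same samples (5 folds each)
acc_rfor <- sits_kfold_validate(
  samples_matogrosso_mod13q1,
  folds = 5,
  ml_method = sits_rfor()
)
acc_xgb <- sits_kfold_validate(
  samples_matogrosso_mod13q1,
  folds = 5,
  ml_method = sits_xgboost()
)
acc_tcnn <- sits_kfold_validate(
  samples_matogrosso_mod13q1,
  folds = 5,
  ml_method = sits_tempcnn()
)

# Each call returns an accuracy assessment that can be printed and compared
# before committing to a single model for large-area classification
acc_rfor
acc_xgb
acc_tcnn
```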
Machine Learning
e-sensing.github.io
https://e-sensing.github.io/sitsbook/classification-of-raster-data-cubes.html
Classification of raster data cubes =================================== This Chapter discusses how to classify data cubes by providing a step\-by\-step example. Our study area is the state of Rondonia, Brazil, which underwent substantial deforestation in recent decades. The objective of the case study is to detect deforested areas. Data cube for case study ------------------------ The examples of this chapter use a pre\-built data cube of Sentinel\-2 images, available in the package `sitsdata`. These images are from the `SENTINEL-2-L2A` collection in Microsoft Planetary Computer (`MPC`). The data consists of bands B02, B8A, and B11, and indexes NDVI, EVI and NBR in a small area of \\(1200 \\times 1200\\) pixels in the state of Rondonia. As explained in Chapter [Earth observation data cubes](https://e-sensing.github.io/sitsbook/earth-observation-data-cubes.html), we must inform `sits` how to parse these file names to obtain tile, date, and band information. Image files are named according to the convention “satellite\_sensor\_tile\_band\_date” (e.g., `SENTINEL-2_MSI_20LKP_B02_2020_06_04.tif`), which is the default format in `sits`. ``` # Files are available in a local directory data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Rondonia-20LMR/", package = "sitsdata" ) # Read data cube rondonia_20LMR <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "MPC", collection = "SENTINEL-2-L2A", data_dir = data_dir ) # Plot the cube [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_20LMR, date = "2022-07-16", band = "NDVI") ``` Figure 79: NDVI image of the cube for date 2022\-07\-16 (© EU Copernicus Sentinel Programme; source: Microsoft). Training data for the case study -------------------------------- This case study uses the training dataset `samples_deforestation_rondonia`, available in package `sitsdata`. This dataset consists of 6007 samples collected from Sentinel\-2 images covering the state of Rondonia. There are nine classes: `Clear_Cut_Bare_Soil`, `Clear_Cut_Burned_Area`, `Mountainside_Forest`, `Forest`, `Riparian_Forest`, `Clear_Cut_Vegetation`, `Water`, `Wetland`, and `Seasonally_Flooded`. Each time series contains values from Sentinel\-2/2A bands B02, B03, B04, B05, B06, B07, B8A, B08, B11 and B12, from 2022\-01\-05 to 2022\-12\-23 in 16\-day intervals. The samples are intended to detect deforestation events and have been collected by remote sensing experts using visual interpretation. ``` [library](https://rdrr.io/r/base/library.html)([sitsdata](https://github.com/e-sensing/sitsdata/)) # Obtain the samples [data](https://rdrr.io/r/utils/data.html)("samples_deforestation_rondonia") # Show the contents of the samples [summary](https://rdrr.io/r/base/summary.html)(samples_deforestation_rondonia) ``` ``` #> # A tibble: 9 × 3 #> label count prop #> <chr> <int> <dbl> #> 1 Clear_Cut_Bare_Soil 944 0.157 #> 2 Clear_Cut_Burned_Area 983 0.164 #> 3 Clear_Cut_Vegetation 603 0.100 #> 4 Forest 964 0.160 #> 5 Mountainside_Forest 211 0.0351 #> 6 Riparian_Forest 1247 0.208 #> 7 Seasonally_Flooded 731 0.122 #> 8 Water 109 0.0181 #> 9 Wetland 215 0.0358 ``` It is helpful to plot the basic patterns associated with the samples to understand the training set better. The function `[sits_patterns()](https://rdrr.io/pkg/sits/man/sits_patterns.html)` uses a generalized additive model (GAM) to predict a smooth, idealized approximation to the time series associated with each class for all bands. 
Since the data cube used in the classification has 10 bands, we obtain the indexes NDVI, EVI and NBR before showing the patterns. ``` samples_deforestation_rondonia |> [sits_apply](https://rdrr.io/pkg/sits/man/sits_apply.html)(NDVI = (B08 - B04) / (B08 + B04)) |> [sits_apply](https://rdrr.io/pkg/sits/man/sits_apply.html)(NBR = (B08 - B12) / (B08 + B12)) |> [sits_apply](https://rdrr.io/pkg/sits/man/sits_apply.html)(EVI = 2.5 * (B08 - B04) / ((B08 + 6.0 * B04 - 7.5 * B02) + 1.0)) |> [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)(bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI", "NBR")) |> [sits_patterns](https://rdrr.io/pkg/sits/man/sits_patterns.html)() |> [plot](https://rdrr.io/r/graphics/plot.default.html)() ``` Figure 80: Patterns associated with the training samples (source: authors). The patterns show different temporal responses for the selected classes. They match the typical behavior of deforestation in the Amazon. In most cases, the forest is cut at the start of the dry season (May/June). At the end of the dry season, some clear\-cut areas are burned to clean the remains; this action is reflected in the steep fall of B11 values of burned area samples after August. The areas where native trees have been cut but some vegetation remains (“Clear\_Cut\_Vegetation”) have values in the B8A band that increase during the period. Training machine learning models -------------------------------- The next step is to train a machine learning model to illustrate CPU\-based classification. We build a random forest model using `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` and then create a plot to find out which variables are most important for the model. ``` # set the seed to get the same result [set.seed](https://rdrr.io/r/base/Random.html)(03022024) # Train model using random forest model rfor_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples_deforestation_rondonia, ml_method = [sits_rfor](https://rdrr.io/pkg/sits/man/sits_rfor.html)() ) ``` ``` [plot](https://rdrr.io/r/graphics/plot.default.html)(rfor_model) ``` Figure 81: Most relevant variables of the Random Forest model (source: authors). The figure shows that EVI index values on dates 9 (“2022\-05\-13”) and 15 (“2022\-08\-17”) are the most informative variables for the random forest model. These bands and dates represent inflection points in the image time series. Classification of machine learning models in CPUs ------------------------------------------------- By default, all classification algorithms in `sits` use CPU\-based parallel processing, done internally by the package. The algorithms are adaptable; the only requirement for users is to inform the configuration of their machines. To achieve efficiency, `sits` implements a fault\-tolerant multitasking procedure, using a cluster of independent workers linked to a virtual machine. To avoid communication overhead, all large payloads are read and stored independently; direct interaction between the main process and the workers is kept at a minimum. Details of CPU\-based parallel processing in `sits` can be found in the [Technical annex](https://e-sensing.github.io/sitsbook/technical-annex.html). To classify both data cubes and sets of time series, use `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`, which uses parallel processing to speed up performance, as described later in this Chapter. 
Its most relevant parameters are: (a) `data`, either a data cube or a set of time series; (b) `ml_model`, a trained model using one of the machine learning methods provided; (c) `multicores`, number of CPU cores that will be used for processing; (d) `memsize`, memory available for classification; (e) `output_dir`, directory where results will be stored; (f) `version`, for version control. To follow the processing steps, turn on the parameters `verbose` to print information and `progress` to get a progress bar. The classification result is a data cube with a set of probability layers, one for each output class. Each probability layer contains the model’s assessment of how likely it is that each pixel belongs to the related class. The probability cube can be visualized with `[plot()](https://rdrr.io/r/graphics/plot.default.html)`. In this example, we show only the probabilities associated with the label “Forest”. ``` # Classify data cube to obtain a probability cube rondonia_20LMR_probs <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( data = rondonia_20LMR, ml_model = rfor_model, output_dir = "./tempdir/chp9", version = "rf-raster", multicores = 4, memsize = 16 ) [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_20LMR_probs, labels = "Forest", palette = "YlGn") ``` Figure 82: Probabilities for class Forest (source: authors). The probability cube provides information on the output values of the algorithm for each class. Most probability maps contain outliers or misclassified pixels. The labeled map generated from the pixel\-based time series classification method exhibits several misclassified pixels, which appear as small patches surrounded by a different class. This occurrence of outliers is a common issue inherent to this classification approach. Regardless of their resolution, mixed pixels are prevalent in images, and each class exhibits considerable data variability. As a result, these factors can lead to outliers that are more likely to be misclassified. To overcome this limitation, `sits` employs post\-processing smoothing techniques that leverage the spatial context of the probability cubes to refine the results. These techniques will be discussed in Chapter [Bayesian smoothing for post\-processing](https://e-sensing.github.io/sitsbook/bayesian-smoothing-for-post-processing.html). In what follows, we will generate the smoothed cube to illustrate the procedure. ``` # Smoothen a probability cube rondonia_20LMR_bayes <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)( cube = rondonia_20LMR_probs, output_dir = "./tempdir/chp9", version = "rf-raster", multicores = 4, memsize = 16 ) [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_20LMR_bayes, labels = [c](https://rdrr.io/r/base/c.html)("Forest"), palette = "YlGn") ``` Figure 83: Smoothened probabilities for class Forest (source: authors). In general, users should perform a post\-processing smoothing after obtaining the probability maps in raster format. After the post\-processing operation, we apply `[sits_label_classification()](https://rdrr.io/pkg/sits/man/sits_label_classification.html)` to obtain a map with the most likely class for each pixel. For each pixel, the `[sits_label_classification()](https://rdrr.io/pkg/sits/man/sits_label_classification.html)` function takes the label with the highest probability and assigns it to the resulting map. The output is a labelled map with classes. 
``` # Generate a thematic map rondonia_20LMR_class <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)( cube = rondonia_20LMR_bayes, multicores = 4, memsize = 12, output_dir = "./tempdir/chp9", version = "rf-raster" ) # Plot the thematic map [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_20LMR_class, legend_text_size = 0.7 ) ``` Figure 84: Final map of deforestation obtained by the random forest model (source: authors). Training and running deep learning models ----------------------------------------- The next examples show how to run deep learning models in `sits`. The case study uses the Temporal CNN model [\[65]](references.html#ref-Pelletier2019), which is described in Chapter [Machine learning for data cubes](https://e-sensing.github.io/sitsbook/machine-learning-for-data-cubes.html). We first show the need for model tuning, before applying the model to data cube classification. ### Deep learning model tuning In the example, we use `[sits_tuning()](https://rdrr.io/pkg/sits/man/sits_tuning.html)` to find good hyperparameters to train the `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` algorithm for the Rondonia dataset. The hyperparameters for the `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` method include the size of the layers, convolution kernels, dropout rates, learning rate, and weight decay. Please refer to the description of the Temporal CNN algorithm in Chapter [Machine learning for data cubes](https://e-sensing.github.io/sitsbook/machine-learning-for-data-cubes.html). ``` tuned_tempcnn <- [sits_tuning](https://rdrr.io/pkg/sits/man/sits_tuning.html)( samples = samples_deforestation_rondonia, ml_method = [sits_tempcnn](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)(), params = [sits_tuning_hparams](https://rdrr.io/pkg/sits/man/sits_tuning_hparams.html)( cnn_layers = choice([c](https://rdrr.io/r/base/c.html)(256, 256, 256), [c](https://rdrr.io/r/base/c.html)(128, 128, 128), [c](https://rdrr.io/r/base/c.html)(64, 64, 64)), cnn_kernels = choice([c](https://rdrr.io/r/base/c.html)(3, 3, 3), [c](https://rdrr.io/r/base/c.html)(5, 5, 5), [c](https://rdrr.io/r/base/c.html)(7, 7, 7)), cnn_dropout_rates = choice( [c](https://rdrr.io/r/base/c.html)(0.15, 0.15, 0.15), [c](https://rdrr.io/r/base/c.html)(0.2, 0.2, 0.2), [c](https://rdrr.io/r/base/c.html)(0.3, 0.3, 0.3), [c](https://rdrr.io/r/base/c.html)(0.4, 0.4, 0.4) ), optimizer = torch::[optim_adamw](https://rdrr.io/pkg/torch/man/optim_adamw.html), opt_hparams = [list](https://rdrr.io/r/base/list.html)( lr = loguniform(10^-2, 10^-4), weight_decay = loguniform(10^-2, 10^-8) ) ), trials = 50, multicores = 4 ) ``` The result of `[sits_tuning()](https://rdrr.io/pkg/sits/man/sits_tuning.html)` is a tibble with different values of accuracy, kappa, decision matrix, and hyperparameters. The five best results obtain accuracy values between 0\.908 and 0\.939, as shown below. The best result is obtained by a learning rate of 3\.76e\-04 and a weight decay of 1\.5e\-04, with three CNN layers of size 256, kernels of size 5, and dropout rates of 0\.2\. 
``` # Obtain accuracy, kappa, cnn_layers, cnn_kernels, and cnn_dropout_rates for the best result cnn_params <- tuned_tempcnn[1, [c](https://rdrr.io/r/base/c.html)("accuracy", "kappa", "cnn_layers", "cnn_kernels", "cnn_dropout_rates"), ] # Learning rates and weight decay are organized as a list hparams_best <- tuned_tempcnn[1, ]$opt_hparams[[1]] # Extract learning rate and weight decay lr_wd <- tibble::[tibble](https://tibble.tidyverse.org/reference/tibble.html)( lr_best = hparams_best$lr, wd_best = hparams_best$weight_decay ) # Print the best parameters dplyr::[bind_cols](https://dplyr.tidyverse.org/reference/bind_cols.html)(cnn_params, lr_wd) ``` ``` #> # A tibble: 1 × 7 #> accuracy kappa cnn_layers cnn_kernels cnn_dropout_rates lr_best wd_best #> <dbl> <dbl> <chr> <chr> <chr> <dbl> <dbl> #> 1 0.939 0.929 c(256, 256, 256) c(5, 5, 5) c(0.2, 0.2, 0.2) 0.000376 1.53e-4 ``` ### Classification in GPUs using parallel processing Deep learning time series classification methods in `sits`, which include `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)`, `[sits_mlp()](https://rdrr.io/pkg/sits/man/sits_mlp.html)`, `[sits_lighttae()](https://rdrr.io/pkg/sits/man/sits_lighttae.html)` and `[sits_tae()](https://rdrr.io/pkg/sits/man/sits_tae.html)`, are written using the `torch` package, which is an adaptation of PyTorch to the R environment. These algorithms can use a CUDA\-compatible NVIDIA GPU if one is available and has been properly configured. Please refer to the `torch` [installation guide](https://torch.mlverse.org/docs/articles/installation) for details on how to configure `torch` to use GPUs. If no GPU is available, these algorithms will run on regular CPUs, using the same parallelization methods described for the traditional machine learning methods. Typically, there is a 10\-fold performance increase when running `torch`\-based methods on GPUs relative to their processing time on CPUs. To illustrate the use of GPUs, we take the same data cube and training data used in the previous examples and use a Temporal CNN method. The first step is to obtain a deep learning model using the hyperparameters produced by the tuning procedure shown earlier. We run: ``` tcnn_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples_deforestation_rondonia, [sits_tempcnn](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)( cnn_layers = [c](https://rdrr.io/r/base/c.html)(256, 256, 256), cnn_kernels = [c](https://rdrr.io/r/base/c.html)(5, 5, 5), cnn_dropout_rates = [c](https://rdrr.io/r/base/c.html)(0.2, 0.2, 0.2), opt_hparams = [list](https://rdrr.io/r/base/list.html)( lr = 0.000376, weight_decay = 0.000153 ) ) ) ``` After training the model, we classify the data cube. If a GPU is available, users need to provide the additional parameter `gpu_memory` to `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`. This information will be used by `sits` to optimize access to the GPU and speed up processing. ``` rondonia_20LMR_probs_tcnn <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( rondonia_20LMR, ml_model = tcnn_model, output_dir = "./tempdir/chp9", version = "tcnn-raster", gpu_memory = 16, multicores = 6, memsize = 24 ) ``` After classification, we can smooth the probability cube and then label the resulting smoothed probabilities to obtain a classified map. 
``` # Smoothen the probability map rondonia_20LMR_bayes_tcnn <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)( rondonia_20LMR_probs_tcnn, output_dir = "./tempdir/chp9", version = "tcnn-raster", multicores = 6, memsize = 24 ) # Obtain the final labelled map rondonia_20LMR_class_tcnn <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)( rondonia_20LMR_bayes_tcnn, output_dir = "./tempdir/chp9", version = "tcnn-raster", multicores = 6, memsize = 24 ) ``` ``` # plot the final classification map [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_20LMR_class_tcnn, legend_text_size = 0.7 ) ``` Figure 85: Final map of deforestation obtained using TempCNN model (source: authors). Map reclassification -------------------- Reclassification of a remote sensing map refers to changing the classes assigned to different pixels in the image. The purpose of reclassification is to modify the information contained in the image to better suit a specific use case. In `sits`, reclassification involves assigning new classes to pixels based on additional information from a reference map. Users define rules according to the desired outcome. These rules are then applied to the classified map to produce a new map with updated classes. To illustrate the reclassification in `sits`, we take a classified data cube stored in the `sitsdata` package. As discussed in Chapter [Earth observation data cubes](https://e-sensing.github.io/sitsbook/earth-observation-data-cubes.html), `sits` can create a data cube from a classified image file. Users need to provide the original data source and collection, the directory where data is stored (`data_dir`), the information on how to retrieve data cube parameters from file names (`parse_info`), and the labels used in the classification. ``` # Open classification map data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Rondonia-Class", package = "sitsdata") rondonia_class <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "MPC", collection = "SENTINEL-2-L2A", data_dir = data_dir, parse_info = [c](https://rdrr.io/r/base/c.html)( "satellite", "sensor", "tile", "start_date", "end_date", "band", "version" ), bands = "class", labels = [c](https://rdrr.io/r/base/c.html)( "1" = "Water", "2" = "Clear_Cut_Burned_Area", "3" = "Clear_Cut_Bare_Soil", "4" = "Clear_Cut_Vegetation", "5" = "Forest", "6" = "Bare_Soil", "7" = "Wetland" ) ) [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_class, legend_text_size = 0.7 ) ``` Figure 86: Original classification map (source: authors). The above map shows the total extent of deforestation by clear cuts estimated by the `sits` random forest algorithm in an area in Rondonia, Brazil, based on a time series of Sentinel\-2 images for the period 2020\-06\-04 to 2021\-08\-26\. Suppose we want to estimate the deforestation that occurred from June 2020 to August 2021\. We need a reference map containing information on forest cuts before 2020\. In this example, we use as a reference the PRODES deforestation map of Amazonia created by Brazil’s National Institute for Space Research (INPE). This map is produced by visual interpretation. PRODES measures deforestation every year, starting from August of one year to July of the following year. It contains classes that represent the natural world (Forest, Water, NonForest, and NonForest2\) and classes that capture the yearly deforestation increments. 
These classes are named “dYYYY” and “rYYYY”; the first refers to deforestation in a given year (e.g., “d2008” for deforestation from August 2007 to July 2008\); the second to places where the satellite data is not sufficient to determine the land class (e.g., “r2010” for 2010\). This map is available in the package `sitsdata`, as shown below. ``` data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/PRODES", package = "sitsdata") prodes_2021 <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "USGS", collection = "LANDSAT-C2L2-SR", data_dir = data_dir, parse_info = [c](https://rdrr.io/r/base/c.html)( "product", "sensor", "tile", "start_date", "end_date", "band", "version" ), bands = "class", version = "v20220606", labels = [c](https://rdrr.io/r/base/c.html)( "1" = "Forest", "2" = "Water", "3" = "NonForest", "4" = "NonForest2", "6" = "d2007", "7" = "d2008", "8" = "d2009", "9" = "d2010", "10" = "d2011", "11" = "d2012", "12" = "d2013", "13" = "d2014", "14" = "d2015", "15" = "d2016", "16" = "d2017", "17" = "d2018", "18" = "r2010", "19" = "r2011", "20" = "r2012", "21" = "r2013", "22" = "r2014", "23" = "r2015", "24" = "r2016", "25" = "r2017", "26" = "r2018", "27" = "d2019", "28" = "r2019", "29" = "d2020", "31" = "r2020", "32" = "Clouds2021", "33" = "d2021", "34" = "r2021" ) ) ``` Since the labels of the deforestation map are specialized and are not part of the default `sits` color table, we define a legend for better visualization of the different deforestation classes. ``` # Use the RColorBrewer palette "YlOrBr" for the deforestation years colors <- grDevices::[hcl.colors](https://rdrr.io/r/grDevices/palettes.html)(n = 15, palette = "YlOrBr") # Define the legend for the deforestation map def_legend <- [c](https://rdrr.io/r/base/c.html)( "Forest" = "forestgreen", "Water" = "dodgerblue3", "NonForest" = "bisque2", "NonForest2" = "bisque2", "d2007" = colors[1], "d2008" = colors[2], "d2009" = colors[3], "d2010" = colors[4], "d2011" = colors[5], "d2012" = colors[6], "d2013" = colors[7], "d2014" = colors[8], "d2015" = colors[9], "d2016" = colors[10], "d2017" = colors[11], "d2018" = colors[12], "d2019" = colors[13], "d2020" = colors[14], "d2021" = colors[15], "r2010" = "lightcyan", "r2011" = "lightcyan", "r2012" = "lightcyan", "r2013" = "lightcyan", "r2014" = "lightcyan", "r2015" = "lightcyan", "r2016" = "lightcyan", "r2017" = "lightcyan", "r2018" = "lightcyan", "r2019" = "lightcyan", "r2020" = "lightcyan", "r2021" = "lightcyan", "Clouds2021" = "lightblue2" ) ``` Using this new legend, we can visualize the PRODES deforestation map. ``` [sits_view](https://rdrr.io/pkg/sits/man/sits_view.html)(prodes_2021, legend = def_legend) ``` Figure 87: Deforestation map produced by PRODES (source: authors). Taking the PRODES map as our reference, we can include new labels in the classified map produced by `sits` using `[sits_reclassify()](https://rdrr.io/pkg/sits/man/sits_reclassify.html)`. The new class name Deforestation\_Mask will be applied to all pixels that PRODES considers to have been deforested before July 2020\. We also add a Non\_Forest class for all pixels that PRODES takes as not covered by native vegetation, such as wetlands and rocky areas. The PRODES classes will be used as a mask over the `sits` deforestation map. 
The `[sits_reclassify()](https://rdrr.io/pkg/sits/man/sits_reclassify.html)` operation requires the following parameters: (a) `cube`, the classified data cube whose pixels will be reclassified; (b) `mask`, the reference data cube used as a mask; (c) `rules`, a named list. The names of the `rules` list will be the new labels. Each new label is associated with a `mask` vector that includes the labels of the reference map that will be joined. `[sits_reclassify()](https://rdrr.io/pkg/sits/man/sits_reclassify.html)` then compares the original and reference map pixel by pixel. For each pixel of the reference map whose label appears in one of the `rules`, the algorithm relabels the corresponding pixel of the original map. The result will be a reclassified map with the original labels plus the new labels that have been masked using the reference map. ``` # Reclassify cube rondonia_def_2021 <- [sits_reclassify](https://rdrr.io/pkg/sits/man/sits_reclassify.html)( cube = rondonia_class, mask = prodes_2021, rules = [list](https://rdrr.io/r/base/list.html)( "Non_Forest" = mask [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)("NonForest", "NonForest2"), "Deforestation_Mask" = mask [%in%](https://rdrr.io/r/base/match.html) [c](https://rdrr.io/r/base/c.html)( "d2007", "d2008", "d2009", "d2010", "d2011", "d2012", "d2013", "d2014", "d2015", "d2016", "d2017", "d2018", "d2019", "d2020", "r2010", "r2011", "r2012", "r2013", "r2014", "r2015", "r2016", "r2017", "r2018", "r2019", "r2020", "r2021" ), "Water" = mask == "Water" ), memsize = 8, multicores = 2, output_dir = "./tempdir/chp9", version = "reclass" ) # Plot the reclassified map [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_def_2021, legend_text_size = 0.7 ) ``` Figure 88: Deforestation map by sits masked by PRODES map (source: authors). The reclassified map has been split into deforestation before mid\-2020 (using the PRODES map) and the areas classified by `sits` that are taken as being deforested from mid\-2020 to mid\-2021\. This allows experts to measure how much deforestation occurred in this period according to `sits` and compare the result with the PRODES map. The `[sits_reclassify()](https://rdrr.io/pkg/sits/man/sits_reclassify.html)` function is not restricted to comparing deforestation maps. It can be used in any case that requires masking of a result based on a reference map. 
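As a purely illustrative example of such a use case, the sketch below applies the same rule syntax to mask out non\-crop and cloudy pixels of a hypothetical crop map; `crop_map` and `reference_mask` are placeholder cubes, not objects defined in this chapter. 

```
# Hypothetical reclassification: mask non-crop areas of a classified cube
# using an external reference map whose labels include "Water", "Urban"
# and "Cloud" (placeholder names)
crop_map_masked <- sits_reclassify(
  cube = crop_map,
  mask = reference_mask,
  rules = list(
    "Non_Crop" = mask %in% c("Water", "Urban"),
    "No_Data"  = mask == "Cloud"
  ),
  memsize = 8,
  multicores = 2,
  output_dir = "./tempdir/chp9",
  version = "masked"
)
```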
``` # Files are available in a local directory data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Rondonia-20LMR/", package = "sitsdata" ) # Read data cube rondonia_20LMR <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "MPC", collection = "SENTINEL-2-L2A", data_dir = data_dir ) # Plot the cube [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_20LMR, date = "2022-07-16", band = "NDVI") ``` Figure 79: Color composite image of the cube for date 2023\-07\-16 (© EU Copernicus Sentinel Programme; source: Microsoft). Training data for the case study -------------------------------- This case study uses the training dataset `samples_deforestation_rondonia`, available in package `sitsdata`. This dataset consists of 6007 samples collected from Sentinel\-2 images covering the state of Rondonia. There are nine classes: `Clear_Cut_Bare_Soil`, `Clear_Cut_Burned_Area`, `Mountainside_Forest`, `Forest`, `Riparian_Forest`, `Clear_Cut_Vegetation`, `Water`, `Wetland`, and `Seasonally_Flooded`. Each time series contains values from Sentinel\-2/2A bands B02, B03, B04, B05, B06, B07, B8A, B08, B11 and B12, from 2022\-01\-05 to 2022\-12\-23 in 16\-day intervals. The samples are intended to detect deforestation events and have been collected by remote sensing experts using visual interpretation. f ``` [library](https://rdrr.io/r/base/library.html)([sitsdata](https://github.com/e-sensing/sitsdata/)) # Obtain the samples [data](https://rdrr.io/r/utils/data.html)("samples_deforestation_rondonia") # Show the contents of the samples [summary](https://rdrr.io/r/base/summary.html)(samples_deforestation_rondonia) ``` ``` #> # A tibble: 9 × 3 #> label count prop #> <chr> <int> <dbl> #> 1 Clear_Cut_Bare_Soil 944 0.157 #> 2 Clear_Cut_Burned_Area 983 0.164 #> 3 Clear_Cut_Vegetation 603 0.100 #> 4 Forest 964 0.160 #> 5 Mountainside_Forest 211 0.0351 #> 6 Riparian_Forest 1247 0.208 #> 7 Seasonally_Flooded 731 0.122 #> 8 Water 109 0.0181 #> 9 Wetland 215 0.0358 ``` It is helpful to plot the basic patterns associated with the samples to understand the training set better. The function `[sits_patterns()](https://rdrr.io/pkg/sits/man/sits_patterns.html)` uses a generalized additive model (GAM) to predict a smooth, idealized approximation to the time series associated with each class for all bands. Since the data cube used in the classification has 10 bands, we obtain the indexes NDVI, EVI and NBR before showing the patterns. ``` samples_deforestation_rondonia |> [sits_apply](https://rdrr.io/pkg/sits/man/sits_apply.html)(NDVI = (B08 - B04) / (B08 + B04)) |> [sits_apply](https://rdrr.io/pkg/sits/man/sits_apply.html)(NBR = (B08 - B12) / (B08 + B12)) |> [sits_apply](https://rdrr.io/pkg/sits/man/sits_apply.html)(EVI = 2.5 * (B08 - B04) / ((B08 + 6.0 * B04 - 7.5 * B02) + 1.0)) |> [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)(bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI", "NBR")) |> [sits_patterns](https://rdrr.io/pkg/sits/man/sits_patterns.html)() |> [plot](https://rdrr.io/r/graphics/plot.default.html)() ``` Figure 80: Patterns associated to the training samples (source: authors). The patterns show different temporal responses for the selected classes. They match the typical behavior of deforestation in the Amazon. In most cases, the forest is cut at the start of the dry season (May/June). 
At the end of the dry season, some clear\-cut areas are burned to clean the remains; this action is reflected in the steep fall of the response of B11 values of burned area samples after August. (….) The areas where native trees have been cut but some vegatation remain (“Clear\_Cut\_Vegetation”) have values in the B8A band that increase during the period. Training machine learning models -------------------------------- The next step is to train a machine learning model to illustrate CPU\-based classification. We build a random forest model using `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` and then create a plot to find out what are the most important variables for the model. ``` # set the seed to get the same result [set.seed](https://rdrr.io/r/base/Random.html)(03022024) # Train model using random forest model rfor_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples_deforestation_rondonia, ml_method = [sits_rfor](https://rdrr.io/pkg/sits/man/sits_rfor.html)() ) ``` ``` [plot](https://rdrr.io/r/graphics/plot.default.html)(rfor_model) ``` Figure 81: Most relevant variables of the Random Forest model (source: authors). The figure shows that EVI index values on dates 9 (“2022\-05\-13”) and 15 (“2022\-08\-17”) are the most informative variables for the random forest model. These bands and dates represent inflection points in the image time series. Classification of machine learning models in CPUs ------------------------------------------------- By default, all classification algorithms in `sits` use CPU\-based parallel processing, done internally by the package. The algorithms are adaptable; the only requirement for users is to inform the configuration of their machines. To achieve efficiency, `sits` implements a fault\-tolerant multitasking procedure, using a cluster of independent workers linked to a virtual machine. To avoid communication overhead, all large payloads are read and stored independently; direct interaction between the main process and the workers is kept at a minimum. Details of CPU\-based parallel processing in `sits` can be found in the [Technical annex](https://e-sensing.github.io/sitsbook/technical-annex.html). To classify both data cubes and sets of time series, use `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`, which uses parallel processing to speed up the performance, as described at the end of this Chapter. Its most relevant parameters are: (a) `data`, either a data cube or a set of time series; (b) `ml_model`, a trained model using one of the machine learning methods provided; (c) `multicores`, number of CPU cores that will be used for processing; (d) `memsize`, memory available for classification; (e) `output_dir`, directory where results will be stored; (f) `version`, for version control. To follow the processing steps, turn on the parameters `verbose` to print information and `progress` to get a progress bar. The classification result is a data cube with a set of probability layers, one for each output class. Each probability layer contains the model’s assessment of how likely each pixel belongs to the related class. The probability cube can be visualized with `[plot()](https://rdrr.io/r/graphics/plot.default.html)`. In this example, we show only the probabilities associated to label “Forest”. 
``` # Classify data cube to obtain a probability cube rondonia_20LMR_probs <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( data = rondonia_20LMR, ml_model = rfor_model, output_dir = "./tempdir/chp9", version = "rf-raster", multicores = 4, memsize = 16 ) [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_20LMR_probs, labels = "Forest", palette = "YlGn") ``` Figure 82: Probabilities for class Forest (source: authors). The probability cube provides information on the output values of the algorithm for each class. Most probability maps contain outliers or misclassified pixels. The labeled map generated from the pixel\-based time series classification method exhibits several misclassified pixels, which are small patches surrounded by a different class. This occurrence of outliers is a common issue that arises due to the inherent nature of this classification approach. Regardless of their resolution, mixed pixels are prevalent in images, and each class exhibits considerable data variability. As a result, these factors can lead to outliers that are more likely to be misclassified. To overcome this limitation, `sits` employs post\-processing smoothing techniques that leverage the spatial context of the probability cubes to refine the results. These techniques will be discussed in the Chapter [Bayesian smoothing for post\-processing](https://e-sensing.github.io/sitsbook/bayesian-smoothing-for-post-processing.html). In what follows, we will generate the smoothed cube to illustrate the procedure. ``` # Smoothen a probability cube rondonia_20LMR_bayes <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)( cube = rondonia_20LMR_probs, output_dir = "./tempdir/chp9", version = "rf-raster", multicores = 4, memsize = 16 ) [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_20LMR_bayes, labels = [c](https://rdrr.io/r/base/c.html)("Forest"), palette = "YlGn") ``` Figure 83: Smoothened probabilities for class Forest (source: authors). In general, users should perform a post\-processing smoothing after obtaining the probability maps in raster format. After the post\-processing operation, we apply `[sits_label_classification()](https://rdrr.io/pkg/sits/man/sits_label_classification.html)` to obtain a map with the most likely class for each pixel. For each pixel, the `[sits_label_classification()](https://rdrr.io/pkg/sits/man/sits_label_classification.html)` function takes the label with highest probability and assigns it to the resulting map. The output is a labelled map with classes. ``` # Generate a thematic map rondonia_20LMR_class <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)( cube = rondonia_20LMR_bayes, multicores = 4, memsize = 12, output_dir = "./tempdir/chp9", version = "rf-raster" ) # Plot the thematic map [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_20LMR_class, legend_text_size = 0.7 ) ``` Figure 84: Final map of deforestation obtained by random forest model(source: authors). Training and running deep learning models ----------------------------------------- The next examples show how to run deep learning models in `sits`. The case study uses the Temporal CNN model [\[65]](references.html#ref-Pelletier2019), which is described in Chapter [Machine learning for data cubes](https://e-sensing.github.io/sitsbook/machine-learning-for-data-cubes.html). We first show the need for model tuning, before applying the model for data cube classification. 
### Deep learning model tuning In the example, we use `[sits_tuning()](https://rdrr.io/pkg/sits/man/sits_tuning.html)` to find good hyperparameters to train the `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` algorithm for the Rondonia dataset. The hyperparameters for the `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` method include the size of the layers, convolution kernels, dropout rates, learning rate, and weight decay. Please refer to the description of the Temporal CNN algorithm in Chapter [Machine learning for data cubes](https://e-sensing.github.io/sitsbook/machine-learning-for-data-cubes.html) ``` tuned_tempcnn <- [sits_tuning](https://rdrr.io/pkg/sits/man/sits_tuning.html)( samples = samples_deforestation_rondonia, ml_method = [sits_tempcnn](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)(), params = [sits_tuning_hparams](https://rdrr.io/pkg/sits/man/sits_tuning_hparams.html)( cnn_layers = choice([c](https://rdrr.io/r/base/c.html)(256, 256, 256), [c](https://rdrr.io/r/base/c.html)(128, 128, 128), [c](https://rdrr.io/r/base/c.html)(64, 64, 64)), cnn_kernels = choice([c](https://rdrr.io/r/base/c.html)(3, 3, 3), [c](https://rdrr.io/r/base/c.html)(5, 5, 5), [c](https://rdrr.io/r/base/c.html)(7, 7, 7)), cnn_dropout_rates = choice( [c](https://rdrr.io/r/base/c.html)(0.15, 0.15, 0.15), [c](https://rdrr.io/r/base/c.html)(0.2, 0.2, 0.2), [c](https://rdrr.io/r/base/c.html)(0.3, 0.3, 0.3), [c](https://rdrr.io/r/base/c.html)(0.4, 0.4, 0.4) ), optimizer = torch::[optim_adamw](https://rdrr.io/pkg/torch/man/optim_adamw.html), opt_hparams = [list](https://rdrr.io/r/base/list.html)( lr = loguniform(10^-2, 10^-4), weight_decay = loguniform(10^-2, 10^-8) ) ), trials = 50, multicores = 4 ) ``` The result of `[sits_tuning()](https://rdrr.io/pkg/sits/man/sits_tuning.html)` is tibble with different values of accuracy, kappa, decision matrix, and hyperparameters. The five best results obtain accuracy values between 0\.939 and 0\.908, as shown below. The best result is obtained by a learning rate of 3\.76e\-04 and a weight decay of 1\.5e\-04, and three CNN layers of size 256, kernel size of 5, and dropout rates of 0\.2\. ``` # Obtain accuracy, kappa, cnn_layers, cnn_kernels, and cnn_dropout_rates the best result cnn_params <- tuned_tempcnn[1, [c](https://rdrr.io/r/base/c.html)("accuracy", "kappa", "cnn_layers", "cnn_kernels", "cnn_dropout_rates"), ] # Learning rates and weight decay are organized as a list hparams_best <- tuned_tempcnn[1, ]$opt_hparams[[1]] # Extract learning rate and weight decay lr_wd <- tibble::[tibble](https://tibble.tidyverse.org/reference/tibble.html)( lr_best = hparams_best$lr, wd_best = hparams_best$weight_decay ) # Print the best parameters dplyr::[bind_cols](https://dplyr.tidyverse.org/reference/bind_cols.html)(cnn_params, lr_wd) ``` ``` #> # A tibble: 1 × 7 #> accuracy kappa cnn_layers cnn_kernels cnn_dropout_rates lr_best wd_best #> <dbl> <dbl> <chr> <chr> <chr> <dbl> <dbl> #> 1 0.939 0.929 c(256, 256, 256) c(5, 5, 5) c(0.2, 0.2, 0.2) 0.000376 1.53e-4 ``` ### Classification in GPUs using parallel processing Deep learning time series classification methods in `sits`, which include `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)`, `[sits_mlp()](https://rdrr.io/pkg/sits/man/sits_mlp.html)`, `sits_lightae()` and `[sits_tae()](https://rdrr.io/pkg/sits/man/sits_tae.html)`, are written using the `torch` package, which is an adaptation of pyTorch to the R environment. 
These algorithms can use a CUDA\-compatible NVDIA GPU if one is available and has been properly configured. Please refer to the `torch` [installation guide](https://torch.mlverse.org/docs/articles/installation) for details on how to configure `torch` to use GPUs. If no GPU is available, these algorithms will run on regular CPUs, using the same paralellization methods described in the traditional machine learning methods. Typically, there is a 10\-fold performance increase when running `torch` based methods in GPUs relative to their processing time in GPU. To illustrate the use of GPUs, we take the same data cube and training data used in the previous examples and use a Temporal CNN method. The first step is to obtain a deep learning model using the hyperparameters produced by the tuning procedure shown earlier. We run ``` tcnn_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples_deforestation_rondonia, [sits_tempcnn](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)( cnn_layers = [c](https://rdrr.io/r/base/c.html)(256, 256, 256), cnn_kernels = [c](https://rdrr.io/r/base/c.html)(5, 5, 5), cnn_dropout_rates = [c](https://rdrr.io/r/base/c.html)(0.2, 0.2, 0.2), opt_hparams = [list](https://rdrr.io/r/base/list.html)( lr = 0.000376, weight_decay = 0.000153 ) ) ) ``` After training the model, we classify the data cube. If a GPU is available, users need to provide the additional parameter `gpu_memory` to `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`. This information will be used by `sits` to optimize access to the GPU and speed up processing. ``` rondonia_20LMR_probs_tcnn <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( rondonia_20LMR, ml_model = tcnn_model, output_dir = "./tempdir/chp9", version = "tcnn-raster", gpu_memory = 16, multicores = 6, memsize = 24 ) ``` After classification, we can smooth the probability cube and then label the resulting smoothed probabilities to obtain a classified map. ``` # Smoothen the probability map rondonia_20LMR_bayes_tcnn <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)( rondonia_20LMR_probs_tcnn, output_dir = "./tempdir/chp9", version = "tcnn-raster", multicores = 6, memsize = 24 ) # Obtain the final labelled map rondonia_20LMR_class_tcnn <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)( rondonia_20LMR_bayes_tcnn, output_dir = "./tempdir/chp9", version = "tcnn-raster", multicores = 6, memsize = 24 ) ``` ``` # plot the final classification map [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_20LMR_class_tcnn, legend_text_size = 0.7 ) ``` Figure 85: Final map of deforestation obtained using TempCNN model (source: authors). ### Deep learning model tuning In the example, we use `[sits_tuning()](https://rdrr.io/pkg/sits/man/sits_tuning.html)` to find good hyperparameters to train the `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` algorithm for the Rondonia dataset. The hyperparameters for the `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` method include the size of the layers, convolution kernels, dropout rates, learning rate, and weight decay. 
Please refer to the description of the Temporal CNN algorithm in Chapter [Machine learning for data cubes](https://e-sensing.github.io/sitsbook/machine-learning-for-data-cubes.html) ``` tuned_tempcnn <- [sits_tuning](https://rdrr.io/pkg/sits/man/sits_tuning.html)( samples = samples_deforestation_rondonia, ml_method = [sits_tempcnn](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)(), params = [sits_tuning_hparams](https://rdrr.io/pkg/sits/man/sits_tuning_hparams.html)( cnn_layers = choice([c](https://rdrr.io/r/base/c.html)(256, 256, 256), [c](https://rdrr.io/r/base/c.html)(128, 128, 128), [c](https://rdrr.io/r/base/c.html)(64, 64, 64)), cnn_kernels = choice([c](https://rdrr.io/r/base/c.html)(3, 3, 3), [c](https://rdrr.io/r/base/c.html)(5, 5, 5), [c](https://rdrr.io/r/base/c.html)(7, 7, 7)), cnn_dropout_rates = choice( [c](https://rdrr.io/r/base/c.html)(0.15, 0.15, 0.15), [c](https://rdrr.io/r/base/c.html)(0.2, 0.2, 0.2), [c](https://rdrr.io/r/base/c.html)(0.3, 0.3, 0.3), [c](https://rdrr.io/r/base/c.html)(0.4, 0.4, 0.4) ), optimizer = torch::[optim_adamw](https://rdrr.io/pkg/torch/man/optim_adamw.html), opt_hparams = [list](https://rdrr.io/r/base/list.html)( lr = loguniform(10^-2, 10^-4), weight_decay = loguniform(10^-2, 10^-8) ) ), trials = 50, multicores = 4 ) ``` The result of `[sits_tuning()](https://rdrr.io/pkg/sits/man/sits_tuning.html)` is tibble with different values of accuracy, kappa, decision matrix, and hyperparameters. The five best results obtain accuracy values between 0\.939 and 0\.908, as shown below. The best result is obtained by a learning rate of 3\.76e\-04 and a weight decay of 1\.5e\-04, and three CNN layers of size 256, kernel size of 5, and dropout rates of 0\.2\. ``` # Obtain accuracy, kappa, cnn_layers, cnn_kernels, and cnn_dropout_rates the best result cnn_params <- tuned_tempcnn[1, [c](https://rdrr.io/r/base/c.html)("accuracy", "kappa", "cnn_layers", "cnn_kernels", "cnn_dropout_rates"), ] # Learning rates and weight decay are organized as a list hparams_best <- tuned_tempcnn[1, ]$opt_hparams[[1]] # Extract learning rate and weight decay lr_wd <- tibble::[tibble](https://tibble.tidyverse.org/reference/tibble.html)( lr_best = hparams_best$lr, wd_best = hparams_best$weight_decay ) # Print the best parameters dplyr::[bind_cols](https://dplyr.tidyverse.org/reference/bind_cols.html)(cnn_params, lr_wd) ``` ``` #> # A tibble: 1 × 7 #> accuracy kappa cnn_layers cnn_kernels cnn_dropout_rates lr_best wd_best #> <dbl> <dbl> <chr> <chr> <chr> <dbl> <dbl> #> 1 0.939 0.929 c(256, 256, 256) c(5, 5, 5) c(0.2, 0.2, 0.2) 0.000376 1.53e-4 ``` ### Classification in GPUs using parallel processing Deep learning time series classification methods in `sits`, which include `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)`, `[sits_mlp()](https://rdrr.io/pkg/sits/man/sits_mlp.html)`, `sits_lightae()` and `[sits_tae()](https://rdrr.io/pkg/sits/man/sits_tae.html)`, are written using the `torch` package, which is an adaptation of pyTorch to the R environment. These algorithms can use a CUDA\-compatible NVDIA GPU if one is available and has been properly configured. Please refer to the `torch` [installation guide](https://torch.mlverse.org/docs/articles/installation) for details on how to configure `torch` to use GPUs. If no GPU is available, these algorithms will run on regular CPUs, using the same paralellization methods described in the traditional machine learning methods. 
Typically, there is a 10\-fold performance increase when running `torch` based methods in GPUs relative to their processing time in GPU. To illustrate the use of GPUs, we take the same data cube and training data used in the previous examples and use a Temporal CNN method. The first step is to obtain a deep learning model using the hyperparameters produced by the tuning procedure shown earlier. We run ``` tcnn_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples_deforestation_rondonia, [sits_tempcnn](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)( cnn_layers = [c](https://rdrr.io/r/base/c.html)(256, 256, 256), cnn_kernels = [c](https://rdrr.io/r/base/c.html)(5, 5, 5), cnn_dropout_rates = [c](https://rdrr.io/r/base/c.html)(0.2, 0.2, 0.2), opt_hparams = [list](https://rdrr.io/r/base/list.html)( lr = 0.000376, weight_decay = 0.000153 ) ) ) ``` After training the model, we classify the data cube. If a GPU is available, users need to provide the additional parameter `gpu_memory` to `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`. This information will be used by `sits` to optimize access to the GPU and speed up processing. ``` rondonia_20LMR_probs_tcnn <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( rondonia_20LMR, ml_model = tcnn_model, output_dir = "./tempdir/chp9", version = "tcnn-raster", gpu_memory = 16, multicores = 6, memsize = 24 ) ``` After classification, we can smooth the probability cube and then label the resulting smoothed probabilities to obtain a classified map. ``` # Smoothen the probability map rondonia_20LMR_bayes_tcnn <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)( rondonia_20LMR_probs_tcnn, output_dir = "./tempdir/chp9", version = "tcnn-raster", multicores = 6, memsize = 24 ) # Obtain the final labelled map rondonia_20LMR_class_tcnn <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)( rondonia_20LMR_bayes_tcnn, output_dir = "./tempdir/chp9", version = "tcnn-raster", multicores = 6, memsize = 24 ) ``` ``` # plot the final classification map [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_20LMR_class_tcnn, legend_text_size = 0.7 ) ``` Figure 85: Final map of deforestation obtained using TempCNN model (source: authors). Map reclassification -------------------- Reclassification of a remote sensing map refers to changing the classes assigned to different pixels in the image. The purpose of reclassification is to modify the information contained in the image to better suit a specific use case. In `sits`, reclassification involves assigning new classes to pixels based on additional information from a reference map. Users define rules according to the desired outcome. These rules are then applied to the classified map to produce a new map with updated classes. To illustrate the reclassification in `sits`, we take a classified data cube stored in the `sitsdata` package. As discussed in Chapter [Earth observation data cubes](https://e-sensing.github.io/sitsbook/earth-observation-data-cubes.html), `sits` can create a data cube from a classified image file. Users need to provide the original data source and collection, the directory where data is stored (`data_dir`), the information on how to retrieve data cube parameters from file names (`parse_info`), and the labels used in the classification. 
``` # Open classification map data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Rondonia-Class", package = "sitsdata") rondonia_class <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "MPC", collection = "SENTINEL-2-L2A", data_dir = data_dir, parse_info = [c](https://rdrr.io/r/base/c.html)( "satellite", "sensor", "tile", "start_date", "end_date", "band", "version" ), bands = "class", labels = [c](https://rdrr.io/r/base/c.html)( "1" = "Water", "2" = "Clear_Cut_Burned_Area", "3" = "Clear_Cut_Bare_Soil", "4" = "Clear_Cut_Vegetation", "5" = "Forest", "6" = "Bare_Soil", "7" = "Wetland" ) ) [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_class, legend_text_size = 0.7 ) ``` Figure 86: Original classification map (source: authors). The above map shows the total extent of deforestation by clear cuts estimated by the `sits` random forest algorithm in an area in Rondonia, Brazil, based on a time series of Sentinel\-2 images for the period 2020\-06\-04 to 2021\-08\-26\. Suppose we want to estimate the deforestation that occurred from June 2020 to August 2021\. We need a reference map containing information on forest cuts before 2020\. In this example, we use as a reference the PRODES deforestation map of Amazonia created by Brazil’s National Institute for Space Research (INPE). This map is produced by visual interpretation. PRODES measures deforestation every year, starting from August of one year to July of the following year. It contains classes that represent the natural world (Forest, Water, NonForest, and NonForest2\) and classes that capture the yearly deforestation increments. These classes are named “dYYYY” and “rYYYY”; the first refers to deforestation in a given year (e.g., “d2008” for deforestation for August 2007 to July 2008\); the second to places where the satellite data is not sufficient to determine the land class (e.g., “r2010” for 2010\). This map is available on package `sitsdata`, as shown below. ``` data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/PRODES", package = "sitsdata") prodes_2021 <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "USGS", collection = "LANDSAT-C2L2-SR", data_dir = data_dir, parse_info = [c](https://rdrr.io/r/base/c.html)( "product", "sensor", "tile", "start_date", "end_date", "band", "version" ), bands = "class", version = "v20220606", labels = [c](https://rdrr.io/r/base/c.html)( "1" = "Forest", "2" = "Water", "3" = "NonForest", "4" = "NonForest2", "6" = "d2007", "7" = "d2008", "8" = "d2009", "9" = "d2010", "10" = "d2011", "11" = "d2012", "12" = "d2013", "13" = "d2014", "14" = "d2015", "15" = "d2016", "16" = "d2017", "17" = "d2018", "18" = "r2010", "19" = "r2011", "20" = "r2012", "21" = "r2013", "22" = "r2014", "23" = "r2015", "24" = "r2016", "25" = "r2017", "26" = "r2018", "27" = "d2019", "28" = "r2019", "29" = "d2020", "31" = "r2020", "32" = "Clouds2021", "33" = "d2021", "34" = "r2021" ) ) ``` Since the labels of the deforestation map are specialized and are not part of the default `sits` color table, we define a legend for better visualization of the different deforestation classes. 
```
# Use the "YlOrBr" palette for the deforestation years
colors <- grDevices::hcl.colors(n = 15, palette = "YlOrBr")
# Define the legend for the deforestation map
def_legend <- c(
  "Forest" = "forestgreen", "Water" = "dodgerblue3",
  "NonForest" = "bisque2", "NonForest2" = "bisque2",
  "d2007" = colors[1], "d2008" = colors[2],
  "d2009" = colors[3], "d2010" = colors[4],
  "d2011" = colors[5], "d2012" = colors[6],
  "d2013" = colors[7], "d2014" = colors[8],
  "d2015" = colors[9], "d2016" = colors[10],
  "d2017" = colors[11], "d2018" = colors[12],
  "d2019" = colors[13], "d2020" = colors[14],
  "d2021" = colors[15], "r2010" = "lightcyan",
  "r2011" = "lightcyan", "r2012" = "lightcyan",
  "r2013" = "lightcyan", "r2014" = "lightcyan",
  "r2015" = "lightcyan", "r2016" = "lightcyan",
  "r2017" = "lightcyan", "r2018" = "lightcyan",
  "r2019" = "lightcyan", "r2020" = "lightcyan",
  "r2021" = "lightcyan", "Clouds2021" = "lightblue2"
)
```

Using this new legend, we can visualize the PRODES deforestation map.

```
sits_view(prodes_2021, legend = def_legend)
```

Figure 87: Deforestation map produced by PRODES (source: authors).

Taking the PRODES map as our reference, we can include new labels in the classified map produced by `sits` using `[sits_reclassify()](https://rdrr.io/pkg/sits/man/sits_reclassify.html)`. The new class name Deforestation\_Mask will be applied to all pixels that PRODES considers to have been deforested before July 2020\. We also include a Non\_Forest class to cover all pixels that PRODES takes as not covered by native vegetation, such as wetlands and rocky areas. The PRODES classes will be used as a mask over the `sits` deforestation map.

The `[sits_reclassify()](https://rdrr.io/pkg/sits/man/sits_reclassify.html)` operation requires the parameters: (a) `cube`, the classified data cube whose pixels will be reclassified; (b) `mask`, the reference data cube used as a mask; (c) `rules`, a named list. The names of the `rules` list will be the new labels. Each new label is associated with a `mask` vector that includes the labels of the reference map that will be joined. `[sits_reclassify()](https://rdrr.io/pkg/sits/man/sits_reclassify.html)` then compares the original and reference map pixel by pixel. For each pixel of the reference map whose label is in one of the `rules`, the algorithm relabels the original map. The result will be a reclassified map with the original labels plus the new labels that have been masked using the reference map.
```
# Reclassify cube
rondonia_def_2021 <- sits_reclassify(
  cube = rondonia_class,
  mask = prodes_2021,
  rules = list(
    "Non_Forest" = mask %in% c("NonForest", "NonForest2"),
    "Deforestation_Mask" = mask %in% c(
      "d2007", "d2008", "d2009", "d2010",
      "d2011", "d2012", "d2013", "d2014",
      "d2015", "d2016", "d2017", "d2018",
      "d2019", "d2020",
      "r2010", "r2011", "r2012", "r2013",
      "r2014", "r2015", "r2016", "r2017",
      "r2018", "r2019", "r2020", "r2021"
    ),
    "Water" = mask == "Water"
  ),
  memsize = 8,
  multicores = 2,
  output_dir = "./tempdir/chp9",
  version = "reclass"
)
# Plot the reclassified map
plot(rondonia_def_2021,
  legend_text_size = 0.7
)
```

Figure 88: Deforestation map by sits masked by PRODES map (source: authors).

The reclassified map has been split into deforestation before mid\-2020 (using the PRODES map) and the areas that `sits` classifies as deforested from mid\-2020 to mid\-2021\. This allows experts to measure how much deforestation occurred in this period according to `sits` and compare the result with the PRODES map. The `[sits_reclassify()](https://rdrr.io/pkg/sits/man/sits_reclassify.html)` function is not restricted to comparing deforestation maps. It can be used in any case that requires masking of a result based on a reference map.
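As a complement, once a reclassified map has been written to disk, its class areas can be tabulated directly from the output raster to quantify how much deforestation occurred in the period. The sketch below uses the `terra` package; the file name is hypothetical and should be replaced by the GeoTIFF actually written by `[sits_reclassify()](https://rdrr.io/pkg/sits/man/sits_reclassify.html)` in the chosen `output_dir`, and the area calculation assumes a projected CRS with resolution expressed in meters.

```
library(terra)

# Hypothetical path: adjust to the file produced by sits_reclassify()
class_file <- "./tempdir/chp9/SENTINEL-2_MSI_20LMR_2020-06-04_2021-08-26_class_reclass.tif"
class_map <- rast(class_file)

# Count pixels per class value and convert the counts to km^2
class_freq <- freq(class_map)
pixel_area_km2 <- prod(res(class_map)) / 1e6
class_freq$area_km2 <- class_freq$count * pixel_area_km2
class_freq
```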
Bayesian smoothing for post\-processing
=======================================

Introduction
------------

Machine learning algorithms rely on training samples that are derived from “pure” pixels, hand\-picked by users to represent the desired output classes. Given the presence of mixed pixels in images regardless of resolution, and the considerable data variability within each class, these classifiers often produce results with outliers or misclassified pixels. Therefore, post\-processing techniques have become crucial to refine the labels of a classified image [\[80]](references.html#ref-Huang2014). Post\-processing methods reduce salt\-and\-pepper and border effects, where single pixels or small groups of pixels are classified differently from their larger surrounding areas; these effects lead to visual discontinuity and inconsistency. By mitigating these errors and minimizing noise, post\-processing improves the quality of the initial classification results, bringing a significant gain in the overall accuracy and interpretability of the final output [\[81]](references.html#ref-Schindler2012).

The `sits` package uses a *time\-first, space\-later* approach. Since machine learning classifiers in `sits` are mostly pixel\-based, it is necessary to complement them with spatial smoothing methods. These methods improve the accuracy of land classification by incorporating spatial and contextual information into the classification process. The smoothing method available in `sits` uses an Empirical Bayes approach, adjusted to the specific properties of land classification. The assumption is that class probabilities at the local level should be similar and provide the baseline for comparison with the pixel values produced by the classifier. Based on these two elements, Bayesian smoothing adjusts the probabilities for the pixels, taking spatial dependence into account.

The need for post\-processing
-----------------------------

The main idea behind our post\-processing method is that a pixel\-based classification should take its neighboring pixels into account. Consider the figure below, which shows a class assignment produced by a random forest algorithm on an image time series. The classified map has been produced by taking, for each pixel, the class of highest probability produced by the algorithm. The resulting map has many noisy areas with a high spatial variability of class assignments. This happens more frequently in two cases: (a) small clusters of pixels of one class inside a larger area of a different class; (b) transition zones between classes. In general, images of heterogeneous landscapes with high spatial variability have many mixed pixels, whose spectral response combines different types of land cover in a single ground resolved cell. For example, many pixels in the border between areas of classes `Forest` and `Clear_Cut_Bare_Soil` are wrongly assigned to the `Clear_Cut_Vegetation` class. This wrong assignment occurs because these pixels have a mixed response. Inside the ground cell captured by the sensor as a single pixel value, there are both trees and bare soil areas. Such results are undesirable and need to be corrected by post\-processing.

Figure 89: Detail of labelled map produced by pixel\-based random forest without smoothing (source: authors)

To maintain consistency and coherence in our class representations, we should minimise small variations or misclassifications. We incorporate spatial coherence as a post\-processing step to accomplish this.
The probabilities associated with each pixel will change based on statistical inference, which depends on the values for each neighbourhood. Using the recalculated probabilities for each pixel, we get a better version of the final classified map.

Consider the figure below, which is the result of Bayesian smoothing on the random forest algorithm outcomes. The noisy border pixels between two large areas of the same class have been removed. We have also removed small clusters of pixels belonging to one class inside larger areas of other classes. The outcome is a more uniform map, like the ones created through visual interpretation or object\-based analysis. Details like narrow vegetation corridors or small forest roads might be missing in the smoothed image. However, the improved spatial consistency of the final map compensates for such losses, due to the removal of misclassified pixels that have mixed spectral responses.

Figure 90: Detail of labelled map produced by pixel\-based random forest after smoothing (source: authors)

Empirical Bayesian estimation
-----------------------------

The Bayesian estimate is based on the probabilities produced by the classifiers. Let \\(p\_{i,k} \\geq 0\\) be the prior probability of the \\(i\\)\-th pixel belonging to class \\(k \\in \\{1, \\ldots, m\\}\\). The probabilities \\(p\_{i,k}\\) are the classifier’s output, being subject to noise, outliers, and classification errors. Our estimation aims to remove these effects and obtain values that approximate the actual class probability better.

We convert the class probability values \\(p\_{i,k}\\) to log\-odds values using the logit function, to transform probability values ranging from \\(0\\) to \\(1\\) to values from negative infinity to infinity. The conversion from probabilities to logit values is helpful to support our assumption of normal distribution for our data.

\\\[ x\_{i,k} \= \\ln \\left(\\frac{p\_{i,k}}{1 \- p\_{i,k}}\\right) \\]

We assume that the logit of the prior probability of the pixels \\(i\\) associated to class \\(k\\) is described by a Gaussian distribution function

\\\[\\begin{equation} x\_{i,k} \= \\log\\left( \\frac{\\pi\_{i,k}}{1\-\\pi\_{i,k}} \\right) \\sim N(m\_{i,k}, s^2\_{i,k}) \\end{equation}\\]

where \\(m\_{i,k}\\) represents the local mean value and \\(s^2\_{i,k}\\) the local class variance. The local mean and variance are computed based on the local neighborhood of the point. We express the likelihood as a conditional Gaussian distribution of the logit \\(x\_{i,k}\\) of the observed values \\(p\_{i,k}\\) over \\(\\mu\_{i,k}\\):

\\\[\\begin{equation} (x\_{i,k} \| \\mu\_{i,k}) \= \\log(p\_{i,k}/(1\-p\_{i,k})) \\sim N(\\mu\_{i,k}, \\sigma^2\). \\end{equation}\\]

In the above equation, \\(\\mu\_{i,k}\\) is the posterior expected mean of the logit probability associated to the \\(i\\)\-th pixel. The variance \\(\\sigma^2\_{k}\\) will be estimated based on user expertise and taken as a hyperparameter to control the smoothness of the resulting estimate. The standard Bayesian updating [\[82]](references.html#ref-Gelman2014) leads to the posterior distribution which can be expressed as a weighted mean

\\\[ {E}\[\\mu\_{i,k} \| x\_{i,k}] \= \\Biggl \[ \\frac{s^2\_{i,k}}{\\sigma^2\_{k} \+s^2\_{i,k}} \\Biggr ] \\times x\_{i,k} \+ \\Biggl \[ \\frac{\\sigma^2\_{k}}{\\sigma^2\_{k} \+s^2\_{i,k}} \\Biggr ] \\times m\_{i,k}, \\]

where:

* \\(x\_{i,k}\\) is the logit value for pixel \\(i\\) and class \\(k\\).
* \\(m\_{i,k}\\) is the average of logit values for pixels of class \\(k\\) in the neighborhood of pixel \\(i\\).
* \\(s^2\_{i,k}\\) is the variance of logit values for pixels of class \\(k\\) in the neighborhood of pixel \\(i\\).
* \\(\\sigma^2\_{k}\\) is a user\-derived hyperparameter which estimates the variance for class \\(k\\), expressed in logits.

The above equation is a weighted average between the value \\(x\_{i,k}\\) for the pixel and the mean \\(m\_{i,k}\\) for the neighboring pixels. When the variance \\(s^2\_{i,k}\\) for the neighbors is high, the algorithm gives more weight to the pixel value \\(x\_{i,k}\\). When the class variance \\(\\sigma^2\_k\\) increases, the result gives more weight to the neighborhood mean \\(m\_{i,k}\\).

Bayesian smoothing for land classification assumes that image patches with similar characteristics have a dominant class. This dominant class has higher average probabilities and lower variance than other classes. A pixel assigned to a different class will likely exhibit high local variance in such regions. As a result, post\-processing should adjust the class of this pixel to match the dominant class.

There is usually no prior information to specify \\(m\_{i,k}\\) and \\(s^2\_{i,k}\\). Because of that, we adopt an Empirical Bayes (EB) approach to obtain estimates of these prior parameters by considering the pixel neighborhood. However, using a standard symmetrical neighborhood for each pixel, based uniquely on the distance between locations, would not produce reasonable results for border pixels. For this reason, our EB estimates use a non\-isotropic neighbourhood, as explained below.

Using non\-isotropic neighborhoods
----------------------------------

The fundamental idea behind Bayesian smoothing for land classification is that individual pixels are related to those close to them. Each pixel usually has the same class as most of its neighbors. These closeness relations are expressed in similar values of class probability. If we find a pixel assigned to `Water` surrounded by pixels labeled as `Forest`, such a pixel may have been wrongly labelled. To check if the pixel has been mislabeled, we look at the class probabilities for the pixel and its neighbors. There are two possible situations:

* The outlier has a class probability distribution very different from its neighbors. For example, its probability for belonging to the `Water` class is 80% while that of being a `Forest` is 20%. If we also consider that `Water` pixels have a smaller variance, since water areas have a strong signal in multispectral images, our post\-processing method will not change the pixel’s label.
* The outlier has a class probability distribution similar to that of its neighbors. Consider a case where a pixel has a 47% probability for `Water` and 43% probability for `Forest`. This small difference indicates that we need to look at the neighborhood to improve the information produced by the classifier. In these cases, the post\-processing estimate may change the pixel’s label.

Pixels in the border between two areas of different classes pose a challenge. Only some of their neighbors belong to the same class as the pixel. To address this issue, we employ a non\-isotropic definition of a neighborhood to estimate the prior class distribution. For instance, consider a boundary pixel with a neighborhood defined by a 7 x 7 window, located along the border between `Forest` and `Grassland` classes.
To estimate the prior probability of the pixel being labeled as `Forest`, we should only take into account the neighbors on one side of the border that are likely to be correctly classified as `Forest`. Pixels on the opposite side of the border should be disregarded, since they are unlikely to belong to the same spatial process. In practice, we use only half of the pixels in the 7 x 7 window, opting for those that have a higher probability of being labeled as `Forest`. For the prior probability of the `Grassland` class, we reverse the selection and only consider those on the opposite side of the border.

Although this choice of neighborhood may seem unconventional, it is consistent with the assumption of non\-continuity of the spatial processes describing each class. A dense forest patch, for example, will have pixels with strong spatial autocorrelation for values of the Forest class; however, this spatial autocorrelation doesn’t extend across its border with other land classes.

Effect of the hyperparameter
----------------------------

The parameter \\(\\sigma^2\_k\\) controls the level of smoothness. If \\(\\sigma^2\_k\\) is zero, the value \\({E}\[\\mu\_{i,k} \| x\_{i,k}]\\) will be equal to the pixel value \\(x\_{i,k}\\). The parameter \\(\\sigma^2\_k\\) expresses confidence in the inherent variability of the distribution of values of a class \\(k\\). The smaller the parameter \\(\\sigma^2\_k\\), the more we trust the estimated probability values produced by the classifier for class \\(k\\). Conversely, higher values of \\(\\sigma^2\_k\\) indicate lower confidence in the classifier outputs and greater confidence in the local averages.

Consider the following two\-class example. Take a pixel with probability \\(0\.4\\) (logit \\(x\_{i,1} \= \-0\.4054\\)) for class A and probability \\(0\.6\\) (logit \\(x\_{i,2} \= 0\.4054\\)) for class B. Without post\-processing, the pixel will be labeled as class B. Consider that the local average is \\(0\.6\\) (logit \\(m\_{i,1} \= 0\.4054\\)) for class A and \\(0\.4\\) (logit \\(m\_{i,2} \= \-0\.4054\\)) for class B. This is a case of an outlier classified originally as class B in the midst of a set of class A pixels.

Given this situation, we apply the proposed method. Suppose the local variance of logits to be \\(s^2\_{i,1} \= 5\\) for class A and \\(s^2\_{i,2} \= 10\\) for class B. This difference is to be expected if the local variability of class A is smaller than that of class B. To complete the estimate, we need to set the parameter \\(\\sigma^2\_{k}\\), representing our belief in the variability of the probability values for each class.

Setting \\(\\sigma^2\_{k}\\) will be based on our confidence in the local variability of each class around pixel \\({i}\\). If we consider the local variability to be high, we can take \\(\\sigma^2\_1\\) for class A and \\(\\sigma^2\_2\\) for class B to both be 10\. In this case, the Bayesian estimated probability for class A is \\(0\.52\\) and for class B is \\(0\.48\\), and the pixel will be relabeled as class A. By contrast, if we set \\(\\sigma^2\\) to be 5 for both classes A and B, the Bayesian probability estimate will be \\(0\.48\\) for class A and \\(0\.52\\) for class B. In this case, the original class will be kept. Therefore, the result is sensitive to the subjective choice of the hyperparameter. In the example below, we will show how to use the local logit variance to set the appropriate values of \\(\\sigma^2\\).
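The numbers in this example can be reproduced with a few lines of base R, using the weighted\-mean formula shown earlier. The renormalization at the end (so that the two class probabilities sum to one) is an assumption made here to match the figures quoted above; the internal implementation of `[sits_smooth()](https://rdrr.io/pkg/sits/man/sits_smooth.html)` may handle such details differently.

```
# Logit and inverse logit transforms
logit <- function(p) log(p / (1 - p))
inv_logit <- function(x) 1 / (1 + exp(-x))

# Pixel logits and local neighborhood statistics for classes A and B
x <- c(A = logit(0.4), B = logit(0.6)) # pixel probabilities 0.4 and 0.6
m <- c(A = logit(0.6), B = logit(0.4)) # local means 0.6 and 0.4
s2 <- c(A = 5, B = 10)                 # local logit variances

# Bayesian weighted mean of the pixel logit and the neighborhood mean
bayes_update <- function(x, m, s2, sigma2) {
  (s2 / (sigma2 + s2)) * x + (sigma2 / (sigma2 + s2)) * m
}

# Case 1: sigma2 = 10 for both classes
p1 <- inv_logit(bayes_update(x, m, s2, sigma2 = 10))
round(p1 / sum(p1), 2) # approx. A = 0.52, B = 0.48 -> relabeled as A

# Case 2: sigma2 = 5 for both classes
p2 <- inv_logit(bayes_update(x, m, s2, sigma2 = 5))
round(p2 / sum(p2), 2) # approx. A = 0.48, B = 0.52 -> label B is kept
```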
Running Bayesian smoothing
--------------------------

We now show how to run Bayesian smoothing on a data cube covering an area of Sentinel\-2 tile “20LLQ” in the period 2020\-06\-04 to 2021\-08\-26\. The training data has six classes: (a) `Forest` for natural tropical forest; (b) `Water` for lakes and rivers; (c) `Wetland` for areas where water covers the soil in the wet season; (d) `Clear_Cut_Burned_Area` for areas where fires cleared the land after tree removal; (e) `Clear_Cut_Bare_Soil` where the forest has been completely removed; (f) `Clear_Cut_Vegetation` where some vegetation remains after most trees have been removed.

To simplify the example, our input is the probability cube generated by a random forest model. We recover the probability data cube and then plot the results of the machine learning method for classes `Forest`, `Clear_Cut_Bare_Soil`, `Clear_Cut_Vegetation`, and `Clear_Cut_Burned_Area`.

```
# define the classes of the probability cube
labels <- c(
  "1" = "Water",
  "2" = "Clear_Cut_Burned_Area",
  "3" = "Clear_Cut_Bare_Soil",
  "4" = "Clear_Cut_Vegetation",
  "5" = "Forest",
  "6" = "Wetland"
)
# directory where the data is stored
data_dir <- system.file("extdata/Rondonia-20LLQ/", package = "sitsdata")
# create a probability data cube from a file
rondonia_20LLQ_probs <- sits_cube(
  source = "MPC",
  collection = "SENTINEL-2-L2A",
  data_dir = data_dir,
  bands = "probs",
  labels = labels,
  parse_info = c(
    "satellite", "sensor", "tile", "start_date", "end_date",
    "band", "version"
  )
)
# plot the probabilities for forest and bare soil
plot(rondonia_20LLQ_probs,
  labels = c("Forest", "Clear_Cut_Bare_Soil")
)
```

Figure 91: Probability map produced for classes Forest and Clear\_Cut\_Bare\_Soil (source: authors).

```
plot(rondonia_20LLQ_probs,
  labels = c("Clear_Cut_Vegetation", "Clear_Cut_Burned_Area")
)
```

Figure 92: Probability map produced for classes Clear\_Cut\_Vegetation and Clear\_Cut\_Burned\_Area (source: authors).

The probability map for `Forest` shows high values associated with compact patches and linear stretches in riparian areas. Class `Clear_Cut_Bare_Soil` is mostly composed of dense areas of high probability whose geometrical boundaries result from forest cuts. Areas of class `Clear_Cut_Vegetation` are less well\-defined than the others; this is to be expected since this is a transitional class between a natural forest and areas of bare soil. Patches associated with class `Clear_Cut_Burned_Area` include both homogeneous areas of high probability and areas of mixed response. Since classes have different behaviours, the post\-processing procedure should enable users to control how to handle outliers and border pixels of each class.

The next step is to show the labelled map resulting from the raw class probabilities. We produce a classification map by taking the class of highest probability for each pixel, without considering the spatial context. There are many places with the so\-called “salt\-and\-pepper” effect which results from misclassified pixels. The non\-smoothed labelled map shows the need for post\-processing, since it contains a significant number of outliers and areas with mixed labelling.
```
# Generate the thematic map
rondonia_20LLQ_class <- sits_label_classification(
  cube = rondonia_20LLQ_probs,
  multicores = 4,
  memsize = 12,
  output_dir = "./tempdir/chp10",
  version = "no_smooth"
)
# Plot the result
plot(rondonia_20LLQ_class,
  legend_text_size = 0.8,
  legend_position = "outside"
)
```

Figure 93: Classified map without smoothing (source: authors).

Assessing the local logit variance
----------------------------------

To determine appropriate settings for the \\(\\sigma^2\_{k}\\) hyperparameter for each class to perform Bayesian smoothing, it is useful to calculate the local logit variances for each class. For each pixel, we estimate the local variance \\(s^2\_{i,k}\\) by considering the non\-isotropic neighborhood. The local logit variances are estimated by `[sits_variance()](https://rdrr.io/pkg/sits/man/sits_variance.html)`; its main parameters are: (a) `cube`, a probability data cube; (b) `window_size`, dimension of the local neighbourhood; (c) `neigh_fraction`, the percentage of pixels in the neighbourhood used to calculate the variance. The example below uses half of the pixels of a \\(7\\times 7\\) window to estimate the variance. The chosen pixels will be those with the highest probabilities, so as to be more representative of the actual class distribution. The output values are the logit variances in the vicinity of each pixel.

The choice of the \\(7 \\times 7\\) window size is a compromise between having enough values to estimate the parameters of a normal distribution and the need to capture local effects for class patches of small sizes. Classes such as `Water` tend to be spatially limited; a bigger window size could result in invalid values for their respective normal distributions.

```
# calculate variance
rondonia_20LLQ_var <- sits_variance(
  cube = rondonia_20LLQ_probs,
  window_size = 7,
  neigh_fraction = 0.50,
  output_dir = "./tempdir/chp10",
  multicores = 4,
  memsize = 16
)
plot(rondonia_20LLQ_var,
  labels = c("Forest", "Clear_Cut_Bare_Soil"),
  palette = "Spectral",
  rev = TRUE
)
```

Figure 94: Variance map for classes Forest and Clear\_Cut\_Bare\_Soil (source: authors).

```
plot(rondonia_20LLQ_var,
  labels = c("Clear_Cut_Vegetation", "Clear_Cut_Burned_Area"),
  palette = "Spectral",
  rev = TRUE
)
```

Figure 95: Variance map for classes Clear\_Cut\_Vegetation and Clear\_Cut\_Burned\_Area (source: authors).

Comparing the variance maps with the probability maps, one sees that areas of high probability of classes `Forest` and `Clear_Cut_Bare_Soil` are mostly made of compact patches. Recall these are the two dominant classes in the area, and deforestation is a process that converts forest to bare soil. Many areas of high logit variance for these classes are related to border pixels which have a mixed response. Areas of large patches of high logit variance for these classes are associated with lower class probabilities and will not be relevant to the final result. By contrast, the transitional classes `Clear_Cut_Vegetation` and `Clear_Cut_Burned_Area` have a different spatial pattern of their probability and logit variance.
The former has high spatial variability, since pixels of this class arise when the forest has not been completely removed and there is some remaining vegetation after trees are cut. The extent of remaining vegetation after most trees have been removed is not uniform. For this reason, many areas of high local logit variance of class `Clear_Cut_Vegetation` are located in mixed patches inside pixels of class `Forest` and on the border between `Forest` and `Clear_Cut_Bare_Soil`. This situation is consistent with the earlier observation that transitional classes may appear as artificial effects of mixed pixels in borders between other classes.

Instances of class `Clear_Cut_Burned_Area` arise following a forest fire. Most pixels of this class tend to form mid\-sized to large spatial clusters, because of how forest fires start and propagate. It is desirable to preserve the contiguity of the burned areas and remove pixels of other classes inside these clusters. Isolated points of class `Clear_Cut_Burned_Area` can be removed without significant information loss.

The distinct patterns of these classes are measured quantitatively by the `[summary()](https://rdrr.io/r/base/summary.html)` function. For variance cubes, this function provides the logit variance values for the upper quantiles (75% and above).

```
# get the summary of the logit variance
summary(rondonia_20LLQ_var)
```

```
#>      Water Clear_Cut_Burned_Area Clear_Cut_Bare_Soil Clear_Cut_Vegetation
#> 75%   4.22                  0.25                0.40               0.5500
#> 80%   4.74                  0.31                0.49               0.6800
#> 85%   5.07                  0.38                0.64               0.8700
#> 90%   5.36                  0.51                0.87               1.1810
#> 95%   5.88                  0.76                1.68               1.8405
#> 100% 22.10                  8.74               12.79              14.0800
#>       Forest Wetland
#> 75%   1.2600  0.2800
#> 80%   2.0020  0.3400
#> 85%   3.0715  0.4300
#> 90%   4.2700  0.5700
#> 95%   5.1400  1.2105
#> 100% 21.1800  8.8700
```

The summary statistics show that most local variance values are low, which is an expected result. Areas of low variance correspond to pixel neighborhoods of high logit values for one of the classes and low logit values for the others. High values of the local variances are relevant in areas of confusion between classes.

Using the variance to select values of hyperparameters
------------------------------------------------------

We make the following recommendations for setting the \\(\\sigma^2\_{k}\\) parameter, based on the local logit variance:

* Set the \\(\\sigma^2\_{k}\\) parameter with high values (in the 95%\-100% range) to increase the neighborhood influence compared with the probability values for each pixel. Such a choice will produce denser spatial clusters and remove “salt\-and\-pepper” outliers.
* Set the \\(\\sigma^2\_{k}\\) parameter with low values (in the 75%\-80% range) to reduce the neighborhood influence, for classes whose original spatial shapes we want to preserve.

Consider the case of forest areas and watersheds. If an expert wishes to have compact areas classified as forests without many outliers inside them, she will set the \\(\\sigma^2\\) parameter for the class `Forest` to be high. By comparison, to prevent small watersheds with few similar neighbors from being relabeled, it is advisable to avoid a strong influence of the neighbors by setting \\(\\sigma^2\\) as low as possible. In contrast, transitional classes such as `Clear_Cut_Vegetation` are likely to be associated with some outliers; use large \\(\\sigma^2\_{k}\\) for them.
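Before running the smoothing itself, it may help to see, in isolation, the local statistics that the Bayesian estimator computes for one pixel and one class. The sketch below is a simplified illustration with simulated values: from a 7 x 7 window of class probabilities it keeps the 50% highest values (the non\-isotropic neighborhood) and computes the mean and variance of their logits. The exact selection rules inside `sits` may differ in detail.

```
set.seed(42)
# Simulated 7 x 7 window of probabilities for one class around a pixel
probs_window <- matrix(runif(49, min = 0.2, max = 0.9), nrow = 7)
logit <- function(p) log(p / (1 - p))

# Non-isotropic neighborhood: keep only the 50% highest probabilities,
# mirroring window_size = 7 and neigh_fraction = 0.5 (25 values)
neigh_fraction <- 0.5
n_keep <- ceiling(length(probs_window) * neigh_fraction)
top_logits <- logit(sort(as.vector(probs_window), decreasing = TRUE)[1:n_keep])

# Local statistics used as the prior in the Bayesian update
m_ik <- mean(top_logits) # local mean
s2_ik <- var(top_logits) # local variance
c(mean = m_ik, variance = s2_ik)
```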
To remove the outliers and classification errors, we run a smoothing procedure with `[sits_smooth()](https://rdrr.io/pkg/sits/man/sits_smooth.html)` with parameters: (a) `cube`, a probability cube produced by `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`; (b) `window_size`, the local window to compute the neighborhood probabilities; (c) `neigh_fraction`, fraction of local neighbors used to calculate local statistics; (d) `smoothness`, a vector with estimates of the prior variance of each class; (e) `multicores`, number of CPU cores that will be used for processing; (f) `memsize`, memory available for classification; (g) `output_dir`, a directory where results will be stored; (h) `version`, for version control. The resulting cube can be visualized with `[plot()](https://rdrr.io/r/graphics/plot.default.html)`.

The parameters `window_size` and `neigh_fraction` control how many pixels in a neighborhood the Bayesian estimator will use to calculate the local statistics. For example, setting `window_size` to \\(7\\) and `neigh_fraction` to \\(0\.50\\) (the defaults) ensures that \\(25\\) samples are used to estimate the local statistics. The `smoothness` values for the classes are set as recommended above.

```
# Compute Bayesian smoothing
rondonia_20LLQ_smooth <- sits_smooth(
  cube = rondonia_20LLQ_probs,
  window_size = 7,
  neigh_fraction = 0.50,
  smoothness = c(
    "Water" = 5.0,
    "Clear_Cut_Burned_Area" = 9.5,
    "Clear_Cut_Bare_Soil" = 0.5,
    "Clear_Cut_Vegetation" = 15,
    "Forest" = 2.5,
    "Wetland" = 0.40
  ),
  multicores = 4,
  memsize = 12,
  output_dir = "./tempdir/chp10"
)
# Plot the result
plot(rondonia_20LLQ_smooth, labels = c("Clear_Cut_Vegetation", "Forest"))
```

Figure 96: Probability maps after Bayesian smoothing (source: authors).

Bayesian smoothing has removed some of the local variability associated with misclassified pixels that differ from their neighbors, especially in the case of transitional classes such as `Clear_Cut_Vegetation`. The smoothing impact is best appreciated by comparing the labeled map produced without smoothing to the one that follows the procedure, as shown below.

```
# Generate the thematic map
rondonia_20LLQ_class_v2 <- sits_label_classification(
  cube = rondonia_20LLQ_smooth,
  multicores = 4,
  memsize = 12,
  output_dir = "./tempdir/chp10",
  version = "smooth"
)
plot(rondonia_20LLQ_class_v2,
  legend_text_size = 0.7
)
```

Figure 97: Final classification map after Bayesian smoothing with 7 x 7 window, using high smoothness values (source: authors).

In the smoothed map, outliers inside forest areas and in the class borders have been removed. The salt\-and\-pepper effect associated with transitional classes has also been replaced by more coherent estimates. The smoothed map shows clear improvements compared with the non\-smoothed one. In conclusion, post\-processing is a desirable step in any classification process. Bayesian smoothing improves the borders between the objects created by the classification and removes outliers that result from pixel\-based classification. It is a reliable method that should be used in most situations.
Machine Learning
e-sensing.github.io
https://e-sensing.github.io/sitsbook/validation-and-accuracy-measurements.html
Validation and accuracy measurements ==================================== Introduction ------------ Statistically robust and transparent approaches for assessing accuracy are essential parts of the land classification process. The `sits` package supports the “good practice” recommendations for designing and implementing an accuracy assessment of a change map and estimating the area based on reference sample data. These recommendations address three components: sampling design, reference data collection, and accuracy estimates [\[83]](references.html#ref-Olofsson2014). The sampling design is implemented as a stratified random approach, ensuring that every land use and land cover class in the population is included in the sample. Sampling designs use established statistical methods aimed at providing unbiased estimates. Based on a chosen design, `sits` supports the selection of random samples per class. These samples should be evaluated accurately using high\-quality reference data, ideally collected through field visits or high\-resolution imagery. In this way, we get a reference classification that is more accurate than the map classification being evaluated. The accuracy assessment is reported as an error matrix. It supports estimates of overall accuracy, user’s and producer’s accuracy. Based on the error matrix, it is possible to estimate each class’s proportion and adjust for classification errors. The estimated areas include confidence intervals. Example data set ---------------- Our study area is the state of Rondonia (RO) in the Brazilian Amazon. According to official Brazilian government statistics, as of 2021 tropical forests covered 53% of the state’s total area. Significant human occupation started in 1970, led by settlement projects promoted by Brazil’s then military government [\[84]](references.html#ref-Alves2003). Small and large\-scale cattle ranching occupies most deforested areas. Deforestation in Rondonia is highly fragmented, partly due to the original occupation by small settlers. Such fragmentation poses considerable challenges for automated methods to distinguish between clear\-cut and highly degraded areas. While visual interpreters rely upon experience and field knowledge, researchers must carefully train automated methods to achieve the same distinction. We used Sentinel\-2 and Sentinel\-2A ARD (analysis\-ready) images from 2022\-01\-01 to 2022\-12\-31\. Using all 10 spectral bands, we produced a regular data cube with a 16\-day interval, with 23 instances per year. The best pixels for each period were selected to keep cloud cover as low as possible. Persistent cloud cover pixels remaining in each period are then temporally interpolated to obtain estimated values. As a result, each pixel is associated with a valid time series. To fully cover RO, we used 41 MGRS tiles; the final data cube has 1\.1 TB. The work considered nine LUCC classes: (a) stable natural land cover, including `Forest` and `Water`; (b) events associated with clear\-cuts, including `Clear_Cut_Vegetation`, `Clear_Cut_Bare_Soil`, and `Clear_Cut_Burned_Area`; (c) natural areas with seasonal variability, `Wetland`, `Seasonally_Flooded_Forest`, and `Riparian_Forest`; (d) stable forest areas subject to topographic effects, including `Mountainside_Forest`. In this chapter, we will take the classification map as our starting point for accuracy assessment. This map can be retrieved from the `sitsdata` package as follows.
``` # define the classes of the probability cube labels <- [c](https://rdrr.io/r/base/c.html)( "1" = "Clear_Cut_Bare_Soil", "2" = "Clear_Cut_Burned_Area", "3" = "Mountainside_Forest", "4" = "Forest", "5" = "Riparian_Forest", "6" = "Clear_Cut_Vegetation", "7" = "Water", "8" = "Seasonally_Flooded", "9" = "Wetland" ) # directory where the data is stored data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Rondonia-Class-2022-Mosaic/", package = "sitsdata") # create a classified data cube from a file rondonia_2022_class <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "MPC", collection = "SENTINEL-2-L2A", data_dir = data_dir, bands = "class", labels = labels, version = "mosaic" ) ``` ``` # plot the classification map [plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_2022_class) ``` Figure 98: Classified mosaic for land cover in Rondonia, Brazil for 2022 (source: authors). Stratified sampling design and allocation ----------------------------------------- The sampling design outlines the method for selecting a subset of the map, which serves as the foundation for the accuracy assessment. The subset needs to satisfy a compromise between statistical and practical considerations: it must provide enough data for a statistically valid quality assessment, while ensuring that each element of the sample can be evaluated correctly. Selection of the sample size thus combines an expected level of user’s accuracy for each class with a viable choice of size and location. Following the recommended best practices for estimating accuracy of LUCC maps [\[83]](references.html#ref-Olofsson2014), `sits` uses Cochran’s method for stratified random sampling [\[85]](references.html#ref-Cochran1977). The method divides the population into homogeneous subgroups, or strata, and then applies random sampling within each stratum. In the case of LUCC, we take the classification map as the basis for the stratification. The area occupied by each class is considered as a homogeneous subgroup. Cochran’s method for stratified random sampling helps to increase the precision of the estimates by reducing the overall variance, particularly when there is significant variability between strata but relatively little variability within each stratum. To determine the overall number of samples to measure accuracy, we use the following formula [\[85]](references.html#ref-Cochran1977): \\\[ n \= \\left( \\frac{\\sum\_{i\=1}^L W\_i S\_i}{S(\\hat{O})} \\right)^2 \\] where * \\(L\\) is the number of classes * \\(S(\\hat{O})\\) is the expected standard error of the accuracy estimate * \\(S\_i\\) is the standard deviation of the estimated area for class \\(i\\) * \\(W\_i\\) is the mapped proportion of area of class \\(i\\) The standard deviation per class (stratum) is estimated based on the expected user’s accuracy \\(U\_i\\) for each class as \\\[ S\_i \= \\sqrt{U\_i(1 \- U\_i)} \\] Therefore, the total number of samples depends on the assumptions about the user’s accuracies \\(U\_i\\) and the expected standard error \\(S(\\hat{O})\\). Once the sample size is estimated, there are several methods for allocating samples per class [\[83]](references.html#ref-Olofsson2014). One option is proportional allocation, in which the sample size in each stratum is proportional to the stratum’s size in the population. In land use mapping, some classes often have small areas compared to the more frequent ones.
Using proportional allocation, rare classes will have small sample sizes, reducing the precision of their accuracy estimates. Another option is equal allocation, where all classes will have the same number of samples; however, equal allocation may fail to capture the natural variation of classes with large areas. As alternatives to proportional and equal allocation, [\[83]](references.html#ref-Olofsson2014) suggests ad\-hoc approaches where each class is assigned a minimum number of samples. He proposes three allocations in which 50, 75, and 100 sample units are allocated to the less common classes, and proportional allocation is used for the more frequent ones. These allocation methods should be considered suggestions, and users should be flexible to select alternative sampling designs. The allocation methods proposed by [\[83]](references.html#ref-Olofsson2014) are supported by function `[sits_sampling_design()](https://rdrr.io/pkg/sits/man/sits_sampling_design.html)`, which has the following parameters: * `cube`: a classified data cube; * `expected_ua`: a named vector with the expected user’s accuracies for each class; * `alloc_options`: fixed sample allocation for rare classes; * `std_err`: expected standard error of the accuracy estimate; * `rare_class_prop`: proportional area limit to determine which are the rare classes. In the case of Rondonia, the following sampling design was adopted. ``` ro_sampling_design <- [sits_sampling_design](https://rdrr.io/pkg/sits/man/sits_sampling_design.html)( cube = rondonia_2022_class, expected_ua = [c](https://rdrr.io/r/base/c.html)( "Clear_Cut_Bare_Soil" = 0.75, "Clear_Cut_Burned_Area" = 0.70, "Mountainside_Forest" = 0.70, "Forest" = 0.75, "Riparian_Forest" = 0.70, "Clear_Cut_Vegetation" = 0.70, "Water" = 0.70, "Seasonally_Flooded" = 0.70, "Wetland" = 0.70 ), alloc_options = [c](https://rdrr.io/r/base/c.html)(120, 100), std_err = 0.01, rare_class_prop = 0.1 ) # show sampling design ro_sampling_design ``` ``` #> prop expected_ua std_dev equal alloc_120 alloc_100 #> Clear_Cut_Bare_Soil 0.3841309 0.75 0.433 210 438 496 #> Clear_Cut_Burned_Area 0.004994874 0.7 0.458 210 120 100 #> Clear_Cut_Vegetation 0.009201698 0.7 0.458 210 120 100 #> Forest 0.538726 0.75 0.433 210 614 696 #> Mountainside_Forest 0.004555433 0.7 0.458 210 120 100 #> Riparian_Forest 0.005482552 0.7 0.458 210 120 100 #> Seasonally_Flooded 0.007677294 0.7 0.458 210 120 100 #> Water 0.007682599 0.7 0.458 210 120 100 #> Wetland 0.03754864 0.7 0.458 210 120 100 #> alloc_prop #> Clear_Cut_Bare_Soil 727 #> Clear_Cut_Burned_Area 9 #> Clear_Cut_Vegetation 17 #> Forest 1019 #> Mountainside_Forest 9 #> Riparian_Forest 10 #> Seasonally_Flooded 15 #> Water 15 #> Wetland 71 ``` A small numerical cross\-check of Cochran’s formula, using the values reported above, is sketched below.
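This sketch is not part of `sits`; it simply recomputes the overall sample size from the mapped proportions (the `prop` column above) and the expected user’s accuracies passed to `expected_ua`. The total should be close to the sum of the `alloc_prop` column.

```
# A minimal sketch: overall sample size given by Cochran's formula,
# using the mapped proportions and expected user's accuracies shown above.
w_i <- c(
  Clear_Cut_Bare_Soil   = 0.3841309,
  Clear_Cut_Burned_Area = 0.004994874,
  Clear_Cut_Vegetation  = 0.009201698,
  Forest                = 0.538726,
  Mountainside_Forest   = 0.004555433,
  Riparian_Forest       = 0.005482552,
  Seasonally_Flooded    = 0.007677294,
  Water                 = 0.007682599,
  Wetland               = 0.03754864
)
u_i <- c(0.75, 0.70, 0.70, 0.75, 0.70, 0.70, 0.70, 0.70, 0.70) # same class order
s_i <- sqrt(u_i * (1 - u_i))    # per-class standard deviation
std_err <- 0.01                 # expected standard error of the accuracy estimate
n <- (sum(w_i * s_i) / std_err)^2
round(n)                        # about 1892, close to the sum of the alloc_prop column
```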
The next step is to choose one of the options for sampling design to generate a set of points for stratified sampling. These points can then be used for accuracy assessment. This is achieved by function `[sits_stratified_sampling()](https://rdrr.io/pkg/sits/man/sits_stratified_sampling.html)`, which takes the following parameters: * `cube`: a classified data cube; * `sampling_design`: the output of function `[sits_sampling_design()](https://rdrr.io/pkg/sits/man/sits_sampling_design.html)`; * `alloc`: one of the sampling allocation options produced by `[sits_sampling_design()](https://rdrr.io/pkg/sits/man/sits_sampling_design.html)`; * `overhead`: additional proportion of number of samples per class (see below); * `multicores`: number of cores to run the function in parallel; * `shp_file`: name of shapefile to save results for later use (optional); * `progress`: show progress bar? In the example below, we chose the “alloc\_120” option from the sampling design to generate a set of stratified samples. The output of the function is an `sf` object with point locations (latitude and longitude) and the class assigned in the map. We can also generate a SHP file with the sample information. The script below shows how to use `[sits_stratified_sampling()](https://rdrr.io/pkg/sits/man/sits_stratified_sampling.html)` and also how to convert an `sf` object to a CSV file. ``` ro_samples_sf <- [sits_stratified_sampling](https://rdrr.io/pkg/sits/man/sits_stratified_sampling.html)( cube = rondonia_2022_class, sampling_design = ro_sampling_design, alloc = "alloc_120", multicores = 4, shp_file = "./tempdir/chp11/ro_samples.shp" ) ``` ``` #> Writing layer `ro_samples' to data source #> `./tempdir/chp11/ro_samples.shp' using driver `ESRI Shapefile' #> Writing 2261 features with 1 fields and geometry type Point. ``` ``` # save sf object as CSV file sf::[st_write](https://r-spatial.github.io/sf/reference/st_write.html)(ro_samples_sf, "./tempdir/chp11/ro_samples.csv", layer_options = "GEOMETRY=AS_XY", append = FALSE ) ``` ``` #> Writing layer `ro_samples' to data source #> `./tempdir/chp11/ro_samples.csv' using driver `CSV' #> options: GEOMETRY=AS_XY #> Writing 2261 features with 1 fields and geometry type Point. ``` Using the CSV file (or the optional shapefile), users can visualize the points in a standard GIS such as QGIS. For each point, they indicate the correct class. In this way, they obtain the reference labels used to build the confusion matrix for the accuracy assessment. The `overhead` parameter generates additional samples per class, so that users can discard border or doubtful pixels where the interpreter cannot be confident of the class assignment. Discarding points whose attribution is uncertain improves the quality of the assessment. After all sampling points are labelled in QGIS (or similar), users should produce a CSV file, a SHP file, a data frame, or an `sf` object, with at least three columns: `latitude`, `longitude`, and `label`. See the next section for an example of how to use this data set for accuracy assessment. Accuracy assessment of classified images ---------------------------------------- To measure the accuracy of classified images, `[sits_accuracy()](https://rdrr.io/pkg/sits/man/sits_accuracy.html)` uses an area\-weighted technique, following the best practices proposed by Olofsson et al. [\[86]](references.html#ref-Olofsson2013). The need for area\-weighted estimates arises because the land classes are not evenly distributed in space. In some applications (e.g., deforestation) where the interest lies in assessing how much of the image has changed, the area mapped as deforested is likely to be a small fraction of the total area.
If users disregard the relative importance of small areas where change is taking place, the overall accuracy estimate will be inflated and unrealistic. For this reason, Olofsson et al. argue that “mapped areas should be adjusted to eliminate bias attributable to map classification error, and these error\-adjusted area estimates should be accompanied by confidence intervals to quantify the sampling variability of the estimated area” [\[86]](references.html#ref-Olofsson2013). With this motivation, when measuring the accuracy of classified images, `[sits_accuracy()](https://rdrr.io/pkg/sits/man/sits_accuracy.html)` follows the procedure set by Olofsson et al. [\[86]](references.html#ref-Olofsson2013). Given a classified image and a validation file, the first step calculates the confusion matrix in the traditional way, i.e., by identifying the commission and omission errors. Then it calculates the unbiased estimator of the proportion of area in cell \\(i,j\\) of the error matrix \\\[ \\hat{p\_{i,j}} \= W\_i\\frac{n\_{i,j}}{n\_i}, \\] where the total area of the map is \\(A\_{tot}\\), the mapped area of class \\(i\\) is \\(A\_{m,i}\\), and the proportion of area mapped as class \\(i\\) is \\(W\_i \= {A\_{m,i}}/{A\_{tot}}\\). Adjusting for area size allows us to produce an unbiased estimate of the total area of class \\(j\\), defined as a stratified estimator \\\[ \\hat{A\_j} \= A\_{tot}\\sum\_{i\=1}^KW\_i\\frac{n\_{i,j}}{n\_i}. \\] This unbiased area estimator includes the effect of false negatives (omission error) while not considering the effect of false positives (commission error). The area estimates also allow for an unbiased estimate of the user’s and producer’s accuracy for each class. Following Olofsson et al. [\[86]](references.html#ref-Olofsson2013), we provide the 95% confidence interval for \\(\\hat{A\_j}\\). To produce the adjusted area estimates for classified maps, `[sits_accuracy()](https://rdrr.io/pkg/sits/man/sits_accuracy.html)` uses the following parameters: * `data`: a classified data cube; * `validation`: a CSV file, SHP file, GPKG file, `sf` object, or data frame with at least three columns (`latitude`, `longitude`, and `label`), holding a set of well\-selected labeled points obtained from the samples suggested by `sits_stratified_sampling()`. In the example below, we use a validation set produced by the researchers who created the Rondonia data set described above. We selected this data set both to serve as an example of `[sits_accuracy()](https://rdrr.io/pkg/sits/man/sits_accuracy.html)` and to illustrate the pitfalls of visual interpretation when validating image time series classification. In this case, the validation team used an image from a single date late in 2022 to assess the results. This choice is not adequate for assessing results of time series classification. In many cases, including the example used in this chapter, the training set includes transitional classes such as `Clear_Cut_Burned_Area` and `Clear_Cut_Vegetation`. The associated samples refer to events that occur at specific times of the year. An area may start the year as `Forest` land cover, only to be cut and burned during the peak of the dry season and later be completely cleared. The classifier will recognize the signs of burned area and will signal that such an event occurred. When using only a single date to evaluate the classification results, this correct estimate by the classifier will be missed by the interpreter.
For this reason, the results shown below are merely illustrative and do not reflect a correct accuracy assessment. The validation team used QGIS to produce a CSV file with validation data, which is then used to assess the area accuracy using the best practices recommended by [\[83]](references.html#ref-Olofsson2014). ``` # Get ground truth points valid_csv <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Rondonia-Class-2022-Mosaic/rondonia_samples_validation.csv", package = "sitsdata" ) # Calculate accuracy according to Olofsson's method area_acc <- [sits_accuracy](https://rdrr.io/pkg/sits/man/sits_accuracy.html)(rondonia_2022_class, validation = valid_csv, multicores = 4 ) # Print the area estimated accuracy area_acc ``` ``` #> Area Weighted Statistics #> Overall Accuracy = 0.84 #> #> Area-Weighted Users and Producers Accuracy #> User Producer #> Clear_Cut_Bare_Soil 0.82 1.00 #> Clear_Cut_Burned_Area 0.88 0.08 #> Mountainside_Forest 0.69 0.05 #> Forest 0.85 1.00 #> Riparian_Forest 0.66 0.58 #> Clear_Cut_Vegetation 0.82 0.24 #> Water 0.97 0.67 #> Seasonally_Flooded 0.86 0.68 #> Wetland 0.87 0.69 #> #> Mapped Area x Estimated Area (ha) #> Mapped Area (ha) Error-Adjusted Area (ha) #> Clear_Cut_Bare_Soil 9537617.8 7787913.8 #> Clear_Cut_Burned_Area 124018.1 1383784.0 #> Mountainside_Forest 113107.2 1665469.0 #> Forest 13376070.4 11377193.6 #> Riparian_Forest 136126.7 155704.6 #> Clear_Cut_Vegetation 228469.7 766171.1 #> Water 190751.9 275599.8 #> Seasonally_Flooded 190620.2 241225.8 #> Wetland 932298.3 1176018.6 #> Conf Interval (ha) #> Clear_Cut_Bare_Soil 321996.87 #> Clear_Cut_Burned_Area 278746.61 #> Mountainside_Forest 299925.62 #> Forest 333181.28 #> Riparian_Forest 60452.25 #> Clear_Cut_Vegetation 186476.04 #> Water 78786.79 #> Seasonally_Flooded 58098.50 #> Wetland 163726.86 ``` The confusion matrix is also available, as follows. ``` area_acc$error_matrix ``` ``` #> #> Clear_Cut_Bare_Soil Clear_Cut_Burned_Area #> Clear_Cut_Bare_Soil 415 65 #> Clear_Cut_Burned_Area 1 42 #> Mountainside_Forest 1 0 #> Forest 0 0 #> Riparian_Forest 4 0 #> Clear_Cut_Vegetation 1 17 #> Water 0 0 #> Seasonally_Flooded 0 0 #> Wetland 0 2 #> #> Mountainside_Forest Forest Riparian_Forest #> Clear_Cut_Bare_Soil 0 0 0 #> Clear_Cut_Burned_Area 0 0 0 #> Mountainside_Forest 22 9 0 #> Forest 95 680 3 #> Riparian_Forest 4 5 111 #> Clear_Cut_Vegetation 0 0 0 #> Water 0 0 3 #> Seasonally_Flooded 0 0 1 #> Wetland 0 0 1 #> #> Clear_Cut_Vegetation Water Seasonally_Flooded Wetland #> Clear_Cut_Bare_Soil 10 3 1 15 #> Clear_Cut_Burned_Area 1 0 1 3 #> Mountainside_Forest 0 0 0 0 #> Forest 19 2 0 3 #> Riparian_Forest 43 0 0 0 #> Clear_Cut_Vegetation 82 0 0 0 #> Water 0 121 1 0 #> Seasonally_Flooded 0 1 118 18 #> Wetland 4 0 6 88 ``` These results show the challenges of conducting validation assessments with image time series. While stable classes like `Forest` and `Clear_Cut_Bare_Soil` exhibit high user’s accuracy (UA) and producer’s accuracy (PA), the transitional classes (`Clear_Cut_Burned_Area` and `Clear_Cut_Vegetation`) have low PA. This discrepancy is not a true reflection of classification accuracy, but rather a result of inadequate visual interpretation practices. As mentioned earlier, the visual interpretation for quality assessment utilised only a single date, a method traditionally used for single images, but ineffective for image time series. 
A detailed examination of the confusion matrix reveals a clear distinction between natural areas (e.g., `Forest` and `Riparian_Forest`) and areas associated with deforestation (e.g., `Clear_Cut_Bare_Soil` and `Clear_Cut_Burned_Area`). The low producer’s accuracy values for transitional classes `Clear_Cut_Burned_Area` and `Clear_Cut_Vegetation` are artefacts of the validation procedure. Validation relied on only one date near the end of the calendar year, causing transitional classes to be overlooked. This chapter provides an example of the recommended statistical methods for designing stratified samples for accuracy assessment. However, these sampling methods depend on perfect or near\-perfect validation by end\-users. Ensuring best practices in accuracy assessment involves a well\-designed sample set and a sample interpretation that aligns with the classifier’s training set.
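As a cross\-check of the area estimators presented earlier in this section, the sketch below recomputes the error\-adjusted area of the reference class `Clear_Cut_Burned_Area` directly from the printed error matrix and mapped areas. It is not part of `sits`; the row totals and column counts are transcribed from the output above.

```
# A minimal sketch of the stratified estimator A_hat_j = A_tot * sum_i W_i * n_ij / n_i
# Mapped area per map class (ha), from the sits_accuracy() output above
mapped_area <- c(
  Clear_Cut_Bare_Soil   = 9537617.8,
  Clear_Cut_Burned_Area = 124018.1,
  Mountainside_Forest   = 113107.2,
  Forest                = 13376070.4,
  Riparian_Forest       = 136126.7,
  Clear_Cut_Vegetation  = 228469.7,
  Water                 = 190751.9,
  Seasonally_Flooded    = 190620.2,
  Wetland               = 932298.3
)
# row totals of the error matrix (samples per map class, same order as above)
n_i  <- c(509, 48, 32, 802, 167, 100, 125, 138, 101)
# samples of each map class that the interpreters labeled Clear_Cut_Burned_Area
n_ij <- c(65, 42, 0, 0, 0, 17, 0, 0, 2)

a_tot <- sum(mapped_area)       # total mapped area (ha)
w_i   <- mapped_area / a_tot    # proportion of area mapped as each class
a_tot * sum(w_i * n_ij / n_i)   # about 1383784 ha, the error-adjusted area printed above
```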
Machine Learning
e-sensing.github.io
https://e-sensing.github.io/sitsbook/uncertainty-and-active-learning.html
Uncertainty and active learning =============================== Land classification tasks have unique characteristics that differ from other machine learning domains, such as image recognition and natural language processing. The main challenge for land classification is to describe the diversity of the planet’s landscapes in a handful of labels. However, the diversity of the world’s ecosystems makes all classification systems biased approximations of reality. As stated by Murphy: “The gradation of properties in the world means that our smallish number of categories will never map perfectly onto all objects” [\[87]](references.html#ref-Murphy2002). For this reason, `sits` provides tools to improve classifications using a process called active learning. Active learning is an iterative process of sample selection, labeling, and model retraining. The following steps provide a general overview of how to use active learning: 1. Collect initial training samples: Start by collecting a small set of representative training samples that cover the range of land classes of interest. 2. Train a machine learning model: Use the initial training samples to train a machine learning model to classify remote sensing data. 3. Classify the data cube using the model. 4. Identify areas of uncertainty. 5. Select samples for re\-labeling: Select a set of unlabeled samples that the model is most uncertain about, i.e., those that the model is least confident in classifying. 6. Label the selected samples: The user labels the selected samples, adding them to the training set. 7. Retrain the model: The model is retrained using the newly labeled samples, and the process repeats itself, starting at step 2\. 8. Stop when the classification accuracy is satisfactory: The iterative process continues until the classification accuracy reaches a satisfactory level. In traditional classification methods, experts provide a set of training samples and use a machine learning algorithm to produce a map. By contrast, the active learning approach puts the human in the loop [\[88]](references.html#ref-Monarch2021). At each iteration, an unlabeled set of samples is presented to the user, who assigns classes to them and includes them in the training set [\[89]](references.html#ref-Crawford2013). The process is repeated until the expert is satisfied with the result, as shown in Figure [99](uncertainty-and-active-learning.html#fig:al). Figure 99: Active learning approach (Source: Crawford et al. (2013\). Reproduction under fair use doctrine). Active learning aims to reduce bias and errors in sample selection and, as such, improve the accuracy of the result. At each iteration, experts are asked to review pixels where the machine learning classifier indicates a high uncertainty value. Sources of classification uncertainty include missing classes and mislabeled samples. In `sits`, active learning is supported by functions `[sits_uncertainty()](https://rdrr.io/pkg/sits/man/sits_uncertainty.html)` and `[sits_uncertainty_sampling()](https://rdrr.io/pkg/sits/man/sits_uncertainty_sampling.html)`. Measuring uncertainty --------------------- Uncertainty refers to the degree of doubt or ambiguity in the accuracy of the classification results. Several sources of uncertainty can arise during land classification using satellite data, including: 1.
Classification errors: These can occur when the classification algorithm misinterprets the spectral, spatial, or temporal characteristics of the input data, leading to the misclassification of land classes. 2. Ambiguity in the classification schema: The definition of land classes can be ambiguous or subjective, leading to inconsistencies in the classification results. 3. Variability in the landscape: Natural and human\-induced variations in the landscape can make it difficult to accurately classify land areas. 4. Limitations of the data: The quality and quantity of input data can influence the accuracy of the classification results. Quantifying uncertainty in land classification is important for ensuring that the results are reliable and valid for decision\-making. Various methods, such as confusion and error matrices, can be used to estimate and visualize the level of uncertainty in classification results. Additionally, incorporating uncertainty estimates into decision\-making processes can help to identify regions where further investigation or data collection is needed. The function `[sits_uncertainty()](https://rdrr.io/pkg/sits/man/sits_uncertainty.html)` calculates the uncertainty cube based on the probabilities produced by the classifier. It takes a probability cube as input. The uncertainty measure is relevant in the context of active learning. It helps to increase the quantity and quality of training samples by providing information about the model’s confidence. The supported types of uncertainty are ‘entropy’, ‘least’, ‘margin’, and ‘ratio’. Least confidence sampling is the difference between no uncertainty (100% confidence) and the probability of the most likely class, normalized by the number of classes. Let \\(P\_1(i)\\) be the highest class probability for pixel \\(i\\). Then least confidence sampling is expressed as \\\[ \\theta\_{LC} \= (1 \- P\_1(i)) \* \\frac{n}{n\-1}. \\] The margin of confidence sampling is based on the difference between the two most confident predictions, expressed from 0% (no uncertainty) to 100% (maximum uncertainty). Let \\(P\_1(i)\\) and \\(P\_2(i)\\) be the two highest class probabilities for pixel \\(i\\). Then, the margin of confidence is expressed as \\\[ \\theta\_{MC} \= 1 \- (P\_1(i) \- P\_2(i)). \\] The ratio of confidence is the ratio between the two most confident predictions, expressed in a range from 0% (no uncertainty) to 100% (maximum uncertainty). Let \\(P\_1(i)\\) and \\(P\_2(i)\\) be the two highest class probabilities for pixel \\(i\\). Then, the ratio of confidence is expressed as \\\[ \\theta\_{RC} \= \\frac{P\_2(i)}{P\_1(i)}. \\] Entropy is a measure of uncertainty used by Claude Shannon in his classic work “A Mathematical Theory of Communication”. It measures the unpredictability of the class assignment for a pixel: the more the probability mass is concentrated in a single class, the lower the entropy. Let \\(P\_k(i)\\) be the probability of class \\(k\\) for pixel \\(i\\). The entropy, normalized by the number of classes \\(n\\), is calculated as \\\[ \\theta\_{E} \= \\frac{\-\\sum\_{k\=1}^n P\_k(i) \* log\_2(P\_k(i))}{log\_2{n}}. \\] The parameters for `[sits_uncertainty()](https://rdrr.io/pkg/sits/man/sits_uncertainty.html)` are: `cube`, a probability data cube; `type`, the uncertainty measure (default is `least`). As with other processing functions, `multicores` is the number of cores to run the function, `memsize` is the maximum overall memory (in GB) to run the function, `output_dir` is the output directory for image files, and `version` is the result version.
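As an illustration of how these measures behave, the short sketch below evaluates all four of them for a single pixel with an assumed probability vector. It is independent of `sits`, and the numbers are purely illustrative.

```
# Illustrative only: the four uncertainty measures for one pixel with n = 4 classes
p <- c(0.50, 0.30, 0.15, 0.05)                 # assumed class probabilities
n <- length(p)
p_sorted <- sort(p, decreasing = TRUE)
least   <- (1 - p_sorted[1]) * n / (n - 1)     # least confidence
margin  <- 1 - (p_sorted[1] - p_sorted[2])     # margin of confidence
ratio   <- p_sorted[2] / p_sorted[1]           # ratio of confidence
entropy <- -sum(p * log2(p)) / log2(n)         # normalized entropy
c(least = least, margin = margin, ratio = ratio, entropy = entropy)
```

A pixel whose highest probability clearly dominates the others yields values close to zero for all four measures, which is why low values indicate confident classifications.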
Using uncertainty measures for active learning ---------------------------------------------- The following case study shows how uncertainty measures can be used in the context of active learning. The study area is a subset of one Sentinel\-2 tile in the state of Rondonia, Brazil. The work aims to detect deforestation in Brazilian Amazonia. The study area is close to the Samuel Hydroelectric Dam, located on the Madeira River in the Brazilian state of Rondônia. Building the dam led to a loss of 56,000 ha of native forest. The dam’s construction caused the displacement of several indigenous communities and traditional populations, leading to social and cultural disruption. Additionally, flooding large forest areas resulted in the loss of habitats and biodiversity, including several endangered species. The dam has altered the natural flow of the Madeira River, leading to changes in water quality and temperature and affecting the aquatic life that depends on the river. The changes in river flow have also impacted the navigation and transportation activities of the local communities [\[90]](references.html#ref-Fearnside2005). The first step is to produce a regular data cube for the chosen area from 2020\-06\-01 to 2021\-09\-01\. To reduce processing time and storage, we use only three bands (B02, B8A, and B11\) plus the cloud band, and take a small area inside the tile. After obtaining a regular cube, we plot the study area on three dates during the temporal interval of the data cube. The first image is taken at the beginning of the dry season, on 2020\-07\-04, when the inundation area of the dam was covered by shallow water. ``` # Select a S2 tile s2_cube_ro <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "AWS", collection = "SENTINEL-S2-L2A-COGS", tiles = "20LMR", bands = [c](https://rdrr.io/r/base/c.html)("B02", "B8A", "B11", "SCL"), start_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2020-06-01"), end_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2021-09-01"), progress = FALSE ) # Select a small area inside the tile roi <- [c](https://rdrr.io/r/base/c.html)( lon_max = -63.25790, lon_min = -63.6078, lat_max = -8.72290, lat_min = -8.95630 ) # Regularize the small area cube s2_reg_cube_ro <- [sits_regularize](https://rdrr.io/pkg/sits/man/sits_regularize.html)( cube = s2_cube_ro, output_dir = "./tempdir/chp12/", res = 20, roi = roi, period = "P16D", multicores = 4, progress = FALSE ) ``` ``` [plot](https://rdrr.io/r/graphics/plot.default.html)(s2_reg_cube_ro, red = "B11", green = "B8A", blue = "B02", date = "2020-07-04" ) ``` Figure 100: Area in Rondonia near Samuel dam (source: authors). The second image is from 2020\-11\-09 and shows that most of the inundation area dries during the dry season. In early November 2020, after the end of the dry season, the inundation area is dry and has a response similar to bare soil and burned areas. The Madeira River can be seen running through the dried inundation area. ``` [plot](https://rdrr.io/r/graphics/plot.default.html)(s2_reg_cube_ro, red = "B11", green = "B8A", blue = "B02", date = "2020-11-09" ) ``` Figure 101: Area in Rondonia near Samuel dam in November 2020 (source: authors). The third image is from 2021\-08\-08\. In early August 2021, after the wet season, the inundation area is again covered by a shallow water layer. Several burned and clear\-cut areas can also be seen in the August 2021 image compared with the July 2020 one.
Given the contrast between the wet and dry seasons, correct land classification of this area is hard. ``` [plot](https://rdrr.io/r/graphics/plot.default.html)(s2_reg_cube_ro, red = "B11", green = "B8A", blue = "B02", date = "2021-08-08") ``` Figure 102: Area in Rondonia near Samuel dam in August 2021 (source: authors). The next step is to classify this study area using a training set with 480 time series collected over the state of Rondonia (Brazil) for detecting deforestation. The training set uses four classes (`Burned_Area`, `Forest`, `Highly_Degraded`, and `Cleared_Area`). The cube is classified using a Random Forest model, post\-processed by Bayesian smoothing, and then labeled. ``` [library](https://rdrr.io/r/base/library.html)([sitsdata](https://github.com/e-sensing/sitsdata/)) # Load the training set [data](https://rdrr.io/r/utils/data.html)("samples_prodes_4classes") # Select the same three bands used in the data cube samples_4classes_3bands <- [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)( data = samples_prodes_4classes, bands = [c](https://rdrr.io/r/base/c.html)("B02", "B8A", "B11") ) # Train a random forest model rfor_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples = samples_4classes_3bands, ml_method = [sits_rfor](https://rdrr.io/pkg/sits/man/sits_rfor.html)() ) # Classify the small area cube s2_cube_probs <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( data = s2_reg_cube_ro, ml_model = rfor_model, output_dir = "./tempdir/chp12/", memsize = 15, multicores = 5 ) # Post-process the probability cube s2_cube_bayes <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)( cube = s2_cube_probs, output_dir = "./tempdir/chp12/", memsize = 16, multicores = 4 ) # Label the post-processed probability cube s2_cube_label <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)( cube = s2_cube_bayes, output_dir = "./tempdir/chp12/", memsize = 16, multicores = 4 ) [plot](https://rdrr.io/r/graphics/plot.default.html)(s2_cube_label) ``` Figure 103: Classified map for area in Rondonia near Samuel dam (source: authors). The resulting map correctly identifies the forest area and the deforestation. However, it wrongly classifies the area covered by the Samuel hydroelectric dam. The reason is the lack of samples for classes related to surface water and wetlands. To improve the classification, we need to improve our samples. To do that, the first step is to calculate the uncertainty of the classification. ``` # Calculate the uncertainty cube s2_cube_uncert <- [sits_uncertainty](https://rdrr.io/pkg/sits/man/sits_uncertainty.html)( cube = s2_cube_bayes, type = "margin", output_dir = "./tempdir/chp12/", memsize = 16, multicores = 4 ) [plot](https://rdrr.io/r/graphics/plot.default.html)(s2_cube_uncert) ``` Figure 104: Uncertainty map for classification in Rondonia near Samuel dam (source: authors). As expected, the places of highest uncertainty are those covered by surface water or associated with wetlands. These places are likely to be misclassified. For this reason, `sits` provides `[sits_uncertainty_sampling()](https://rdrr.io/pkg/sits/man/sits_uncertainty_sampling.html)`, which takes the uncertainty cube as its input and produces a tibble with locations in WGS84 with high uncertainty.
The function has three parameters: `n`, the number of uncertain points to be included; `min_uncert`, the minimum value of uncertainty for pixels to be included in the list; and `sampling_window`, which defines a window where only one sample will be selected. The aim of `sampling_window` is to improve the spatial distribution of the new samples by preventing points in the same neighborhood from being included. After running the function, we can use `[sits_view()](https://rdrr.io/pkg/sits/man/sits_view.html)` to visualize the location of the samples. ``` # Find samples with high uncertainty new_samples <- [sits_uncertainty_sampling](https://rdrr.io/pkg/sits/man/sits_uncertainty_sampling.html)( uncert_cube = s2_cube_uncert, n = 20, min_uncert = 0.5, sampling_window = 10 ) # View the location of the samples [sits_view](https://rdrr.io/pkg/sits/man/sits_view.html)(new_samples) ``` Figure 105: Location of uncertain pixels for classification in Rondonia near Samuel dam (source: authors). The visualization shows that the samples are located in the areas covered by the Samuel dam. Thus, we designate these samples as `Wetland`. A more detailed evaluation, which is recommended in practice, requires analysing these samples with exploration software such as QGIS and individually labeling each sample. In our case, we will take a direct approach for illustration purposes. ``` # Label the new samples new_samples$label <- "Wetland" # Obtain the time series from the regularized cube new_samples_ts <- [sits_get_data](https://rdrr.io/pkg/sits/man/sits_get_data.html)( cube = s2_reg_cube_ro, samples = new_samples ) # Join the new samples with the original ones with 4 classes samples_round_2 <- dplyr::[bind_rows](https://dplyr.tidyverse.org/reference/bind_rows.html)( samples_4classes_3bands, new_samples_ts ) # Train a RF model with the new sample set rfor_model_v2 <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples = samples_round_2, ml_method = [sits_rfor](https://rdrr.io/pkg/sits/man/sits_rfor.html)() ) # Classify the small area cube s2_cube_probs_v2 <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( data = s2_reg_cube_ro, ml_model = rfor_model_v2, output_dir = "./tempdir/chp12/", version = "v2", memsize = 16, multicores = 4 ) # Post-process the probability cube s2_cube_bayes_v2 <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)( cube = s2_cube_probs_v2, output_dir = "./tempdir/chp12/", version = "v2", memsize = 16, multicores = 4 ) # Label the post-processed probability cube s2_cube_label_v2 <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)( cube = s2_cube_bayes_v2, output_dir = "./tempdir/chp12/", version = "v2", memsize = 16, multicores = 4 ) # Plot the second version of the classified cube [plot](https://rdrr.io/r/graphics/plot.default.html)(s2_cube_label_v2) ``` Figure 106: New land classification in Rondonia near Samuel dam (source: authors). The results show a significant quality gain over the earlier classification. There are still some areas of confusion in the exposed soils inside the inundation area, some of which have been classified as burned areas. It is also useful to show the uncertainty map associated with the second model.
Measuring uncertainty
---------------------

Uncertainty refers to the degree of doubt or ambiguity in the accuracy of the classification results. Several sources of uncertainty can arise during land classification using satellite data, including:

1. Classification errors: These can occur when the classification algorithm misinterprets the spectral, spatial, or temporal characteristics of the input data, leading to the misclassification of land classes.
2. Ambiguity in the classification scheme: The definition of land classes can be ambiguous or subjective, leading to inconsistencies in the classification results.
3. Variability in the landscape: Natural and human\-induced variations in the landscape can make it difficult to classify land areas accurately.
4. Limitations of the data: The quality and quantity of the input data can influence the accuracy of the classification results.

Quantifying uncertainty in land classification is important for ensuring that the results are reliable and valid for decision\-making. Various methods, such as confusion and error matrices, can be used to estimate and visualize the level of uncertainty in classification results. Additionally, incorporating uncertainty estimates into decision\-making processes can help to identify regions where further investigation or data collection is needed. The function `[sits_uncertainty()](https://rdrr.io/pkg/sits/man/sits_uncertainty.html)` calculates an uncertainty cube based on the probabilities produced by the classifier; it takes a probability cube as input. The uncertainty measure is relevant in the context of active learning, since it helps to increase the quantity and quality of training samples by providing information about the model’s confidence. The supported types of uncertainty are ‘entropy’, ‘least’, ‘margin’, and ‘ratio’.

Least confidence sampling is the difference between no uncertainty (100% confidence) and the probability of the most likely class, normalized by the number of classes \\(n\\). Let \\(P\_1(i)\\) be the highest class probability for pixel \\(i\\). Then least confidence sampling is expressed as \\\[ \\theta\_{LC} \= (1 \- P\_1(i)) \* \\frac{n}{n\-1}. \\]

The margin of confidence sampling is based on the difference between the two most confident predictions, expressed from 0% (no uncertainty) to 100% (maximum uncertainty). Let \\(P\_1(i)\\) and \\(P\_2(i)\\) be the two highest class probabilities for pixel \\(i\\). Then, the margin of confidence is expressed as \\\[ \\theta\_{MC} \= 1 \- (P\_1(i) \- P\_2(i)). \\]

The ratio of confidence measures the ratio between the two most confident predictions, also expressed in a range from 0% (no uncertainty) to 100% (maximum uncertainty). With \\(P\_1(i)\\) and \\(P\_2(i)\\) as above, the ratio of confidence is expressed as \\\[ \\theta\_{RC} \= \\frac{P\_2(i)}{P\_1(i)}. \\]

Entropy is a measure of uncertainty introduced by Claude Shannon in his classic work “A Mathematical Theory of Communication”. It is related to the amount of variability in the probabilities associated with a pixel: the lower the variability, the lower the entropy. Let \\(P\_k(i)\\) be the probability of class \\(k\\) for pixel \\(i\\). The entropy, normalized by the number of classes \\(n\\), is calculated as \\\[ \\theta\_{E} \= \\frac{\-\\sum\_{k\=1}^{n} P\_k(i) \\log\_2 P\_k(i)}{\\log\_2 n}. \\]

The parameters for `[sits_uncertainty()](https://rdrr.io/pkg/sits/man/sits_uncertainty.html)` are `cube`, a probability data cube, and `type`, the uncertainty measure (default is `least`). As with other processing functions, `multicores` is the number of cores to run the function, `memsize` is the maximum overall memory (in GB) to run the function, `output_dir` is the output directory for image files, and `version` is the result version.
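To make these measures concrete, the short sketch below computes all four of them for a single pixel in plain R. This is a toy illustration only: the probability vector `probs` is made up and the code does not use `sits` internals.

```
# Toy illustration of the four uncertainty measures for one pixel
# (plain R, not part of sits; the probability vector is made up)
probs <- c(Forest = 0.55, Cleared = 0.30, Burned = 0.10, Wetland = 0.05)
n <- length(probs)
p_sorted <- sort(probs, decreasing = TRUE)
p1 <- p_sorted[[1]] # highest class probability
p2 <- p_sorted[[2]] # second highest class probability
# Least confidence, normalized by the number of classes
theta_lc <- (1 - p1) * n / (n - 1)
# Margin of confidence
theta_mc <- 1 - (p1 - p2)
# Ratio of confidence
theta_rc <- p2 / p1
# Normalized entropy
theta_e <- -sum(probs * log2(probs)) / log2(n)
round(c(least = theta_lc, margin = theta_mc, ratio = theta_rc, entropy = theta_e), 3)
```

In all four cases, higher values indicate pixels whose classification is less certain and which are therefore better candidates for new training samples.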
Using uncertainty measures for active learning
----------------------------------------------

The following case study shows how uncertainty measures can be used in the context of active learning. The study area is a subset of one Sentinel\-2 tile in the state of Rondônia, Brazil, and the work aims to detect deforestation in Brazilian Amazonia. The area is close to the Samuel Hydroelectric Dam, located on the Madeira River. Building the dam led to a loss of 56,000 ha of native forest. The dam’s construction caused the displacement of several indigenous communities and traditional populations, leading to social and cultural disruption. Additionally, flooding large forest areas resulted in losing habitats and biodiversity, including several endangered species. The dam has altered the natural flow of the Madeira River, leading to changes in water quality and temperature and affecting the aquatic life that depends on the river. The changes in river flow have also impacted the navigation and transportation activities of the local communities [\[90]](references.html#ref-Fearnside2005).

The first step is to produce a regular data cube for the chosen area from 2020\-06\-01 to 2021\-09\-01. To reduce processing time and storage, we use only three bands (B02, B8A, and B11) plus the cloud band, and take a small area inside the tile. After obtaining a regular cube, we plot the study area on three dates within the temporal interval of the data cube. The first image is taken at the beginning of the dry season, on 2020\-07\-04, when the inundation area of the dam was covered by shallow water.
```
# Select an S2 tile
s2_cube_ro <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
    source = "AWS",
    collection = "SENTINEL-S2-L2A-COGS",
    tiles = "20LMR",
    bands = [c](https://rdrr.io/r/base/c.html)("B02", "B8A", "B11", "SCL"),
    start_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2020-06-01"),
    end_date = [as.Date](https://rdrr.io/r/base/as.Date.html)("2021-09-01"),
    progress = FALSE
)
# Select a small area inside the tile
roi <- [c](https://rdrr.io/r/base/c.html)(
    lon_max = -63.25790, lon_min = -63.6078,
    lat_max = -8.72290, lat_min = -8.95630
)
# Regularize the small area cube
s2_reg_cube_ro <- [sits_regularize](https://rdrr.io/pkg/sits/man/sits_regularize.html)(
    cube = s2_cube_ro,
    output_dir = "./tempdir/chp12/",
    res = 20,
    roi = roi,
    period = "P16D",
    multicores = 4,
    progress = FALSE
)
```

```
[plot](https://rdrr.io/r/graphics/plot.default.html)(s2_reg_cube_ro,
    red = "B11",
    green = "B8A",
    blue = "B02",
    date = "2020-07-04"
)
```

Figure 100: Area in Rondonia near Samuel dam (source: authors).

The second image is from 2020\-11\-09 and shows that most of the inundation area dries during the dry season. In early November 2020, after the end of the dry season, the inundation area is dry and has a response similar to bare soil and burned areas. The Madeira River can be seen running through the dried inundation area.

```
[plot](https://rdrr.io/r/graphics/plot.default.html)(s2_reg_cube_ro,
    red = "B11",
    green = "B8A",
    blue = "B02",
    date = "2020-11-09"
)
```

Figure 101: Area in Rondonia near Samuel dam in November 2020 (source: authors).

The third image is from 2021\-08\-08. In early August 2021, after the wet season, the inundation area is again covered by a shallow water layer. Several burned and clear\-cut areas can also be seen in the August 2021 image compared with the July 2020 one. Given the contrast between the wet and dry seasons, correct land classification of this area is hard.

```
[plot](https://rdrr.io/r/graphics/plot.default.html)(s2_reg_cube_ro, red = "B11", green = "B8A", blue = "B02", date = "2021-08-08")
```

Figure 102: Area in Rondonia near Samuel dam in August 2021 (source: authors).

The next step is to classify this study area using a training set with 480 time series collected over the state of Rondonia (Brazil) for detecting deforestation. The training set uses four classes (`Burned_Area`, `Forest`, `Highly_Degraded`, and `Cleared_Area`). The cube is classified using a Random Forest model, post\-processed by Bayesian smoothing, and then labeled.
``` [library](https://rdrr.io/r/base/library.html)([sitsdata](https://github.com/e-sensing/sitsdata/)) # Load the training set [data](https://rdrr.io/r/utils/data.html)("samples_prodes_4classes") # Select the same three bands used in the data cube samples_4classes_3bands <- [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)( data = samples_prodes_4classes, bands = [c](https://rdrr.io/r/base/c.html)("B02", "B8A", "B11") ) # Train a random forest model rfor_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples = samples_4classes_3bands, ml_method = [sits_rfor](https://rdrr.io/pkg/sits/man/sits_rfor.html)() ) # Classify the small area cube s2_cube_probs <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( data = s2_reg_cube_ro, ml_model = rfor_model, output_dir = "./tempdir/chp12/", memsize = 15, multicores = 5 ) # Post-process the probability cube s2_cube_bayes <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)( cube = s2_cube_probs, output_dir = "./tempdir/chp12/", memsize = 16, multicores = 4 ) # Label the post-processed probability cube s2_cube_label <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)( cube = s2_cube_bayes, output_dir = "./tempdir/chp12/", memsize = 16, multicores = 4 ) [plot](https://rdrr.io/r/graphics/plot.default.html)(s2_cube_label) ``` Figure 103: Classified map for area in Rondonia near Samuel dam (source: authors). The resulting map correctly identifies the forest area and the deforestation. However, it wrongly classifies the area covered by the Samuel hydroelectric dam. The reason is the lack of samples for classes related to surface water and wetlands. To improve the classification, we need to improve our samples. To do that, the first step is to calculate the uncertainty of the classification. ``` # Calculate the uncertainty cube s2_cube_uncert <- [sits_uncertainty](https://rdrr.io/pkg/sits/man/sits_uncertainty.html)( cube = s2_cube_bayes, type = "margin", output_dir = "./tempdir/chp12/", memsize = 16, multicores = 4 ) [plot](https://rdrr.io/r/graphics/plot.default.html)(s2_cube_uncert) ``` Figure 104: Uncertainty map for classification in Rondonia near Samuel dam (source: authors). As expected, the places of highest uncertainty are those covered by surface water or associated with wetlands. These places are likely to be misclassified. For this reason, `sits` provides `[sits_uncertainty_sampling()](https://rdrr.io/pkg/sits/man/sits_uncertainty_sampling.html)`, which takes the uncertainty cube as its input and produces a tibble with locations in WGS84 with high uncertainty. The function has three parameters: `n`, the number of uncertain points to be included; `min_uncert`, the minimum value of uncertainty for pixels to be included in the list; and `sampling_window`, which defines a window where only one sample will be selected. The aim of `sampling_window` is to improve the spatial distribution of the new samples by avoiding points in the same neighborhood to be included. After running the function, we can use `[sits_view()](https://rdrr.io/pkg/sits/man/sits_view.html)` to visualize the location of the samples. 
```
# Find samples with high uncertainty
new_samples <- [sits_uncertainty_sampling](https://rdrr.io/pkg/sits/man/sits_uncertainty_sampling.html)(
    uncert_cube = s2_cube_uncert,
    n = 20,
    min_uncert = 0.5,
    sampling_window = 10
)
# View the location of the samples
[sits_view](https://rdrr.io/pkg/sits/man/sits_view.html)(new_samples)
```

Figure 105: Location of uncertain pixels for classification in Rondonia near Samuel dam (source: authors).

The visualization shows that the samples are located in the areas covered by the Samuel dam. Thus, we label these samples as `Wetland`. A more detailed evaluation, which is recommended in practice, requires analysing these samples with GIS software such as QGIS and individually labeling each sample. In our case, we will take a direct approach for illustration purposes.

```
# Label the new samples
new_samples$label <- "Wetland"
# Obtain the time series from the regularized cube
new_samples_ts <- [sits_get_data](https://rdrr.io/pkg/sits/man/sits_get_data.html)(
    cube = s2_reg_cube_ro,
    samples = new_samples
)
# Join the new samples with the original ones with 4 classes
samples_round_2 <- dplyr::[bind_rows](https://dplyr.tidyverse.org/reference/bind_rows.html)(
    samples_4classes_3bands,
    new_samples_ts
)
# Train a RF model with the new sample set
rfor_model_v2 <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(
    samples = samples_round_2,
    ml_method = [sits_rfor](https://rdrr.io/pkg/sits/man/sits_rfor.html)()
)
# Classify the small area cube
s2_cube_probs_v2 <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(
    data = s2_reg_cube_ro,
    ml_model = rfor_model_v2,
    output_dir = "./tempdir/chp12/",
    version = "v2",
    memsize = 16,
    multicores = 4
)
# Post-process the probability cube
s2_cube_bayes_v2 <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)(
    cube = s2_cube_probs_v2,
    output_dir = "./tempdir/chp12/",
    version = "v2",
    memsize = 16,
    multicores = 4
)
# Label the post-processed probability cube
s2_cube_label_v2 <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)(
    cube = s2_cube_bayes_v2,
    output_dir = "./tempdir/chp12/",
    version = "v2",
    memsize = 16,
    multicores = 4
)
# Plot the second version of the classified cube
[plot](https://rdrr.io/r/graphics/plot.default.html)(s2_cube_label_v2)
```

Figure 106: New land classification in Rondonia near Samuel dam (source: authors).

The results show a significant quality gain over the earlier classification. There are still some areas of confusion in the exposed soils inside the inundation area, some of which have been classified as burned areas. It is also useful to show the uncertainty map associated with the second model.

```
# Calculate the uncertainty cube
s2_cube_uncert_v2 <- [sits_uncertainty](https://rdrr.io/pkg/sits/man/sits_uncertainty.html)(
    cube = s2_cube_bayes_v2,
    type = "margin",
    output_dir = "./tempdir/chp12/",
    version = "v2",
    memsize = 16,
    multicores = 4
)
[plot](https://rdrr.io/r/graphics/plot.default.html)(s2_cube_uncert_v2)
```

Figure 107: Uncertainty map for classification in Rondonia near Samuel dam \- improved model (source: authors).

As the new uncertainty map shows, there is a significant improvement in the quality of the classification. The remaining areas of high uncertainty are those affected by the contrast between the wet and dry seasons close to the inundation area.
These areas are low\-lying places that are covered by water in some periods of the year and are bare soil in others, depending on the intensity of the rainy season. To further improve the classification quality, we could obtain new samples of those uncertain areas, label them, and add them to the training set. In general, as this Chapter shows, combining uncertainty measurements with active learning is a recommended practice for improving classification results.
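The pattern above can be repeated for further rounds of active learning. As a pointer, the sketch below shows how a next round could start from the improved uncertainty cube, reusing the same functions and parameter values shown earlier; the selected points would again have to be inspected and labeled before retraining.

```
# Sketch of a further active learning round (not run in this chapter)
# Select new high-uncertainty samples from the improved classification
new_samples_v2 <- sits_uncertainty_sampling(
    uncert_cube = s2_cube_uncert_v2,
    n = 20,
    min_uncert = 0.5,
    sampling_window = 10
)
# Inspect the candidate locations before assigning labels
sits_view(new_samples_v2)
```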
Machine Learning
e-sensing.github.io
https://e-sensing.github.io/sitsbook/ensemble-prediction-from-multiple-models.html
Ensemble prediction from multiple models
========================================

Ensemble prediction is a powerful technique for combining predictions from multiple models to produce more accurate and robust results. Errors of individual models cancel out or are reduced when combined with the predictions of other models. As a result, ensemble predictions can lead to better overall accuracy and reduce the risk of overfitting. This can be especially useful when working with complex or uncertain data. By combining the predictions of multiple models, users can also identify which features or factors are most important for making accurate predictions. When using ensemble methods, choosing diverse models with different sources of error is essential to ensure that the ensemble predictions are more precise and robust.

The `sits` package provides `[sits_combine_predictions()](https://rdrr.io/pkg/sits/man/sits_combine_predictions.html)` to estimate ensemble predictions using probability cubes produced by `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)` and optionally post\-processed with `[sits_smooth()](https://rdrr.io/pkg/sits/man/sits_smooth.html)`. There are two ways to make ensemble predictions from multiple models:

* Averaging: In this approach, the predictions of each model are averaged to produce the final prediction. This method works well when the models have similar accuracy and errors.
* Uncertainty: Predictions from different models are compared in terms of their uncertainties on a pixel\-by\-pixel basis; predictions with lower uncertainty are chosen as being more likely to be valid.

In what follows, we will use the same sample dataset and data cube used in Chapter [Image classification in data cubes](https://e-sensing.github.io/sitsbook/image-classification-in-data-cubes.html) to illustrate how to produce an ensemble prediction. The dataset `samples_deforestation_rondonia` consists of 6007 samples collected from Sentinel\-2 images covering the state of Rondonia. Each time series contains values from all Sentinel\-2/2A spectral bands for year 2022 in 16\-day intervals. The data cube is a subset of the Sentinel\-2 tile “20LMR” which contains all spectral bands, plus the spectral indices “NDVI”, “EVI”, and “NBR”, for the year 2022. The first step is to recover the data cube, which is available in the `sitsdata` package, and to select only the spectral bands.

```
# Files are available in a local directory
data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Rondonia-20LMR/", package = "sitsdata")
# Read data cube
ro_cube_20LMR <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
    source = "MPC",
    collection = "SENTINEL-2-L2A",
    data_dir = data_dir
)
# reduce the number of bands
ro_cube_20LMR <- [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)(
    data = ro_cube_20LMR,
    bands = [c](https://rdrr.io/r/base/c.html)("B02", "B03", "B04", "B05", "B06", "B07", "B08", "B11", "B12", "B8A")
)
# plot one time step of the cube
[plot](https://rdrr.io/r/graphics/plot.default.html)(ro_cube_20LMR, blue = "B02", green = "B8A", red = "B11", date = "2022-08-17")
```

Figure 108: Subset of Sentinel\-2 tile 20LMR (© EU Copernicus Sentinel Programme; source: Microsoft).

We will train three models: Random Forest (RF), Light Temporal Attention Encoder (LTAE), and Temporal Convolutional Neural Network (TempCNN), classify the cube with them, and then combine their results. The example uses all spectral bands. We first run the RF classification.
``` # train a random forest model rfor_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(samples_deforestation_rondonia, [sits_rfor](https://rdrr.io/pkg/sits/man/sits_rfor.html)()) # classify the data cube ro_cube_20LMR_rfor_probs <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( ro_cube_20LMR, rfor_model, output_dir = "./tempdir/chp13", multicores = 6, memsize = 24, version = "rfor" ) ro_cube_20LMR_rfor_variance <- [sits_variance](https://rdrr.io/pkg/sits/man/sits_variance.html)( ro_cube_20LMR_rfor_probs, window_size = 9, output_dir = "./tempdir/chp13", multicores = 6, memsize = 24, version = "rfor" ) [summary](https://rdrr.io/r/base/summary.html)(ro_cube_20LMR_rfor_variance) ``` ``` #> Clear_Cut_Bare_Soil Clear_Cut_Burned_Area Clear_Cut_Vegetation Forest #> 75% 4.73 4.8225 0.380 0.9000 #> 80% 5.29 5.3020 0.460 1.2500 #> 85% 5.64 5.6400 0.570 1.9915 #> 90% 5.98 5.9900 0.830 4.0500 #> 95% 6.62 6.6605 4.261 6.3800 #> 100% 18.44 16.1800 13.390 23.4400 #> Mountainside_Forest Riparian_Forest Seasonally_Flooded Water Wetland #> 75% 0.8000 0.9800 0.35 1.0300 4.71 #> 80% 2.2000 1.4520 0.45 1.5100 5.23 #> 85% 4.0415 2.6115 0.65 2.3800 5.63 #> 90% 5.4400 4.3800 1.08 3.8000 5.97 #> 95% 6.4200 6.0200 3.15 7.5715 6.49 #> 100% 12.5200 29.5500 18.67 51.5200 17.74 ``` Based on the variance values, we apply the smoothness hyperparameter according to the recommendations proposed before. We choose values of \\(\\sigma^2\_{k}\\) that reflect our prior expectation of the spatial patterns of each class. For classes `Clear_Cut_Vegetation` and `Clear_Cut_Burned_Area`, to produce denser spatial clusters and remove “salt\-and\-pepper” outliers, we take \\(\\sigma^2\_{k}\\) values in 95%\-100% range. In the case of the most frequent classes `Forest` and `Clear_Cut_Bare_Soil` we want to preserve their original spatial shapes as much as possible; the same logic applies to less frequent classes `Water` and `Wetland`. For this reason, we set \\(\\sigma^2\_{k}\\) values in the 75%\-80% range for these classes. The class spatial patterns correspond to our prior expectations. ``` ro_cube_20LMR_rfor_bayes <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)( ro_cube_20LMR_rfor_probs, output_dir = "./tempdir/chp13", smoothness = [c](https://rdrr.io/r/base/c.html)( "Clear_Cut_Bare_Soil" = 5.25, "Clear_Cut_Burned_Area" = 15.0, "Clear_Cut_Vegetation" = 12.0, "Forest" = 1.8, "Mountainside_Forest" = 6.5, "Riparian_Forest" = 6.0, "Seasonally_Flooded" = 3.5, "Water" = 1.5, "Wetland" = 5.5 ), multicores = 6, memsize = 24, version = "rfor" ) ro_cube_20LMR_rfor_class <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)( ro_cube_20LMR_rfor_bayes, output_dir = "./tempdir/chp13", multicores = 6, memsize = 24, version = "rfor" ) ``` ``` [plot](https://rdrr.io/r/graphics/plot.default.html)(ro_cube_20LMR_rfor_class, legend_text_size = 0.7, legend_position = "outside" ) ``` Figure 109: Land classification in Rondonia using a random forest algorithm (source: authors). The next step is to classify the same area using a tempCNN algorithm, as shown below. 
```
# train a tempcnn model
tcnn_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(
    samples_deforestation_rondonia,
    [sits_tempcnn](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)()
)
# classify the data cube
ro_cube_20LMR_tcnn_probs <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(
    ro_cube_20LMR,
    tcnn_model,
    output_dir = "./tempdir/chp13",
    multicores = 2,
    memsize = 8,
    gpu_memory = 8,
    version = "tcnn"
)
ro_cube_20LMR_tcnn_variance <- [sits_variance](https://rdrr.io/pkg/sits/man/sits_variance.html)(
    ro_cube_20LMR_tcnn_probs,
    window_size = 9,
    output_dir = "./tempdir/chp13",
    multicores = 6,
    memsize = 24,
    version = "tcnn"
)
[summary](https://rdrr.io/r/base/summary.html)(ro_cube_20LMR_tcnn_variance)
```

```
ro_cube_20LMR_tcnn_bayes <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)(
    ro_cube_20LMR_tcnn_probs,
    output_dir = "./tempdir/chp13",
    window_size = 11,
    smoothness = [c](https://rdrr.io/r/base/c.html)(
        "Clear_Cut_Bare_Soil" = 1.5,
        "Clear_Cut_Burned_Area" = 20.0,
        "Clear_Cut_Vegetation" = 25.0,
        "Forest" = 4.0,
        "Mountainside_Forest" = 3.0,
        "Riparian_Forest" = 40.0,
        "Seasonally_Flooded" = 30.0,
        "Water" = 1.0,
        "Wetland" = 2.0
    ),
    multicores = 2,
    memsize = 6,
    version = "tcnn"
)
ro_cube_20LMR_tcnn_class <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)(
    ro_cube_20LMR_tcnn_bayes,
    output_dir = "./tempdir/chp13",
    multicores = 2,
    memsize = 6,
    version = "tcnn"
)
```

```
[plot](https://rdrr.io/r/graphics/plot.default.html)(ro_cube_20LMR_tcnn_class,
    legend_text_size = 0.7,
    legend_position = "outside"
)
```

Figure 110: Land classification in Rondonia using tempCNN (source: authors).

The third model is the Light Temporal Attention Encoder (LTAE), which was discussed earlier.

```
# train a LightTAE model
ltae_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(samples_deforestation_rondonia, [sits_lighttae](https://rdrr.io/pkg/sits/man/sits_lighttae.html)())
# classify the data cube
ro_cube_20LMR_ltae_probs <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(
    ro_cube_20LMR,
    ltae_model,
    output_dir = "./tempdir/chp13",
    multicores = 2,
    memsize = 8,
    gpu_memory = 8,
    version = "ltae"
)
ro_cube_20LMR_ltae_variance <- [sits_variance](https://rdrr.io/pkg/sits/man/sits_variance.html)(
    ro_cube_20LMR_ltae_probs,
    window_size = 9,
    output_dir = "./tempdir/chp13",
    multicores = 6,
    memsize = 24,
    version = "ltae"
)
[summary](https://rdrr.io/r/base/summary.html)(ro_cube_20LMR_ltae_variance)
```

We use the same rationale for selecting the `smoothness` parameter for the Bayesian smoothing operation as in the cases above.
```
ro_cube_20LMR_ltae_bayes <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)(
    ro_cube_20LMR_ltae_probs,
    output_dir = "./tempdir/chp13",
    window_size = 11,
    smoothness = [c](https://rdrr.io/r/base/c.html)(
        "Clear_Cut_Bare_Soil" = 1.2,
        "Clear_Cut_Burned_Area" = 10.0,
        "Clear_Cut_Vegetation" = 15.0,
        "Forest" = 4.0,
        "Mountainside_Forest" = 8.0,
        "Riparian_Forest" = 25.0,
        "Seasonally_Flooded" = 30.0,
        "Water" = 0.3,
        "Wetland" = 1.8
    ),
    multicores = 2,
    memsize = 6,
    version = "ltae"
)
ro_cube_20LMR_ltae_class <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)(
    ro_cube_20LMR_ltae_bayes,
    output_dir = "./tempdir/chp13",
    multicores = 2,
    memsize = 6,
    version = "ltae"
)
```

```
[plot](https://rdrr.io/r/graphics/plot.default.html)(ro_cube_20LMR_ltae_class,
    legend_text_size = 0.7,
    legend_position = "outside"
)
```

Figure 111: Land classification in Rondonia using LTAE (source: authors).

To understand the differences between the results, it is useful to compare the class areas produced by the different algorithms.

```
# get the summary of the map produced by RF
sum1 <- [summary](https://rdrr.io/r/base/summary.html)(ro_cube_20LMR_rfor_class) |>
    dplyr::[select](https://dplyr.tidyverse.org/reference/select.html)("class", "area_km2")
[colnames](https://rdrr.io/r/base/colnames.html)(sum1) <- [c](https://rdrr.io/r/base/c.html)("class", "rfor")
# get the summary of the map produced by TCNN
sum2 <- [summary](https://rdrr.io/r/base/summary.html)(ro_cube_20LMR_tcnn_class) |>
    dplyr::[select](https://dplyr.tidyverse.org/reference/select.html)("class", "area_km2")
[colnames](https://rdrr.io/r/base/colnames.html)(sum2) <- [c](https://rdrr.io/r/base/c.html)("class", "tcnn")
# get the summary of the map produced by LTAE
sum3 <- [summary](https://rdrr.io/r/base/summary.html)(ro_cube_20LMR_ltae_class) |>
    dplyr::[select](https://dplyr.tidyverse.org/reference/select.html)("class", "area_km2")
[colnames](https://rdrr.io/r/base/colnames.html)(sum3) <- [c](https://rdrr.io/r/base/c.html)("class", "ltae")
# compare the class areas of the three maps
dplyr::[inner_join](https://dplyr.tidyverse.org/reference/mutate-joins.html)(sum1, sum2, by = "class") |>
    dplyr::[inner_join](https://dplyr.tidyverse.org/reference/mutate-joins.html)(sum3, by = "class")
```

```
#> # A tibble: 9 × 4
#>   class                    rfor    tcnn    ltae
#>   <chr>                   <dbl>   <dbl>   <dbl>
#> 1 Clear_Cut_Bare_Soil   80      67      66
#> 2 Clear_Cut_Burned_Area  1.7     4.4     4.5
#> 3 Clear_Cut_Vegetation  19      18      18
#> 4 Forest               280     240     240
#> 5 Mountainside_Forest    0.0088  0.065   0.051
#> 6 Riparian_Forest       47      45      44
#> 7 Seasonally_Flooded    70     120     120
#> 8 Water                 63      67      67
#> 9 Wetland               14      11      11
```

The study area presents many challenges for land classification, given the presence of wetlands, riparian forests and seasonally\-flooded areas. The three algorithms produce quite different results, since each model has different sensitivities. The RF method is biased towards the most frequent classes, especially `Clear_Cut_Bare_Soil` and `Forest`. The area estimated by RF for class `Clear_Cut_Burned_Area` is the smallest of the three models. Most pixels assigned by LTAE and TCNN as burned areas are assigned by RF as being areas of bare soil. The RF algorithm tends to be more conservative. This is because RF decision\-making uses values from single attributes (values of a single band at a given time instance), while LTAE and TCNN consider the relations between instances of the time series.
Since the RF model is sensitive to the response of images at the end of the period, it tends to focus on values that indicate the presence of forests and bare soils during the dry season, which peaks in August. The LTAE model is more balanced with respect to the overall separation of classes in the entire attribute space, and produces larger estimates of riparian and seasonally flooded forest than the other methods. In contrast, both LTAE and TCNN make more mistakes than RF by labeling flooded areas in the center\-left part of the image, on the left bank of the river, as `Clear_Cut_Vegetation`, when the correct label would be riparian or flooded forest.

Given the differences and complementarities between the three predicted outcomes, combining them using `[sits_combine_predictions()](https://rdrr.io/pkg/sits/man/sits_combine_predictions.html)` is useful. This function takes the following arguments: (a) `cubes`, a list of the cubes to be combined; these should be probability cubes generated by `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`, optionally post\-processed with `[sits_smooth()](https://rdrr.io/pkg/sits/man/sits_smooth.html)`; (b) `type`, which indicates how to combine the probability maps. The options are `average`, which performs a weighted mean of the probabilities, and `uncertainty`, which uses the uncertainty cubes to combine the predictions; (c) `weights`, a vector of weights to be used to combine the predictions when `average` is selected; (d) `uncertainty_cubes`, a list of uncertainty cubes associated with the predictions; (e) `multicores`, number of cores to be used; (f) `memsize`, RAM used in the classification; (g) `output_dir`, the directory where the classified raster files will be written.

```
# Combine the three predictions by taking the average of the probabilities for each class
ro_cube_20LMR_average_probs <- [sits_combine_predictions](https://rdrr.io/pkg/sits/man/sits_combine_predictions.html)(
    cubes = [list](https://rdrr.io/r/base/list.html)(
        ro_cube_20LMR_tcnn_bayes,
        ro_cube_20LMR_rfor_bayes,
        ro_cube_20LMR_ltae_bayes
    ),
    type = "average",
    version = "average-rfor-tcnn-ltae",
    output_dir = "./tempdir/chp13/",
    weights = [c](https://rdrr.io/r/base/c.html)(0.33, 0.34, 0.33),
    memsize = 16,
    multicores = 4
)
# Label the average probability cube
ro_cube_20LMR_average_class <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)(
    cube = ro_cube_20LMR_average_probs,
    output_dir = "./tempdir/chp13/",
    version = "average-rfor-tcnn-ltae",
    memsize = 16,
    multicores = 4
)
```

```
# Plot the classified cube obtained by averaging
[plot](https://rdrr.io/r/graphics/plot.default.html)(ro_cube_20LMR_average_class,
    legend_text_size = 0.7,
    legend_position = "outside"
)
```

Figure 112: Land classification in Rondonia using the average of the probabilities produced by the RF, TempCNN, and LTAE algorithms (source: authors).

We can also consider the class areas produced by the ensemble combination and compare them to the original estimates.
```
# get the summary of the map produced by the ensemble average
sum4 <- [summary](https://rdrr.io/r/base/summary.html)(ro_cube_20LMR_average_class) |>
    dplyr::[select](https://dplyr.tidyverse.org/reference/select.html)("class", "area_km2")
[colnames](https://rdrr.io/r/base/colnames.html)(sum4) <- [c](https://rdrr.io/r/base/c.html)("class", "ave")
# compare the class areas of the individual maps and the ensemble map
dplyr::[inner_join](https://dplyr.tidyverse.org/reference/mutate-joins.html)(sum1, sum2, by = "class") |>
    dplyr::[inner_join](https://dplyr.tidyverse.org/reference/mutate-joins.html)(sum3, by = "class") |>
    dplyr::[inner_join](https://dplyr.tidyverse.org/reference/mutate-joins.html)(sum4, by = "class")
```

```
#> # A tibble: 9 × 5
#>   class                    rfor    tcnn    ltae     ave
#>   <chr>                   <dbl>   <dbl>   <dbl>   <dbl>
#> 1 Clear_Cut_Bare_Soil   80      67      66      70
#> 2 Clear_Cut_Burned_Area  1.7     4.4     4.5     4
#> 3 Clear_Cut_Vegetation  19      18      18      16
#> 4 Forest               280     240     240     250
#> 5 Mountainside_Forest    0.0088  0.065   0.051   0.036
#> 6 Riparian_Forest       47      45      44      46
#> 7 Seasonally_Flooded    70     120     120     110
#> 8 Water                 63      67      67      67
#> 9 Wetland               14      11      11      11
```

As expected, the ensemble map combines information from the three models. Taking the RF model prediction as a base, there is a reduction in the areas of classes `Clear_Cut_Bare_Soil` and `Forest`, confirming the tendency of the RF model to overemphasize the most frequent classes. The LTAE and TempCNN models are more sensitive to class variations and capture time\-varying classes such as `Riparian_Forest` and `Clear_Cut_Burned_Area` in more detail than the RF model. However, both TempCNN and LTAE tend to confuse the deforestation\-related class `Clear_Cut_Vegetation` and the natural class `Riparian_Forest` more than the RF model. This effect is evident on the left bank of the Madeira river in the centre\-left region of the image. Also, both the LTAE and TempCNN maps are more grainy and have more spatial variability than the RF map.

The average map provides a compromise between RF’s strong emphasis on the most frequent classes and the tendency of deep learning methods to produce outliers based on temporal relationships. The average map is less grainy and more spatially consistent than the LTAE and TempCNN maps, while introducing variability which is not present in the RF map.

This chapter shows the possibilities of ensemble prediction. There are many ways to get better results than those presented here. Increasing the number of spectral bands would improve the final accuracy. Also, Bayesian smoothing for deep learning models should not rely on default parameters; rather, it needs to rely on variance analysis, increase the spatial window, and provide more informed hyperparameters. In general, ensemble prediction should be considered in all situations where one is not satisfied with the results of individual models. Combining model output increases the reliability of the result and thus should be considered in all situations where similar classes are present.
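The `uncertainty` option of `sits_combine_predictions()` is not exercised in this chapter. As a hedged sketch only, the code below indicates how such a call might look, following the parameter names listed above and assuming that an uncertainty cube is first derived with `sits_uncertainty()` for each smoothed probability cube; the version strings are illustrative.

```
# Sketch of an uncertainty-based combination (illustrative, not run here)
# Derive an uncertainty cube for each smoothed probability cube
rfor_uncert <- sits_uncertainty(cube = ro_cube_20LMR_rfor_bayes,
    type = "margin", output_dir = "./tempdir/chp13", version = "rfor")
tcnn_uncert <- sits_uncertainty(cube = ro_cube_20LMR_tcnn_bayes,
    type = "margin", output_dir = "./tempdir/chp13", version = "tcnn")
ltae_uncert <- sits_uncertainty(cube = ro_cube_20LMR_ltae_bayes,
    type = "margin", output_dir = "./tempdir/chp13", version = "ltae")
# Combine the predictions, favouring for each pixel the model with lower uncertainty
ro_cube_20LMR_uncert_probs <- sits_combine_predictions(
    cubes = list(
        ro_cube_20LMR_rfor_bayes,
        ro_cube_20LMR_tcnn_bayes,
        ro_cube_20LMR_ltae_bayes
    ),
    uncertainty_cubes = list(rfor_uncert, tcnn_uncert, ltae_uncert),
    type = "uncertainty",
    output_dir = "./tempdir/chp13/",
    version = "uncert-rfor-tcnn-ltae"
)
# Label the combined probability cube as before
ro_cube_20LMR_uncert_class <- sits_label_classification(
    cube = ro_cube_20LMR_uncert_probs,
    output_dir = "./tempdir/chp13/",
    version = "uncert-rfor-tcnn-ltae"
)
```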
Machine Learning
e-sensing.github.io
https://e-sensing.github.io/sitsbook/object-based-time-series-image-analysis.html
Object\-based time series image analysis
========================================

Object\-Based Image Analysis (OBIA) is an approach to remote sensing image analysis that partitions an image into closed segments which are then classified and analyzed. For high\-resolution images (1 meter or smaller), the aim of OBIA is to create objects that represent meaningful features in the real world, like buildings, roads, fields, forests, and water bodies. In the case of medium\-resolution images (such as Sentinel\-2 or Landsat), the segments represent groups of pixels with similar spectral responses which in general do not correspond directly to individual objects on the ground. These groups of pixels are called super\-pixels. In both situations, the aim of OBIA is to obtain a spatial partition of the image in which each part can be assigned to a single class. When applicable, OBIA reduces processing time and produces labeled maps with greater spatial consistency.

The general sequence of the processes involved in OBIA in `sits` is:

1. Segmentation: The first step is to group together pixels that are similar, based on a distance metric that considers the values of all bands at all time instances. We build a multitemporal attribute space where each time/band combination is taken as an independent dimension. Thus, distance metrics for segmentation in a data cube with 10 bands and 24 time steps use a 240\-dimension space (see the toy sketch at the end of this section).
2. Probability Estimation: After the image has been partitioned into distinct objects, the next step is to classify each segment. For satellite image time series, a subset of the time series inside each segment is classified.
3. Labeling: Once the probabilities have been obtained for each classified time series inside a segment, they can be used for labeling. For each class, we take the median of the probability values of the classified time series inside the segment. Then, the median values for the classes are normalised, and the most likely class is assigned to the segment.

Image segmentation in sits
--------------------------

The first step of the OBIA procedure in `sits` is to select a data cube to be segmented and a function that performs the segmentation. For this purpose, `sits` provides a generic `[sits_segment()](https://rdrr.io/pkg/sits/man/sits_segment.html)` function, which allows users to select different segmentation algorithms. The `[sits_segment()](https://rdrr.io/pkg/sits/man/sits_segment.html)` function has the following parameters:

* `cube`: a regular data cube.
* `seg_fn`: function that applies the segmentation
* `roi`: spatial region of interest in the cube
* `start_date`: starting date for the space\-time segmentation
* `end_date`: final date for the space\-time segmentation
* `memsize`: memory available for processing
* `multicores`: number of cores available for processing
* `output_dir`: output directory for the resulting cube
* `version`: version of the result
* `progress`: show progress bar?

In `sits` version 1\.4\.2, there is only one segmentation function available (`sits_slic`), which implements an extended version of Simple Linear Iterative Clustering (SLIC), described below. In future versions of `sits`, we expect to include additional functions that support spatio\-temporal segmentation.
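As a toy illustration of the multitemporal attribute space mentioned in step 1 above, the sketch below stacks the band values of two pixels across all time steps into single vectors and computes their Euclidean distance. It is plain R with made\-up numbers and is not part of `sits`.

```
# Toy illustration of a distance in the multitemporal attribute space
# (plain R, not part of sits; reflectance values are made up)
n_bands <- 10
n_times <- 24
set.seed(42)
# two pixels, each described by a bands x times matrix of values
pixel_a <- matrix(runif(n_bands * n_times), nrow = n_bands, ncol = n_times)
pixel_b <- matrix(runif(n_bands * n_times), nrow = n_bands, ncol = n_times)
# each band/time combination is one dimension: flatten to 240-dimension vectors
vec_a <- as.vector(pixel_a)
vec_b <- as.vector(pixel_b)
length(vec_a) # 240 dimensions
# Euclidean distance between the two pixels in this attribute space
sqrt(sum((vec_a - vec_b)^2))
```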
Simple linear iterative clustering algorithm
--------------------------------------------

After building the multidimensional space, we use the Simple Linear Iterative Clustering (SLIC) algorithm [\[91]](references.html#ref-Achanta2012), which clusters pixels to efficiently generate compact, nearly uniform superpixels. This algorithm has been adapted by Nowosad and Stepinski [\[92]](references.html#ref-Nowosad2022) to work with multispectral images. SLIC uses spectral similarity and proximity in the image space to segment the image into superpixels. Superpixels are clusters of pixels with similar spectral responses that are close together, which correspond to coherent object parts in the image. Here’s a high\-level view of the extended SLIC algorithm:

1. The algorithm starts by dividing the image into a grid, where each cell of the grid will become a superpixel.
2. For each cell, the pixel in the center becomes the initial “cluster center” for that superpixel.
3. For each pixel, the algorithm calculates a distance to each of the nearby cluster centers. This distance includes both a spatial component (how far the pixel is from the center of the superpixel in terms of x and y coordinates) and a spectral component (how different the pixel’s spectral values are from the average values of the superpixel). The spectral distance is calculated using all the temporal instances of the bands.
4. Each pixel is assigned to the closest cluster. After all pixels have been assigned to clusters, the algorithm recalculates the cluster centers by averaging the spatial coordinates and spectral values of all pixels within each cluster.
5. Steps 3\-4 are repeated for a set number of iterations, or until the cluster assignments stop changing.

The outcome of the SLIC algorithm is a set of superpixels which try to capture the boundaries of objects within the image. The SLIC implementation in `sits` 1\.4\.1 uses the `supercells` R package [\[92]](references.html#ref-Nowosad2022). The parameters for the `[sits_slic()](https://rdrr.io/pkg/sits/man/sits_slic.html)` function are:

* `dist_fn`: metric used to calculate the distance between values. By default, the “euclidean” metric is used. Alternatives include “jsd” (Jensen\-Shannon distance), “dtw” (dynamic time warping), or one of the 46 distance and similarity measures implemented in the R package `philentropy` [\[93]](references.html#ref-Drost2018).
* `avg_fn`: function used to calculate the value of each superpixel. There are two internal functions implemented in C\+\+ \- “mean” and “median”. It is also possible to provide a user\-defined R function that returns one value based on an R vector.
* `step`: distance, measured in the number of cells, between initial superpixels’ centers.
* `compactness`: a value that controls superpixels’ density. Larger values cause clusters to be more compact.
* `minarea`: minimal size of the output superpixels (measured in number of cells).

Example of SLIC\-based segmentation and classification
------------------------------------------------------

To show an example of SLIC\-based segmentation, we first build a data cube, using images available in the `sitsdata` package.
```
# directory where files are located
data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Rondonia-20LMR", package = "sitsdata")
# Builds a cube based on existing files
cube_20LMR <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
    source = "AWS",
    collection = "SENTINEL-2-L2A",
    data_dir = data_dir,
    bands = [c](https://rdrr.io/r/base/c.html)("B02", "B8A", "B11")
)
[plot](https://rdrr.io/r/graphics/plot.default.html)(cube_20LMR, red = "B11", green = "B8A", blue = "B02", date = "2022-07-16")
```

Figure 113: Sentinel\-2 image in an area of Rondonia, Brazil (source: authors).

The following example produces a segmented image. For the SLIC algorithm, we take the initial separation between cluster centres (`step`) to be 20 pixels, the `compactness` to be 1, and the minimum area for each superpixel (`minarea`) to be 20 pixels.

```
# segment a cube using SLIC
# Files are available in a local directory
segments_20LMR <- [sits_segment](https://rdrr.io/pkg/sits/man/sits_segment.html)(
    cube = cube_20LMR,
    output_dir = "./tempdir/chp14",
    seg_fn = [sits_slic](https://rdrr.io/pkg/sits/man/sits_slic.html)(
        step = 20,
        compactness = 1,
        dist_fun = "euclidean",
        iter = 20,
        minarea = 20
    )
)
[plot](https://rdrr.io/r/graphics/plot.default.html)(segments_20LMR,
    red = "B11",
    green = "B8A",
    blue = "B02",
    date = "2022-07-16"
)
```

It is useful to visualize the segments on a leaflet map together with the RGB image using `[sits_view()](https://rdrr.io/pkg/sits/man/sits_view.html)`.

```
[sits_view](https://rdrr.io/pkg/sits/man/sits_view.html)(segments_20LMR,
    red = "B11",
    green = "B8A",
    blue = "B02",
    dates = "2022-07-16"
)
```

After obtaining the segments, the next step is to classify them. This is done by first training a classification model. In this case study, we will use an SVM model.

```
svm_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(samples_deforestation, [sits_svm](https://rdrr.io/pkg/sits/man/sits_svm.html)())
```

The segment classification procedure applies the model to a number of user\-defined samples inside each segment. Each of these samples is then assigned a set of probability values, one for each class. We then obtain the median value of the probabilities for each class and normalize them. The output of the procedure is a vector data cube containing a set of classified segments. The parameters for `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)` are shown in the example below; note the additional parameter `n_sam_pol`, which sets the number of time series samples to be classified in each segment (polygon).

```
segments_20LMR_probs_svm <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(
    data = segments_20LMR,
    ml_model = svm_model,
    output_dir = "./tempdir/chp14",
    n_sam_pol = 40,
    gpu_memory = 16,
    memsize = 24,
    multicores = 6,
    version = "svm-segments"
)
segments_20LMR_class_svm <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)(
    segments_20LMR_probs_svm,
    output_dir = "./tempdir/chp14",
    memsize = 24,
    multicores = 6,
    version = "svm-segments"
)
```

To view the classified segments together with the original image, use `[plot()](https://rdrr.io/r/graphics/plot.default.html)` or `[sits_view()](https://rdrr.io/pkg/sits/man/sits_view.html)`, as in the following example.
```
[sits_view](https://rdrr.io/pkg/sits/man/sits_view.html)(
    segments_20LMR_class_svm,
    red = "B11",
    green = "B8A",
    blue = "B02",
    dates = "2022-07-16"
)
```

We conclude that OBIA applied to image time series is a worthy and efficient technique for land classification, combining the desirable sharp object boundary properties required by land use and cover maps with the analytical power of image time series.
Machine Learning
e-sensing.github.io
https://e-sensing.github.io/sitsbook/data-visualisation-in-sits.html
Data visualisation in `sits`
============================

This Chapter contains a discussion on plotting and visualisation of data cubes in `sits`.

Plotting
--------

The `[plot()](https://rdrr.io/r/graphics/plot.default.html)` function produces a graphical display of data cubes, time series, models, and SOM maps. For each type of data, there is a dedicated version of the `[plot()](https://rdrr.io/r/graphics/plot.default.html)` function. See `[?plot.sits](https://rdrr.io/pkg/sits/man/plot.html)` for details. Plotting of time series, models, and SOM outputs uses the `ggplot2` package; maps are plotted using the `tmap` package. When plotting images and classified maps, users can control the output with appropriate parameters for each type of image. In this chapter, we provide examples of the options available for plotting different types of maps.

Plotting and visualisation functions in `sits` use COG overviews if available. COG overviews are reduced\-resolution versions of the main image, stored within the same file. Overviews allow for quick rendering at lower zoom levels, improving performance when dealing with large images. Usually, a single GeoTIFF will have many overviews, to match different zoom levels.

### Plotting false color maps

We refer to false color maps as images which are plotted on a color scale. Usually these are single bands, indices such as NDVI, or DEMs. For these data sets, the parameters for `[plot()](https://rdrr.io/r/graphics/plot.default.html)` are:

* `x`: data cube containing data to be visualised;
* `band`: band or index to be plotted;
* `palette`: color scheme to be used for false color maps, which should be one of the `RColorBrewer` palettes. These palettes have been designed to be effective for map display by Prof Cynthia Brewer as described at the [Brewer website](http://colorbrewer2.org). By default, optical images use the `RdYlGn` scheme, SAR images use `Greys`, and DEM cubes use `Spectral`.
* `rev`: whether the color palette should be reversed; `TRUE` for DEM cubes, and `FALSE` otherwise.
* `scale`: global scale parameter used by `tmap`. All font sizes, symbol sizes, border widths, and line widths are controlled by this value. Default is 0\.75; users should vary this parameter and see the results.
* `first_quantile`: 1st quantile for stretching images (default \= 0\.05).
* `last_quantile`: last quantile for stretching images (default \= 0\.95).
* `max_cog_size`: for cloud\-optimized GeoTIFF files (COG), sets the maximum number of lines or columns of the COG overview to be used for plotting.

The following optional parameters are available to allow for detailed control over the plot output:

* `graticules_labels_size`: size of coordinate labels (default \= 0\.8).
* `legend_title_size`: relative size of legend title (default \= 1\.0).
* `legend_text_size`: relative size of legend text (default \= 1\.0).
* `legend_bg_color`: color of legend background (default \= “white”).
* `legend_bg_alpha`: legend opacity (default \= 0\.5).
* `legend_position`: where to place the legend (options \= “inside” or “outside”, with “inside” as default).

The following example shows a plot of an NDVI index of a data cube. This data cube covers part of MGRS tile `20LMR` and contains bands “B02”, “B03”, “B04”, “B05”, “B06”, “B07”, “B08”, “B11”, “B12”, “B8A”, “EVI”, “NBR”, and “NDVI” for the period 2022\-01\-05 to 2022\-12\-23. We will use parameter values other than their defaults.
```
# set the directory where the data is
data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Rondonia-20LMR", package = "sitsdata")
# read the data cube
ro_20LMR <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
    source = "MPC",
    collection = "SENTINEL-2-L2A",
    data_dir = data_dir
)
# plot the NDVI for date 2022-08-01
[plot](https://rdrr.io/r/graphics/plot.default.html)(ro_20LMR,
    band = "NDVI",
    date = "2022-08-01",
    palette = "Greens",
    legend_position = "outside",
    scale = 1.0
)
```

Figure 114: Sentinel\-2 NDVI index covering tile 20LMR (© EU Copernicus Sentinel Programme; source: Microsoft, modified by authors).

### Plotting RGB color composite maps

For RGB color composite maps, the parameters for the `plot` function are:

* `x`: data cube containing data to be visualised;
* `band`: band or index to be plotted;
* `date`: date to be plotted (must be part of the cube timeline);
* `red`: band or index associated with the red color;
* `green`: band or index associated with the green color;
* `blue`: band or index associated with the blue color;
* `scale`: global scale parameter used by `tmap`. All font sizes, symbol sizes, border widths, and line widths are controlled by this value. Default is 0\.75; users should vary this parameter and see the results.
* `first_quantile`: 1st quantile for stretching images (default \= 0\.05).
* `last_quantile`: last quantile for stretching images (default \= 0\.95).
* `max_cog_size`: for cloud\-oriented geotiff files (COG), sets the maximum number of lines or columns of the COG overview to be used for plotting.

The optional parameters listed in the previous section are also available. An example follows:

```
# plot an RGB color composite for date 2022-08-01
[plot](https://rdrr.io/r/graphics/plot.default.html)(ro_20LMR,
    red = "B11",
    green = "B8A",
    blue = "B02",
    date = "2022-08-01",
    palette = "Greens",
    scale = 1.0
)
```

Figure 115: Sentinel\-2 color composite covering tile 20LMR (© EU Copernicus Sentinel Programme; source: Microsoft, modified by authors).

### Plotting classified maps

Classified maps pose an additional challenge for plotting because of the association between labels and colors. In this case, `sits` allows three alternatives:

* Predefined color scheme: `sits` includes some well\-established color schemes such as `IGBP`, `UMD`, `ESA_CCI_LC`, and `WORLDCOVER`. There is a predefined color table that associates labels commonly used in LUCC classification with colors. Users can also create their own color schemes. Please see the section “How colors work in sits” in this chapter.
* legend: in this case, users provide a named vector with labels and colors, as shown in the example below.
* palette: an RColorBrewer categorical palette, which is assigned to labels which are not in the color table.

The parameters for `[plot()](https://rdrr.io/r/graphics/plot.default.html)` applied to a classified data cube are:

* `x`: data cube containing a classified map;
* `legend`: a named vector that associates colors with the classes; `NULL` by default.
* `palette`: color palette used for undefined colors, which is `Spectral` by default.
* `scale`: global scale parameter used by `tmap`.

The optional parameters listed in the previous section are also available. For an example of plotting a classified data cube with the default color scheme, please see the section “Reading classified images as local data cube” in the “Earth observation data cubes” chapter. In what follows we show a similar case using a legend.
```
# Create a cube based on a classified image
data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Rondonia-20LLP",
    package = "sitsdata"
)
# Read the classified cube
rondonia_class_cube <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
    source = "AWS",
    collection = "SENTINEL-S2-L2A-COGS",
    bands = "class",
    labels = [c](https://rdrr.io/r/base/c.html)(
        "1" = "Burned", "2" = "Cleared",
        "3" = "Degraded", "4" = "Natural_Forest"
    ),
    data_dir = data_dir
)
# Plot the classified cube
[plot](https://rdrr.io/r/graphics/plot.default.html)(rondonia_class_cube,
    legend = [c](https://rdrr.io/r/base/c.html)(
        "Burned" = "#a93226",
        "Cleared" = "#f9e79f",
        "Degraded" = "#d4efdf",
        "Natural_Forest" = "#1e8449"
    ),
    scale = 1.0,
    legend_position = "outside"
)
```

Figure 116: Classified data cube for the year 2020/2021 in Rondonia, Brazil (© EU Copernicus Sentinel Programme; source: authors).

Visualization of data cubes in interactive maps
-----------------------------------------------

Data cubes and samples can also be shown as interactive maps using `[sits_view()](https://rdrr.io/pkg/sits/man/sits_view.html)`. This function creates tiled overlays of different kinds of data cubes, allowing comparison between the original, intermediate, and final results. It also includes background maps. The following example creates an interactive map combining the original data cube with the classified map.

```
[sits_view](https://rdrr.io/pkg/sits/man/sits_view.html)(rondonia_class_cube,
    legend = [c](https://rdrr.io/r/base/c.html)(
        "Burned" = "#a93226",
        "Cleared" = "#f9e79f",
        "Degraded" = "#d4efdf",
        "Natural_Forest" = "#1e8449"
    )
)
```

How colors work in sits
-----------------------

In the examples provided in this book, the color legend is taken from a predefined color palette provided by `sits`. The default color definition file used by `sits` has 241 class names, which can be listed using `[sits_colors()](https://rdrr.io/pkg/sits/man/sits_colors.html)`.

```
#> [1] "Returning all available colors"
```

```
#> # A tibble: 241 × 2
#>    name                             color  
#>    <chr>                            <chr>  
#>  1 Evergreen_Broadleaf_Forest       #1E8449
#>  2 Evergreen_Broadleaf_Forests      #1E8449
#>  3 Tree_Cover_Broadleaved_Evergreen #1E8449
#>  4 Forest                           #1E8449
#>  5 Forests                          #1E8449
#>  6 Closed_Forest                    #1E8449
#>  7 Closed_Forests                   #1E8449
#>  8 Mountainside_Forest              #229C59
#>  9 Mountainside_Forests             #229C59
#> 10 Open_Forest                      #53A145
#> # ℹ 231 more rows
```

These colors are grouped by typical legends used by the Earth observation community, which include “IGBP”, “UMD”, “ESA_CCI_LC”, “WORLDCOVER”, “PRODES”, “PRODES_VISUAL”, “TERRA_CLASS”, and “TERRA_CLASS_PT”. The following command shows the colors associated with the IGBP legend [[94]](references.html#ref-Herold2009).

```
# Display default `sits` colors
[sits_colors_show](https://rdrr.io/pkg/sits/man/sits_colors_show.html)(legend = "IGBP")
```

Figure 117: Colors used in the sits package to represent the IGBP legend (source: authors).

The default color table can be extended using `[sits_colors_set()](https://rdrr.io/pkg/sits/man/sits_colors_set.html)`. As an example of a user-defined color table, consider a definition that covers level 1 of the Anderson Classification System used in the US National Land Cover Data, obtained by defining a set of colors associated with a new legend. The colors should be defined by HEX values, and the color names should consist of a single string; multiple names need to be connected with an underscore (“_”).
```
# Define a color table based on the Anderson Land Classification System
us_nlcd <- tibble::[tibble](https://tibble.tidyverse.org/reference/tibble.html)(name = [character](https://rdrr.io/r/base/character.html)(), color = [character](https://rdrr.io/r/base/character.html)())
us_nlcd <- us_nlcd |>
    tibble::[add_row](https://tibble.tidyverse.org/reference/add_row.html)(name = "Urban_Built_Up", color = "#85929E") |>
    tibble::[add_row](https://tibble.tidyverse.org/reference/add_row.html)(name = "Agricultural_Land", color = "#F0B27A") |>
    tibble::[add_row](https://tibble.tidyverse.org/reference/add_row.html)(name = "Rangeland", color = "#F1C40F") |>
    tibble::[add_row](https://tibble.tidyverse.org/reference/add_row.html)(name = "Forest_Land", color = "#27AE60") |>
    tibble::[add_row](https://tibble.tidyverse.org/reference/add_row.html)(name = "Water", color = "#2980B9") |>
    tibble::[add_row](https://tibble.tidyverse.org/reference/add_row.html)(name = "Wetland", color = "#D4E6F1") |>
    tibble::[add_row](https://tibble.tidyverse.org/reference/add_row.html)(name = "Barren_Land", color = "#FDEBD0") |>
    tibble::[add_row](https://tibble.tidyverse.org/reference/add_row.html)(name = "Tundra", color = "#EBDEF0") |>
    tibble::[add_row](https://tibble.tidyverse.org/reference/add_row.html)(name = "Snow_and_Ice", color = "#F7F9F9")
# Load the color table into `sits`
[sits_colors_set](https://rdrr.io/pkg/sits/man/sits_colors_set.html)(colors = us_nlcd, legend = "US_NLCD")
# Show the new legend
[sits_colors_show](https://rdrr.io/pkg/sits/man/sits_colors_show.html)(legend = "US_NLCD")
```

Figure 118: Example of defining colors for the Anderson Land Classification Scheme (source: authors).

The original default `sits` color table can be restored using `[sits_colors_reset()](https://rdrr.io/pkg/sits/man/sits_colors_reset.html)`.

Exporting colors to QGIS
------------------------

To simplify the process of importing your data to QGIS, the color palette used to display classified maps in `sits` can be exported as a QGIS style using `sits_colors_qgis()`. The function takes two parameters: (a) `cube`, a classified data cube; and (b) `file`, the file to which the QGIS style (in XML) will be written. In this case study, we first retrieve and plot a classified data cube and then export the colors to a QGIS XML style.

```
# Create a cube based on a classified image
data_dir <- [system.file](https://rdrr.io/r/base/system.file.html)("extdata/Rondonia-Class-2022-Mosaic",
    package = "sitsdata"
)
# labels of the classified image
labels <- [c](https://rdrr.io/r/base/c.html)(
    "1" = "Clear_Cut_Bare_Soil",
    "2" = "Clear_Cut_Burned_Area",
    "3" = "Clear_Cut_Vegetation",
    "4" = "Forest",
    "5" = "Mountainside_Forest",
    "6" = "Riparian_Forest",
    "7" = "Seasonally_Flooded",
    "8" = "Water",
    "9" = "Wetland"
)
# read classified data cube
ro_class <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
    source = "MPC",
    collection = "SENTINEL-2-L2A",
    data_dir = data_dir,
    bands = "class",
    labels = labels,
    version = "mosaic"
)
# Plot the classified cube
[plot](https://rdrr.io/r/graphics/plot.default.html)(ro_class, scale = 1.0)
```

Figure 119: Classified data cube for the year 2022 for Rondonia, Brazil (© EU Copernicus Sentinel Programme; source: authors).

The file to be read by QGIS is a TIFF file whose location is given by the data cube, as follows.
```
# Show the location of the classified map
ro_class[["file_info"]][[1]]$path
```

```
#> [1] "/Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/library/sitsdata/extdata/Rondonia-Class-2022-Mosaic/SENTINEL-2_MSI_MOSAIC_2022-01-05_2022-12-23_class_mosaic.tif"
```

The color schema can be exported to QGIS as follows.

```
# Export the color schema to QGIS
[sits_colors_qgis](https://rdrr.io/pkg/sits/man/sits_colors_qgis.html)(ro_class, file = "./tempdir/chp15/qgis_style.xml")
```
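As a quick sanity check (a sketch, not part of the `sits` workflow, assuming the `xml2` package is installed and the export above has been run), the generated style file can be inspected directly in R:

```
library(xml2)
# Read the exported QGIS style and look at its structure
qgis_style <- read_xml("./tempdir/chp15/qgis_style.xml")
xml_name(qgis_style)      # name of the root node
xml_children(qgis_style)  # top-level elements of the style definition
```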
Technical annex
===============

This Chapter contains technical details on the algorithms available in `sits`. It is intended to support those who want to understand how the package works and also want to contribute to its development.

Adding functions to the `sits` API
----------------------------------

### General principles

New functions that build on the `sits` API should follow the general principles below.

* The target audience for `sits` is the community of remote sensing experts with an Earth Sciences background who want to use state-of-the-art data analysis methods with minimal investment in programming skills. The design of the `sits` API considers the typical workflow for land classification using satellite image time series and thus provides a clear and direct set of functions, which are easy to learn and master.
* For this reason, we welcome contributors that provide useful additions to the existing API, such as new ML/DL classification algorithms. In the case of a new API function, please raise an issue before making a pull request, stating your rationale for the new function.
* Most functions in `sits` use the S3 programming model with a strong emphasis on generic methods which are specialized depending on the input data type. See, for example, the implementation of the `[sits_bands()](https://rdrr.io/pkg/sits/man/sits_bands.html)` function.
* Please do not include contributed code using the S4 programming model. Doing so would break the structure and the logic of existing code. Convert your code from S4 to S3.
* Use generic functions as much as possible, as they improve modularity and maintenance. If your code has decision points using `if-else` clauses, such as `if A, do X; else do Y`, consider using generic functions.
* Functions that use the `torch` package use the R6 model to be compatible with that package. See, for example, the code in `sits_tempcnn.R` and `api_torch.R`. Converting `pyTorch` code to R and including it is straightforward; those files provide working examples.
* The sits code relies on the packages of the `tidyverse` to work with tables and lists. We use `dplyr` and `tidyr` for data selection and wrangling, `purrr` and `slider` for loops on lists and tables, and `lubridate` to handle dates and times.

### Adherence to the `sits` data types

The `sits` package is built on top of three data types: time series tibble, data cubes, and models. Most `sits` functions have one or more of these types as inputs and one of them as a return value. The time series tibble contains data and metadata. The first six columns contain the metadata: spatial and temporal information, the label assigned to the sample, and the data cube from which the data has been extracted. The `time_series` column contains the time series data for each spatiotemporal location. All time series tibbles are objects of class `sits`.

The `cube` data type is designed to store metadata about image files. In principle, images which are part of a data cube share the same geographical region, have the same bands, and have been regularized to fit into a pre-defined temporal interval. Data cubes in `sits` are organized by tiles. A tile is an element of a satellite’s mission reference system, for example MGRS for Sentinel-2 and WRS2 for Landsat. A `cube` is a tibble where each row contains information about data covering one tile.
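As a concrete (and minimal) illustration of these two structures, the sketch below inspects a sample time series tibble shipped with `sits` and a small data cube built from a cloud provider. The use of the `cerrado_2classes` sample set and the specific Sentinel-2 query are illustrative choices only, and the cube query assumes internet access to the Microsoft Planetary Computer.

```
library(sits)
# Time series tibble: metadata columns plus a nested `time_series` column
data("cerrado_2classes", package = "sits")
class(cerrado_2classes)           # includes "sits"
colnames(cerrado_2classes)        # longitude, latitude, start_date, end_date, label, cube, time_series
cerrado_2classes$time_series[[1]] # the time series of the first sample

# Data cube tibble: one row per tile, with a nested `file_info` column
cube <- sits_cube(
    source     = "MPC",
    collection = "SENTINEL-2-L2A",
    tiles      = "20LMR",
    bands      = "B08",
    start_date = "2022-01-01",
    end_date   = "2022-02-01"
)
cube$tile          # tiles covered by the cube
cube$file_info[[1]] # metadata about the individual image files
```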
Each row of the cube tibble contains a column named `file_info`; this column contains a list that stores a tibble with metadata about the individual image files, such as their dates, bands, and paths. The `cube` data type is specialised in `raster_cube` (ARD images), `vector_cube` (ARD cube with segmentation vectors), `probs_cube` (probabilities produced by classification algorithms on raster data), `probs_vector_cube` (probabilities generated by vector classification of segments), `uncertainty_cube` (cubes with uncertainty information), and `class_cube` (labelled maps). See the code in `sits_plot.R` as an example of specialisation of `plot` to handle different classes of raster data.

All ML/DL models in `sits` which are the result of `sits_train` belong to the `ml_model` class. In addition, models are assigned a second class, which is unique to ML models (e.g., `rfor_model`, `svm_model`) and generic for all DL `torch`-based models (`torch_model`). The class information is used for plotting models and for establishing if a model can run on GPUs.

### Literal values, error messages, and testing

The internal `sits` code has no literal values; these are all stored in the YAML configuration files `./inst/extdata/config.yml` and `./inst/extdata/config_internals.yml`. The first file contains configuration parameters that are relevant to users, related to visualisation and plotting; the second contains parameters that are relevant only for developers. These values are accessible using the `.conf` function. For example, the value of the default size for plotting COG files is accessed using the command `.conf["plot", "max_size"]`.

Error messages are also stored outside of the code, in the YAML configuration file `./inst/extdata/config_messages.yml`. These values are accessible using the `.conf` function. For example, the error associated with an invalid NA value for an input parameter is accessible using the function `.conf("messages", ".check_na_parameter")`.

We strive for high code coverage (> 90%). Every parameter of all `sits` functions (including internal ones) is checked for consistency. Please see `api_check.R`.

### Supporting new STAC-based image catalogues

If you want to include a STAC-based catalogue not yet supported by `sits`, we encourage you to look at existing implementations of catalogues such as Microsoft Planetary Computer (MPC), Digital Earth Africa (DEA), and AWS. STAC-based catalogues in `sits` are associated with YAML description files, which are available in the directory `./inst/extdata/sources`. For example, the YAML file `config_source_mpc.yml` describes the contents of the MPC collections supported by `sits`. Please first provide a YAML file which lists the detailed contents of the new catalogue you wish to include, following the examples provided.

After writing the YAML file, you need to consider how to access and query the new catalogue. The entry point for access to all catalogues is the `sits_cube.stac_cube()` function, which in turn calls a sequence of functions which are described in the generic interface `api_source.R`. Most calls of this API are handled by the functions of `api_source_stac.R`, which provides an interface to the `rstac` package and handles STAC queries.

Each STAC catalogue is different. The STAC specification allows providers to implement their data descriptions with specific information. For this reason, the generic API described in `api_source.R` needs to be specialized for each provider. Whenever a provider needs specific implementations of parts of the STAC protocol, we include them in separate files.
For example, `api_source_mpc.R` implements specific quirks of the MPC platform. Similarly, specific support for CDSE (Copernicus Data Space Environment) is available in `api_source_cdse.R`.

### Including new methods for machine learning

This section provides guidance for experts who want to include new methods for machine learning that work in connection with `sits`. The discussion below assumes familiarity with the R language. Developers should consult Hadley Wickham’s excellent book [Advanced R](https://adv-r.hadley.nz/), especially Chapter 10 on “Function Factories”.

All machine learning and deep learning algorithms in `sits` follow the same logic; all models are created by `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)`. This function has two parameters: (a) `samples`, a set of time series with the training samples; (b) `ml_method`, a function that fits the model to the input data. The result is a function that is passed on to `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)` to classify time series or data cubes. The structure of `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` is simple, as shown below.

```
sits_train <- function(samples, ml_method) {
    # train a ml classifier with the given data
    result <- ml_method(samples)
    # return a valid machine learning method
    [return](https://rdrr.io/r/base/function.html)(result)
}
```

In R terms, `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` is a function factory, or a function that makes functions. Such behavior is possible because functions are first-class objects in R. In other words, they can be bound to a name in the same way that variables are. A second property of R is that functions capture (enclose) the environment in which they are created. In other words, when a function is returned as a result of another function, the internal variables used to create it are available inside its environment. In programming language terms, this technique is called a “closure”. The following definition from Wikipedia captures the purpose of closures: *“Operationally, a closure is a record storing a function together with an environment. The environment is a mapping associating each free variable of the function with the value or reference to which the name was bound when the closure was created. A closure allows the function to access those captured variables through the closure’s copies of their values or references, even when the function is invoked outside their scope.”*

In `sits`, the properties of closures are used as a basis for making training and classification independent. The return of `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` is a model that contains information on how to classify input values, as well as information on the samples used to train the model. To ensure all models work in the same fashion, machine learning functions in `sits` also share the same data structure for prediction. This data structure is created by `[sits_predictors()](https://rdrr.io/pkg/sits/man/sits_predictors.html)`, which transforms the time series tibble into a set of values suitable for use as training data, as shown in the following example.
``` [data](https://rdrr.io/r/utils/data.html)("samples_matogrosso_mod13q1", package = "sitsdata") pred <- [sits_predictors](https://rdrr.io/pkg/sits/man/sits_predictors.html)(samples_matogrosso_mod13q1) pred ``` ``` #> # A tibble: 1,837 × 94 #> sample_id label NDVI1 NDVI2 NDVI3 NDVI4 NDVI5 NDVI6 NDVI7 NDVI8 NDVI9 NDVI10 #> <int> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> #> 1 1 Pastu… 0.500 0.485 0.716 0.654 0.591 0.662 0.734 0.739 0.768 0.797 #> 2 2 Pastu… 0.364 0.484 0.605 0.726 0.778 0.829 0.762 0.762 0.643 0.610 #> 3 3 Pastu… 0.577 0.674 0.639 0.569 0.596 0.623 0.650 0.650 0.637 0.646 #> 4 4 Pastu… 0.597 0.699 0.789 0.792 0.794 0.72 0.646 0.705 0.757 0.810 #> 5 5 Pastu… 0.388 0.491 0.527 0.660 0.677 0.746 0.816 0.816 0.825 0.835 #> 6 6 Pastu… 0.350 0.345 0.364 0.429 0.506 0.583 0.660 0.616 0.580 0.651 #> 7 7 Pastu… 0.490 0.527 0.543 0.583 0.594 0.605 0.616 0.627 0.622 0.644 #> 8 8 Pastu… 0.435 0.574 0.395 0.392 0.518 0.597 0.648 0.774 0.786 0.798 #> 9 9 Pastu… 0.396 0.473 0.542 0.587 0.649 0.697 0.696 0.695 0.699 0.703 #> 10 10 Pastu… 0.354 0.387 0.527 0.577 0.626 0.723 0.655 0.655 0.646 0.536 #> # ℹ 1,827 more rows #> # ℹ 82 more variables: NDVI11 <dbl>, NDVI12 <dbl>, NDVI13 <dbl>, NDVI14 <dbl>, #> # NDVI15 <dbl>, NDVI16 <dbl>, NDVI17 <dbl>, NDVI18 <dbl>, NDVI19 <dbl>, #> # NDVI20 <dbl>, NDVI21 <dbl>, NDVI22 <dbl>, NDVI23 <dbl>, EVI1 <dbl>, #> # EVI2 <dbl>, EVI3 <dbl>, EVI4 <dbl>, EVI5 <dbl>, EVI6 <dbl>, EVI7 <dbl>, #> # EVI8 <dbl>, EVI9 <dbl>, EVI10 <dbl>, EVI11 <dbl>, EVI12 <dbl>, EVI13 <dbl>, #> # EVI14 <dbl>, EVI15 <dbl>, EVI16 <dbl>, EVI17 <dbl>, EVI18 <dbl>, … ``` The predictors tibble is organized as a combination of the “X” and “Y” values used by machine learning algorithms. The first two columns are `sample_id` and `label`. The other columns contain the data values, organized by band and time. For machine learning methods that are not time\-sensitive, such as random forest, this organization is sufficient for training. In the case of time\-sensitive methods such as `tempCNN`, further arrangements are necessary to ensure the tensors have the right dimensions. Please refer to the `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` source code for an example of how to adapt the prediction table to appropriate `torch` tensor. Some algorithms require data normalization. Therefore, the `[sits_predictors()](https://rdrr.io/pkg/sits/man/sits_predictors.html)` code is usually combined with methods that extract statistical information and then normalize the data, as in the example below. ``` # Data normalization ml_stats <- [sits_stats](https://rdrr.io/pkg/sits/man/sits_stats.html)(samples) # extract the training samples train_samples <- [sits_predictors](https://rdrr.io/pkg/sits/man/sits_predictors.html)(samples) # normalize the training samples train_samples <- [sits_pred_normalize](https://rdrr.io/pkg/sits/man/sits_pred_normalize.html)(pred = train_samples, stats = ml_stats) ``` The following example shows the implementation of the LightGBM algorithm, designed to efficiently handle large\-scale datasets and perform fast training and inference [\[95]](references.html#ref-Ke2017). Gradient boosting is a machine learning technique that builds an ensemble of weak prediction models, typically decision trees, to create a stronger model. LightGBM specifically focuses on optimizing the training and prediction speed, making it particularly suitable for large datasets. The example builds a model using the `lightgbm` package. 
This model will then be applied later to obtain a classification. Since LightGBM is a gradient boosting model, it uses part of the data as testing data to improve the model’s performance. The split between the training and test samples is controlled by a parameter, as shown in the following code extract.

```
# split the data into training and validation datasets
# create different splits of the input data
test_samples <- [sits_pred_sample](https://rdrr.io/pkg/sits/man/sits_pred_sample.html)(train_samples,
    frac = validation_split
)
# Remove the lines used for validation
sel <- !(train_samples$sample_id [%in%](https://rdrr.io/r/base/match.html) test_samples$sample_id)
train_samples <- train_samples[sel, ]
```

To include the `lightgbm` package as part of `sits`, we need to create a new training function which is compatible with the other machine learning methods of the package and will be called by `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)`. For compatibility, this new function will be called `sits_lightgbm()`. Its implementation uses two functions from the `lightgbm` package: (a) `[lgb.Dataset()](https://rdrr.io/pkg/lightgbm/man/lgb.Dataset.html)`, which transforms training and test samples into internal structures; (b) `[lgb.train()](https://rdrr.io/pkg/lightgbm/man/lgb.train.html)`, which trains the model. The main parameters, most of which are passed on to `[lightgbm::lgb.train()](https://rdrr.io/pkg/lightgbm/man/lgb.train.html)`, are: (a) `boosting_type`, boosting algorithm; (b) `objective`, classification objective; (c) `num_iterations`, number of runs; (d) `max_depth`, maximum tree depth; (e) `min_samples_leaf`, minimum size of data in one leaf (to avoid overfitting); (f) `learning_rate`, learning rate of the algorithm; (g) `n_iter_no_change`, number of successive iterations to stop training when validation metrics do not improve; (h) `validation_split`, fraction of training data to be used as validation data.

```
# install "lightgbm" package if not available
if (!requireNamespace("lightgbm", quietly = TRUE)) {
    install.packages("lightgbm")
}
# create a function in sits style for LightGBM algorithm
sits_lightgbm <- function(samples = NULL,
                          boosting_type = "gbdt",
                          objective = "multiclass",
                          min_samples_leaf = 10,
                          max_depth = 6,
                          learning_rate = 0.1,
                          num_iterations = 100,
                          n_iter_no_change = 10,
                          validation_split = 0.2, ...)
{
    # function that returns a LightGBM model based on a sits sample tibble
    train_fun <- function(samples) {
        # Extract the predictors
        train_samples <- [sits_predictors](https://rdrr.io/pkg/sits/man/sits_predictors.html)(samples)
        # find number of labels
        labels <- [sits_labels](https://rdrr.io/pkg/sits/man/sits_labels.html)(samples)
        n_labels <- [length](https://rdrr.io/r/base/length.html)(labels)
        # lightGBM uses numerical labels starting from 0
        int_labels <- [c](https://rdrr.io/r/base/c.html)(1:n_labels) - 1
        # create a named vector with integers matching the class labels
        [names](https://rdrr.io/r/base/names.html)(int_labels) <- labels
        # split the data into training and validation datasets
        # create different splits of the input data
        test_samples <- [sits_pred_sample](https://rdrr.io/pkg/sits/man/sits_pred_sample.html)(train_samples,
            frac = validation_split
        )
        # Remove the lines used for validation
        sel <- !(train_samples$sample_id [%in%](https://rdrr.io/r/base/match.html) test_samples$sample_id)
        train_samples <- train_samples[sel, ]
        # transform the training data to LGBM dataset
        # (dropping the first two columns: sample_id and label)
        lgbm_train_samples <- lightgbm::[lgb.Dataset](https://rdrr.io/pkg/lightgbm/man/lgb.Dataset.html)(
            data = [as.matrix](https://rdrr.io/r/base/matrix.html)(train_samples[, -2:0]),
            label = [unname](https://rdrr.io/r/base/unname.html)(int_labels[train_samples[[2]]])
        )
        # transform the test data to LGBM dataset
        lgbm_test_samples <- lightgbm::[lgb.Dataset](https://rdrr.io/pkg/lightgbm/man/lgb.Dataset.html)(
            data = [as.matrix](https://rdrr.io/r/base/matrix.html)(test_samples[, -2:0]),
            label = [unname](https://rdrr.io/r/base/unname.html)(int_labels[test_samples[[2]]])
        )
        # set the parameters for the lightGBM training
        lgb_params <- [list](https://rdrr.io/r/base/list.html)(
            boosting_type = boosting_type,
            objective = objective,
            min_samples_leaf = min_samples_leaf,
            max_depth = max_depth,
            learning_rate = learning_rate,
            num_iterations = num_iterations,
            n_iter_no_change = n_iter_no_change,
            num_class = n_labels
        )
        # call method and return the trained model
        lgbm_model <- lightgbm::[lgb.train](https://rdrr.io/pkg/lightgbm/man/lgb.train.html)(
            data = lgbm_train_samples,
            valids = [list](https://rdrr.io/r/base/list.html)(test_data = lgbm_test_samples),
            params = lgb_params,
            verbose = -1,
            ...
        )
        # serialize the model for parallel processing
        lgbm_model_string <- lgbm_model$save_model_to_string(NULL)
        # construct the model prediction closure function and return it
        predict_fun <- function(values) {
            # reload the model (unserialize)
            lgbm_model <- lightgbm::[lgb.load](https://rdrr.io/pkg/lightgbm/man/lgb.load.html)(model_str = lgbm_model_string)
            # predict probabilities
            prediction <- stats::[predict](https://rdrr.io/r/stats/predict.html)(lgbm_model,
                data = [as.matrix](https://rdrr.io/r/base/matrix.html)(values),
                rawscore = FALSE,
                reshape = TRUE
            )
            # adjust the names of the columns of the probs
            [colnames](https://rdrr.io/r/base/colnames.html)(prediction) <- labels
            # retrieve the prediction results
            [return](https://rdrr.io/r/base/function.html)(prediction)
        }
        # Set model class
        [class](https://rdrr.io/r/base/class.html)(predict_fun) <- [c](https://rdrr.io/r/base/c.html)("lightgbm_model", "sits_model", [class](https://rdrr.io/r/base/class.html)(predict_fun))
        [return](https://rdrr.io/r/base/function.html)(predict_fun)
    }
    result <- [sits_factory_function](https://rdrr.io/pkg/sits/man/sits_factory_function.html)(samples, train_fun)
    [return](https://rdrr.io/r/base/function.html)(result)
}
```

The above code has two nested functions: `train_fun()` and `predict_fun()`. When `sits_lightgbm()` is called, `train_fun()` transforms the input samples into predictors and uses them to train the algorithm, creating a model (`lgbm_model`). This model is included as part of the function’s closure and becomes available at classification time. Inside `train_fun()`, we include `predict_fun()`, which applies the `lgbm_model` object to classify the input values. The `train_fun` object is then returned as a closure, using the `sits_factory_function` constructor. This constructor allows the model to be called either as part of `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` or independently, with the same result.

```
sits_factory_function <- function(data, fun) {
    # if no data is given, we prepare a
    # function to be called as a parameter of other functions
    if (purrr::is_null(data)) {
        result <- fun
    } else {
        # ...otherwise compute the result on the input data
        result <- fun(data)
    }
    [return](https://rdrr.io/r/base/function.html)(result)
}
```

As a result, the following calls are equivalent.

```
# building a model using sits_train
lgbm_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(samples, sits_lightgbm())
# building a model directly
lgbm_model <- sits_lightgbm(samples)
```

There is one additional requirement for the algorithm to be compatible with `sits`. Data cube processing algorithms in `sits` run in parallel. For this reason, once the classification model is trained, it is serialized, as shown in the following line. The serialized version of the model is exported to the function closure, so it can be used at classification time.

```
# serialize the model for parallel processing
lgbm_model_string <- lgbm_model$save_model_to_string(NULL)
```

During classification, `predict_fun()` is called in parallel by each CPU. At this moment, the serialized string is transformed back into a model, which is then run to obtain the classification, as shown in the code.

```
# unserialize the model
lgbm_model <- lightgbm::[lgb.load](https://rdrr.io/pkg/lightgbm/man/lgb.load.html)(model_str = lgbm_model_string)
```

Therefore, using function factories that produce closures, `sits` keeps the classification function independent of the machine learning or deep learning algorithm.
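The same function-factory pattern can be seen in a much smaller, self-contained sketch that has nothing to do with machine learning; the `make_scaler()` function below is a hypothetical example and is not part of the `sits` API.

```
# A minimal function factory: the returned function encloses `factor`
make_scaler <- function(factor) {
    scale_fun <- function(values) {
        # `factor` is taken from the enclosing environment (the closure)
        values * factor
    }
    return(scale_fun)
}
# calling the factory produces a function with `factor` already bound
double <- make_scaler(2)
double(c(1, 2, 3)) # returns 2 4 6
```

Just as `double()` carries the value of `factor` with it, `predict_fun()` carries the trained (serialized) model with it when it is shipped to the parallel workers.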
This policy allows independent proposal, testing, and development of new classification methods. It also enables improvements on parallel processing methods without affecting the existing classification methods.

To illustrate this separation between training and classification, the new algorithm developed in this chapter using `lightgbm` will be used to classify a data cube. The code is the same as the data cube classification example in the [Introduction](https://e-sensing.github.io/sitsbook/introduction.html) chapter, except for the use of `sits_lightgbm()`.

```
[data](https://rdrr.io/r/utils/data.html)("samples_matogrosso_mod13q1", package = "sitsdata")
# Create a data cube using local files
sinop <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)(
    source = "BDC",
    collection = "MOD13Q1-6.1",
    data_dir = [system.file](https://rdrr.io/r/base/system.file.html)("extdata/sinop", package = "sitsdata"),
    parse_info = [c](https://rdrr.io/r/base/c.html)("X1", "X2", "tile", "band", "date")
)
# The data cube has only "NDVI" and "EVI" bands
# Select the bands NDVI and EVI
samples_2bands <- [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)(
    data = samples_matogrosso_mod13q1,
    bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI")
)
# train lightGBM model
lgb_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(samples_2bands, sits_lightgbm())
# Classify the data cube
sinop_probs <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)(
    data = sinop,
    ml_model = lgb_model,
    multicores = 2,
    memsize = 8,
    output_dir = "./tempdir/chp15"
)
# Perform spatial smoothing
sinop_bayes <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)(
    cube = sinop_probs,
    multicores = 2,
    memsize = 8,
    output_dir = "./tempdir/chp15"
)
# Label the smoothed file
sinop_map <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)(
    cube = sinop_bayes,
    output_dir = "./tempdir/chp15"
)
# plot the result
[plot](https://rdrr.io/r/graphics/plot.default.html)(sinop_map, title = "Sinop Classification Map")
```

Figure 120: Classification map for Sinop using LightGBM (source: authors).

How parallel processing works in virtual machines with CPUs
-----------------------------------------------------------

This section provides an overview of how `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`, `[sits_smooth()](https://rdrr.io/pkg/sits/man/sits_smooth.html)`, and `[sits_label_classification()](https://rdrr.io/pkg/sits/man/sits_label_classification.html)` process images in parallel. To achieve efficiency, `sits` implements a fault-tolerant multitasking procedure for big Earth observation data classification. The learning curve is shortened as there is no need to learn how to do multiprocessing. Image classification in `sits` is done by a cluster of independent workers linked to a virtual machine. To avoid communication overhead, all large payloads are read and stored independently; direct interaction between the main process and the workers is kept at a minimum.

The classification procedure benefits from the fact that most images available in cloud collections are stored as COGs (cloud-optimized GeoTIFF). COGs are GeoTIFF files organized in regular square blocks to improve visualization and access for large datasets. Thus, data requests can be optimized to access only portions of the images. All cloud services supported by `sits` use COG files.
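As a quick way to see this block structure for yourself, the sketch below inspects a local GeoTIFF with the `terra` package; the file name `my_cog.tif` is a placeholder, and the check assumes `terra` is installed (it is not required by this part of `sits`).

```
library(terra)
# Open a local GeoTIFF; "my_cog.tif" is a hypothetical path
r <- rast("my_cog.tif")
# Size of the blocks (chunks) used for tiled access
fileBlocksize(r)
# gdalinfo-style description; for a COG it lists the overview levels
describe("my_cog.tif")
```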
The classification algorithm in `sits` uses COGs to ensure optimal data access, reducing I/O demand as much as possible. The approach for parallel processing in `sits`, depicted in Figure [121](technical-annex.html#fig:par), has the following steps:

1. Based on the block size of individual COG files, calculate the size of each chunk that must be loaded in memory, considering the number of bands and the timeline’s length. Chunk access is optimized for the efficient transfer of data blocks.
2. Divide the total memory available by the chunk size to determine how many processes can run in parallel.
3. Each core processes a chunk and produces a subset of the result.
4. Repeat the process until all chunks in the cube have been processed.
5. Check that subimages have been produced correctly. If there is a problem with one or more subimages, run a failure recovery procedure to ensure all data is processed.
6. After generating all subimages, join them to obtain the result.

Figure 121: Parallel processing in sits (Source: Simoes et al. (2021). Reproduction under fair use doctrine).

This approach has many advantages. It has no dependencies on proprietary software and runs in any virtual machine that supports R. Processing is done in a concurrent and independent way, with no communication between workers. Failure of one worker does not cause the failure of big data processing. The software is prepared to resume classification processing from the last processed chunk, so that failures such as memory exhaustion, power supply interruption, or network breakdown do not force a complete restart.

To reduce processing time, it is necessary to adjust `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`, `[sits_smooth()](https://rdrr.io/pkg/sits/man/sits_smooth.html)`, and `[sits_label_classification()](https://rdrr.io/pkg/sits/man/sits_label_classification.html)` according to the capabilities of the host environment. The `memsize` parameter controls the size of the main memory (in GBytes) to be used for classification. A practical approach is to set `memsize` to the maximum memory available in the virtual machine for classification and to choose `multicores` as the largest number of cores available. Based on the memory available and the size of blocks in COG files, `sits` will access the images in an optimized way. In this way, `sits` tries to ensure the best possible use of the available resources.

Exporting data to JSON
----------------------

Both the data cube and the time series tibble can be exported to exchange formats such as JSON.

```
[library](https://rdrr.io/r/base/library.html)([jsonlite](https://jeroen.r-universe.dev/jsonlite))
# Export the data cube to JSON
jsonlite::[write_json](https://rdrr.io/pkg/jsonlite/man/read_json.html)(
    x = sinop,
    path = "./data_cube.json",
    pretty = TRUE
)
# Export the time series to JSON
jsonlite::[write_json](https://rdrr.io/pkg/jsonlite/man/read_json.html)(
    x = samples_prodes_4classes,
    path = "./time_series.json",
    pretty = TRUE
)
```

SITS and Google Earth Engine APIs: A side-by-side exploration
-------------------------------------------------------------

This section presents a side-by-side exploration of the `sits` and Google Earth Engine (`gee`) APIs, focusing on their respective capabilities in handling satellite data. The exploration is structured around three key examples: (1) creating a mosaic, (2) calculating the Normalized Difference Vegetation Index (NDVI), and (3) performing a Land Use and Land Cover (LULC) classification.
Each example demonstrates how these tasks are executed using `sits` and `gee`, offering a clear view of their methodologies and highlighting the similarities and the unique approaches each API employs. ### Example 1: Creating a Mosaic A common application among scientists and developers in the field of Remote Sensing is the creation of satellite image mosaics. These mosaics are formed by combining two or more images, typically used for visualization in various applications. In this example, we will demonstrate how to create an image mosaic using `sits` and `gee` APIs. In this example, a Region of Interest (ROI) is defined using a bounding box with longitude and latitude coordinates. Below are the code snippets for specifying this ROI in both `sits` and `gee` environments. **sits** ``` roi <- [c](https://rdrr.io/r/base/c.html)( "lon_min" = -63.410, "lat_min" = -9.783, "lon_max" = -62.614, "lat_max" = -9.331 ) ``` **gee** ``` var roi = ee.Geometry.Rectangle([-63.410,-9.783,-62.614,-9.331]); ``` Next, we will load the satellite imagery. For this example, we used data from [Sentinel\-2](https://sentinels.copernicus.eu/web/sentinel/copernicus/sentinel-2). In `sits`, several providers offer Sentinel\-2 ARD images. In this example, we will use images provided by the Microsoft Planetary Computer (**MPC**). **sits** ``` data <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "MPC", collection = "SENTINEL-2-L2A", bands = [c](https://rdrr.io/r/base/c.html)("B02", "B03", "B04"), tiles = [c](https://rdrr.io/r/base/c.html)("20LNQ", "20LMQ"), start_date = "2024-08-01", end_date = "2024-08-03" ) ``` **gee** ``` var data = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED') .filterDate('2024-08-01', '2024-08-03') .filter(ee.Filter.inList('MGRS_TILE', ['20LNQ', '20LMQ'])) .select(['B4', 'B3', 'B2']); ``` > `sits` provides search filters for a collection as parameters in the > `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` function, whereas `gee` offers these filters as methods of an > `ImageCollection` object. In `sits`, we will use the `[sits_mosaic()](https://rdrr.io/pkg/sits/man/sits_mosaic.html)` function to create mosaics of our images. In `gee`, we will utilize the `mosaic()` method. `[sits_mosaic()](https://rdrr.io/pkg/sits/man/sits_mosaic.html)` function crops the mosaic based on the `roi` parameter. In `gee`, cropping is performed using the `[clip()](https://rdrr.io/r/graphics/clip.html)` method. We will use the same `roi` that was used to filter the images to perform the cropping on the mosaic. See the following code: **sits** ``` mosaic <- [sits_mosaic](https://rdrr.io/pkg/sits/man/sits_mosaic.html)( cube = data, roi = roi, multicores = 4, output_dir = [tempdir](https://rdrr.io/r/base/tempfile.html)() ) ``` **gee** ``` var mosaic = data.mosaic().clip(roi); ``` Finally, the results can be visualized in an interactive map. **sits** ``` [sits_view](https://rdrr.io/pkg/sits/man/sits_view.html)( x = mosaic, red = "B04", green = "B03", blue = "B02" ) ``` **gee** ``` // Define view region Map.centerObject(roi, 10); // Add mosaic Image Map.addLayer(mosaic, { min: 0, max: 3000 }, 'Mosaic'); ``` ### Example 2: Calculating NDVI This example demonstrates how to generate time\-series of Normalized Difference Vegetation Index (NDVI) using both the `sits` and `gee` APIs. In this example, a Region of Interest (ROI) is defined using the `sinop_roi.shp` file. Below are the code snippets for specifying this file in both `sits` and `gee` environments. 
> To reproduce the example, you can download the shapefile using [this link](data/sits-gee/sinop_roi.zip). > In `sits`, you can just use it. In `gee`, it would be required to upload the > file in your user space. **sits** ``` roi_data <- "sinop_roi.shp" ``` **gee** ``` var roi_data = ee.FeatureCollection("/path/to/sinop_roi"); ``` Next, we load the satellite imagery. For this example, we use data from [Landsat\-8](https://www.usgs.gov/landsat-missions/landsat-8). In `sits`, this data is retrieved from the Brazil Data Cube, although other sources are available. For `gee`, the data provided by the platform is used. In `sits`, when the data is loaded, all necessary transformations to make the data ready for use (e.g., `factor`, `offset`, `cloud masking`) are applied automatically. In `gee`, users are responsible for performing these transformations themselves. **sits** ``` data <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "BDC", collection = "LANDSAT-OLI-16D", bands = [c](https://rdrr.io/r/base/c.html)("RED", "NIR08", "CLOUD"), roi = roi_data, start_date = "2019-05-01", end_date = "2019-07-01" ) ``` **gee** ``` var data = ee.ImageCollection("LANDSAT/LC08/C02/T1_L2") .filterBounds(roi_data) .filterDate("2019-05-01", "2019-07-01") .select(["SR_B4", "SR_B5", "QA_PIXEL"]); // factor and offset data = data.map(function(image) { var opticalBands = image.select('SR_B.').multiply(0.0000275).add(-0.2); return image.addBands(opticalBands, null, true); }); data = data.map(function(image) { // Select the pixel_qa band var qa = image.select('QA_PIXEL'); // Create a mask to identify cloud and cloud shadow var cloudMask = qa.bitwiseAnd(1 << 5).eq(0) // Clouds .and(qa.bitwiseAnd(1 << 3).eq(0)); // Cloud shadows // Apply the cloud mask to the image return image.updateMask(cloudMask); }); ``` After loading the satellite imagery, the NDVI can be generated. In `sits`, a function allows users to specify the formula used to create a new attribute, in this case, NDVI. In `gee`, a callback function is used, where the NDVI is calculated for each image. **sits** ``` data_ndvi <- [sits_apply](https://rdrr.io/pkg/sits/man/sits_apply.html)( data = data, NDVI = (NIR08 - RED) / (NIR08 + RED), output_dir = [tempdir](https://rdrr.io/r/base/tempfile.html)(), multicores = 4, progress = TRUE ) ``` **gee** ``` var data_ndvi = data.map(function(image) { var ndvi = image.normalizedDifference(["SR_B5", "SR_B4"]).rename('NDVI'); return image.addBands(ndvi); }); data_ndvi = data_ndvi.select("NDVI"); ``` The results are clipped to the ROI defined at the beginning of the example to facilitate visualization. > In both APIs, you can define a ROI before performing the > operation to optimize resource usage. However, in this example, the data is > cropped after the calculation. **sits** ``` data_ndvi <- [sits_mosaic](https://rdrr.io/pkg/sits/man/sits_mosaic.html)( cube = data_ndvi, roi = roi_data, output_dir = [tempdir](https://rdrr.io/r/base/tempfile.html)(), multicores = 4 ) ``` **gee** ``` data_ndvi = data_ndvi.map(function(image) { return image.clip(roi_data); }); ``` Finally, the results can be visualized in an interactive map. 
**sits** ``` [sits_view](https://rdrr.io/pkg/sits/man/sits_view.html)(data_ndvi, band = "NDVI", date = "2019-05-25", opacity = 1) ``` **gee** ``` // Define view region Map.centerObject(roi_data, 10); // Add classification map (colors from sits) Map.addLayer(data_ndvi, { min: 0, max: 1, palette: ["red", 'white', 'green'] }, "NDVI Image"); ``` ### Example 3: Land Use and Land Cover (LULC) Classification This example demonstrates how to perform Land Use and Land Cover (LULC) classification using satellite image time series and machine\-learning models in both `sits` and `gee`. This example defines the region of interest (ROI) using a shapefile named `sinop_roi.shp`. Below are the code snippets for specifying this file in both `sits` and `gee` environments. > To reproduce the example, you can download the shapefile using [this link](data/sits-gee/sinop_roi.zip). > In `sits`, you can just use it. In `gee`, it would be required to upload the > file in your user space. **sits** ``` roi_data <- "sinop_roi.shp" ``` **gee** ``` var roi_data = ee.FeatureCollection("/path/to/sinop_roi"); ``` To train a classifier, sample data with labels representing the behavior of each class to be identified is necessary. In this example, we use a small set with `18` samples. The following code snippets show how these samples are defined in each environment. In `sits`, labels can be of type `string`, whereas `gee` requires labels to be `integers`. To accommodate this difference, two versions of the same sample set were created: (1\) one with `string` labels for use with `sits`, and (2\) another with `integer` labels for use with `gee`. > To download these samples, you can use the following links: > [samples\_sinop\_crop for sits](data/sits-gee/samples_sinop_crop.zip) or [samples\_sinop\_crop for gee](data/sits-gee/samples_sinop_crop_gee.zip) **sits** ``` samples <- "samples_sinop_crop.shp" ``` **gee** ``` var samples = ee.FeatureCollection("samples_sinop_crop_gee"); ``` Next, we load the satellite imagery. For this example, we use data from [MOD13Q1](https://ladsweb.modaps.eosdis.nasa.gov/missions-and-measurements/products/MOD13Q1) . In `sits`, this data is retrieved from the Brazil Data Cube, but other sources are also available. In `gee`, the platform directly provides this data. In `sits`, all necessary data transformations for classification tasks are handled automatically. In contrast, `gee` requires users to manually transform the data into the correct format. In this context it’s important to note that, in the `gee` code, transforming all images into bands mimics the approach used by `sits` for non\-temporal classifiers. However, this method is not inherently scalable in `gee` and may need adjustments for larger datasets or more bands. Additionally, for temporal classifiers like TempCNN, other transformations are necessary and must be manually implemented by the user in `gee`. In contrast, `sits` provides a consistent API experience, regardless of the data size or machine learning algorithm. 
**sits** ``` data <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "BDC", collection = "MOD13Q1-6.1", bands = [c](https://rdrr.io/r/base/c.html)("NDVI"), roi = roi_data, start_date = "2013-09-01", end_date = "2014-08-29" ) ``` **gee** ``` var data = ee.ImageCollection("MODIS/061/MOD13Q1") .filterBounds(roi_data) .filterDate("2013-09-01", "2014-09-01") .select(["NDVI"]); // Transform all images to bands data = data.toBands(); ``` In this example, we’ll use a Random Forest classifier to create a LULC map. To train the classifier, we need sample data linked to time\-series. This step shows how to extract and associate time\-series with samples. **sits** ``` samples_ts <- [sits_get_data](https://rdrr.io/pkg/sits/man/sits_get_data.html)( cube = data, samples = samples, multicores = 4 ) ``` **gee** ``` var samples_ts = data.sampleRegions({ collection: samples, properties: ["label"] }); ``` With the time\-series data extracted for each sample, we can now train the Random Forest classifier **sits** ``` classifier <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)( samples_ts, [sits_rfor](https://rdrr.io/pkg/sits/man/sits_rfor.html)(num_trees = 100) ) ``` **gee** ``` var classifier = ee.Classifier.smileRandomForest(100).train({ features: samples_ts, classProperty: "label", inputProperties: data.bandNames() }); ``` Now, it is possible to generate the classification map using the trained Random Forest model. In `sits`, the classification process starts with a probability map. This map provides the probability of each class for every pixel, offering insights into the classifier’s performance. It also allows for refining the results using methods like Bayesian probability smoothing. After generating the probability map, it is possible to produce the class map, where each pixel is assigned to the class with the highest probability. In `gee`, while it is possible to generate probabilities, it is not strictly required to produce the classification map. Yet, as of the date of this document, there is no out\-of\-the\-box solution available for utilizing these probabilities to enhance classification results, as presented in `sits`. **sits** ``` probabilities <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( data = data, ml_model = classifier, multicores = 4, roi = roi_data, output_dir = [tempdir](https://rdrr.io/r/base/tempfile.html)() ) class_map <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)( cube = probabilities, output_dir = [tempdir](https://rdrr.io/r/base/tempfile.html)(), multicores = 4 ) ``` **gee** ``` var probs_map = data.classify(classifier.setOutputMode("MULTIPROBABILITY")); var class_map = data.classify(classifier); ``` The results are clipped to the ROI defined at the beginning of the example to facilitate visualization. > In both APIs, it’s possible to define an ROI before processing. However, this > was not applied in this example. **sits** ``` class_map <- [sits_mosaic](https://rdrr.io/pkg/sits/man/sits_mosaic.html)( cube = class_map, roi = roi_data, output_dir = [tempdir](https://rdrr.io/r/base/tempfile.html)(), multicores = 4 ) ``` **gee** ``` class_map = class_map.clip(roi_data); ``` Finally, the results can be visualized on an interactive map. 
**sits**

```
[sits_view](https://rdrr.io/pkg/sits/man/sits_view.html)(class_map, opacity = 1)
```

**gee**

```
// Define view region
Map.centerObject(roi_data, 10);
// Add classification map (colors from sits)
Map.addLayer(class_map, {
    min: 1,
    max: 4,
    palette: ["#FAD12D", "#1E8449", "#D68910", "#a2d43f"]
}, "Classification map");
```

Adding functions to the `sits` API
----------------------------------

### General principles

New functions that build on the `sits` API should follow the general principles below.

* The target audience for `sits` is the community of remote sensing experts with an Earth Sciences background who want to use state-of-the-art data analysis methods with minimal investment in programming skills. The design of the `sits` API considers the typical workflow for land classification using satellite image time series and thus provides a clear and direct set of functions, which are easy to learn and master.
* For this reason, we welcome contributors that provide useful additions to the existing API, such as new ML/DL classification algorithms. In the case of a new API function, before making a pull request please raise an issue stating your rationale for the new function.
* Most functions in `sits` use the S3 programming model with a strong emphasis on generic methods which are specialized depending on the input data type. See, for example, the implementation of the `[sits_bands()](https://rdrr.io/pkg/sits/man/sits_bands.html)` function.
* Please do not include contributed code using the S4 programming model. Doing so would break the structure and the logic of the existing code. Convert your code from S4 to S3.
* Use generic functions as much as possible, as they improve modularity and maintenance. If your code has decision points using `if-else` clauses, such as `if A, do X; else do Y`, consider using generic functions.
* Functions that use the `torch` package use the R6 model to be compatible with that package. See, for example, the code in `sits_tempcnn.R` and `api_torch.R`. Converting `pyTorch` code to R and including it is straightforward. Please see the [Technical Annex](https://e-sensing.github.io/sitsbook/technical-annex.html) of the sits on-line book.
* The `sits` code relies on the packages of the `tidyverse` to work with tables and lists. We use `dplyr` and `tidyr` for data selection and wrangling, `purrr` and `slider` for loops over lists and tables, and `lubridate` to handle dates and times.

### Adherence to the `sits` data types

The `sits` package is built on top of three data types: time series tibbles, data cubes, and models. Most `sits` functions have one or more of these types as inputs and one of them as a return value. The time series tibble contains data and metadata. The first six columns contain the metadata: spatial and temporal information, the label assigned to the sample, and the data cube from which the data has been extracted. The `time_series` column contains the time series data for each spatiotemporal location. All time series tibbles are objects of class `sits`.

The `cube` data type is designed to store metadata about image files. In principle, images which are part of a data cube share the same geographical region, have the same bands, and have been regularized to fit into a pre-defined temporal interval. Data cubes in `sits` are organized by tiles. A tile is an element of a satellite's mission reference system, for example MGRS for Sentinel-2 and WRS2 for Landsat. A `cube` is a tibble where each row contains information about data covering one tile.
Each row of the cube tibble contains a column named `file_info`; this column holds a list with a tibble describing the individual image files associated with the tile. The `cube` data type is specialised into `raster_cube` (ARD images), `vector_cube` (ARD cubes with segmentation vectors), `probs_cube` (probabilities produced by classification algorithms on raster data), `probs_vector_cube` (probabilities generated by vector classification of segments), `uncertainty_cube` (cubes with uncertainty information), and `class_cube` (labelled maps). See the code in `sits_plot.R` as an example of specialisation of `plot` to handle different classes of raster data.

All ML/DL models in `sits` which are the result of `sits_train` belong to the `ml_model` class. In addition, models are assigned a second class, which is unique to ML models (e.g., `rfor_model`, `svm_model`) and generic for all DL `torch`-based models (`torch_model`). The class information is used for plotting models and for establishing whether a model can run on GPUs.

### Literal values, error messages, and testing

The internal `sits` code has no literal values; these are all stored in the YAML configuration files `./inst/extdata/config.yml` and `./inst/extdata/config_internals.yml`. The first file contains configuration parameters that are relevant to users, related to visualisation and plotting; the second contains parameters that are relevant only for developers. These values are accessible using the `.conf` function. For example, the value of the default size for plotting COG files is accessed using the command `.conf["plot", "max_size"]`.

Error messages are also stored outside of the code, in the YAML configuration file `./inst/extdata/config_messages.yml`. These values are accessible using the `.conf` function. For example, the error message associated with an invalid NA value for an input parameter is accessible using the function `.conf("messages", ".check_na_parameter")`.

We strive for high code coverage (> 90%). Every parameter of all `sits` functions (including internal ones) is checked for consistency. Please see `api_check.R`.

### Supporting new STAC-based image catalogues

If you want to include a STAC-based catalogue not yet supported by `sits`, we encourage you to look at existing implementations of catalogues such as Microsoft Planetary Computer (MPC), Digital Earth Africa (DEA), and AWS. STAC-based catalogues in `sits` are associated with YAML description files, which are available in the directory `./inst/extdata/sources`. For example, the YAML file `config_source_mpc.yml` describes the contents of the MPC collections supported by `sits`. Please first provide a YAML file which lists the detailed contents of the new catalogue you wish to include, following the examples provided.

After writing the YAML file, you need to consider how to access and query the new catalogue. The entry point for access to all catalogues is the `sits_cube.stac_cube()` function, which in turn calls a sequence of functions described in the generic interface `api_source.R`. Most calls of this API are handled by the functions of `api_source_stac.R`, which provides an interface to the `rstac` package and handles STAC queries.

Each STAC catalogue is different. The STAC specification allows providers to implement their data descriptions with specific information. For this reason, the generic API described in `api_source.R` needs to be specialized for each provider. Whenever a provider needs specific implementations of parts of the STAC protocol, we include them in separate files.
For example, `api_source_mpc.R` implements specific quirks of the MPC platform. Similarly, specific support for CDSE (Copernicus Data Space Ecosystem) is available in `api_source_cdse.R`.

### Including new methods for machine learning

This section provides guidance for experts who want to include new machine learning methods that work in connection with `sits`. The discussion below assumes familiarity with the R language. Developers should consult Hadley Wickham's excellent book [Advanced R](https://adv-r.hadley.nz/), especially Chapter 10 on "Function Factories".

All machine learning and deep learning algorithms in `sits` follow the same logic: all models are created by `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)`. This function has two parameters: (a) `samples`, a set of time series with the training samples; and (b) `ml_method`, a function that fits the model to the input data. The result is a function that is passed on to `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)` to classify time series or data cubes. The structure of `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` is simple, as shown below.

```
sits_train <- function(samples, ml_method) {
    # train an ML classifier with the given data
    result <- ml_method(samples)
    # return a valid machine learning model
    [return](https://rdrr.io/r/base/function.html)(result)
}
```

In R terms, `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` is a function factory, that is, a function that makes functions. Such behavior is possible because functions are first-class objects in R: they can be bound to a name in the same way that variables are. A second property of R is that functions capture (enclose) the environment in which they are created. In other words, when a function is returned as the result of another function, the internal variables used to create it remain available inside its environment. In programming-language terms, this technique is called a "closure". The following definition from Wikipedia captures the purpose of closures:

*"Operationally, a closure is a record storing a function together with an environment. The environment is a mapping associating each free variable of the function with the value or reference to which the name was bound when the closure was created. A closure allows the function to access those captured variables through the closure's copies of their values or references, even when the function is invoked outside their scope."*

In `sits`, the properties of closures are used as a basis for making training and classification independent. The return of `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` is a model that contains information on how to classify input values, as well as information on the samples used to train the model.

To ensure all models work in the same fashion, machine learning functions in `sits` also share the same data structure for prediction. This data structure is created by `[sits_predictors()](https://rdrr.io/pkg/sits/man/sits_predictors.html)`, which transforms the time series tibble into a set of values suitable for use as training data, as shown in the following example.
``` [data](https://rdrr.io/r/utils/data.html)("samples_matogrosso_mod13q1", package = "sitsdata") pred <- [sits_predictors](https://rdrr.io/pkg/sits/man/sits_predictors.html)(samples_matogrosso_mod13q1) pred ``` ``` #> # A tibble: 1,837 × 94 #> sample_id label NDVI1 NDVI2 NDVI3 NDVI4 NDVI5 NDVI6 NDVI7 NDVI8 NDVI9 NDVI10 #> <int> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> #> 1 1 Pastu… 0.500 0.485 0.716 0.654 0.591 0.662 0.734 0.739 0.768 0.797 #> 2 2 Pastu… 0.364 0.484 0.605 0.726 0.778 0.829 0.762 0.762 0.643 0.610 #> 3 3 Pastu… 0.577 0.674 0.639 0.569 0.596 0.623 0.650 0.650 0.637 0.646 #> 4 4 Pastu… 0.597 0.699 0.789 0.792 0.794 0.72 0.646 0.705 0.757 0.810 #> 5 5 Pastu… 0.388 0.491 0.527 0.660 0.677 0.746 0.816 0.816 0.825 0.835 #> 6 6 Pastu… 0.350 0.345 0.364 0.429 0.506 0.583 0.660 0.616 0.580 0.651 #> 7 7 Pastu… 0.490 0.527 0.543 0.583 0.594 0.605 0.616 0.627 0.622 0.644 #> 8 8 Pastu… 0.435 0.574 0.395 0.392 0.518 0.597 0.648 0.774 0.786 0.798 #> 9 9 Pastu… 0.396 0.473 0.542 0.587 0.649 0.697 0.696 0.695 0.699 0.703 #> 10 10 Pastu… 0.354 0.387 0.527 0.577 0.626 0.723 0.655 0.655 0.646 0.536 #> # ℹ 1,827 more rows #> # ℹ 82 more variables: NDVI11 <dbl>, NDVI12 <dbl>, NDVI13 <dbl>, NDVI14 <dbl>, #> # NDVI15 <dbl>, NDVI16 <dbl>, NDVI17 <dbl>, NDVI18 <dbl>, NDVI19 <dbl>, #> # NDVI20 <dbl>, NDVI21 <dbl>, NDVI22 <dbl>, NDVI23 <dbl>, EVI1 <dbl>, #> # EVI2 <dbl>, EVI3 <dbl>, EVI4 <dbl>, EVI5 <dbl>, EVI6 <dbl>, EVI7 <dbl>, #> # EVI8 <dbl>, EVI9 <dbl>, EVI10 <dbl>, EVI11 <dbl>, EVI12 <dbl>, EVI13 <dbl>, #> # EVI14 <dbl>, EVI15 <dbl>, EVI16 <dbl>, EVI17 <dbl>, EVI18 <dbl>, … ``` The predictors tibble is organized as a combination of the “X” and “Y” values used by machine learning algorithms. The first two columns are `sample_id` and `label`. The other columns contain the data values, organized by band and time. For machine learning methods that are not time\-sensitive, such as random forest, this organization is sufficient for training. In the case of time\-sensitive methods such as `tempCNN`, further arrangements are necessary to ensure the tensors have the right dimensions. Please refer to the `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` source code for an example of how to adapt the prediction table to appropriate `torch` tensor. Some algorithms require data normalization. Therefore, the `[sits_predictors()](https://rdrr.io/pkg/sits/man/sits_predictors.html)` code is usually combined with methods that extract statistical information and then normalize the data, as in the example below. ``` # Data normalization ml_stats <- [sits_stats](https://rdrr.io/pkg/sits/man/sits_stats.html)(samples) # extract the training samples train_samples <- [sits_predictors](https://rdrr.io/pkg/sits/man/sits_predictors.html)(samples) # normalize the training samples train_samples <- [sits_pred_normalize](https://rdrr.io/pkg/sits/man/sits_pred_normalize.html)(pred = train_samples, stats = ml_stats) ``` The following example shows the implementation of the LightGBM algorithm, designed to efficiently handle large\-scale datasets and perform fast training and inference [\[95]](references.html#ref-Ke2017). Gradient boosting is a machine learning technique that builds an ensemble of weak prediction models, typically decision trees, to create a stronger model. LightGBM specifically focuses on optimizing the training and prediction speed, making it particularly suitable for large datasets. The example builds a model using the `lightgbm` package. 
This model will then be applied later to obtain a classification. Since LightGBM is a gradient boosting model, it uses part of the input data as validation data to monitor the model's performance. The split between training and validation samples is controlled by a parameter (`validation_split`), as shown in the following code extract.

```
# split the data into training and validation datasets
test_samples <- [sits_pred_sample](https://rdrr.io/pkg/sits/man/sits_pred_sample.html)(train_samples,
    frac = validation_split
)
# remove the lines used for validation
sel <- !(train_samples$sample_id [%in%](https://rdrr.io/r/base/match.html) test_samples$sample_id)
train_samples <- train_samples[sel, ]
```

To include the `lightgbm` package as part of `sits`, we need to create a new training function which is compatible with the other machine learning methods of the package and will be called by `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)`. For compatibility, this new function will be called `sits_lightgbm()`. Its implementation uses two functions from the `lightgbm` package: (a) `[lgb.Dataset()](https://rdrr.io/pkg/lightgbm/man/lgb.Dataset.html)`, which transforms training and test samples into internal structures; and (b) `[lgb.train()](https://rdrr.io/pkg/lightgbm/man/lgb.train.html)`, which trains the model.

The parameters of `[lightgbm::lgb.train()](https://rdrr.io/pkg/lightgbm/man/lgb.train.html)` are: (a) `boosting_type`, the boosting algorithm; (b) `objective`, the classification objective; (c) `num_iterations`, the number of boosting iterations; (d) `max_depth`, the maximum tree depth; (e) `min_samples_leaf`, the minimum amount of data in one leaf (to avoid overfitting); (f) `learning_rate`, the learning rate of the algorithm; (g) `n_iter_no_change`, the number of successive iterations without improvement in the validation metrics after which training stops; and (h) `validation_split`, the fraction of the training data to be used as validation data.

```
# install the "lightgbm" package if not available
if (!requireNamespace("lightgbm", quietly = TRUE)) {
    install.packages("lightgbm")
}
# create a function in sits style for the LightGBM algorithm
sits_lightgbm <- function(samples = NULL,
                          boosting_type = "gbdt",
                          objective = "multiclass",
                          min_samples_leaf = 10,
                          max_depth = 6,
                          learning_rate = 0.1,
                          num_iterations = 100,
                          n_iter_no_change = 10,
                          validation_split = 0.2, ...) {
    # function that returns a lightgbm model based on a sits sample tibble
    train_fun <- function(samples) {
        # extract the predictors
        train_samples <- [sits_predictors](https://rdrr.io/pkg/sits/man/sits_predictors.html)(samples)
        # find the labels and their number
        labels <- [sits_labels](https://rdrr.io/pkg/sits/man/sits_labels.html)(samples)
        n_labels <- [length](https://rdrr.io/r/base/length.html)(labels)
        # lightGBM uses numerical labels starting from 0
        int_labels <- [c](https://rdrr.io/r/base/c.html)(1:n_labels) - 1
        # create a named vector with integers matching the class labels
        [names](https://rdrr.io/r/base/names.html)(int_labels) <- labels
        # split the data into training and validation datasets
        test_samples <- [sits_pred_sample](https://rdrr.io/pkg/sits/man/sits_pred_sample.html)(train_samples,
            frac = validation_split
        )
        # remove the lines used for validation
        sel <- !(train_samples$sample_id [%in%](https://rdrr.io/r/base/match.html) test_samples$sample_id)
        train_samples <- train_samples[sel, ]
        # transform the training data into an LGBM dataset
        lgbm_train_samples <- lightgbm::[lgb.Dataset](https://rdrr.io/pkg/lightgbm/man/lgb.Dataset.html)(
            data = [as.matrix](https://rdrr.io/r/base/matrix.html)(train_samples[, -2:0]),
            label = [unname](https://rdrr.io/r/base/unname.html)(int_labels[train_samples[[2]]])
        )
        # transform the test data into an LGBM dataset
        lgbm_test_samples <- lightgbm::[lgb.Dataset](https://rdrr.io/pkg/lightgbm/man/lgb.Dataset.html)(
            data = [as.matrix](https://rdrr.io/r/base/matrix.html)(test_samples[, -2:0]),
            label = [unname](https://rdrr.io/r/base/unname.html)(int_labels[test_samples[[2]]])
        )
        # set the parameters for the lightGBM training,
        # including the number of classes
        lgb_params <- [list](https://rdrr.io/r/base/list.html)(
            boosting_type = boosting_type,
            objective = objective,
            min_samples_leaf = min_samples_leaf,
            max_depth = max_depth,
            learning_rate = learning_rate,
            num_iterations = num_iterations,
            n_iter_no_change = n_iter_no_change,
            num_class = n_labels
        )
        # call the training method and keep the trained model
        lgbm_model <- lightgbm::[lgb.train](https://rdrr.io/pkg/lightgbm/man/lgb.train.html)(
            data = lgbm_train_samples,
            valids = [list](https://rdrr.io/r/base/list.html)(test_data = lgbm_test_samples),
            params = lgb_params,
            verbose = -1,
            ...
        )
        # serialize the model for parallel processing
        lgbm_model_string <- lgbm_model$save_model_to_string(NULL)
        # construct the model predict closure function and return it
        predict_fun <- function(values) {
            # reload the model (unserialize)
            lgbm_model <- lightgbm::[lgb.load](https://rdrr.io/pkg/lightgbm/man/lgb.load.html)(model_str = lgbm_model_string)
            # predict probabilities
            prediction <- stats::[predict](https://rdrr.io/r/stats/predict.html)(lgbm_model,
                data = [as.matrix](https://rdrr.io/r/base/matrix.html)(values),
                rawscore = FALSE,
                reshape = TRUE
            )
            # adjust the names of the columns of the probabilities
            [colnames](https://rdrr.io/r/base/colnames.html)(prediction) <- labels
            # return the prediction results
            [return](https://rdrr.io/r/base/function.html)(prediction)
        }
        # set the model class
        [class](https://rdrr.io/r/base/class.html)(predict_fun) <- [c](https://rdrr.io/r/base/c.html)("lightgbm_model", "sits_model", [class](https://rdrr.io/r/base/class.html)(predict_fun))
        [return](https://rdrr.io/r/base/function.html)(predict_fun)
    }
    result <- [sits_factory_function](https://rdrr.io/pkg/sits/man/sits_factory_function.html)(samples, train_fun)
    [return](https://rdrr.io/r/base/function.html)(result)
}
```

The above code has two nested functions: `train_fun()` and `predict_fun()`. When `sits_lightgbm()` is called, `train_fun()` transforms the input samples into predictors and uses them to train the algorithm, creating a model (`lgbm_model`). This model is included as part of the function's closure and becomes available at classification time. Inside `train_fun()`, we include `predict_fun()`, which applies the `lgbm_model` object to classify the input values. The `train_fun` object is then returned as a closure, using the `sits_factory_function` constructor. This constructor allows the model to be built either as part of `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` or independently, with the same result.

```
sits_factory_function <- function(data, fun) {
    # if no data is given, we prepare a
    # function to be called as a parameter of other functions
    if (purrr::is_null(data)) {
        result <- fun
    } else {
        # ...otherwise compute the result on the input data
        result <- fun(data)
    }
    [return](https://rdrr.io/r/base/function.html)(result)
}
```

As a result, the following calls are equivalent.

```
# building a model using sits_train
lgbm_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(samples, sits_lightgbm())
# building a model directly
lgbm_model <- sits_lightgbm(samples)
```

There is one additional requirement for the algorithm to be compatible with `sits`. Data cube processing algorithms in `sits` run in parallel. For this reason, once the classification model is trained, it is serialized, as shown in the following line. The serialized version of the model is exported to the function closure, so it can be used at classification time.

```
# serialize the model for parallel processing
lgbm_model_string <- lgbm_model$save_model_to_string(NULL)
```

During classification, `predict_fun()` is called in parallel by each CPU. At this moment, the serialized string is transformed back into a model, which is then run to obtain the classification, as shown in the code.

```
# unserialize the model
lgbm_model <- lightgbm::[lgb.load](https://rdrr.io/pkg/lightgbm/man/lgb.load.html)(model_str = lgbm_model_string)
```

Therefore, using function factories that produce closures, `sits` keeps the classification function independent of the machine learning or deep learning algorithm.
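As a minimal illustration of this pattern outside `sits`, the sketch below builds a toy function factory whose training function returns a predict closure that encloses the fitted "model". The names `toy_method()`, `toy_train()`, and the column-mean "model" are invented for illustration and have no counterpart in the `sits` API.

```
# A toy "method" in the style of sits ML methods: a function factory whose
# result is a training function; training returns a predict closure that
# encloses the fitted model.
toy_method <- function() {
  train_fun <- function(samples) {
    model <- colMeans(samples)           # "training": just the column means
    predict_fun <- function(values) {
      sweep(as.matrix(values), 2, model) # "prediction": centre new values
    }
    predict_fun                          # `model` stays alive inside this closure
  }
  train_fun
}

# A toy trainer that mirrors the structure of sits_train()
toy_train <- function(samples, method) method(samples)

samples <- data.frame(a = rnorm(10), b = rnorm(10))
clf1 <- toy_train(samples, toy_method()) # factory called through the trainer
clf2 <- toy_method()(samples)            # factory called directly; equivalent
clf1(samples[1:2, ])                     # classification uses the enclosed model
```

Because `model` lives in the closure's environment, `clf1` can be handed to another process and still classify new values without access to the original training data, which mirrors how `sits` ships model closures to parallel workers.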
This policy allows independent proposal, testing, and development of new classification methods. It also enables improvements on parallel processing methods without affecting the existing classification methods. To illustrate this separation between training and classification, the new algorithm developed in the chapter using `lightgbm` will be used to classify a data cube. The code is the same as the one in Chapter [Introduction](https://e-sensing.github.io/sitsbook/introduction.html) as an example of data cube classification, except for the use of `lgb_method()`. ``` [data](https://rdrr.io/r/utils/data.html)("samples_matogrosso_mod13q1", package = "sitsdata") # Create a data cube using local files sinop <- [sits_cube](https://rdrr.io/pkg/sits/man/sits_cube.html)( source = "BDC", collection = "MOD13Q1-6.1", data_dir = [system.file](https://rdrr.io/r/base/system.file.html)("extdata/sinop", package = "sitsdata"), parse_info = [c](https://rdrr.io/r/base/c.html)("X1", "X2", "tile", "band", "date") ) # The data cube has only "NDVI" and "EVI" bands # Select the bands NDVI and EVI samples_2bands <- [sits_select](https://rdrr.io/pkg/sits/man/sits_select.html)( data = samples_matogrosso_mod13q1, bands = [c](https://rdrr.io/r/base/c.html)("NDVI", "EVI") ) # train lightGBM model lgb_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(samples_2bands, sits_lightgbm()) # Classify the data cube sinop_probs <- [sits_classify](https://rdrr.io/pkg/sits/man/sits_classify.html)( data = sinop, ml_model = lgb_model, multicores = 2, memsize = 8, output_dir = "./tempdir/chp15" ) # Perform spatial smoothing sinop_bayes <- [sits_smooth](https://rdrr.io/pkg/sits/man/sits_smooth.html)( cube = sinop_probs, multicores = 2, memsize = 8, output_dir = "./tempdir/chp15" ) # Label the smoothed file sinop_map <- [sits_label_classification](https://rdrr.io/pkg/sits/man/sits_label_classification.html)( cube = sinop_bayes, output_dir = "./tempdir/chp15" ) # plot the result [plot](https://rdrr.io/r/graphics/plot.default.html)(sinop_map, title = "Sinop Classification Map") ``` Figure 120: Classification map for Sinop using LightGBM (source: authors). ### General principles New functions that build on the `sits` API should follow the general principles below. * The target audience for `sits` is the community of remote sensing experts with Earth Sciences background who want to use state\-of\-the\-art data analysis methods with minimal investment in programming skills. The design of the `sits` API considers the typical workflow for land classification using satellite image time series and thus provides a clear and direct set of functions, which are easy to learn and master. * For this reason, we welcome contributors that provide useful additions to the existing API, such as new ML/DL classification algorithms. In case of a new API function, before making a pull request please raise an issue stating your rationale for a new function. * Most functions in `sits` use the S3 programming model with a strong emphasis on generic methods wich are specialized depending on the input data type. See for example the implementation of the `[sits_bands()](https://rdrr.io/pkg/sits/man/sits_bands.html)` function. * Please do not include contributed code using the S4 programming model. Doing so would break the structure and the logic of existing code. Convert your code from S4 to S3\. * Use generic functions as much as possible, as they improve modularity and maintenance. 
If your code has decision points using `if-else` clauses, such as `if A, do X; else do Y` consider using generic functions. * Functions that use the `torch` package use the R6 model to be compatible with that package. See for example, the code in `sits_tempcnn.R` and `api_torch.R`. To convert `pyTorch` code to R and include it is straightforward. Please see the [Technical Annex](https://e-sensing.github.io/sitsbook/technical-annex.html) of the sits on\-line book. * The sits code relies on the packages of the `tidyverse` to work with tables and list. We use `dplyr` and `tidyr` for data selection and wrangling, `purrr` and `slider` for loops on lists and table, `lubridate` to handle dates and times. ### Adherence to the `sits` data types The `sits` package in built on top of three data types: time series tibble, data cubes and models. Most `sits` functions have one or more of these types as inputs and one of them as return values. The time series tibble contains data and metadata. The first six columns contain the metadata: spatial and temporal information, the label assigned to the sample, and the data cube from where the data has been extracted. The time\_series column contains the time series data for each spatiotemporal location. All time series tibbles are objects of class `sits`. The `cube` data type is designed to store metadata about image files. In principle, images which are part of a data cube share the same geographical region, have the same bands, and have been regularized to fit into a pre\-defined temporal interval. Data cubes in `sits` are organized by tiles. A tile is an element of a satellite’s mission reference system, for example MGRS for Sentinel\-2 and WRS2 for Landsat. A `cube` is a tibble where each row contains information about data covering one tile. Each row of the cube tibble contains a column named `file_info`; this column contains a list that stores a tibble The `cube` data type is specialised in `raster_cube` (ARD images), `vector_cube` (ARD cube with segmentation vectors). `probs_cube` (probabilities produced by classification algorithms on raster data), `probs_vector_cube`(probabilites generated by vector classification of segments), `uncertainty_cube` (cubes with uncertainty information), and `class_cube` (labelled maps). See the code in `sits_plot.R` as an example of specialisation of `plot` to handle different classes of raster data. All ML/DL models in `sits` which are the result of `sits_train` belong to the `ml_model` class. In addition, models are assigned a second class, which is unique to ML models (e.g, `rfor_model`, `svm_model`) and generic for all DL `torch` based models (`torch_model`). The class information is used for plotting models and for establishing if a model can run on GPUs. ### Literal values, error messages, and testing The internal `sits` code has no literal values, which are all stored in the YAML configuration files `./inst/extdata/config.yml` and `./inst/extdata/config_internals.yml`. The first file contains configuration parameters that are relevant to users, related to visualisation and plotting; the second contains parameters that are relevant only for developers. These values are accessible using the `.conf` function. For example, the value of the default size for ploting COG files is accessed using the command `.conf["plot", "max_size"]`. Error messages are also stored outside of the code in the YAML configuration file `./inst/extdata/config_messages.yml`. These values are accessible using the `.conf` function. 
For example, the error associated to an invalid NA value for an input parameter is accessible using th function `.conf("messages", ".check_na_parameter")`. We strive for high code coverage (\> 90%). Every parameter of all `sits` function (including internal ones) is checked for consistency. Please see `api_check.R`. ### Supporting new STAC\-based image catalogues If you want to include a STAC\-based catalogue not yet supported by `sits`, we encourage you to look at existing implementations of catalogues such as Microsoft Planetary Computer (MPC), Digital Earth Africa (DEA) and AWS. STAC\-based catalogues in `sits` are associated to YAML description files, which are available in the directory `.inst/exdata/sources`. For example, the YAML file `config_source_mpc.yml` describes the contents of the MPC collections supported by `sits`. Please first provide an YAML file which lists the detailed contents of the new catalogue you wish to include. Follow the examples provided. After writing the YAML file, you need to consider how to access and query the new catalogue. The entry point for access to all catalogues is the `sits_cube.stac_cube()` function, which in turn calls a sequence of functions which are described in the generic interface `api_source.R`. Most calls of this API are handled by the functions of `api_source_stac.R` which provides an interface to the `rstac` package and handles STAC queries. Each STAC catalogue is different. The STAC specification allows providers to implement their data descriptions with specific information. For this reason, the generic API described in `api_source.R` needs to be specialized for each provider. Whenever a provider needs specific implementations of parts of the STAC protocol, we include them in separate files. For example, `api_source_mpc.R` implements specific quirks of the MPC platform. Similarly, specific support for CDSE (Copernicus Data Space Environment) is available in `api_source_cdse.R`. ### Including new methods for machine learning This section provides guidance for experts that want to include new methods for machine learning that work in connection with `sits`. The discussion below assumes familiarity with the R language. Developers should consult Hadley Wickham’s excellent book [Advanced R](https://adv-r.hadley.nz/), especially Chapter 10 on “Function Factories”. All machine learning and deep learning algorithm in `sits` follow the same logic; all models are created by `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)`. This function has two parameters: (a) `samples`, a set of time series with the training samples; (b) `ml_method`, a function that fits the model to the input data. The result is a function that is passed on to `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)` to classify time series or data cubes. The structure of `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` is simple, as shown below. ``` sits_train <- function(samples, ml_method) { # train a ml classifier with the given data result <- ml_method(samples) # return a valid machine learning method [return](https://rdrr.io/r/base/function.html)(result) } ``` In R terms, `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` is a function factory, or a function that makes functions. Such behavior is possible because functions are first\-class objects in R. In other words, they can be bound to a name in the same way that variables are. 
A second propriety of R is that functions capture (enclose) the environment in which they are created. In other words, when a function is returned as a result of another function, the internal variables used to create it are available inside its environment. In programming language, this technique is called “closure”. The following definition from Wikipedia captures the purpose of clousures: *“Operationally, a closure is a record storing a function together with an environment. The environment is a mapping associating each free variable of the function with the value or reference to which the name was bound when the closure was created. A closure allows the function to access those captured variables through the closure’s copies of their values or references, even when the function is invoked outside their scope.”* In `sits`, the properties of closures are used as a basis for making training and classification independent. The return of `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` is a model that contains information on how to classify input values, as well as information on the samples used to train the model. To ensure all models work in the same fashion, machine learning functions in `sits` also share the same data structure for prediction. This data structure is created by `[sits_predictors()](https://rdrr.io/pkg/sits/man/sits_predictors.html)`, which transforms the time series tibble into a set of values suitable for using as training data, as shown in the following example. ``` [data](https://rdrr.io/r/utils/data.html)("samples_matogrosso_mod13q1", package = "sitsdata") pred <- [sits_predictors](https://rdrr.io/pkg/sits/man/sits_predictors.html)(samples_matogrosso_mod13q1) pred ``` ``` #> # A tibble: 1,837 × 94 #> sample_id label NDVI1 NDVI2 NDVI3 NDVI4 NDVI5 NDVI6 NDVI7 NDVI8 NDVI9 NDVI10 #> <int> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> #> 1 1 Pastu… 0.500 0.485 0.716 0.654 0.591 0.662 0.734 0.739 0.768 0.797 #> 2 2 Pastu… 0.364 0.484 0.605 0.726 0.778 0.829 0.762 0.762 0.643 0.610 #> 3 3 Pastu… 0.577 0.674 0.639 0.569 0.596 0.623 0.650 0.650 0.637 0.646 #> 4 4 Pastu… 0.597 0.699 0.789 0.792 0.794 0.72 0.646 0.705 0.757 0.810 #> 5 5 Pastu… 0.388 0.491 0.527 0.660 0.677 0.746 0.816 0.816 0.825 0.835 #> 6 6 Pastu… 0.350 0.345 0.364 0.429 0.506 0.583 0.660 0.616 0.580 0.651 #> 7 7 Pastu… 0.490 0.527 0.543 0.583 0.594 0.605 0.616 0.627 0.622 0.644 #> 8 8 Pastu… 0.435 0.574 0.395 0.392 0.518 0.597 0.648 0.774 0.786 0.798 #> 9 9 Pastu… 0.396 0.473 0.542 0.587 0.649 0.697 0.696 0.695 0.699 0.703 #> 10 10 Pastu… 0.354 0.387 0.527 0.577 0.626 0.723 0.655 0.655 0.646 0.536 #> # ℹ 1,827 more rows #> # ℹ 82 more variables: NDVI11 <dbl>, NDVI12 <dbl>, NDVI13 <dbl>, NDVI14 <dbl>, #> # NDVI15 <dbl>, NDVI16 <dbl>, NDVI17 <dbl>, NDVI18 <dbl>, NDVI19 <dbl>, #> # NDVI20 <dbl>, NDVI21 <dbl>, NDVI22 <dbl>, NDVI23 <dbl>, EVI1 <dbl>, #> # EVI2 <dbl>, EVI3 <dbl>, EVI4 <dbl>, EVI5 <dbl>, EVI6 <dbl>, EVI7 <dbl>, #> # EVI8 <dbl>, EVI9 <dbl>, EVI10 <dbl>, EVI11 <dbl>, EVI12 <dbl>, EVI13 <dbl>, #> # EVI14 <dbl>, EVI15 <dbl>, EVI16 <dbl>, EVI17 <dbl>, EVI18 <dbl>, … ``` The predictors tibble is organized as a combination of the “X” and “Y” values used by machine learning algorithms. The first two columns are `sample_id` and `label`. The other columns contain the data values, organized by band and time. For machine learning methods that are not time\-sensitive, such as random forest, this organization is sufficient for training. 
In the case of time\-sensitive methods such as `tempCNN`, further arrangements are necessary to ensure the tensors have the right dimensions. Please refer to the `[sits_tempcnn()](https://rdrr.io/pkg/sits/man/sits_tempcnn.html)` source code for an example of how to adapt the prediction table to appropriate `torch` tensor. Some algorithms require data normalization. Therefore, the `[sits_predictors()](https://rdrr.io/pkg/sits/man/sits_predictors.html)` code is usually combined with methods that extract statistical information and then normalize the data, as in the example below. ``` # Data normalization ml_stats <- [sits_stats](https://rdrr.io/pkg/sits/man/sits_stats.html)(samples) # extract the training samples train_samples <- [sits_predictors](https://rdrr.io/pkg/sits/man/sits_predictors.html)(samples) # normalize the training samples train_samples <- [sits_pred_normalize](https://rdrr.io/pkg/sits/man/sits_pred_normalize.html)(pred = train_samples, stats = ml_stats) ``` The following example shows the implementation of the LightGBM algorithm, designed to efficiently handle large\-scale datasets and perform fast training and inference [\[95]](references.html#ref-Ke2017). Gradient boosting is a machine learning technique that builds an ensemble of weak prediction models, typically decision trees, to create a stronger model. LightGBM specifically focuses on optimizing the training and prediction speed, making it particularly suitable for large datasets. The example builds a model using the `lightgbm` package. This model will then be applied later to obtain a classification. Since LightGBM is a gradient boosting model, it uses part of the data as testing data to improve the model’s performance. The split between the training and test samples is controlled by a parameter, as shown in the following code extract. ``` # split the data into training and validation datasets # create partitions different splits of the input data test_samples <- [sits_pred_sample](https://rdrr.io/pkg/sits/man/sits_pred_sample.html)(train_samples, frac = validation_split ) # Remove the lines used for validation sel <- !(train_samples$sample_id [%in%](https://rdrr.io/r/base/match.html) test_samples$sample_id) train_samples <- train_samples[sel, ] ``` To include the `lightgbm` package as part of `sits`, we need to create a new training function which is compatible with the other machine learning methods of the package and will be called by `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)`. For compatibility, this new function will be called `sits_lightgbm()`. Its implementation uses two functions from the `lightgbm`: (a) `[lgb.Dataset()](https://rdrr.io/pkg/lightgbm/man/lgb.Dataset.html)`, which transforms training and test samples into internal structures; (b) `[lgb.train()](https://rdrr.io/pkg/lightgbm/man/lgb.train.html)`, which trains the model. The parameters of `[lightgbm::lgb.train()](https://rdrr.io/pkg/lightgbm/man/lgb.train.html)` are: (a) `boosting_type`, boosting algorithm; (b) `objective`, classification objective (c) `num_iterations`, number of runs; (d) `max_depth`, maximum tree depth; (d) `min_samples_leaf`, minimum size of data in one leaf (to avoid overfitting); (f) `learning_rate`, learning rate of the algorithm; (g) `n_iter_no_change`, number of successive iterations to stop training when validation metrics do not improve; (h) `validation_split`, fraction of training data to be used as validation data. 
``` # install "lightgbm" package if not available if ( # create a function in sits style for LightGBM algorithm sits_lightgbm <- function(samples = NULL, boosting_type = "gbdt", objective = "multiclass", min_samples_leaf = 10, max_depth = 6, learning_rate = 0.1, num_iterations = 100, n_iter_no_change = 10, validation_split = 0.2, ...) { # function that returns MASS::lda model based on a sits sample tibble train_fun <- function(samples) { # Extract the predictors train_samples <- [sits_predictors](https://rdrr.io/pkg/sits/man/sits_predictors.html)(samples) # find number of labels labels <- [sits_labels](https://rdrr.io/pkg/sits/man/sits_labels.html)(samples) n_labels <- [length](https://rdrr.io/r/base/length.html)(labels) # lightGBM uses numerical labels starting from 0 int_labels <- [c](https://rdrr.io/r/base/c.html)(1:n_labels) - 1 # create a named vector with integers match the class labels [names](https://rdrr.io/r/base/names.html)(int_labels) <- labels # add number of classes to lightGBM params # split the data into training and validation datasets # create partitions different splits of the input data test_samples <- [sits_pred_sample](https://rdrr.io/pkg/sits/man/sits_pred_sample.html)(train_samples, frac = validation_split ) # Remove the lines used for validation sel <- !(train_samples$sample_id [%in%](https://rdrr.io/r/base/match.html) test_samples$sample_id) train_samples <- train_samples[sel, ] # transform the training data to LGBM dataset lgbm_train_samples <- lightgbm::[lgb.Dataset](https://rdrr.io/pkg/lightgbm/man/lgb.Dataset.html)( data = [as.matrix](https://rdrr.io/r/base/matrix.html)(train_samples[, -2:0]), label = [unname](https://rdrr.io/r/base/unname.html)(int_labels[train_samples[[2]]]) ) # transform the test data to LGBM dataset lgbm_test_samples <- lightgbm::[lgb.Dataset](https://rdrr.io/pkg/lightgbm/man/lgb.Dataset.html)( data = [as.matrix](https://rdrr.io/r/base/matrix.html)(test_samples[, -2:0]), label = [unname](https://rdrr.io/r/base/unname.html)(int_labels[test_samples[[2]]]) ) # set the parameters for the lightGBM training lgb_params <- [list](https://rdrr.io/r/base/list.html)( boosting_type = boosting_type, objective = objective, min_samples_leaf = min_samples_leaf, max_depth = max_depth, learning_rate = learning_rate, num_iterations = num_iterations, n_iter_no_change = n_iter_no_change, num_class = n_labels ) # call method and return the trained model lgbm_model <- lightgbm::[lgb.train](https://rdrr.io/pkg/lightgbm/man/lgb.train.html)( data = lgbm_train_samples, valids = [list](https://rdrr.io/r/base/list.html)(test_data = lgbm_test_samples), params = lgb_params, verbose = -1, ... 
) # serialize the model for parallel processing lgbm_model_string <- lgbm_model$save_model_to_string(NULL) # construct model predict closure function and returns predict_fun <- function(values) { # reload the model (unserialize) lgbm_model <- lightgbm::[lgb.load](https://rdrr.io/pkg/lightgbm/man/lgb.load.html)(model_str = lgbm_model_string) # predict probabilities prediction <- stats::[predict](https://rdrr.io/r/stats/predict.html)(lgbm_model, data = [as.matrix](https://rdrr.io/r/base/matrix.html)(values), rawscore = FALSE, reshape = TRUE ) # adjust the names of the columns of the probs [colnames](https://rdrr.io/r/base/colnames.html)(prediction) <- labels # retrieve the prediction results [return](https://rdrr.io/r/base/function.html)(prediction) } # Set model class [class](https://rdrr.io/r/base/class.html)(predict_fun) <- [c](https://rdrr.io/r/base/c.html)("lightgbm_model", "sits_model", [class](https://rdrr.io/r/base/class.html)(predict_fun)) [return](https://rdrr.io/r/base/function.html)(predict_fun) } result <- [sits_factory_function](https://rdrr.io/pkg/sits/man/sits_factory_function.html)(samples, train_fun) [return](https://rdrr.io/r/base/function.html)(result) } ``` The above code has two nested functions: `train_fun()` and `predict_fun()`. When `sits_lightgbm()` is called, `train_fun()` transforms the input samples into predictors and uses them to train the algorithm, creating a model (`lgbm_model`). This model is included as part of the function’s closure and becomes available at classification time. Inside `train_fun()`, we include `predict_fun()`, which applies the `lgbm_model` object to classify to the input values. The `train_fun` object is then returned as a closure, using the `sits_factory_function` constructor. This function allows the model to be called either as part of `[sits_train()](https://rdrr.io/pkg/sits/man/sits_train.html)` or to be called independently, with the same result. ``` sits_factory_function <- function(data, fun) { # if no data is given, we prepare a # function to be called as a parameter of other functions if (purrr::is_null(data)) { result <- fun } else { # ...otherwise compute the result on the input data result <- fun(data) } [return](https://rdrr.io/r/base/function.html)(result) } ``` As a result, the following calls are equivalent. ``` # building a model using sits_train lgbm_model <- [sits_train](https://rdrr.io/pkg/sits/man/sits_train.html)(samples, sits_lightgbm()) # building a model directly lgbm_model <- sits_lightgbm(samples) ``` There is one additional requirement for the algorithm to be compatible with `sits`. Data cube processing algorithms in `sits` run in parallel. For this reason, once the classification model is trained, it is serialized, as shown in the following line. The serialized version of the model is exported to the function closure, so it can be used at classification time. ``` # serialize the model for parallel processing lgbm_model_string <- lgbm_model$save_model_to_string(NULL) ``` During classification, `predict_fun()` is called in parallel by each CPU. At this moment, the serialized string is transformed back into a model, which is then run to obtain the classification, as shown in the code. ``` # unserialize the model lgbm_model <- lightgbm::[lgb.load](https://rdrr.io/pkg/lightgbm/man/lgb.load.html)(model_str = lgbm_model_string) ``` Therefore, using function factories that produce closures, `sits` keeps the classification function independent of the machine learning or deep learning algorithm. 
This policy allows independent proposal, testing, and development of new classification methods. It also enables improvements to parallel processing methods without affecting the existing classification methods.

To illustrate this separation between training and classification, we use the `lightgbm`\-based algorithm developed above to classify a data cube. The code is the same as the data cube classification example in Chapter [Introduction](https://e-sensing.github.io/sitsbook/introduction.html), except that it uses `sits_lightgbm()`.
```
data("samples_matogrosso_mod13q1", package = "sitsdata")
# Create a data cube using local files
sinop <- sits_cube(
    source = "BDC",
    collection = "MOD13Q1-6.1",
    data_dir = system.file("extdata/sinop", package = "sitsdata"),
    parse_info = c("X1", "X2", "tile", "band", "date")
)
# The data cube has only "NDVI" and "EVI" bands
# Select the bands NDVI and EVI
samples_2bands <- sits_select(
    data = samples_matogrosso_mod13q1,
    bands = c("NDVI", "EVI")
)
# train lightGBM model
lgb_model <- sits_train(samples_2bands, sits_lightgbm())
# Classify the data cube
sinop_probs <- sits_classify(
    data = sinop,
    ml_model = lgb_model,
    multicores = 2,
    memsize = 8,
    output_dir = "./tempdir/chp15"
)
# Perform spatial smoothing
sinop_bayes <- sits_smooth(
    cube = sinop_probs,
    multicores = 2,
    memsize = 8,
    output_dir = "./tempdir/chp15"
)
# Label the smoothed file
sinop_map <- sits_label_classification(
    cube = sinop_bayes,
    output_dir = "./tempdir/chp15"
)
# plot the result
plot(sinop_map, title = "Sinop Classification Map")
```
Figure 120: Classification map for Sinop using LightGBM (source: authors).

How parallel processing works in virtual machines with CPUs
-----------------------------------------------------------

This section provides an overview of how `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`, `[sits_smooth()](https://rdrr.io/pkg/sits/man/sits_smooth.html)`, and `[sits_label_classification()](https://rdrr.io/pkg/sits/man/sits_label_classification.html)` process images in parallel. To achieve efficiency, `sits` implements a fault\-tolerant multitasking procedure for big Earth observation data classification. The learning curve is shortened as there is no need to learn how to do multiprocessing. Image classification in `sits` is done by a cluster of independent workers linked to a virtual machine. To avoid communication overhead, all large payloads are read and stored independently; direct interaction between the main process and the workers is kept at a minimum.

The classification procedure benefits from the fact that most images available in cloud collections are stored as COGs (cloud\-optimized GeoTIFF). COGs are regular GeoTIFF files organized in regular square blocks to improve visualization and access for large datasets. Thus, data requests can be optimized to access only portions of the images. All cloud services supported by `sits` use COG files.
The classification algorithm in `sits` uses COGs to ensure optimal data access, reducing I/O demand as much as possible. The approach for parallel processing in `sits`, depicted in Figure [121](technical-annex.html#fig:par), has the following steps:

1. Based on the block size of individual COG files, calculate the size of each chunk that must be loaded in memory, considering the number of bands and the timeline's length. Chunk access is optimized for the efficient transfer of data blocks.
2. Divide the total memory available by the chunk size to determine how many processes can run in parallel.
3. Each core processes a chunk and produces a subset of the result.
4. Repeat the process until all chunks in the cube have been processed.
5. Check that subimages have been produced correctly. If there is a problem with one or more subimages, run a failure recovery procedure to ensure all data is processed.
6. After generating all subimages, join them to obtain the result.

Figure 121: Parallel processing in sits (Source: Simoes et al. (2021\). Reproduction under fair use doctrine).

This approach has many advantages. It has no dependencies on proprietary software and runs in any virtual machine that supports R. Processing is done in a concurrent and independent way, with no communication between workers. Failure of one worker does not cause the failure of big data processing. The software is prepared to resume classification processing from the last processed chunk in case of failures such as memory exhaustion, power supply interruption, or network breakdown.

To reduce processing time, it is necessary to adjust `[sits_classify()](https://rdrr.io/pkg/sits/man/sits_classify.html)`, `[sits_smooth()](https://rdrr.io/pkg/sits/man/sits_smooth.html)`, and `[sits_label_classification()](https://rdrr.io/pkg/sits/man/sits_label_classification.html)` according to the capabilities of the host environment. The `memsize` parameter controls the size of the main memory (in GBytes) to be used for classification. A practical approach is to set `memsize` to the maximum memory available in the virtual machine for classification and to choose `multicores` as the largest number of cores available. Based on the memory available and the size of blocks in COG files, `sits` will access the images in an optimized way. In this way, `sits` tries to ensure the best possible use of the available resources.

Exporting data to JSON
----------------------

Both the data cube and the time series tibble can be exported to exchange formats such as JSON.
```
library(jsonlite)
# Export the data cube to JSON
jsonlite::write_json(
    x = sinop,
    path = "./data_cube.json",
    pretty = TRUE
)
# Export the time series to JSON
jsonlite::write_json(
    x = samples_prodes_4classes,
    path = "./time_series.json",
    pretty = TRUE
)
```
SITS and Google Earth Engine APIs: A side\-by\-side exploration
---------------------------------------------------------------

This section presents a side\-by\-side exploration of the `sits` and Google Earth Engine (`gee`) APIs, focusing on their respective capabilities in handling satellite data. The exploration is structured around three key examples: (1\) creating a mosaic, (2\) calculating the Normalized Difference Vegetation Index (NDVI), and (3\) performing a Land Use and Land Cover (LULC) classification.
Each example demonstrates how these tasks are executed using `sits` and `gee`, offering a clear view of their methodologies and highlighting the similarities and the unique approaches each API employs.

### Example 1: Creating a Mosaic

A common application among scientists and developers in the field of Remote Sensing is the creation of satellite image mosaics. These mosaics are formed by combining two or more images, typically used for visualization in various applications. In this example, we will demonstrate how to create an image mosaic using the `sits` and `gee` APIs.

In this example, a Region of Interest (ROI) is defined using a bounding box with longitude and latitude coordinates. Below are the code snippets for specifying this ROI in both `sits` and `gee` environments.

**sits**
```
roi <- c(
    "lon_min" = -63.410,
    "lat_min" = -9.783,
    "lon_max" = -62.614,
    "lat_max" = -9.331
)
```
**gee**
```
var roi = ee.Geometry.Rectangle([-63.410,-9.783,-62.614,-9.331]);
```
Next, we will load the satellite imagery. For this example, we used data from [Sentinel\-2](https://sentinels.copernicus.eu/web/sentinel/copernicus/sentinel-2). In `sits`, several providers offer Sentinel\-2 ARD images. In this example, we will use images provided by the Microsoft Planetary Computer (**MPC**).

**sits**
```
data <- sits_cube(
    source = "MPC",
    collection = "SENTINEL-2-L2A",
    bands = c("B02", "B03", "B04"),
    tiles = c("20LNQ", "20LMQ"),
    start_date = "2024-08-01",
    end_date = "2024-08-03"
)
```
**gee**
```
var data = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
  .filterDate('2024-08-01', '2024-08-03')
  .filter(ee.Filter.inList('MGRS_TILE', ['20LNQ', '20LMQ']))
  .select(['B4', 'B3', 'B2']);
```
> `sits` provides search filters for a collection as parameters in the
> `[sits_cube()](https://rdrr.io/pkg/sits/man/sits_cube.html)` function, whereas `gee` offers these filters as methods of an
> `ImageCollection` object.

In `sits`, we will use the `[sits_mosaic()](https://rdrr.io/pkg/sits/man/sits_mosaic.html)` function to create mosaics of our images. In `gee`, we will utilize the `mosaic()` method. The `[sits_mosaic()](https://rdrr.io/pkg/sits/man/sits_mosaic.html)` function crops the mosaic based on the `roi` parameter. In `gee`, cropping is performed using the `clip()` method. We will use the same `roi` that was used to filter the images to perform the cropping on the mosaic. See the following code:

**sits**
```
mosaic <- sits_mosaic(
    cube = data,
    roi = roi,
    multicores = 4,
    output_dir = tempdir()
)
```
**gee**
```
var mosaic = data.mosaic().clip(roi);
```
Finally, the results can be visualized in an interactive map.

**sits**
```
sits_view(
    x = mosaic,
    red = "B04",
    green = "B03",
    blue = "B02"
)
```
**gee**
```
// Define view region
Map.centerObject(roi, 10);
// Add mosaic Image
Map.addLayer(mosaic, {
  min: 0,
  max: 3000
}, 'Mosaic');
```
### Example 2: Calculating NDVI

This example demonstrates how to generate time\-series of Normalized Difference Vegetation Index (NDVI) using both the `sits` and `gee` APIs.

In this example, a Region of Interest (ROI) is defined using the `sinop_roi.shp` file. Below are the code snippets for specifying this file in both `sits` and `gee` environments.
> To reproduce the example, you can download the shapefile using [this link](data/sits-gee/sinop_roi.zip).
> In `sits`, you can use the file directly. In `gee`, you must upload the
> file to your user space.

**sits**
```
roi_data <- "sinop_roi.shp"
```
**gee**
```
var roi_data = ee.FeatureCollection("/path/to/sinop_roi");
```
Next, we load the satellite imagery. For this example, we use data from [Landsat\-8](https://www.usgs.gov/landsat-missions/landsat-8). In `sits`, this data is retrieved from the Brazil Data Cube, although other sources are available. For `gee`, the data provided by the platform is used.

In `sits`, when the data is loaded, all necessary transformations to make the data ready for use (e.g., `factor`, `offset`, `cloud masking`) are applied automatically. In `gee`, users are responsible for performing these transformations themselves.

**sits**
```
data <- sits_cube(
    source = "BDC",
    collection = "LANDSAT-OLI-16D",
    bands = c("RED", "NIR08", "CLOUD"),
    roi = roi_data,
    start_date = "2019-05-01",
    end_date = "2019-07-01"
)
```
**gee**
```
var data = ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
  .filterBounds(roi_data)
  .filterDate("2019-05-01", "2019-07-01")
  .select(["SR_B4", "SR_B5", "QA_PIXEL"]);

// factor and offset
data = data.map(function(image) {
  var opticalBands = image.select('SR_B.').multiply(0.0000275).add(-0.2);
  return image.addBands(opticalBands, null, true);
});

data = data.map(function(image) {
  // Select the pixel_qa band
  var qa = image.select('QA_PIXEL');
  // Create a mask to identify cloud and cloud shadow
  var cloudMask = qa.bitwiseAnd(1 << 5).eq(0) // Clouds
    .and(qa.bitwiseAnd(1 << 3).eq(0)); // Cloud shadows
  // Apply the cloud mask to the image
  return image.updateMask(cloudMask);
});
```
After loading the satellite imagery, the NDVI can be generated. In `sits`, a function allows users to specify the formula used to create a new attribute, in this case, NDVI. In `gee`, a callback function is used, where the NDVI is calculated for each image.

**sits**
```
data_ndvi <- sits_apply(
    data = data,
    NDVI = (NIR08 - RED) / (NIR08 + RED),
    output_dir = tempdir(),
    multicores = 4,
    progress = TRUE
)
```
**gee**
```
var data_ndvi = data.map(function(image) {
  var ndvi = image.normalizedDifference(["SR_B5", "SR_B4"]).rename('NDVI');
  return image.addBands(ndvi);
});

data_ndvi = data_ndvi.select("NDVI");
```
The results are clipped to the ROI defined at the beginning of the example to facilitate visualization.

> In both APIs, you can define a ROI before performing the
> operation to optimize resource usage. However, in this example, the data is
> cropped after the calculation.

**sits**
```
data_ndvi <- sits_mosaic(
    cube = data_ndvi,
    roi = roi_data,
    output_dir = tempdir(),
    multicores = 4
)
```
**gee**
```
data_ndvi = data_ndvi.map(function(image) {
  return image.clip(roi_data);
});
```
Finally, the results can be visualized in an interactive map.
**sits**
```
sits_view(data_ndvi, band = "NDVI", date = "2019-05-25", opacity = 1)
```
**gee**
```
// Define view region
Map.centerObject(roi_data, 10);
// Add NDVI image (colors from sits)
Map.addLayer(data_ndvi, {
  min: 0,
  max: 1,
  palette: ["red", 'white', 'green']
}, "NDVI Image");
```
### Example 3: Land Use and Land Cover (LULC) Classification

This example demonstrates how to perform Land Use and Land Cover (LULC) classification using satellite image time series and machine\-learning models in both `sits` and `gee`.

This example defines the region of interest (ROI) using a shapefile named `sinop_roi.shp`. Below are the code snippets for specifying this file in both `sits` and `gee` environments.

> To reproduce the example, you can download the shapefile using [this link](data/sits-gee/sinop_roi.zip).
> In `sits`, you can use the file directly. In `gee`, you must upload the
> file to your user space.

**sits**
```
roi_data <- "sinop_roi.shp"
```
**gee**
```
var roi_data = ee.FeatureCollection("/path/to/sinop_roi");
```
To train a classifier, sample data with labels representing the behavior of each class to be identified is necessary. In this example, we use a small set with `18` samples. The following code snippets show how these samples are defined in each environment.

In `sits`, labels can be of type `string`, whereas `gee` requires labels to be `integers`. To accommodate this difference, two versions of the same sample set were created: (1\) one with `string` labels for use with `sits`, and (2\) another with `integer` labels for use with `gee`.

> To download these samples, you can use the following links:
> [samples\_sinop\_crop for sits](data/sits-gee/samples_sinop_crop.zip) or [samples\_sinop\_crop for gee](data/sits-gee/samples_sinop_crop_gee.zip)

**sits**
```
samples <- "samples_sinop_crop.shp"
```
**gee**
```
var samples = ee.FeatureCollection("samples_sinop_crop_gee");
```
Next, we load the satellite imagery. For this example, we use data from [MOD13Q1](https://ladsweb.modaps.eosdis.nasa.gov/missions-and-measurements/products/MOD13Q1). In `sits`, this data is retrieved from the Brazil Data Cube, but other sources are also available. In `gee`, the platform directly provides this data.

In `sits`, all necessary data transformations for classification tasks are handled automatically. In contrast, `gee` requires users to manually transform the data into the correct format. In this context, it is important to note that, in the `gee` code, transforming all images into bands mimics the approach used by `sits` for non\-temporal classifiers. However, this method is not inherently scalable in `gee` and may need adjustments for larger datasets or more bands. Additionally, for temporal classifiers like TempCNN, other transformations are necessary and must be implemented manually by the user in `gee`. In contrast, `sits` provides a consistent API experience, regardless of the data size or machine learning algorithm.
**sits**
```
data <- sits_cube(
    source = "BDC",
    collection = "MOD13Q1-6.1",
    bands = c("NDVI"),
    roi = roi_data,
    start_date = "2013-09-01",
    end_date = "2014-08-29"
)
```
**gee**
```
var data = ee.ImageCollection("MODIS/061/MOD13Q1")
  .filterBounds(roi_data)
  .filterDate("2013-09-01", "2014-09-01")
  .select(["NDVI"]);

// Transform all images to bands
data = data.toBands();
```
In this example, we'll use a Random Forest classifier to create a LULC map. To train the classifier, we need sample data linked to time\-series. This step shows how to extract and associate time\-series with samples.

**sits**
```
samples_ts <- sits_get_data(
    cube = data,
    samples = samples,
    multicores = 4
)
```
**gee**
```
var samples_ts = data.sampleRegions({
  collection: samples,
  properties: ["label"]
});
```
With the time\-series data extracted for each sample, we can now train the Random Forest classifier.

**sits**
```
classifier <- sits_train(
    samples_ts,
    sits_rfor(num_trees = 100)
)
```
**gee**
```
var classifier = ee.Classifier.smileRandomForest(100).train({
  features: samples_ts,
  classProperty: "label",
  inputProperties: data.bandNames()
});
```
Now, it is possible to generate the classification map using the trained Random Forest model.

In `sits`, the classification process starts with a probability map. This map provides the probability of each class for every pixel, offering insights into the classifier's performance. It also allows for refining the results using methods like Bayesian probability smoothing. After generating the probability map, it is possible to produce the class map, where each pixel is assigned to the class with the highest probability.

In `gee`, while it is possible to generate probabilities, doing so is not strictly required to produce the classification map. Yet, as of the date of this document, there is no out\-of\-the\-box solution available for using these probabilities to enhance classification results, as is done in `sits`.

**sits**
```
probabilities <- sits_classify(
    data = data,
    ml_model = classifier,
    multicores = 4,
    roi = roi_data,
    output_dir = tempdir()
)

class_map <- sits_label_classification(
    cube = probabilities,
    output_dir = tempdir(),
    multicores = 4
)
```
**gee**
```
var probs_map = data.classify(classifier.setOutputMode("MULTIPROBABILITY"));
var class_map = data.classify(classifier);
```
The results are clipped to the ROI defined at the beginning of the example to facilitate visualization.

> In both APIs, it's possible to define an ROI before processing. However, this
> was not applied in this example.

**sits**
```
class_map <- sits_mosaic(
    cube = class_map,
    roi = roi_data,
    output_dir = tempdir(),
    multicores = 4
)
```
**gee**
```
class_map = class_map.clip(roi_data);
```
Finally, the results can be visualized on an interactive map.
**sits**
```
sits_view(class_map, opacity = 1)
```
**gee**
```
// Define view region
Map.centerObject(roi_data, 10);
// Add classification map (colors from sits)
Map.addLayer(class_map, {
  min: 1,
  max: 4,
  palette: ["#FAD12D", "#1E8449", "#D68910", "#a2d43f"]
}, "Classification map");
```
Machine Learning
topepo.github.io
https://topepo.github.io/caret/visualizations.html
2 Visualizations ================ The `featurePlot` function is a wrapper for different [`lattice`](http://cran.r-project.org/web/packages/lattice/index.html) plots to visualize the data. For example, the following figures show the default plot for continuous outcomes generated using the `featurePlot` function. For classification data sets, the `iris` data are used for illustration. ``` str(iris) ``` ``` ## 'data.frame': 150 obs. of 5 variables: ## $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ... ## $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ... ## $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ... ## $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ... ## $ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ... ``` **Scatterplot Matrix** ``` library(AppliedPredictiveModeling) transparentTheme(trans = .4) library(caret) featurePlot(x = iris[, 1:4], y = iris$Species, plot = "pairs", ## Add a key at the top auto.key = list(columns = 3)) ``` **Scatterplot Matrix with Ellipses** ``` featurePlot(x = iris[, 1:4], y = iris$Species, plot = "ellipse", ## Add a key at the top auto.key = list(columns = 3)) ``` **Overlayed Density Plots** ``` transparentTheme(trans = .9) featurePlot(x = iris[, 1:4], y = iris$Species, plot = "density", ## Pass in options to xyplot() to ## make it prettier scales = list(x = list(relation="free"), y = list(relation="free")), adjust = 1.5, pch = "|", layout = c(4, 1), auto.key = list(columns = 3)) ``` **Box Plots** ``` featurePlot(x = iris[, 1:4], y = iris$Species, plot = "box", ## Pass in options to bwplot() scales = list(y = list(relation="free"), x = list(rot = 90)), layout = c(4,1 ), auto.key = list(columns = 2)) ``` **Scatter Plots** For regression, the Boston Housing data is used: ``` library(mlbench) data(BostonHousing) regVar <- c("age", "lstat", "tax") str(BostonHousing[, regVar]) ``` ``` ## 'data.frame': 506 obs. of 3 variables: ## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ... ## $ lstat: num 4.98 9.14 4.03 2.94 5.33 ... ## $ tax : num 296 242 242 222 222 222 311 311 311 311 ... ``` When the predictors are continuous, `featurePlot` can be used to create scatter plots of each of the predictors with the outcome. For example: ``` theme1 <- trellis.par.get() theme1$plot.symbol$col = rgb(.2, .2, .2, .4) theme1$plot.symbol$pch = 16 theme1$plot.line$col = rgb(1, 0, 0, .7) theme1$plot.line$lwd <- 2 trellis.par.set(theme1) featurePlot(x = BostonHousing[, regVar], y = BostonHousing$medv, plot = "scatter", layout = c(3, 1)) ``` Note that the x\-axis scales are different. The function automatically uses `scales = list(y = list(relation = "free"))` so you don’t have to add it. We can also pass in options to the [`lattice`](http://cran.r-project.org/web/packages/lattice/index.html) function `xyplot`. For example, we can add a scatter plot smoother by passing in new options: ``` featurePlot(x = BostonHousing[, regVar], y = BostonHousing$medv, plot = "scatter", type = c("p", "smooth"), span = .5, layout = c(3, 1)) ``` The options `degree` and `span` control the smoothness of the smoother.
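As a quick illustration of that claim (this code block is an addition to the original text; the particular values are arbitrary), shrinking `span` and raising `degree` should give a noticeably more flexible smoother:

```
## Illustrative values only: a smaller span and quadratic local fits
## make the loess smoother follow the data more closely
featurePlot(x = BostonHousing[, regVar], 
            y = BostonHousing$medv, 
            plot = "scatter",
            type = c("p", "smooth"),
            span = .3,
            degree = 2,
            layout = c(3, 1))
```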
Machine Learning
topepo.github.io
https://topepo.github.io/caret/pre-processing.html
3 Pre\-Processing ================= * [Creating Dummy Variables](pre-processing.html#dummy) * [Zero\- and Near Zero\-Variance Predictors](pre-processing.html#nzv) * [Identifying Correlated Predictors](pre-processing.html#corr) * [Linear Dependencies](pre-processing.html#lindep) * [The `preProcess` Function](pre-processing.html#pp) * [Centering and Scaling](pre-processing.html#cs) * [Imputation](pre-processing.html#impute) * [Transforming Predictors](pre-processing.html#trans) * [Putting It All Together](pre-processing.html#all) * [Class Distance Calculations](pre-processing.html#cent) [`caret`](http://cran.r-project.org/web/packages/caret/index.html) includes several functions to pre\-process the predictor data. It assumes that all of the data are numeric (i.e. factors have been converted to dummy variables via `model.matrix`, `dummyVars` or other means). Note that the later chapter on using [`recipes`](https://topepo.github.io/recipes/) with `train` shows how that approach can offer a more diverse and customizable interface to pre\-processing in the package. 3\.1 Creating Dummy Variables ----------------------------- The function `dummyVars` can be used to generate a complete (less than full rank parameterized) set of dummy variables from one or more factors. The function takes a formula and a data set and outputs an object that can be used to create the dummy variables using the predict method. For example, the `etitanic` data set in the [`earth`](http://cran.r-project.org/web/packages/earth/index.html) package includes two factors: `pclass` (passenger class, with levels 1st, 2nd, 3rd) and `sex` (with levels female, male). The base R function `model.matrix` would generate the following variables: ``` library(earth) data(etitanic) head(model.matrix(survived ~ ., data = etitanic)) ``` ``` ## (Intercept) pclass2nd pclass3rd sexmale age sibsp parch ## 1 1 0 0 0 29.0000 0 0 ## 2 1 0 0 1 0.9167 1 2 ## 3 1 0 0 0 2.0000 1 2 ## 4 1 0 0 1 30.0000 1 2 ## 5 1 0 0 0 25.0000 1 2 ## 6 1 0 0 1 48.0000 0 0 ``` Using `dummyVars`: ``` dummies <- dummyVars(survived ~ ., data = etitanic) head(predict(dummies, newdata = etitanic)) ``` ``` ## pclass.1st pclass.2nd pclass.3rd sex.female sex.male age sibsp parch ## 1 1 0 0 1 0 29.0000 0 0 ## 2 1 0 0 0 1 0.9167 1 2 ## 3 1 0 0 1 0 2.0000 1 2 ## 4 1 0 0 0 1 30.0000 1 2 ## 5 1 0 0 1 0 25.0000 1 2 ## 6 1 0 0 0 1 48.0000 0 0 ``` Note there is no intercept and each factor has a dummy variable for each level, so this parameterization may not be useful for some model functions, such as `lm`. 3\.2 Zero\- and Near Zero\-Variance Predictors ---------------------------------------------- In some situations, the data generating mechanism can create predictors that only have a single unique value (i.e. a “zero\-variance predictor”). For many models (excluding tree\-based models), this may cause the model to crash or the fit to be unstable. Similarly, predictors might have only a handful of unique values that occur with very low frequencies. For example, in the drug resistance data, the `nR11` descriptor (number of 11\-membered rings) data have a few unique numeric values that are highly unbalanced: ``` data(mdrr) data.frame(table(mdrrDescr$nR11)) ``` ``` ## Var1 Freq ## 1 0 501 ## 2 1 4 ## 3 2 23 ``` The concern here that these predictors may become zero\-variance predictors when the data are split into cross\-validation/bootstrap sub\-samples or that a few samples may have an undue influence on the model. 
These “near\-zero\-variance” predictors may need to be identified and eliminated prior to modeling. To identify these types of predictors, the following two metrics can be calculated: * the frequency of the most prevalent value over the second most frequent value (called the “frequency ratio’’), which would be near one for well\-behaved predictors and very large for highly\-unbalanced data and * the “percent of unique values’’ is the number of unique values divided by the total number of samples (times 100\) that approaches zero as the granularity of the data increases If the frequency ratio is greater than a pre\-specified threshold and the unique value percentage is less than a threshold, we might consider a predictor to be near zero\-variance. We would not want to falsely identify data that have low granularity but are evenly distributed, such as data from a discrete uniform distribution. Using both criteria should not falsely detect such predictors. Looking at the MDRR data, the `nearZeroVar` function can be used to identify near zero\-variance variables (the `saveMetrics` argument can be used to show the details and usually defaults to `FALSE`): ``` nzv <- nearZeroVar(mdrrDescr, saveMetrics= TRUE) nzv[nzv$nzv,][1:10,] ``` ``` ## freqRatio percentUnique zeroVar nzv ## nTB 23.00000 0.3787879 FALSE TRUE ## nBR 131.00000 0.3787879 FALSE TRUE ## nI 527.00000 0.3787879 FALSE TRUE ## nR03 527.00000 0.3787879 FALSE TRUE ## nR08 527.00000 0.3787879 FALSE TRUE ## nR11 21.78261 0.5681818 FALSE TRUE ## nR12 57.66667 0.3787879 FALSE TRUE ## D.Dr03 527.00000 0.3787879 FALSE TRUE ## D.Dr07 123.50000 5.8712121 FALSE TRUE ## D.Dr08 527.00000 0.3787879 FALSE TRUE ``` ``` dim(mdrrDescr) ``` ``` ## [1] 528 342 ``` ``` nzv <- nearZeroVar(mdrrDescr) filteredDescr <- mdrrDescr[, -nzv] dim(filteredDescr) ``` ``` ## [1] 528 297 ``` By default, `nearZeroVar` will return the positions of the variables that are flagged to be problematic. 3\.3 Identifying Correlated Predictors -------------------------------------- While there are some models that thrive on correlated predictors (such as `pls`), other models may benefit from reducing the level of correlation between the predictors. Given a correlation matrix, the `findCorrelation` function uses the following algorithm to flag predictors for removal: ``` descrCor <- cor(filteredDescr) highCorr <- sum(abs(descrCor[upper.tri(descrCor)]) > .999) ``` For the previous MDRR data, there are 65 descriptors that are almost perfectly correlated (\|correlation\| \> 0\.999\), such as the total information index of atomic composition (`IAC`) and the total information content index (neighborhood symmetry of 0\-order) (`TIC0`) (correlation \= 1\). The code chunk below shows the effect of removing descriptors with absolute correlations above 0\.75\. ``` descrCor <- cor(filteredDescr) summary(descrCor[upper.tri(descrCor)]) ``` ``` ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## -0.99607 -0.05373 0.25006 0.26078 0.65527 1.00000 ``` ``` highlyCorDescr <- findCorrelation(descrCor, cutoff = .75) filteredDescr <- filteredDescr[,-highlyCorDescr] descrCor2 <- cor(filteredDescr) summary(descrCor2[upper.tri(descrCor2)]) ``` ``` ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## -0.70728 -0.05378 0.04418 0.06692 0.18858 0.74458 ``` 3\.4 Linear Dependencies ------------------------ The function `findLinearCombos` uses the QR decomposition of a matrix to enumerate sets of linear combinations (if they exist). 
For example, consider the following matrix that could have been produced by a less\-than\-full\-rank parameterization of a two\-way experimental layout:

```
ltfrDesign <- matrix(0, nrow=6, ncol=6)
ltfrDesign[,1] <- c(1, 1, 1, 1, 1, 1)
ltfrDesign[,2] <- c(1, 1, 1, 0, 0, 0)
ltfrDesign[,3] <- c(0, 0, 0, 1, 1, 1)
ltfrDesign[,4] <- c(1, 0, 0, 1, 0, 0)
ltfrDesign[,5] <- c(0, 1, 0, 0, 1, 0)
ltfrDesign[,6] <- c(0, 0, 1, 0, 0, 1)
```

Note that columns two and three add up to the first column. Similarly, columns four, five and six add up to the first column. `findLinearCombos` will return a list that enumerates these dependencies. For each linear combination, it will incrementally remove columns from the matrix and test to see if the dependencies have been resolved. `findLinearCombos` will also return a vector of column positions that can be removed to eliminate the linear dependencies:

```
comboInfo <- findLinearCombos(ltfrDesign)
comboInfo
```

```
## $linearCombos
## $linearCombos[[1]]
## [1] 3 1 2
## 
## $linearCombos[[2]]
## [1] 6 1 4 5
## 
## 
## $remove
## [1] 3 6
```

```
ltfrDesign[, -comboInfo$remove]
```

```
## [,1] [,2] [,3] [,4]
## [1,] 1 1 1 0
## [2,] 1 1 0 1
## [3,] 1 1 0 0
## [4,] 1 0 1 0
## [5,] 1 0 0 1
## [6,] 1 0 0 0
```

These types of dependencies can arise when large numbers of binary chemical fingerprints are used to describe the structure of a molecule.

3\.5 The `preProcess` Function
------------------------------

The `preProcess` class can be used for many operations on predictors, including centering and scaling. The function `preProcess` estimates the required parameters for each operation and `predict.preProcess` is used to apply them to specific data sets. This functionality can also be accessed when calling the `train` function. Several types of techniques are described in the next few sections and then another example is used to demonstrate how multiple methods can be used. Note that, in all cases, the `preProcess` function estimates whatever it requires from a specific data set (e.g. the training set) and then applies these transformations to *any* data set without recomputing the values.

3\.6 Centering and Scaling
--------------------------

In the example below, half of the MDRR data are used to estimate the location and scale of the predictors. The function `preProcess` doesn't actually pre\-process the data. `predict.preProcess` is used to pre\-process this and other data sets.

```
set.seed(96)
inTrain <- sample(seq(along = mdrrClass), length(mdrrClass)/2)

training <- filteredDescr[inTrain,]
test <- filteredDescr[-inTrain,]
trainMDRR <- mdrrClass[inTrain]
testMDRR <- mdrrClass[-inTrain]

preProcValues <- preProcess(training, method = c("center", "scale"))

trainTransformed <- predict(preProcValues, training)
testTransformed <- predict(preProcValues, test)
```

The `preProcess` option `"range"` scales the data to the interval between zero and one.

3\.7 Imputation
---------------

`preProcess` can be used to impute data sets based only on information in the training set. One method of doing this is with K\-nearest neighbors. For an arbitrary sample, the K closest neighbors are found in the training set and the value for the predictor is imputed using these values (e.g. using the mean). Using this approach will automatically trigger `preProcess` to center and scale the data, regardless of what is in the `method` argument. Alternatively, bagged trees can also be used to impute, as described after the sketch below.
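The following minimal sketch is an addition to the original text: it injects a few missing values into a copy of the `training` data frame created above and imputes them with `preProcess()`. The column index, number of missing values, and choice of `k` are arbitrary; `"bagImpute"` (or `"medianImpute"`) could be substituted in the `method` argument.

```
## Illustrative sketch: KNN imputation with preProcess()
## (knnImpute also centers and scales, as noted above)
set.seed(7)
train_missing <- training
train_missing[sample(nrow(train_missing), 20), 1] <- NA
pp_impute <- preProcess(train_missing, method = "knnImpute", k = 5)
imputed <- predict(pp_impute, train_missing)
anyNA(imputed)
```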
For each predictor in the data, a bagged tree is created using all of the other predictors in the training set. When a new sample has a missing predictor value, the bagged model is used to predict the value. While, in theory, this is a more powerful method of imputing, the computational costs are much higher than the nearest neighbor technique. 3\.8 Transforming Predictors ---------------------------- In some cases, there is a need to use principal component analysis (PCA) to transform the data to a smaller sub–space where the new variable are uncorrelated with one another. The `preProcess` class can apply this transformation by including `"pca"` in the `method` argument. Doing this will also force scaling of the predictors. Note that when PCA is requested, `predict.preProcess` changes the column names to `PC1`, `PC2` and so on. Similarly, independent component analysis (ICA) can also be used to find new variables that are linear combinations of the original set such that the components are independent (as opposed to uncorrelated in PCA). The new variables will be labeled as `IC1`, `IC2` and so on. The “spatial sign” transformation ([Serneels et al, 2006](http://pubs.acs.org/cgi-bin/abstract.cgi/jcisd8/2006/46/i03/abs/ci050498u.html)) projects the data for a predictor to the unit circle in p dimensions, where p is the number of predictors. Essentially, a vector of data is divided by its norm. The two figures below show two centered and scaled descriptors from the MDRR data before and after the spatial sign transformation. The predictors should be centered and scaled before applying this transformation. ``` library(AppliedPredictiveModeling) transparentTheme(trans = .4) ``` ``` plotSubset <- data.frame(scale(mdrrDescr[, c("nC", "X4v")])) xyplot(nC ~ X4v, data = plotSubset, groups = mdrrClass, auto.key = list(columns = 2)) ``` After the spatial sign: ``` transformed <- spatialSign(plotSubset) transformed <- as.data.frame(transformed) xyplot(nC ~ X4v, data = transformed, groups = mdrrClass, auto.key = list(columns = 2)) ``` Another option, `"BoxCox"` will estimate a Box–Cox transformation on the predictors if the data are greater than zero. ``` preProcValues2 <- preProcess(training, method = "BoxCox") trainBC <- predict(preProcValues2, training) testBC <- predict(preProcValues2, test) preProcValues2 ``` ``` ## Created from 264 samples and 31 variables ## ## Pre-processing: ## - Box-Cox transformation (31) ## - ignored (0) ## ## Lambda estimates for Box-Cox transformation: ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## -2.0000 -0.2500 0.5000 0.4387 2.0000 2.0000 ``` The `NA` values correspond to the predictors that could not be transformed. This transformation requires the data to be greater than zero. Two similar transformations, the Yeo\-Johnson and exponential transformation of Manly (1976\) can also be used in `preProcess`. 3\.9 Putting It All Together ---------------------------- In *Applied Predictive Modeling* there is a case study where the execution times of jobs in a high performance computing environment are being predicted. The data are: ``` library(AppliedPredictiveModeling) data(schedulingData) str(schedulingData) ``` ``` ## 'data.frame': 4331 obs. of 8 variables: ## $ Protocol : Factor w/ 14 levels "A","C","D","E",..: 4 4 4 4 4 4 4 4 4 4 ... ## $ Compounds : num 997 97 101 93 100 100 105 98 101 95 ... ## $ InputFields: num 137 103 75 76 82 82 88 95 91 92 ... ## $ Iterations : num 20 20 10 20 20 20 20 20 20 20 ... ## $ NumPending : num 0 0 0 0 0 0 0 0 0 0 ... 
## $ Hour : num 14 13.8 13.8 10.1 10.4 ...
## $ Day : Factor w/ 7 levels "Mon","Tue","Wed",..: 2 2 4 5 5 3 5 5 5 3 ...
## $ Class : Factor w/ 4 levels "VF","F","M","L": 2 1 1 1 1 1 1 1 1 1 ...
```

The data are a mix of categorical and numeric predictors. Suppose we want to use the Yeo\-Johnson transformation on the continuous predictors, then center and scale them. Let's also suppose that we will be running a tree\-based model, so we might want to keep the factors as factors (as opposed to creating dummy variables). We run the function on all the columns except the last, which is the outcome.

```
pp_hpc <- preProcess(schedulingData[, -8], 
                     method = c("center", "scale", "YeoJohnson"))
pp_hpc
```

```
## Created from 4331 samples and 7 variables
## 
## Pre-processing:
## - centered (5)
## - ignored (2)
## - scaled (5)
## - Yeo-Johnson transformation (5)
## 
## Lambda estimates for Yeo-Johnson transformation:
## -0.08, -0.03, -1.05, -1.1, 1.44
```

```
transformed <- predict(pp_hpc, newdata = schedulingData[, -8])
head(transformed)
```

```
## Protocol Compounds InputFields Iterations NumPending Hour Day
## 1 E 1.2289592 -0.6324580 -0.0615593 -0.554123 0.004586516 Tue
## 2 E -0.6065826 -0.8120473 -0.0615593 -0.554123 -0.043733201 Tue
## 3 E -0.5719534 -1.0131504 -2.7894869 -0.554123 -0.034967177 Thu
## 4 E -0.6427737 -1.0047277 -0.0615593 -0.554123 -0.964170752 Fri
## 5 E -0.5804713 -0.9564504 -0.0615593 -0.554123 -0.902085020 Fri
## 6 E -0.5804713 -0.9564504 -0.0615593 -0.554123 0.698108782 Wed
```

The two predictors labeled as "ignored" in the output are the two factor predictors. These are not altered but the numeric predictors are transformed. However, the predictor for the number of pending jobs has a very sparse and unbalanced distribution:

```
mean(schedulingData$NumPending == 0)
```

```
## [1] 0.7561764
```

For some other models, this might be an issue (especially if we resample or down\-sample the data). We can add a filter to check for zero\- or near zero\-variance predictors prior to running the pre\-processing calculations:

```
pp_no_nzv <- preProcess(schedulingData[, -8], 
                        method = c("center", "scale", "YeoJohnson", "nzv"))
pp_no_nzv
```

```
## Created from 4331 samples and 7 variables
## 
## Pre-processing:
## - centered (4)
## - ignored (2)
## - removed (1)
## - scaled (4)
## - Yeo-Johnson transformation (4)
## 
## Lambda estimates for Yeo-Johnson transformation:
## -0.08, -0.03, -1.05, 1.44
```

```
predict(pp_no_nzv, newdata = schedulingData[1:6, -8])
```

```
## Protocol Compounds InputFields Iterations Hour Day
## 1 E 1.2289592 -0.6324580 -0.0615593 0.004586516 Tue
## 2 E -0.6065826 -0.8120473 -0.0615593 -0.043733201 Tue
## 3 E -0.5719534 -1.0131504 -2.7894869 -0.034967177 Thu
## 4 E -0.6427737 -1.0047277 -0.0615593 -0.964170752 Fri
## 5 E -0.5804713 -0.9564504 -0.0615593 -0.902085020 Fri
## 6 E -0.5804713 -0.9564504 -0.0615593 0.698108782 Wed
```

Note that one predictor is labeled as "removed" and the processed data lack the sparse predictor.

3\.10 Class Distance Calculations
---------------------------------

[`caret`](http://cran.r-project.org/web/packages/caret/index.html) contains functions to generate new predictor variables based on distances to class centroids (similar to how linear discriminant analysis works). For each level of a factor variable, the class centroid and covariance matrix are calculated. For new samples, the Mahalanobis distance to each of the class centroids is computed and can be used as an additional predictor.
This can be helpful for non\-linear models when the true decision boundary is actually linear. In cases where there are more predictors within a class than samples, the `classDist` function has arguments called `pca` and `keep` that allow principal components analysis within each class to be used to avoid issues with singular covariance matrices. `predict.classDist` is then used to generate the class distances. By default, the distances are logged, but this can be changed via the `trans` argument to `predict.classDist`. As an example, we can use the MDRR data.

```
centroids <- classDist(trainBC, trainMDRR)
distances <- predict(centroids, testBC)
distances <- as.data.frame(distances)
head(distances)
```

```
## dist.Active dist.Inactive
## ACEPROMAZINE 3.787139 3.941234
## ACEPROMETAZINE 4.306137 3.992772
## MESORIDAZINE 3.707296 4.324115
## PERIMETAZINE 4.079938 4.117170
## PROPERICIAZINE 4.174101 4.430957
## DUOPERONE 4.355328 6.000025
```

A scatterplot matrix of the class distances for the held\-out samples can be generated as follows:

```
xyplot(dist.Active ~ dist.Inactive,
       data = distances, 
       groups = testMDRR, 
       auto.key = list(columns = 2))
```
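As a hedged illustration of the `pca` and `keep` arguments mentioned above (this block is an addition to the original text and the values are arbitrary; see `?classDist` for details), within\-class PCA can be requested when the centroids are fit:

```
## Illustrative only: use within-class PCA and keep two components per class
## before computing the centroids and covariance matrices
centroids_pca <- classDist(trainBC, trainMDRR, pca = TRUE, keep = 2)
distances_pca <- as.data.frame(predict(centroids_pca, testBC))
head(distances_pca)
```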
Machine Learning
topepo.github.io
https://topepo.github.io/caret/data-splitting.html
4 Data Splitting ================ Contents * [Simple Splitting Based on the Outcome](data-splitting.html#outcome) * [Splitting Based on the Predictors](data-splitting.html#predictors) * [Data Splitting for Time Series](data-splitting.html#time) * [Data Splitting with Important Groups](data-splitting.html#groups) 4\.1 Simple Splitting Based on the Outcome ------------------------------------------ The function `createDataPartition` can be used to create balanced splits of the data. If the `y` argument to this function is a factor, the random sampling occurs within each class and should preserve the overall class distribution of the data. For example, to create a single 80/20% split of the iris data: ``` library(caret) set.seed(3456) trainIndex <- createDataPartition(iris$Species, p = .8, list = FALSE, times = 1) head(trainIndex) ``` ``` ## Resample1 ## [1,] 1 ## [2,] 2 ## [3,] 3 ## [4,] 5 ## [5,] 6 ## [6,] 7 ``` ``` irisTrain <- iris[ trainIndex,] irisTest <- iris[-trainIndex,] ``` The `list = FALSE` argument avoids returning the data as a list. This function also has an argument, `times`, that can create multiple splits at once; the data indices are returned in a list of integer vectors. Similarly, `createResample` can be used to make simple bootstrap samples and `createFolds` can be used to generate balanced cross–validation groupings from a set of data. 4\.2 Splitting Based on the Predictors -------------------------------------- Also, the function `maxDissim` can be used to create sub–samples using a maximum dissimilarity approach ([Willett, 1999](http://www.liebertonline.com/doi/abs/10.1089/106652799318382)). Suppose there is a data set *A* with *m* samples and a larger data set *B* with *n* samples. We may want to create a sub–sample from *B* that is diverse when compared to *A*. To do this, for each sample in *B*, the function calculates the *m* dissimilarities between each point in *A*. The most dissimilar point in *B* is added to *A* and the process continues. There are many methods in R to calculate dissimilarity. [`caret`](http://cran.r-project.org/web/packages/caret/index.html) uses the [`proxy`](http://cran.r-project.org/web/packages/proxy/index.html) package. See the manual for that package for a list of available measures. Also, there are many ways to calculate which sample is “most dissimilar”. The argument `obj` can be used to specify any function that returns a scalar measure. [`caret`](http://cran.r-project.org/web/packages/caret/index.html) includes two functions, `minDiss` and `sumDiss`, that can be used to maximize the minimum and total dissimilarities, respectively. As an example, the figure below shows a scatter plot of two chemical descriptors for the Cox2 data. Using an initial random sample of 5 compounds, we can select 20 more compounds from the data so that the new compounds are most dissimilar from the initial 5 that were specified. The panels in the figure show the results using several combinations of distance metrics and scoring functions. For these data, the distance measure has less of an impact than the scoring method for determining which compounds are most dissimilar. 
``` library(mlbench) data(BostonHousing) testing <- scale(BostonHousing[, c("age", "nox")]) set.seed(5) ## A random sample of 5 data points startSet <- sample(1:dim(testing)[1], 5) samplePool <- testing[-startSet,] start <- testing[startSet,] newSamp <- maxDissim(start, samplePool, n = 20) head(newSamp) ``` ``` ## [1] 460 142 491 156 498 82 ``` The visualization below shows the data set (small points), the starting samples (larger blue points) and the order in which the other 20 samples are added. 4\.3 Data Splitting for Time Series ----------------------------------- Simple random sampling of time series is probably not the best way to resample time series data. [Hyndman and Athanasopoulos (2013\)](https://www.otexts.org/fpp/2/5) discuss *rolling forecasting origin* techniques that move the training and test sets in time. caret contains a function called `createTimeSlices` that can create the indices for this type of splitting. The three parameters for this type of splitting are: * `initialWindow`: the initial number of consecutive values in each training set sample * `horizon`: The number of consecutive values in each test set sample * `fixedWindow`: A logical: if `FALSE`, the training set always starts at the first sample and the training set size will vary over data splits. As an example, suppose we have a time series with 20 data points. We can fix `initialWindow = 5` and look at different settings of the other two arguments. In the plot below, rows in each panel correspond to different data splits (i.e. resamples) and the columns correspond to different data points. Also, red indicates samples that are included in the training set and the blue indicates samples in the test set. A small code sketch of these indices is given at the end of this chapter. 4\.4 Simple Splitting with Important Groups ------------------------------------------- In some cases there is an important qualitative factor in the data that should be considered during (re)sampling. For example: * in clinical trials, there may be hospital\-to\-hospital differences * with longitudinal or repeated measures data, subjects (or general independent experimental unit) may have multiple rows in the data set, etc. There may be an interest in making sure that the same groups are not contained in both the training and testing sets since this may bias the test set performance to be more optimistic. Also, when one or more specific groups are held out, the resampling might capture the “ruggedness” of the model. In the example where clinical data is recorded over multiple sites, the resampling performance estimates partly measure how extensible the model is across sites. To split the data based on groups, `groupKFold` can be used: ``` set.seed(3527) subjects <- sample(1:20, size = 80, replace = TRUE) table(subjects) ``` ``` ## subjects ## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 ## 2 3 2 5 3 5 4 5 4 4 2 5 4 2 3 3 6 7 8 3 ``` ``` folds <- groupKFold(subjects, k = 15) ``` The results in `folds` can be used as inputs into the `index` argument of the `trainControl` function. This plot shows how each subject is partitioned between the modeling and holdout sets. Note that since `k` was less than 20 when `folds` was created, there are some holdouts with more than one subject.
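The `index` hand\-off mentioned above can be sketched as follows; `dat` and `y` are placeholders for a data set with one row per element of `subjects` and its outcome column, neither of which is defined in this chapter.

```
## A minimal sketch: supply the group-based folds to trainControl so that
## all rows from a subject stay on the same side of each split.
groupedCtrl <- trainControl(method = "cv", index = folds)
## mod <- train(y ~ ., data = dat, method = "glm", trControl = groupedCtrl)
```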
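Returning to the rolling forecasting origin splits of Section 4.3, the sketch promised there shows the indices that `createTimeSlices` generates for a toy series of 20 points; the `horizon` value of 2 is chosen only for illustration.

```
## Rolling-origin indices for a 20-point series with a fixed 5-point window.
timeSlices <- createTimeSlices(1:20, initialWindow = 5,
                               horizon = 2, fixedWindow = TRUE)
timeSlices$train[[1]]     ## rows 1 to 5
timeSlices$test[[1]]      ## rows 6 and 7
length(timeSlices$train)  ## one split per admissible forecasting origin
```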
Machine Learning
topepo.github.io
https://topepo.github.io/caret/model-training-and-tuning.html
5 Model Training and Tuning =========================== Contents * [Model Training and Parameter Tuning](model-training-and-tuning.html#basic) + [An Example](model-training-and-tuning.html#example) * [Basic Parameter Tuning](model-training-and-tuning.html#tune) * [Notes on Reproducibility](model-training-and-tuning.html#repro) * [Customizing the Tuning Process](model-training-and-tuning.html#custom) + [Pre\-Processing Options](model-training-and-tuning.html#preproc) + [Alternate Tuning Grids](model-training-and-tuning.html#grids) + [Plotting the Resampling Profile](model-training-and-tuning.html#plots) + [The `trainControl` Function](model-training-and-tuning.html#control) * [Alternate Performance Metrics](model-training-and-tuning.html#metrics) * [Choosing the Final Model](model-training-and-tuning.html#final) * [Extracting Predictions and Class Probabilities](model-training-and-tuning.html#pred) * [Exploring and Comparing Resampling Distributions](model-training-and-tuning.html#resamp) + [Within\-Model](model-training-and-tuning.html#within) + [Between\-Models](model-training-and-tuning.html#between) * [Fitting Models Without Parameter Tuning](model-training-and-tuning.html#notune) 5\.1 Model Training and Parameter Tuning ---------------------------------------- The [`caret`](http://cran.r-project.org/web/packages/caret/index.html) package has several functions that attempt to streamline the model building and evaluation process. The `train` function can be used to * evaluate, using resampling, the effect of model tuning parameters on performance * choose the “optimal” model across these parameters * estimate model performance from a training set First, a specific model must be chosen. Currently, 238 are available using [`caret`](http://cran.r-project.org/web/packages/caret/index.html); see [`train` Model List](available-models.html) or [`train` Models By Tag](train-models-by-tag.html) for details. On these pages, there are lists of tuning parameters that can potentially be optimized. [User\-defined models](using-your-own-model-in-train.html) can also be created. The first step in tuning the model (line 1 in the algorithm below) is to choose a set of parameters to evaluate. For example, if fitting a Partial Least Squares (PLS) model, the number of PLS components to evaluate must be specified. Once the model and tuning parameter values have been defined, the type of resampling should also be specified. Currently, *k*\-fold cross\-validation (once or repeated), leave\-one\-out cross\-validation and bootstrap (simple estimation or the 632 rule) resampling methods can be used by `train`. After resampling, the process produces a profile of performance measures that can be used to guide the user as to which tuning parameter values should be chosen. By default, the function automatically chooses the tuning parameters associated with the best value, although different algorithms can be used (see details below). 5\.2 An Example --------------- The Sonar data are available in the [`mlbench`](http://cran.r-project.org/web/packages/mlbench/index.html) package. Here, we load the data: ``` library(mlbench) data(Sonar) str(Sonar[, 1:10]) ``` ``` ## 'data.frame': 208 obs. of 10 variables: ## $ V1 : num 0.02 0.0453 0.0262 0.01 0.0762 0.0286 0.0317 0.0519 0.0223 0.0164 ... ## $ V2 : num 0.0371 0.0523 0.0582 0.0171 0.0666 0.0453 0.0956 0.0548 0.0375 0.0173 ... ## $ V3 : num 0.0428 0.0843 0.1099 0.0623 0.0481 ... ## $ V4 : num 0.0207 0.0689 0.1083 0.0205 0.0394 ... 
## $ V5 : num 0.0954 0.1183 0.0974 0.0205 0.059 ... ## $ V6 : num 0.0986 0.2583 0.228 0.0368 0.0649 ... ## $ V7 : num 0.154 0.216 0.243 0.11 0.121 ... ## $ V8 : num 0.16 0.348 0.377 0.128 0.247 ... ## $ V9 : num 0.3109 0.3337 0.5598 0.0598 0.3564 ... ## $ V10: num 0.211 0.287 0.619 0.126 0.446 ... ``` The function `createDataPartition` can be used to create a stratified random sample of the data into training and test sets: ``` library(caret) set.seed(998) inTraining <- createDataPartition(Sonar$Class, p = .75, list = FALSE) training <- Sonar[ inTraining,] testing <- Sonar[-inTraining,] ``` We will use these data to illustrate functionality on this (and other) pages. 5\.3 Basic Parameter Tuning --------------------------- By default, simple bootstrap resampling is used for line 3 in the algorithm above. Others are available, such as repeated *K*\-fold cross\-validation, leave\-one\-out etc. The function `trainControl` can be used to specify the type of resampling: ``` fitControl <- trainControl(## 10-fold CV method = "repeatedcv", number = 10, ## repeated ten times repeats = 10) ``` More information about `trainControl` is given in [a section below](model-training-and-tuning.html#custom). The first two arguments to `train` are the predictor and outcome data objects, respectively. The third argument, `method`, specifies the type of model (see [`train` Model List](available-models.html) or [`train` Models By Tag](train-models-by-tag.html)). To illustrate, we will fit a boosted tree model via the [`gbm`](http://cran.r-project.org/web/packages/gbm/index.html) package. The basic syntax for fitting this model using repeated cross\-validation is shown below: ``` set.seed(825) gbmFit1 <- train(Class ~ ., data = training, method = "gbm", trControl = fitControl, ## This last option is actually one ## for gbm() that passes through verbose = FALSE) gbmFit1 ``` ``` ## Stochastic Gradient Boosting ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## interaction.depth n.trees Accuracy Kappa ## 1 50 0.7935784 0.5797839 ## 1 100 0.8171078 0.6290208 ## 1 150 0.8219608 0.6386184 ## 2 50 0.8041912 0.6027771 ## 2 100 0.8302059 0.6556940 ## 2 150 0.8283627 0.6520181 ## 3 50 0.8110343 0.6170317 ## 3 100 0.8301275 0.6551379 ## 3 150 0.8310343 0.6577252 ## ## Tuning parameter 'shrinkage' was held constant at a value of 0.1 ## ## Tuning parameter 'n.minobsinnode' was held constant at a value of 10 ## Accuracy was used to select the optimal model using the largest value. ## The final values used for the model were n.trees = 150, ## interaction.depth = 3, shrinkage = 0.1 and n.minobsinnode = 10. ``` For a gradient boosting machine (GBM) model, there are four main tuning parameters: * number of iterations, i.e. trees (called `n.trees` in the `gbm` function) * complexity of the tree, called `interaction.depth` * learning rate: how quickly the algorithm adapts, called `shrinkage` * the minimum number of training set samples in a node to commence splitting (`n.minobsinnode`) The default values tested for this model are shown in the first two columns (`shrinkage` and `n.minobsinnode` are not shown because the grid set of candidate models all use a single value for these tuning parameters). The column labeled “`Accuracy`” is the overall agreement rate averaged over cross\-validation iterations. 
The agreement standard deviation is also calculated from the cross\-validation results. The column “`Kappa`” is Cohen’s (unweighted) Kappa statistic averaged across the resampling results. `train` works with specific models (see [`train` Model List](available-models.html) or [`train` Models By Tag](train-models-by-tag.html)). For these models, `train` can automatically create a grid of tuning parameters. By default, if *p* is the number of tuning parameters, the grid size is *3^p*. As another example, regularized discriminant analysis (RDA) models have two parameters (`gamma` and `lambda`), both of which lie between zero and one. The default training grid would produce nine combinations in this two\-dimensional space. There is additional functionality in `train` that is described in the next section. 5\.4 Notes on Reproducibility ----------------------------- Many models utilize random numbers during the phase where parameters are estimated. Also, the resampling indices are chosen using random numbers. There are two main ways to control the randomness in order to ensure reproducible results. * There are two approaches to ensuring that the same *resamples* are used between calls to `train`. The first is to use `set.seed` just prior to calling `train`. The first use of random numbers is to create the resampling information. Alternatively, if you would like to use specific splits of the data, the `index` argument of the `trainControl` function can be used. This is briefly discussed below. * When the models are created *inside of resampling*, the seeds can also be set. While setting the seed prior to calling `train` may guarantee that the same random numbers are used, this is unlikely to be the case when [parallel processing](parallel-processing.html) is used (depending on which technology is utilized). To set the model fitting seeds, `trainControl` has an additional argument called `seeds` that can be used. The value for this argument is a list of integer vectors that are used as seeds. The help page for `trainControl` describes the appropriate format for this option. How random numbers are used is highly dependent on the package author. There are rare cases where the underlying model function does not control the random number seed, especially if the computations are conducted in C code. Also, please note that [some packages draw random numbers when loaded (directly or via namespace)](https://github.com/topepo/caret/issues/452) and this may affect reproducibility. 5\.5 Customizing the Tuning Process ----------------------------------- There are a few ways to customize the process of selecting tuning/complexity parameters and building the final model. ### 5\.5\.1 Pre\-Processing Options As previously mentioned, `train` can pre\-process the data in various ways prior to model fitting. The function `preProcess` is automatically used. This function can be used for centering and scaling, imputation (see details below), applying the spatial sign transformation and feature extraction via principal component analysis or independent component analysis. To specify what pre\-processing should occur, the `train` function has an argument called `preProcess`. This argument takes a character vector of methods that would normally be passed to the `method` argument of the [`preProcess` function](pre-processing.html). Additional options to the `preProcess` function can be passed via the `trainControl` function. 
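To make the `preProcess` argument concrete, here is a minimal sketch that reuses the Sonar `training` data and the `fitControl` object defined earlier; the boosted tree model is just a convenient example.

```
## A sketch: center and scale the predictors inside train() itself.
set.seed(825)
gbmFitPP <- train(Class ~ ., data = training,
                  method = "gbm",
                  preProcess = c("center", "scale"),
                  trControl = fitControl,
                  verbose = FALSE)
```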
These processing steps would be applied during any predictions generated using `predict.train`, `extractPrediction` or `extractProbs` (see details later in this document). The pre\-processing would **not** be applied to predictions that directly use the `object$finalModel` object. For imputation, there are three methods currently implemented: * *k*\-nearest neighbors takes a sample with missing values and finds the *k* closest samples in the training set. The average of the *k* training set values for that predictor is used as a substitute for the original data. When calculating the distances to the training set samples, the predictors used in the calculation are the ones with no missing values for that sample and no missing values in the training set. * another approach is to fit a bagged tree model for each predictor using the training set samples. This is usually a fairly accurate model and can handle missing values. When a predictor for a sample requires imputation, the values for the other predictors are fed through the bagged tree and the prediction is used as the new value. This model can have significant computational cost. * the median of the predictor’s training set values can be used to estimate the missing data. If there are missing values in the training set, PCA and ICA models only use complete samples. ### 5\.5\.2 Alternate Tuning Grids The tuning parameter grid can be specified by the user. The argument `tuneGrid` can take a data frame with columns for each tuning parameter. The column names should be the same as the fitting function’s arguments. For the previously mentioned RDA example, the names would be `gamma` and `lambda`. `train` will tune the model over each combination of values in the rows. For the boosted tree model, we can fix the learning rate and evaluate more than three values of `n.trees`: ``` gbmGrid <- expand.grid(interaction.depth = c(1, 5, 9), n.trees = (1:30)*50, shrinkage = 0.1, n.minobsinnode = 20) nrow(gbmGrid) set.seed(825) gbmFit2 <- train(Class ~ ., data = training, method = "gbm", trControl = fitControl, verbose = FALSE, ## Now specify the exact models ## to evaluate: tuneGrid = gbmGrid) gbmFit2 ``` ``` ## Stochastic Gradient Boosting ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## interaction.depth n.trees Accuracy Kappa ## 1 50 0.78 0.56 ## 1 100 0.81 0.61 ## 1 150 0.82 0.63 ## 1 200 0.83 0.65 ## 1 250 0.82 0.65 ## 1 300 0.83 0.65 ## : : : : ## 9 1350 0.85 0.69 ## 9 1400 0.85 0.69 ## 9 1450 0.85 0.69 ## 9 1500 0.85 0.69 ## ## Tuning parameter 'shrinkage' was held constant at a value of 0.1 ## ## Tuning parameter 'n.minobsinnode' was held constant at a value of 20 ## Accuracy was used to select the optimal model using the largest value. ## The final values used for the model were n.trees = 1200, ## interaction.depth = 9, shrinkage = 0.1 and n.minobsinnode = 20. ``` Another option is to use a random sample of possible tuning parameter combinations, i.e. “random search”[(pdf)](http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf). This functionality is described on [this page](random-hyperparameter-search.html). To use a random search, use the option `search = "random"` in the call to `trainControl`. In this situation, the `tuneLength` parameter defines the total number of parameter combinations that will be evaluated. 
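A minimal sketch of the random search option just described is shown below; the `tuneLength` value is arbitrary and a single 10-fold CV is used here only to keep the run light.

```
## With search = "random", train() evaluates tuneLength randomly chosen
## parameter combinations instead of a regular grid.
rsControl <- trainControl(method = "cv", number = 10, search = "random")
set.seed(825)
gbmRandom <- train(Class ~ ., data = training,
                   method = "gbm",
                   trControl = rsControl,
                   tuneLength = 8,
                   verbose = FALSE)
```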
### 5\.5\.3 Plotting the Resampling Profile The `plot` function can be used to examine the relationship between the estimates of performance and the tuning parameters. For example, a simple invocation of the function shows the results for the first performance measure: ``` trellis.par.set(caretTheme()) plot(gbmFit2) ``` Other performance metrics can be shown using the `metric` option: ``` trellis.par.set(caretTheme()) plot(gbmFit2, metric = "Kappa") ``` Other types of plot are also available. See `?plot.train` for more details. The code below shows a heatmap of the results: ``` trellis.par.set(caretTheme()) plot(gbmFit2, metric = "Kappa", plotType = "level", scales = list(x = list(rot = 90))) ``` A `ggplot` method can also be used: ``` ggplot(gbmFit2) ``` There are also plot functions that show more detailed representations of the resampled estimates. See `?xyplot.train` for more details. From these plots, a different set of tuning parameters may be desired. To change the final values without starting the whole process again, the `update.train` function can be used to refit the final model. See `?update.train` for details. ### 5\.5\.4 The `trainControl` Function The function `trainControl` generates parameters that further control how models are created, with possible values: * `method`: The resampling method: `"boot"`, `"cv"`, `"LOOCV"`, `"LGOCV"`, `"repeatedcv"`, `"timeslice"`, `"none"` and `"oob"`. The last value, out\-of\-bag estimates, can only be used by random forest, bagged trees, bagged earth, bagged flexible discriminant analysis, or conditional tree forest models. GBM models are not included (the [`gbm`](http://cran.r-project.org/web/packages/gbm/index.html) package maintainer has indicated that it would not be a good idea to choose tuning parameter values based on the model OOB error estimates with boosted trees). Also, for leave\-one\-out cross\-validation, no uncertainty estimates are given for the resampled performance measures. * `number` and `repeats`: `number` controls the number of folds in *K*\-fold cross\-validation or the number of resampling iterations for bootstrapping and leave\-group\-out cross\-validation. `repeats` applies only to repeated *K*\-fold cross\-validation. Suppose that `method = "repeatedcv"`, `number = 10` and `repeats = 3`, then three separate 10\-fold cross\-validations are used as the resampling scheme. * `verboseIter`: A logical for printing a training log. * `returnData`: A logical for saving the data into a slot called `trainingData`. * `p`: For leave\-group\-out cross\-validation: the training percentage * For `method = "timeslice"`, `trainControl` has options `initialWindow`, `horizon` and `fixedWindow` that govern how [cross\-validation can be used for time series data.](data-splitting.html) * `classProbs`: a logical value determining whether class probabilities should be computed for held\-out samples during resampling. * `index` and `indexOut`: optional lists with elements for each resampling iteration. Each list element gives the sample rows used for training at that iteration or the rows that should be held out. When these values are not specified, `train` will generate them. * `summaryFunction`: a function to compute alternate performance summaries. * `selectionFunction`: a function to choose the optimal tuning parameters. * `PCAthresh`, `ICAcomp` and `k`: these are all options to pass to the `preProcess` function (when used). * `returnResamp`: a character string containing one of the following values: `"all"`, `"final"` or `"none"`. 
This specifies how much of the resampled performance measures to save. * `allowParallel`: a logical that governs whether `train` should [use parallel processing (if available).](parallel-processing.html) There are several other options not discussed here. ### 5\.5\.5 Alternate Performance Metrics The user can change the metric used to determine the best settings. By default, RMSE, *R*^2, and the mean absolute error (MAE) are computed for regression while accuracy and Kappa are computed for classification. Also by default, the parameter values are chosen using RMSE and accuracy, respectively for regression and classification. The `metric` argument of the `train` function allows the user to control which optimality criterion is used. For example, in problems where there is a low percentage of samples in one class, using `metric = "Kappa"` can improve the quality of the final model. If none of these metrics are satisfactory, the user can also compute custom performance metrics. The `trainControl` function has an argument called `summaryFunction` that specifies a function for computing performance. The function should have these arguments: * `data` is a data frame or matrix with columns called `obs` and `pred` for the observed and predicted outcome values (either numeric data for regression or character values for classification). Currently, class probabilities are not passed to the function. The values in `data` are the held\-out predictions (and their associated reference values) for a single combination of tuning parameters. If the `classProbs` argument of the `trainControl` object is set to `TRUE`, additional columns in `data` will be present that contain the class probabilities. The names of these columns are the same as the class levels. Also, if `weights` were specified in the call to `train`, a column called `weights` will also be in the data set. Additionally, if the `recipe` method for `train` was used (see [this section of documentation](topepo.github.io/caret/using-recipes-with-train)), other variables not used in the model will also be included. This can be accomplished by adding a role in the recipe of `"performance var"`. An example is given in the recipe section of this site. * `lev` is a character string that has the outcome factor levels taken from the training data. For regression, a value of `NULL` is passed into the function. * `model` is a character string for the model being used (i.e. the value passed to the `method` argument of `train`). The output to the function should be a vector of numeric summary metrics with non\-null names. By default, `train` evaluates classification models in terms of the predicted classes. Optionally, class probabilities can also be used to measure performance. To obtain predicted class probabilities within the resampling process, the argument `classProbs` in `trainControl` must be set to `TRUE`. This merges columns of probabilities into the predictions generated from each resample (there is a column per class and the column names are the class names). As shown in the last section, custom functions can be used to calculate performance scores that are averaged over the resamples. Another built\-in function, `twoClassSummary`, will compute the sensitivity, specificity and area under the ROC curve: ``` head(twoClassSummary) ``` ``` ## ## 1 function (data, lev = NULL, model = NULL) ## 2 { ## 3 lvls <- levels(data$obs) ## 4 if (length(lvls) > 2) ## 5 stop(paste("Your outcome has", length(lvls), "levels. 
The twoClassSummary() function isn't appropriate.")) ## 6 requireNamespaceQuietStop("ModelMetrics") ``` To rebuild the boosted tree model using this criterion, we can see the relationship between the tuning parameters and the area under the ROC curve using the following code: ``` fitControl <- trainControl(method = "repeatedcv", number = 10, repeats = 10, ## Estimate class probabilities classProbs = TRUE, ## Evaluate performance using ## the following function summaryFunction = twoClassSummary) set.seed(825) gbmFit3 <- train(Class ~ ., data = training, method = "gbm", trControl = fitControl, verbose = FALSE, tuneGrid = gbmGrid, ## Specify which metric to optimize metric = "ROC") gbmFit3 ``` ``` ## Stochastic Gradient Boosting ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## interaction.depth n.trees ROC Sens Spec ## 1 50 0.86 0.86 0.69 ## 1 100 0.88 0.85 0.75 ## 1 150 0.89 0.86 0.77 ## 1 200 0.90 0.87 0.78 ## 1 250 0.90 0.86 0.78 ## 1 300 0.90 0.87 0.78 ## : : : : : ## 9 1350 0.92 0.88 0.81 ## 9 1400 0.92 0.88 0.80 ## 9 1450 0.92 0.88 0.81 ## 9 1500 0.92 0.88 0.80 ## ## Tuning parameter 'shrinkage' was held constant at a value of 0.1 ## ## Tuning parameter 'n.minobsinnode' was held constant at a value of 20 ## ROC was used to select the optimal model using the largest value. ## The final values used for the model were n.trees = 1450, ## interaction.depth = 5, shrinkage = 0.1 and n.minobsinnode = 20. ``` In this case, the average area under the ROC curve associated with the optimal tuning parameters was 0\.922 across the 100 resamples. 5\.6 Choosing the Final Model ----------------------------- Another method for customizing the tuning process is to modify the algorithm that is used to select the “best” parameter values, given the performance numbers. By default, the `train` function chooses the model with the largest performance value (or smallest, for mean squared error in regression models). Other schemes for selecting models can be used. [Breiman et al (1984\)](http://books.google.com/books/about/Classification_and_Regression_Trees.html?id=JwQx-WOmSyQC) suggested the “one standard error rule” for simple tree\-based models. In this case, the model with the best performance value is identified and, using resampling, we can estimate the standard error of performance. The final model used is then the simplest model within one standard error of the (empirically) best model. With simple trees this makes sense, since these models will start to over\-fit as they become more and more specific to the training data. `train` allows the user to specify alternate rules for selecting the final model. The argument `selectionFunction` can be used to supply a function to algorithmically determine the final model. There are three existing functions in the package: `best` chooses the largest/smallest value, `oneSE` attempts to capture the spirit of [Breiman et al (1984\)](http://books.google.com/books/about/Classification_and_Regression_Trees.html?id=JwQx-WOmSyQC) and `tolerance` selects the least complex model within some percent tolerance of the best value. See `?best` for more details. User\-defined functions can be used, as long as they have the following arguments: * `x` is a data frame containing the tuning parameters and their associated performance metrics. 
Each row corresponds to a different tuning parameter combination. * `metric` is a character string indicating which performance metric should be optimized (this is passed in directly from the `metric` argument of `train`). * `maximize` is a single logical value indicating whether larger values of the performance metric are better (this is also directly passed from the call to `train`). The function should output a single integer indicating which row in `x` is chosen. As an example, if we chose the previous boosted tree model on the basis of the area under the ROC curve, we would choose: n.trees \= 1450, interaction.depth \= 5, shrinkage \= 0\.1, n.minobsinnode \= 20\. However, the scale in this plot is fairly tight, with ROC values ranging from 0\.863 to 0\.922\. A less complex model (e.g. fewer, more shallow trees) might also yield acceptable performance. The tolerance function could be used to find a less complex model based on (*x* \- *x*best)/*x*best × 100, which is the percent difference. For example, to select parameter values based on a 2% loss of performance: ``` whichTwoPct <- tolerance(gbmFit3$results, metric = "ROC", tol = 2, maximize = TRUE) cat("best model within 2 pct of best:\n") ``` ``` ## best model within 2 pct of best: ``` ``` gbmFit3$results[whichTwoPct,1:6] ``` ``` ## shrinkage interaction.depth n.minobsinnode n.trees ROC Sens ## 32 0.1 5 20 100 0.9139707 0.8645833 ``` This indicates that we can get a less complex model with an area under the ROC curve of 0\.914 (compared to the “pick the best” value of 0\.922\). The main issue with these functions is related to ordering the models from simplest to complex. In some cases, this is easy (e.g. simple trees, partial least squares), but in cases such as this model, the ordering of models is subjective. For example, is a boosted tree model using 100 iterations and a tree depth of 2 more complex than one with 50 iterations and a depth of 8? The package makes some choices regarding the orderings. In the case of boosted trees, the package assumes that increasing the number of iterations adds complexity at a faster rate than increasing the tree depth, so models are ordered first on the number of iterations and then on depth. See `?best` for more examples for specific models. 5\.7 Extracting Predictions and Class Probabilities --------------------------------------------------- As previously mentioned, objects produced by the `train` function contain the “optimized” model in the `finalModel` sub\-object. Predictions can be made from these objects as usual. In some cases, such as `pls` or `gbm` objects, additional parameters from the optimized fit may need to be specified. In these cases, the `train` object uses the results of the parameter optimization to predict new samples. For example, if predictions were created using `predict.gbm`, the user would have to specify the number of trees directly (there is no default). Also, for binary classification, the predictions from this function take the form of the probability of one of the classes, so extra steps are required to convert this to a factor vector. `predict.train` automatically handles these details for this (and other) models. Also, there are very few standard syntaxes for model predictions in R. For example, to get class probabilities, many `predict` methods have an argument called `type` that is used to specify whether the classes or probabilities should be generated. 
Different packages use different values of `type`, such as `"prob"`, `"posterior"`, `"response"`, `"probability"` or `"raw"`. In other cases, completely different syntax is used. For `predict.train`, the type options are standardized to be `"class"` and `"prob"` (the underlying code matches these to the appropriate choices for each model). For example: ``` predict(gbmFit3, newdata = head(testing)) ``` ``` ## [1] R M R M R M ## Levels: M R ``` ``` predict(gbmFit3, newdata = head(testing), type = "prob") ``` ``` ## M R ## 1 3.215213e-02 9.678479e-01 ## 2 1.000000e+00 3.965815e-08 ## 3 6.996088e-13 1.000000e+00 ## 4 9.070652e-01 9.293483e-02 ## 5 2.029754e-03 9.979702e-01 ## 6 9.999662e-01 3.377548e-05 ``` 5\.8 Exploring and Comparing Resampling Distributions ----------------------------------------------------- ### 5\.8\.1 Within\-Model There are several [`lattice`](http://cran.r-project.org/web/packages/lattice/index.html) functions that can be used to explore relationships between tuning parameters and the resampling results for a specific model: * `xyplot` and `stripplot` can be used to plot resampling statistics against (numeric) tuning parameters. * `histogram` and `densityplot` can also be used to look at the distributions of the resampling statistics across tuning parameters. For example, the following statements create a density plot: ``` trellis.par.set(caretTheme()) densityplot(gbmFit3, pch = "|") ``` Note that if you are interested in plotting the resampling results across multiple tuning parameters, the option `returnResamp = "all"` should be used in the control object. ### 5\.8\.2 Between\-Models The [`caret`](http://cran.r-project.org/web/packages/caret/index.html) package also includes functions to characterize the differences between models (generated using `train`, `sbf` or `rfe`) via their resampling distributions. These functions are based on the work of [Hothorn et al. (2005\)](https://homepage.boku.ac.at/leisch/papers/Hothorn+Leisch+Zeileis-2005.pdf) and [Eugster et al (2008\)](http://epub.ub.uni-muenchen.de/10604/1/tr56.pdf). First, a support vector machine model is fit to the Sonar data. The data are centered and scaled using the `preProc` argument. Note that the same random number seed is set prior to fitting the model as was used for the boosted tree model. This ensures that the same resampling sets are used, which will come in handy when we compare the resampling profiles between models. ``` set.seed(825) svmFit <- train(Class ~ ., data = training, method = "svmRadial", trControl = fitControl, preProc = c("center", "scale"), tuneLength = 8, metric = "ROC") svmFit ``` ``` ## Support Vector Machines with Radial Basis Function Kernel ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## Pre-processing: centered (60), scaled (60) ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## C ROC Sens Spec ## 0.25 0.8438318 0.7373611 0.7230357 ## 0.50 0.8714459 0.8083333 0.7316071 ## 1.00 0.8921354 0.8031944 0.7653571 ## 2.00 0.9116171 0.8358333 0.7925000 ## 4.00 0.9298934 0.8525000 0.8201786 ## 8.00 0.9318899 0.8684722 0.8217857 ## 16.00 0.9339658 0.8730556 0.8205357 ## 32.00 0.9339658 0.8776389 0.8276786 ## ## Tuning parameter 'sigma' was held constant at a value of 0.01181293 ## ROC was used to select the optimal model using the largest value. ## The final values used for the model were sigma = 0.01181293 and C = 16. 
``` Also, a regularized discriminant analysis model was fit. ``` set.seed(825) rdaFit <- train(Class ~ ., data = training, method = "rda", trControl = fitControl, tuneLength = 4, metric = "ROC") rdaFit ``` ``` ## Regularized Discriminant Analysis ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## gamma lambda ROC Sens Spec ## 0.0000000 0.0000000 0.6426029 0.9311111 0.3364286 ## 0.0000000 0.3333333 0.8543564 0.8076389 0.7585714 ## 0.0000000 0.6666667 0.8596577 0.8083333 0.7766071 ## 0.0000000 1.0000000 0.7950670 0.7677778 0.6925000 ## 0.3333333 0.0000000 0.8509276 0.8502778 0.6914286 ## 0.3333333 0.3333333 0.8650372 0.8676389 0.6866071 ## 0.3333333 0.6666667 0.8698115 0.8604167 0.6941071 ## 0.3333333 1.0000000 0.8336930 0.7597222 0.7542857 ## 0.6666667 0.0000000 0.8600868 0.8756944 0.6482143 ## 0.6666667 0.3333333 0.8692981 0.8794444 0.6446429 ## 0.6666667 0.6666667 0.8678547 0.8355556 0.6892857 ## 0.6666667 1.0000000 0.8277133 0.7445833 0.7448214 ## 1.0000000 0.0000000 0.7059797 0.6888889 0.6032143 ## 1.0000000 0.3333333 0.7098313 0.6830556 0.6101786 ## 1.0000000 0.6666667 0.7129489 0.6672222 0.6173214 ## 1.0000000 1.0000000 0.7193031 0.6626389 0.6296429 ## ## ROC was used to select the optimal model using the largest value. ## The final values used for the model were gamma = 0.3333333 and lambda ## = 0.6666667. ``` Given these models, can we make statistical statements about their performance differences? To do this, we first collect the resampling results using `resamples`. ``` resamps <- resamples(list(GBM = gbmFit3, SVM = svmFit, RDA = rdaFit)) resamps ``` ``` ## ## Call: ## resamples.default(x = list(GBM = gbmFit3, SVM = svmFit, RDA = rdaFit)) ## ## Models: GBM, SVM, RDA ## Number of resamples: 100 ## Performance metrics: ROC, Sens, Spec ## Time estimates for: everything, final model fit ``` ``` summary(resamps) ``` ``` ## ## Call: ## summary.resamples(object = resamps) ## ## Models: GBM, SVM, RDA ## Number of resamples: 100 ## ## ROC ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## GBM 0.6964286 0.874504 0.9375000 0.9216270 0.9821429 1 0 ## SVM 0.7321429 0.905878 0.9464286 0.9339658 0.9821429 1 0 ## RDA 0.5625000 0.812500 0.8750000 0.8698115 0.9392361 1 0 ## ## Sens ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## GBM 0.5555556 0.7777778 0.8750000 0.8776389 1 1 0 ## SVM 0.5000000 0.7777778 0.8888889 0.8730556 1 1 0 ## RDA 0.4444444 0.7777778 0.8750000 0.8604167 1 1 0 ## ## Spec ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## GBM 0.4285714 0.7142857 0.8571429 0.8133929 1.0000000 1 0 ## SVM 0.4285714 0.7142857 0.8571429 0.8205357 0.9062500 1 0 ## RDA 0.1428571 0.5714286 0.7142857 0.6941071 0.8571429 1 0 ``` Note that, in this case, the option `resamples = "final"` should be user\-defined in the control objects. There are several lattice plot methods that can be used to visualize the resampling distributions: density plots, box\-whisker plots, scatterplot matrices and scatterplots of summary statistics. 
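Before turning to those plots, note that the individual resample\-by\-resample values can also be inspected directly for custom summaries or graphics. The sketch below is hedged: it assumes the `resamples` object stores these values in a data frame element called `values`, with one row per resample and one column per model/metric pair (see `?resamples` if the layout differs):

```
## Hedged sketch: peek at the raw per-resample performance values collected by
## resamples(). The column layout (e.g. a "GBM~ROC" column) is assumed here.
head(resamps$values)
```

The built\-in `lattice` methods are usually the more convenient route, though.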
For example: ``` theme1 <- trellis.par.get() theme1$plot.symbol$col = rgb(.2, .2, .2, .4) theme1$plot.symbol$pch = 16 theme1$plot.line$col = rgb(1, 0, 0, .7) theme1$plot.line$lwd <- 2 trellis.par.set(theme1) bwplot(resamps, layout = c(3, 1)) ``` ``` trellis.par.set(caretTheme()) dotplot(resamps, metric = "ROC") ``` ``` trellis.par.set(theme1) xyplot(resamps, what = "BlandAltman") ``` ``` splom(resamps) ``` Other visualizations are available in `densityplot.resamples` and `parallel.resamples`. Since models are fit on the same versions of the training data, it makes sense to make inferences on the differences between models. In this way we reduce the within\-resample correlation that may exist. We can compute the differences, then use a simple *t*\-test to evaluate the null hypothesis that there is no difference between models. ``` difValues <- diff(resamps) difValues ``` ``` ## ## Call: ## diff.resamples(x = resamps) ## ## Models: GBM, SVM, RDA ## Metrics: ROC, Sens, Spec ## Number of differences: 3 ## p-value adjustment: bonferroni ``` ``` summary(difValues) ``` ``` ## ## Call: ## summary.diff.resamples(object = difValues) ## ## p-value adjustment: bonferroni ## Upper diagonal: estimates of the difference ## Lower diagonal: p-value for H0: difference = 0 ## ## ROC ## GBM SVM RDA ## GBM -0.01234 0.05182 ## SVM 0.3388 0.06415 ## RDA 5.988e-07 2.638e-10 ## ## Sens ## GBM SVM RDA ## GBM 0.004583 0.017222 ## SVM 1.0000 0.012639 ## RDA 0.5187 1.0000 ## ## Spec ## GBM SVM RDA ## GBM -0.007143 0.119286 ## SVM 1 0.126429 ## RDA 5.300e-07 1.921e-10 ``` ``` trellis.par.set(theme1) bwplot(difValues, layout = c(3, 1)) ``` ``` trellis.par.set(caretTheme()) dotplot(difValues) ``` 5\.9 Fitting Models Without Parameter Tuning -------------------------------------------- In cases where the model tuning values are known, `train` can be used to fit the model to the entire training set without any resampling or parameter tuning. To do this, the `method = "none"` option in `trainControl` can be used. For example: ``` fitControl <- trainControl(method = "none", classProbs = TRUE) set.seed(825) gbmFit4 <- train(Class ~ ., data = training, method = "gbm", trControl = fitControl, verbose = FALSE, ## Only a single model can be passed to the ## function when no resampling is used: tuneGrid = data.frame(interaction.depth = 4, n.trees = 100, shrinkage = .1, n.minobsinnode = 20), metric = "ROC") gbmFit4 ``` ``` ## Stochastic Gradient Boosting ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: None ``` Note that `plot.train`, `resamples`, `confusionMatrix.train` and several other functions will not work with this object but `predict.train` and others will: ``` predict(gbmFit4, newdata = head(testing)) ``` ``` ## [1] R M R R M M ## Levels: M R ``` ``` predict(gbmFit4, newdata = head(testing), type = "prob") ``` ``` ## M R ## 1 0.264671996 0.73532800 ## 2 0.960445979 0.03955402 ## 3 0.005731862 0.99426814 ## 4 0.298628996 0.70137100 ## 5 0.503935367 0.49606463 ## 6 0.813716635 0.18628336 ``` 5\.1 Model Training and Parameter Tuning ---------------------------------------- The [`caret`](http://cran.r-project.org/web/packages/caret/index.html) package has several functions that attempt to streamline the model building and evaluation process.
The `train` function can be used to * evaluate, using resampling, the effect of model tuning parameters on performance * choose the “optimal” model across these parameters * estimate model performance from a training set First, a specific model must be chosen. Currently, 238 are available using [`caret`](http://cran.r-project.org/web/packages/caret/index.html); see [`train` Model List](available-models.html) or [`train` Models By Tag](train-models-by-tag.html) for details. On these pages, there are lists of tuning parameters that can potentially be optimized. [User\-defined models](using-your-own-model-in-train.html) can also be created. The first step in tuning the model (line 1 in the algorithm below) is to choose a set of parameters to evaluate. For example, if fitting a Partial Least Squares (PLS) model, the number of PLS components to evaluate must be specified. Once the model and tuning parameter values have been defined, the type of resampling should also be specified. Currently, *k*\-fold cross\-validation (once or repeated), leave\-one\-out cross\-validation and bootstrap (simple estimation or the 632 rule) resampling methods can be used by `train`. After resampling, the process produces a profile of performance measures to guide the user as to which tuning parameter values should be chosen. By default, the function automatically chooses the tuning parameters associated with the best value, although different algorithms can be used (see details below). 5\.2 An Example --------------- The Sonar data are available in the [`mlbench`](http://cran.r-project.org/web/packages/mlbench/index.html) package. Here, we load the data: ``` library(mlbench) data(Sonar) str(Sonar[, 1:10]) ``` ``` ## 'data.frame': 208 obs. of 10 variables: ## $ V1 : num 0.02 0.0453 0.0262 0.01 0.0762 0.0286 0.0317 0.0519 0.0223 0.0164 ... ## $ V2 : num 0.0371 0.0523 0.0582 0.0171 0.0666 0.0453 0.0956 0.0548 0.0375 0.0173 ... ## $ V3 : num 0.0428 0.0843 0.1099 0.0623 0.0481 ... ## $ V4 : num 0.0207 0.0689 0.1083 0.0205 0.0394 ... ## $ V5 : num 0.0954 0.1183 0.0974 0.0205 0.059 ... ## $ V6 : num 0.0986 0.2583 0.228 0.0368 0.0649 ... ## $ V7 : num 0.154 0.216 0.243 0.11 0.121 ... ## $ V8 : num 0.16 0.348 0.377 0.128 0.247 ... ## $ V9 : num 0.3109 0.3337 0.5598 0.0598 0.3564 ... ## $ V10: num 0.211 0.287 0.619 0.126 0.446 ... ``` The function `createDataPartition` can be used to create a stratified random sample of the data into training and test sets: ``` library(caret) set.seed(998) inTraining <- createDataPartition(Sonar$Class, p = .75, list = FALSE) training <- Sonar[ inTraining,] testing <- Sonar[-inTraining,] ``` We will use these data to illustrate functionality on this (and other) pages. 5\.3 Basic Parameter Tuning --------------------------- By default, simple bootstrap resampling is used for line 3 in the algorithm above. Others are available, such as repeated *K*\-fold cross\-validation, leave\-one\-out etc. The function `trainControl` can be used to specify the type of resampling: ``` fitControl <- trainControl(## 10-fold CV method = "repeatedcv", number = 10, ## repeated ten times repeats = 10) ``` More information about `trainControl` is given in [a section below](model-training-and-tuning.html#custom). The first two arguments to `train` are the predictor and outcome data objects, respectively. The third argument, `method`, specifies the type of model (see [`train` Model List](available-models.html) or [`train` Models By Tag](train-models-by-tag.html)).
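Equivalently, the predictor and outcome objects can be passed to `train` separately instead of through a formula. The sketch below is illustrative only (the object names are hypothetical; it assumes the `training` data frame and `fitControl` object created above, with `Class` as the outcome column) and uses the partial least squares model mentioned earlier:

```
## Hedged sketch of the non-formula ("x/y") interface to train():
## the predictors and the outcome are supplied as separate arguments.
predictors <- training[, names(training) != "Class"]
outcome    <- training$Class

set.seed(825)
plsFit <- train(x = predictors, y = outcome,
                method = "pls",
                trControl = fitControl,
                ## tuneLength = 5 evaluates a default grid of PLS components
                ## (typically 1 through 5)
                tuneLength = 5)
```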
To illustrate, we will fit a boosted tree model via the [`gbm`](http://cran.r-project.org/web/packages/gbm/index.html) package. The basic syntax for fitting this model using repeated cross\-validation is shown below: ``` set.seed(825) gbmFit1 <- train(Class ~ ., data = training, method = "gbm", trControl = fitControl, ## This last option is actually one ## for gbm() that passes through verbose = FALSE) gbmFit1 ``` ``` ## Stochastic Gradient Boosting ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## interaction.depth n.trees Accuracy Kappa ## 1 50 0.7935784 0.5797839 ## 1 100 0.8171078 0.6290208 ## 1 150 0.8219608 0.6386184 ## 2 50 0.8041912 0.6027771 ## 2 100 0.8302059 0.6556940 ## 2 150 0.8283627 0.6520181 ## 3 50 0.8110343 0.6170317 ## 3 100 0.8301275 0.6551379 ## 3 150 0.8310343 0.6577252 ## ## Tuning parameter 'shrinkage' was held constant at a value of 0.1 ## ## Tuning parameter 'n.minobsinnode' was held constant at a value of 10 ## Accuracy was used to select the optimal model using the largest value. ## The final values used for the model were n.trees = 150, ## interaction.depth = 3, shrinkage = 0.1 and n.minobsinnode = 10. ``` For a gradient boosting machine (GBM) model, there are four main tuning parameters: * number of iterations, i.e. trees (called `n.trees` in the `gbm` function) * complexity of the tree, called `interaction.depth` * learning rate: how quickly the algorithm adapts, called `shrinkage` * the minimum number of training set samples in a node to commence splitting (`n.minobsinnode`) The default values tested for this model are shown in the first two columns (`shrinkage` and `n.minobsinnode` are not shown because the grid of candidate models all use a single value for these tuning parameters). The column labeled “`Accuracy`” is the overall agreement rate averaged over cross\-validation iterations. The agreement standard deviation is also calculated from the cross\-validation results. The column “`Kappa`” is Cohen’s (unweighted) Kappa statistic averaged across the resampling results. `train` works with specific models (see [`train` Model List](available-models.html) or [`train` Models By Tag](train-models-by-tag.html)). For these models, `train` can automatically create a grid of tuning parameters. By default, if *p* is the number of tuning parameters, the grid size is *3^p*. As another example, regularized discriminant analysis (RDA) models have two parameters (`gamma` and `lambda`), both of which lie between zero and one. The default training grid would produce nine combinations in this two\-dimensional space. There is additional functionality in `train` that is described in the next section. 5\.4 Notes on Reproducibility ----------------------------- Many models utilize random numbers during the phase where parameters are estimated. Also, the resampling indices are chosen using random numbers. There are two main ways to control the randomness in order to assure reproducible results. * There are two approaches to ensuring that the same *resamples* are used between calls to `train`. The first is to use `set.seed` just prior to calling `train`. The first use of random numbers is to create the resampling information. Alternatively, if you would like to use specific splits of the data, the `index` argument of the `trainControl` function can be used.
This is briefly discussed below. * When the models are created *inside of resampling*, the seeds can also be set. While setting the seed prior to calling `train` may guarantee that the same random numbers are used, this is unlikely to be the case when [parallel processing](parallel-processing.html) is used (depending which technology is utilized). To set the model fitting seeds, `trainControl` has an additional argument called `seeds` that can be used. The value for this argument is a list of integer vectors that are used as seeds. The help page for `trainControl` describes the appropriate format for this option. How random numbers are used is highly dependent on the package author. There are rare cases where the underlying model function does not control the random number seed, especially if the computations are conducted in C code. Also, please note that [some packages load random numbers when loaded (directly or via namespace)](https://github.com/topepo/caret/issues/452) and this may affect reproducibility. 5\.5 Customizing the Tuning Process ----------------------------------- There are a few ways to customize the process of selecting tuning/complexity parameters and building the final model. ### 5\.5\.1 Pre\-Processing Options As previously mentioned, `train` can pre\-process the data in various ways prior to model fitting. The function `preProcess` is automatically used. This function can be used for centering and scaling, imputation (see details below), applying the spatial sign transformation and feature extraction via principal component analysis or independent component analysis. To specify what pre\-processing should occur, the `train` function has an argument called `preProcess`. This argument takes a character string of methods that would normally be passed to the `method` argument of the [`preProcess` function](pre-processing.html). Additional options to the `preProcess` function can be passed via the `trainControl` function. These processing steps would be applied during any predictions generated using `predict.train`, `extractPrediction` or `extractProbs` (see details later in this document). The pre\-processing would **not** be applied to predictions that directly use the `object$finalModel` object. For imputation, there are three methods currently implemented: * *k*\-nearest neighbors takes a sample with missing values and finds the *k* closest samples in the training set. The average of the *k* training set values for that predictor is used as a substitute for the original data. When calculating the distances to the training set samples, the predictors used in the calculation are the ones with no missing values for that sample and no missing values in the training set. * another approach is to fit a bagged tree model for each predictor using the training set samples. This is usually a fairly accurate model and can handle missing values. When a predictor for a sample requires imputation, the values for the other predictors are fed through the bagged tree and the prediction is used as the new value. This model can have significant computational cost. * the median of the predictor’s training set values can be used to estimate the missing data. If there are missing values in the training set, PCA and ICA models only use complete samples. ### 5\.5\.2 Alternate Tuning Grids The tuning parameter grid can be specified by the user. The argument `tuneGrid` can take a data frame with columns for each tuning parameter.
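For instance, a grid for the regularized discriminant analysis model discussed above could be built by hand as sketched below (the particular values are illustrative only; as the next paragraph explains, the column names must match the fitting function's tuning parameters):

```
## Illustrative sketch of a user-specified tuning grid: one column per tuning
## parameter, one row per candidate model. The values here are arbitrary.
rdaGrid <- expand.grid(gamma  = c(0, 0.5, 1),
                       lambda = c(0, 0.5, 1))
nrow(rdaGrid)   # 9 candidate models
## This data frame would then be passed to train() via `tuneGrid = rdaGrid`.
```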
The column names should be the same as the fitting function’s arguments. For the previously mentioned RDA example, the names would be `gamma` and `lambda`. `train` will tune the model over each combination of values in the rows. For the boosted tree model, we can fix the learning rate and evaluate more than three values of `n.trees`: ``` gbmGrid <- expand.grid(interaction.depth = c(1, 5, 9), n.trees = (1:30)*50, shrinkage = 0.1, n.minobsinnode = 20) nrow(gbmGrid) set.seed(825) gbmFit2 <- train(Class ~ ., data = training, method = "gbm", trControl = fitControl, verbose = FALSE, ## Now specify the exact models ## to evaluate: tuneGrid = gbmGrid) gbmFit2 ``` ``` ## Stochastic Gradient Boosting ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## interaction.depth n.trees Accuracy Kappa ## 1 50 0.78 0.56 ## 1 100 0.81 0.61 ## 1 150 0.82 0.63 ## 1 200 0.83 0.65 ## 1 250 0.82 0.65 ## 1 300 0.83 0.65 ## : : : : ## 9 1350 0.85 0.69 ## 9 1400 0.85 0.69 ## 9 1450 0.85 0.69 ## 9 1500 0.85 0.69 ## ## Tuning parameter 'shrinkage' was held constant at a value of 0.1 ## ## Tuning parameter 'n.minobsinnode' was held constant at a value of 20 ## Accuracy was used to select the optimal model using the largest value. ## The final values used for the model were n.trees = 1200, ## interaction.depth = 9, shrinkage = 0.1 and n.minobsinnode = 20. ``` Another option is to use a random sample of possible tuning parameter combinations, i.e. “random search”[(pdf)](http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf). This functionality is described on [this page](random-hyperparameter-search.html). To use a random search, use the option `search = "random"` in the call to `trainControl`. In this situation, the `tuneLength` parameter defines the total number of parameter combinations that will be evaluated. ### 5\.5\.3 Plotting the Resampling Profile The `plot` function can be used to examine the relationship between the estimates of performance and the tuning parameters. For example, a simple invocation of the function shows the results for the first performance measure: ``` trellis.par.set(caretTheme()) plot(gbmFit2) ``` Other performance metrics can be shown using the `metric` option: ``` trellis.par.set(caretTheme()) plot(gbmFit2, metric = "Kappa") ``` Other types of plot are also available. See `?plot.train` for more details. The code below shows a heatmap of the results: ``` trellis.par.set(caretTheme()) plot(gbmFit2, metric = "Kappa", plotType = "level", scales = list(x = list(rot = 90))) ``` A `ggplot` method can also be used: ``` ggplot(gbmFit2) ``` There are also plot functions that show more detailed representations of the resampled estimates. See `?xyplot.train` for more details. From these plots, a different set of tuning parameters may be desired. To change the final values without starting the whole process again, the `update.train` function can be used to refit the final model. See `?update.train`. ### 5\.5\.4 The `trainControl` Function The function `trainControl` generates parameters that further control how models are created, with possible values: * `method`: The resampling method: `"boot"`, `"cv"`, `"LOOCV"`, `"LGOCV"`, `"repeatedcv"`, `"timeslice"`, `"none"` and `"oob"`.
The last value, out\-of\-bag estimates, can only be used by random forest, bagged trees, bagged earth, bagged flexible discriminant analysis, or conditional tree forest models. GBM models are not included (the [`gbm`](http://cran.r-project.org/web/packages/gbm/index.html) package maintainer has indicated that it would not be a good idea to choose tuning parameter values based on the model OOB error estimates with boosted trees). Also, for leave\-one\-out cross\-validation, no uncertainty estimates are given for the resampled performance measures. * `number` and `repeats`: `number` controls the number of folds in *K*\-fold cross\-validation or the number of resampling iterations for bootstrapping and leave\-group\-out cross\-validation. `repeats` applies only to repeated *K*\-fold cross\-validation. Suppose that `method = "repeatedcv"`, `number = 10` and `repeats = 3`, then three separate 10\-fold cross\-validations are used as the resampling scheme. * `verboseIter`: A logical for printing a training log. * `returnData`: A logical for saving the data into a slot called `trainingData`. * `p`: For leave\-group\-out cross\-validation: the training percentage * For `method = "timeslice"`, `trainControl` has options `initialWindow`, `horizon` and `fixedWindow` that govern how [cross\-validation can be used for time series data.](data-splitting.html) * `classProbs`: a logical value determining whether class probabilities should be computed for held\-out samples during resampling. * `index` and `indexOut`: optional lists with elements for each resampling iteration. Each list element gives the sample rows used for training at that iteration (`index`) or the rows that should be held out (`indexOut`). When these values are not specified, `train` will generate them. * `summaryFunction`: a function to compute alternate performance summaries. * `selectionFunction`: a function to choose the optimal tuning parameters. See `?best` for details and examples. * `PCAthresh`, `ICAcomp` and `k`: these are all options to pass to the `preProcess` function (when used). * `returnResamp`: a character string containing one of the following values: `"all"`, `"final"` or `"none"`. This specifies how much of the resampled performance measures to save. * `allowParallel`: a logical that governs whether `train` should [use parallel processing (if available).](parallel-processing.html) There are several other options not discussed here. ### 5\.5\.5 Alternate Performance Metrics The user can change the metric used to determine the best settings. By default, RMSE, *R*², and the mean absolute error (MAE) are computed for regression while accuracy and Kappa are computed for classification. Also by default, the parameter values are chosen using RMSE and accuracy, respectively for regression and classification. The `metric` argument of the `train` function allows the user to control which optimality criterion is used. For example, in problems where there are a low percentage of samples in one class, using `metric = "Kappa"` can improve the quality of the final model. If none of these metrics are satisfactory, the user can also compute custom performance metrics. The `trainControl` function has an argument called `summaryFunction` that specifies a function for computing performance. The function should have these arguments: * `data` is a reference for a data frame or matrix with columns called `obs` and `pred` for the observed and predicted outcome values (either numeric data for regression or character values for classification). Currently, class probabilities are not passed to the function.
The values in data are the held\-out predictions (and their associated reference values) for a single combination of tuning parameters. If the `classProbs` argument of the `trainControl` object is set to `TRUE`, additional columns in `data` will be present that contain the class probabilities. The names of these columns are the same as the class levels. Also, if `weights` were specified in the call to `train`, a column called `weights` will also be in the data set. Additionally, if the `recipe` method for `train` was used (see [this section of documentation](topepo.github.io/caret/using-recipes-with-train)), other variables not used in the model will also be included. This can be accomplished by adding a role in the recipe of `"performance var"`. An example is given in the recipe section of this site. * `lev` is a character vector that has the outcome factor levels taken from the training data. For regression, a value of `NULL` is passed into the function. * `model` is a character string for the model being used (i.e. the value passed to the `method` argument of `train`). The output of the function should be a vector of numeric summary metrics with non\-null names (a minimal sketch of such a function is given at the end of this section). By default, `train` evaluates classification models in terms of the predicted classes. Optionally, class probabilities can also be used to measure performance. To obtain predicted class probabilities within the resampling process, the argument `classProbs` in `trainControl` must be set to `TRUE`. This merges columns of probabilities into the predictions generated from each resample (there is a column per class and the column names are the class names). As shown in the last section, custom functions can be used to calculate performance scores that are averaged over the resamples. Another built\-in function, `twoClassSummary`, will compute the sensitivity, specificity and area under the ROC curve: ``` head(twoClassSummary) ``` ``` ## ## 1 function (data, lev = NULL, model = NULL) ## 2 { ## 3 lvls <- levels(data$obs) ## 4 if (length(lvls) > 2) ## 5 stop(paste("Your outcome has", length(lvls), "levels. The twoClassSummary() function isn't appropriate.")) ## 6 requireNamespaceQuietStop("ModelMetrics") ``` To rebuild the boosted tree model using this criterion, we can see the relationship between the tuning parameters and the area under the ROC curve using the following code: ``` fitControl <- trainControl(method = "repeatedcv", number = 10, repeats = 10, ## Estimate class probabilities classProbs = TRUE, ## Evaluate performance using ## the following function summaryFunction = twoClassSummary) set.seed(825) gbmFit3 <- train(Class ~ ., data = training, method = "gbm", trControl = fitControl, verbose = FALSE, tuneGrid = gbmGrid, ## Specify which metric to optimize metric = "ROC") gbmFit3 ``` ``` ## Stochastic Gradient Boosting ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ...
## Resampling results across tuning parameters: ## ## interaction.depth n.trees ROC Sens Spec ## 1 50 0.86 0.86 0.69 ## 1 100 0.88 0.85 0.75 ## 1 150 0.89 0.86 0.77 ## 1 200 0.90 0.87 0.78 ## 1 250 0.90 0.86 0.78 ## 1 300 0.90 0.87 0.78 ## : : : : : ## 9 1350 0.92 0.88 0.81 ## 9 1400 0.92 0.88 0.80 ## 9 1450 0.92 0.88 0.81 ## 9 1500 0.92 0.88 0.80 ## ## Tuning parameter 'shrinkage' was held constant at a value of 0.1 ## ## Tuning parameter 'n.minobsinnode' was held constant at a value of 20 ## ROC was used to select the optimal model using the largest value. ## The final values used for the model were n.trees = 1450, ## interaction.depth = 5, shrinkage = 0.1 and n.minobsinnode = 20. ``` In this case, the average area under the ROC curve associated with the optimal tuning parameters was 0\.922 across the 100 resamples.
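To close the loop on the `summaryFunction` interface described above, here is a minimal, hypothetical example written against that argument structure (`data`, `lev` and `model` in, a named numeric vector out). It is only a sketch of a custom function, not one of the package's built\-in summaries:

```
## A minimal custom summary function following the interface described above.
## The function name and the statistics it reports are illustrative only.
simpleSummary <- function(data, lev = NULL, model = NULL) {
  ## overall agreement rate between observed and predicted classes
  acc <- mean(data$obs == data$pred)
  ## sensitivity for the first factor level (classification only)
  sens <- mean(data$pred[data$obs == lev[1]] == lev[1])
  c(Accuracy = acc, Sens1 = sens)
}

## It would be supplied to the resampling process via trainControl(), e.g.
## trainControl(method = "repeatedcv", number = 10, repeats = 10,
##              summaryFunction = simpleSummary)
## with the chosen statistic requested through metric = "Accuracy" in train().
```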
## The final values used for the model were n.trees = 1450, ## interaction.depth = 5, shrinkage = 0.1 and n.minobsinnode = 20. ``` In this case, the average area under the ROC curve associated with the optimal tuning parameters was 0\.922 across the 100 resamples. 5\.6 Choosing the Final Model ----------------------------- Another method for customizing the tuning process is to modify the algorithm that is used to select the “best” parameter values, given the performance numbers. By default, the `train` function chooses the model with the largest performance value (or smallest, for mean squared error in regression models). Other schemes for selecting model can be used. [Breiman et al (1984\)](http://books.google.com/books/about/Classification_and_Regression_Trees.html?id=JwQx-WOmSyQC) suggested the “one standard error rule” for simple tree\-based models. In this case, the model with the best performance value is identified and, using resampling, we can estimate the standard error of performance. The final model used was the simplest model within one standard error of the (empirically) best model. With simple trees this makes sense, since these models will start to over\-fit as they become more and more specific to the training data. `train` allows the user to specify alternate rules for selecting the final model. The argument `selectionFunction` can be used to supply a function to algorithmically determine the final model. There are three existing functions in the package: `best` is chooses the largest/smallest value, `oneSE` attempts to capture the spirit of [Breiman et al (1984\)](http://books.google.com/books/about/Classification_and_Regression_Trees.html?id=JwQx-WOmSyQC) and `tolerance` selects the least complex model within some percent tolerance of the best value. See `?best` for more details. User\-defined functions can be used, as long as they have the following arguments: * `x` is a data frame containing the tune parameters and their associated performance metrics. Each row corresponds to a different tuning parameter combination. * `metric` a character string indicating which performance metric should be optimized (this is passed in directly from the `metric` argument of `train`. * `maximize` is a single logical value indicating whether larger values of the performance metric are better (this is also directly passed from the call to `train`). The function should output a single integer indicating which row in `x` is chosen. As an example, if we chose the previous boosted tree model on the basis of overall accuracy, we would choose: n.trees \= 1450, interaction.depth \= 5, shrinkage \= 0\.1, n.minobsinnode \= 20\. However, the scale in this plots is fairly tight, with accuracy values ranging from 0\.863 to 0\.922\. A less complex model (e.g. fewer, more shallow trees) might also yield acceptable accuracy. The tolerance function could be used to find a less complex model based on (*x*\-*x*best)/*x*bestx 100, which is the percent difference. For example, to select parameter values based on a 2% loss of performance: ``` whichTwoPct <- tolerance(gbmFit3$results, metric = "ROC", tol = 2, maximize = TRUE) cat("best model within 2 pct of best:\n") ``` ``` ## best model within 2 pct of best: ``` ``` gbmFit3$results[whichTwoPct,1:6] ``` ``` ## shrinkage interaction.depth n.minobsinnode n.trees ROC Sens ## 32 0.1 5 20 100 0.9139707 0.8645833 ``` This indicates that we can get a less complex model with an area under the ROC curve of 0\.914 (compared to the “pick the best” value of 0\.922\). 
The main issue with these functions is related to ordering the models from simplest to complex. In some cases, this is easy (e.g. simple trees, partial least squares), but in cases such as this model, the ordering of models is subjective. For example, is a boosted tree model using 100 iterations and a tree depth of 2 more complex than one with 50 iterations and a depth of 8? The package makes some choices regarding the orderings. In the case of boosted trees, the package assumes that increasing the number of iterations adds complexity at a faster rate than increasing the tree depth, so models are ordered on the number of iterations then ordered with depth. See `?best` for more examples for specific models. 5\.7 Extracting Predictions and Class Probabilities --------------------------------------------------- As previously mentioned, objects produced by the `train` function contain the “optimized” model in the `finalModel` sub\-object. Predictions can be made from these objects as usual. In some cases, such as `pls` or `gbm` objects, additional parameters from the optimized fit may need to be specified. In these cases, the `train` objects uses the results of the parameter optimization to predict new samples. For example, if predictions were created using `predict.gbm`, the user would have to specify the number of trees directly (there is no default). Also, for binary classification, the predictions from this function take the form of the probability of one of the classes, so extra steps are required to convert this to a factor vector. `predict.train` automatically handles these details for this (and for other models). Also, there are very few standard syntaxes for model predictions in R. For example, to get class probabilities, many `predict` methods have an argument called `type` that is used to specify whether the classes or probabilities should be generated. Different packages use different values of `type`, such as `"prob"`, `"posterior"`, `"response"`, `"probability"` or `"raw"`. In other cases, completely different syntax is used. For `predict.train`, the type options are standardized to be `"class"` and `"prob"` (the underlying code matches these to the appropriate choices for each model. For example: ``` predict(gbmFit3, newdata = head(testing)) ``` ``` ## [1] R M R M R M ## Levels: M R ``` ``` predict(gbmFit3, newdata = head(testing), type = "prob") ``` ``` ## M R ## 1 3.215213e-02 9.678479e-01 ## 2 1.000000e+00 3.965815e-08 ## 3 6.996088e-13 1.000000e+00 ## 4 9.070652e-01 9.293483e-02 ## 5 2.029754e-03 9.979702e-01 ## 6 9.999662e-01 3.377548e-05 ``` 5\.8 Exploring and Comparing Resampling Distributions ----------------------------------------------------- ### 5\.8\.1 Within\-Model There are several [`lattice`](http://cran.r-project.org/web/packages/lattice/index.html) functions than can be used to explore relationships between tuning parameters and the resampling results for a specific model: * `xyplot` and `stripplot` can be used to plot resampling statistics against (numeric) tuning parameters. * `histogram` and `densityplot` can also be used to look at distributions of the tuning parameters across tuning parameters. For example, the following statements create a density plot: ``` trellis.par.set(caretTheme()) densityplot(gbmFit3, pch = "|") ``` Note that if you are interested in plotting the resampling results across multiple tuning parameters, the option `resamples = "all"` should be used in the control object. 
### 5\.8\.2 Between\-Models The [`caret`](http://cran.r-project.org/web/packages/caret/index.html) package also includes functions to characterize the differences between models (generated using `train`, `sbf` or `rfe`) via their resampling distributions. These functions are based on the work of [Hothorn et al. (2005\)](https://homepage.boku.ac.at/leisch/papers/Hothorn+Leisch+Zeileis-2005.pdf) and [Eugster et al (2008\)](http://epub.ub.uni-muenchen.de/10604/1/tr56.pdf). First, a support vector machine model is fit to the Sonar data. The data are centered and scaled using the `preProc` argument. Note that the same random number seed is set prior to the model that is identical to the seed used for the boosted tree model. This ensures that the same resampling sets are used, which will come in handy when we compare the resampling profiles between models. ``` set.seed(825) svmFit <- train(Class ~ ., data = training, method = "svmRadial", trControl = fitControl, preProc = c("center", "scale"), tuneLength = 8, metric = "ROC") svmFit ``` ``` ## Support Vector Machines with Radial Basis Function Kernel ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## Pre-processing: centered (60), scaled (60) ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## C ROC Sens Spec ## 0.25 0.8438318 0.7373611 0.7230357 ## 0.50 0.8714459 0.8083333 0.7316071 ## 1.00 0.8921354 0.8031944 0.7653571 ## 2.00 0.9116171 0.8358333 0.7925000 ## 4.00 0.9298934 0.8525000 0.8201786 ## 8.00 0.9318899 0.8684722 0.8217857 ## 16.00 0.9339658 0.8730556 0.8205357 ## 32.00 0.9339658 0.8776389 0.8276786 ## ## Tuning parameter 'sigma' was held constant at a value of 0.01181293 ## ROC was used to select the optimal model using the largest value. ## The final values used for the model were sigma = 0.01181293 and C = 16. ``` Also, a regularized discriminant analysis model was fit. ``` set.seed(825) rdaFit <- train(Class ~ ., data = training, method = "rda", trControl = fitControl, tuneLength = 4, metric = "ROC") rdaFit ``` ``` ## Regularized Discriminant Analysis ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## gamma lambda ROC Sens Spec ## 0.0000000 0.0000000 0.6426029 0.9311111 0.3364286 ## 0.0000000 0.3333333 0.8543564 0.8076389 0.7585714 ## 0.0000000 0.6666667 0.8596577 0.8083333 0.7766071 ## 0.0000000 1.0000000 0.7950670 0.7677778 0.6925000 ## 0.3333333 0.0000000 0.8509276 0.8502778 0.6914286 ## 0.3333333 0.3333333 0.8650372 0.8676389 0.6866071 ## 0.3333333 0.6666667 0.8698115 0.8604167 0.6941071 ## 0.3333333 1.0000000 0.8336930 0.7597222 0.7542857 ## 0.6666667 0.0000000 0.8600868 0.8756944 0.6482143 ## 0.6666667 0.3333333 0.8692981 0.8794444 0.6446429 ## 0.6666667 0.6666667 0.8678547 0.8355556 0.6892857 ## 0.6666667 1.0000000 0.8277133 0.7445833 0.7448214 ## 1.0000000 0.0000000 0.7059797 0.6888889 0.6032143 ## 1.0000000 0.3333333 0.7098313 0.6830556 0.6101786 ## 1.0000000 0.6666667 0.7129489 0.6672222 0.6173214 ## 1.0000000 1.0000000 0.7193031 0.6626389 0.6296429 ## ## ROC was used to select the optimal model using the largest value. ## The final values used for the model were gamma = 0.3333333 and lambda ## = 0.6666667. 
``` Given these models, can we make statistical statements about their performance differences? To do this, we first collect the resampling results using `resamples`. ``` resamps <- resamples(list(GBM = gbmFit3, SVM = svmFit, RDA = rdaFit)) resamps ``` ``` ## ## Call: ## resamples.default(x = list(GBM = gbmFit3, SVM = svmFit, RDA = rdaFit)) ## ## Models: GBM, SVM, RDA ## Number of resamples: 100 ## Performance metrics: ROC, Sens, Spec ## Time estimates for: everything, final model fit ``` ``` summary(resamps) ``` ``` ## ## Call: ## summary.resamples(object = resamps) ## ## Models: GBM, SVM, RDA ## Number of resamples: 100 ## ## ROC ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## GBM 0.6964286 0.874504 0.9375000 0.9216270 0.9821429 1 0 ## SVM 0.7321429 0.905878 0.9464286 0.9339658 0.9821429 1 0 ## RDA 0.5625000 0.812500 0.8750000 0.8698115 0.9392361 1 0 ## ## Sens ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## GBM 0.5555556 0.7777778 0.8750000 0.8776389 1 1 0 ## SVM 0.5000000 0.7777778 0.8888889 0.8730556 1 1 0 ## RDA 0.4444444 0.7777778 0.8750000 0.8604167 1 1 0 ## ## Spec ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## GBM 0.4285714 0.7142857 0.8571429 0.8133929 1.0000000 1 0 ## SVM 0.4285714 0.7142857 0.8571429 0.8205357 0.9062500 1 0 ## RDA 0.1428571 0.5714286 0.7142857 0.6941071 0.8571429 1 0 ``` Note that, in this case, the option `resamples = "final"` should be user\-defined in the control objects. There are several lattice plot methods that can be used to visualize the resampling distributions: density plots, box\-whisker plots, scatterplot matrices and scatterplots of summary statistics. For example: ``` theme1 <- trellis.par.get() theme1$plot.symbol$col = rgb(.2, .2, .2, .4) theme1$plot.symbol$pch = 16 theme1$plot.line$col = rgb(1, 0, 0, .7) theme1$plot.line$lwd <- 2 trellis.par.set(theme1) bwplot(resamps, layout = c(3, 1)) ``` ``` trellis.par.set(caretTheme()) dotplot(resamps, metric = "ROC") ``` ``` trellis.par.set(theme1) xyplot(resamps, what = "BlandAltman") ``` ``` splom(resamps) ``` Other visualizations are availible in `densityplot.resamples` and `parallel.resamples` Since models are fit on the same versions of the training data, it makes sense to make inferences on the differences between models. In this way we reduce the within\-resample correlation that may exist. We can compute the differences, then use a simple *t*\-test to evaluate the null hypothesis that there is no difference between models. 
``` difValues <- diff(resamps) difValues ``` ``` ## ## Call: ## diff.resamples(x = resamps) ## ## Models: GBM, SVM, RDA ## Metrics: ROC, Sens, Spec ## Number of differences: 3 ## p-value adjustment: bonferroni ``` ``` summary(difValues) ``` ``` ## ## Call: ## summary.diff.resamples(object = difValues) ## ## p-value adjustment: bonferroni ## Upper diagonal: estimates of the difference ## Lower diagonal: p-value for H0: difference = 0 ## ## ROC ## GBM SVM RDA ## GBM -0.01234 0.05182 ## SVM 0.3388 0.06415 ## RDA 5.988e-07 2.638e-10 ## ## Sens ## GBM SVM RDA ## GBM 0.004583 0.017222 ## SVM 1.0000 0.012639 ## RDA 0.5187 1.0000 ## ## Spec ## GBM SVM RDA ## GBM -0.007143 0.119286 ## SVM 1 0.126429 ## RDA 5.300e-07 1.921e-10 ``` ``` trellis.par.set(theme1) bwplot(difValues, layout = c(3, 1)) ``` ``` trellis.par.set(caretTheme()) dotplot(difValues) ``` ### 5\.8\.1 Within\-Model There are several [`lattice`](http://cran.r-project.org/web/packages/lattice/index.html) functions than can be used to explore relationships between tuning parameters and the resampling results for a specific model: * `xyplot` and `stripplot` can be used to plot resampling statistics against (numeric) tuning parameters. * `histogram` and `densityplot` can also be used to look at distributions of the tuning parameters across tuning parameters. For example, the following statements create a density plot: ``` trellis.par.set(caretTheme()) densityplot(gbmFit3, pch = "|") ``` Note that if you are interested in plotting the resampling results across multiple tuning parameters, the option `resamples = "all"` should be used in the control object. ### 5\.8\.2 Between\-Models The [`caret`](http://cran.r-project.org/web/packages/caret/index.html) package also includes functions to characterize the differences between models (generated using `train`, `sbf` or `rfe`) via their resampling distributions. These functions are based on the work of [Hothorn et al. (2005\)](https://homepage.boku.ac.at/leisch/papers/Hothorn+Leisch+Zeileis-2005.pdf) and [Eugster et al (2008\)](http://epub.ub.uni-muenchen.de/10604/1/tr56.pdf). First, a support vector machine model is fit to the Sonar data. The data are centered and scaled using the `preProc` argument. Note that the same random number seed is set prior to the model that is identical to the seed used for the boosted tree model. This ensures that the same resampling sets are used, which will come in handy when we compare the resampling profiles between models. ``` set.seed(825) svmFit <- train(Class ~ ., data = training, method = "svmRadial", trControl = fitControl, preProc = c("center", "scale"), tuneLength = 8, metric = "ROC") svmFit ``` ``` ## Support Vector Machines with Radial Basis Function Kernel ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## Pre-processing: centered (60), scaled (60) ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## C ROC Sens Spec ## 0.25 0.8438318 0.7373611 0.7230357 ## 0.50 0.8714459 0.8083333 0.7316071 ## 1.00 0.8921354 0.8031944 0.7653571 ## 2.00 0.9116171 0.8358333 0.7925000 ## 4.00 0.9298934 0.8525000 0.8201786 ## 8.00 0.9318899 0.8684722 0.8217857 ## 16.00 0.9339658 0.8730556 0.8205357 ## 32.00 0.9339658 0.8776389 0.8276786 ## ## Tuning parameter 'sigma' was held constant at a value of 0.01181293 ## ROC was used to select the optimal model using the largest value. 
## The final values used for the model were sigma = 0.01181293 and C = 16. ``` Also, a regularized discriminant analysis model was fit. ``` set.seed(825) rdaFit <- train(Class ~ ., data = training, method = "rda", trControl = fitControl, tuneLength = 4, metric = "ROC") rdaFit ``` ``` ## Regularized Discriminant Analysis ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## gamma lambda ROC Sens Spec ## 0.0000000 0.0000000 0.6426029 0.9311111 0.3364286 ## 0.0000000 0.3333333 0.8543564 0.8076389 0.7585714 ## 0.0000000 0.6666667 0.8596577 0.8083333 0.7766071 ## 0.0000000 1.0000000 0.7950670 0.7677778 0.6925000 ## 0.3333333 0.0000000 0.8509276 0.8502778 0.6914286 ## 0.3333333 0.3333333 0.8650372 0.8676389 0.6866071 ## 0.3333333 0.6666667 0.8698115 0.8604167 0.6941071 ## 0.3333333 1.0000000 0.8336930 0.7597222 0.7542857 ## 0.6666667 0.0000000 0.8600868 0.8756944 0.6482143 ## 0.6666667 0.3333333 0.8692981 0.8794444 0.6446429 ## 0.6666667 0.6666667 0.8678547 0.8355556 0.6892857 ## 0.6666667 1.0000000 0.8277133 0.7445833 0.7448214 ## 1.0000000 0.0000000 0.7059797 0.6888889 0.6032143 ## 1.0000000 0.3333333 0.7098313 0.6830556 0.6101786 ## 1.0000000 0.6666667 0.7129489 0.6672222 0.6173214 ## 1.0000000 1.0000000 0.7193031 0.6626389 0.6296429 ## ## ROC was used to select the optimal model using the largest value. ## The final values used for the model were gamma = 0.3333333 and lambda ## = 0.6666667. ``` Given these models, can we make statistical statements about their performance differences? To do this, we first collect the resampling results using `resamples`. ``` resamps <- resamples(list(GBM = gbmFit3, SVM = svmFit, RDA = rdaFit)) resamps ``` ``` ## ## Call: ## resamples.default(x = list(GBM = gbmFit3, SVM = svmFit, RDA = rdaFit)) ## ## Models: GBM, SVM, RDA ## Number of resamples: 100 ## Performance metrics: ROC, Sens, Spec ## Time estimates for: everything, final model fit ``` ``` summary(resamps) ``` ``` ## ## Call: ## summary.resamples(object = resamps) ## ## Models: GBM, SVM, RDA ## Number of resamples: 100 ## ## ROC ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## GBM 0.6964286 0.874504 0.9375000 0.9216270 0.9821429 1 0 ## SVM 0.7321429 0.905878 0.9464286 0.9339658 0.9821429 1 0 ## RDA 0.5625000 0.812500 0.8750000 0.8698115 0.9392361 1 0 ## ## Sens ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## GBM 0.5555556 0.7777778 0.8750000 0.8776389 1 1 0 ## SVM 0.5000000 0.7777778 0.8888889 0.8730556 1 1 0 ## RDA 0.4444444 0.7777778 0.8750000 0.8604167 1 1 0 ## ## Spec ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## GBM 0.4285714 0.7142857 0.8571429 0.8133929 1.0000000 1 0 ## SVM 0.4285714 0.7142857 0.8571429 0.8205357 0.9062500 1 0 ## RDA 0.1428571 0.5714286 0.7142857 0.6941071 0.8571429 1 0 ``` Note that, in this case, the option `resamples = "final"` should be user\-defined in the control objects. There are several lattice plot methods that can be used to visualize the resampling distributions: density plots, box\-whisker plots, scatterplot matrices and scatterplots of summary statistics. 
For example: ``` theme1 <- trellis.par.get() theme1$plot.symbol$col = rgb(.2, .2, .2, .4) theme1$plot.symbol$pch = 16 theme1$plot.line$col = rgb(1, 0, 0, .7) theme1$plot.line$lwd <- 2 trellis.par.set(theme1) bwplot(resamps, layout = c(3, 1)) ``` ``` trellis.par.set(caretTheme()) dotplot(resamps, metric = "ROC") ``` ``` trellis.par.set(theme1) xyplot(resamps, what = "BlandAltman") ``` ``` splom(resamps) ``` Other visualizations are available in `densityplot.resamples` and `parallel.resamples`. Since models are fit on the same versions of the training data, it makes sense to make inferences on the differences between models. In this way we reduce the within\-resample correlation that may exist. We can compute the differences, then use a simple *t*\-test to evaluate the null hypothesis that there is no difference between models. ``` difValues <- diff(resamps) difValues ``` ``` ## ## Call: ## diff.resamples(x = resamps) ## ## Models: GBM, SVM, RDA ## Metrics: ROC, Sens, Spec ## Number of differences: 3 ## p-value adjustment: bonferroni ``` ``` summary(difValues) ``` ``` ## ## Call: ## summary.diff.resamples(object = difValues) ## ## p-value adjustment: bonferroni ## Upper diagonal: estimates of the difference ## Lower diagonal: p-value for H0: difference = 0 ## ## ROC ## GBM SVM RDA ## GBM -0.01234 0.05182 ## SVM 0.3388 0.06415 ## RDA 5.988e-07 2.638e-10 ## ## Sens ## GBM SVM RDA ## GBM 0.004583 0.017222 ## SVM 1.0000 0.012639 ## RDA 0.5187 1.0000 ## ## Spec ## GBM SVM RDA ## GBM -0.007143 0.119286 ## SVM 1 0.126429 ## RDA 5.300e-07 1.921e-10 ``` ``` trellis.par.set(theme1) bwplot(difValues, layout = c(3, 1)) ``` ``` trellis.par.set(caretTheme()) dotplot(difValues) ``` 5\.9 Fitting Models Without Parameter Tuning -------------------------------------------- In cases where the model tuning values are known, `train` can be used to fit the model to the entire training set without any resampling or parameter tuning. To do this, use the `method = "none"` option in `trainControl`. For example: ``` fitControl <- trainControl(method = "none", classProbs = TRUE) set.seed(825) gbmFit4 <- train(Class ~ ., data = training, method = "gbm", trControl = fitControl, verbose = FALSE, ## Only a single model can be passed to the ## function when no resampling is used: tuneGrid = data.frame(interaction.depth = 4, n.trees = 100, shrinkage = .1, n.minobsinnode = 20), metric = "ROC") gbmFit4 ``` ``` ## Stochastic Gradient Boosting ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: None ``` Note that `plot.train`, `resamples`, `confusionMatrix.train` and several other functions will not work with this object, but `predict.train` and others will: ``` predict(gbmFit4, newdata = head(testing)) ``` ``` ## [1] R M R R M M ## Levels: M R ``` ``` predict(gbmFit4, newdata = head(testing), type = "prob") ``` ``` ## M R ## 1 0.264671996 0.73532800 ## 2 0.960445979 0.03955402 ## 3 0.005731862 0.99426814 ## 4 0.298628996 0.70137100 ## 5 0.503935367 0.49606463 ## 6 0.813716635 0.18628336 ```
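Since `gbmFit4` was fit without resampling, the diagnostics shown earlier in this chapter are unavailable for it, but the held-out data can still give a rough sense of its performance. Below is a minimal sketch, assuming the `testing` data frame created by the earlier data split is still in the workspace; it simply cross-tabulates the test set predictions against the observed classes with `confusionMatrix`.

```
## Rough test-set check of the untuned model (a sketch, not part of the
## original analysis); assumes `testing` from the earlier split exists.
testPred <- predict(gbmFit4, newdata = testing)
confusionMatrix(data = testPred, reference = testing$Class, positive = "M")
```

The resulting accuracy, sensitivity and specificity can be compared informally with the resampled estimates reported for the tuned models above, keeping in mind that a single test set gives a noisier picture than repeated cross-validation.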
Machine Learning
topepo.github.io
https://topepo.github.io/caret/train-models-by-tag.html
7 `train` Models By Tag ======================= The following is a basic list of model types or relevant characteristics. The entries in these lists are arguable. For example: random forests theoretically use feature selection but effectively may not, support vector machines use L2 regularization, etc. Contents * [Accepts Case Weights](train-models-by-tag.html#Accepts_Case_Weights) * [Bagging](train-models-by-tag.html#Bagging) * [Bayesian Model](train-models-by-tag.html#Bayesian_Model) * [Binary Predictors Only](train-models-by-tag.html#Binary_Predictors_Only) * [Boosting](train-models-by-tag.html#Boosting) * [Categorical Predictors Only](train-models-by-tag.html#Categorical_Predictors_Only) * [Cost Sensitive Learning](train-models-by-tag.html#Cost_Sensitive_Learning) * [Discriminant Analysis](train-models-by-tag.html#Discriminant_Analysis) * [Distance Weighted Discrimination](train-models-by-tag.html#Distance_Weighted_Discrimination) * [Ensemble Model](train-models-by-tag.html#Ensemble_Model) * [Feature Extraction](train-models-by-tag.html#Feature_Extraction) * [Feature Selection Wrapper](train-models-by-tag.html#Feature_Selection_Wrapper) * [Gaussian Process](train-models-by-tag.html#Gaussian_Process) * [Generalized Additive Model](train-models-by-tag.html#Generalized_Additive_Model) * [Generalized Linear Model](train-models-by-tag.html#Generalized_Linear_Model) * [Handle Missing Predictor Data](train-models-by-tag.html#Handle_Missing_Predictor_Data) * [Implicit Feature Selection](train-models-by-tag.html#Implicit_Feature_Selection) * [Kernel Method](train-models-by-tag.html#Kernel_Method) * [L1 Regularization](train-models-by-tag.html#L1_Regularization) * [L2 Regularization](train-models-by-tag.html#L2_Regularization) * [Linear Classifier](train-models-by-tag.html#Linear_Classifier) * [Linear Regression](train-models-by-tag.html#Linear_Regression) * [Logic Regression](train-models-by-tag.html#Logic_Regression) * [Logistic Regression](train-models-by-tag.html#Logistic_Regression) * [Mixture Model](train-models-by-tag.html#Mixture_Model) * [Model Tree](train-models-by-tag.html#Model_Tree) * [Multivariate Adaptive Regression Splines](train-models-by-tag.html#Multivariate_Adaptive_Regression_Splines) * [Neural Network](train-models-by-tag.html#Neural_Network) * [Oblique Tree](train-models-by-tag.html#Oblique_Tree) * [Ordinal Outcomes](train-models-by-tag.html#Ordinal_Outcomes) * [Partial Least Squares](train-models-by-tag.html#Partial_Least_Squares) * [Patient Rule Induction Method](train-models-by-tag.html#Patient_Rule_Induction_Method) * [Polynomial Model](train-models-by-tag.html#Polynomial_Model) * [Prototype Models](train-models-by-tag.html#Prototype_Models) * [Quantile Regression](train-models-by-tag.html#Quantile_Regression) * [Radial Basis Function](train-models-by-tag.html#Radial_Basis_Function) * [Random Forest](train-models-by-tag.html#Random_Forest) * [Regularization](train-models-by-tag.html#Regularization) * [Relevance Vector Machines](train-models-by-tag.html#Relevance_Vector_Machines) * [Ridge Regression](train-models-by-tag.html#Ridge_Regression) * [Robust Methods](train-models-by-tag.html#Robust_Methods) * [Robust Model](train-models-by-tag.html#Robust_Model) * [ROC Curves](train-models-by-tag.html#ROC_Curves) * [Rule\-Based Model](train-models-by-tag.html#Rule_Based_Model) * [Self\-Organising Maps](train-models-by-tag.html#Self_Organising_Maps) * [String Kernel](train-models-by-tag.html#String_Kernel) * [Support Vector 
Machines](train-models-by-tag.html#Support_Vector_Machines) * [Supports Class Probabilities](train-models-by-tag.html#Supports_Class_Probabilities) * [Text Mining](train-models-by-tag.html#Text_Mining) * [Tree\-Based Model](train-models-by-tag.html#Tree_Based_Model) * [Two Class Only](train-models-by-tag.html#Two_Class_Only) ### 7\.0\.1 Accepts Case Weights (back to [contents](train-models-by-tag.html#top)) **Adjacent Categories Probability Model for Ordinal Data** ``` method = 'vglmAdjCat' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Bagged CART** ``` method = 'treebag' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `ipred`, `plyr`, `e1071` A model\-specific variable importance metric is available. **Bagged Flexible Discriminant Analysis** ``` method = 'bagFDA' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS** ``` method = 'bagEarth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS using gCV Pruning** ``` method = 'bagEarthGCV' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bayesian Generalized Linear Model** ``` method = 'bayesglm' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `arm` **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available. Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. 
**Boosted Tree** ``` method = 'blackboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `party`, `mboost`, `plyr`, `partykit` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart' ``` Type: Regression, Classification Tuning parameters: * `cp` (Complexity Parameter) Required packages: `rpart` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart1SE' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `rpart` A model\-specific variable importance metric is available. Notes: This CART model replicates the same process used by the `rpart` function where the model complexity is determined using the one\-standard error method. This procedure is replicated inside of the resampling done by `train` so that an external resampling estimate can be obtained. **CART** ``` method = 'rpart2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) Required packages: `rpart` A model\-specific variable importance metric is available. **CART or Ordinal Responses** ``` method = 'rpartScore' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `split` (Split Function) * `prune` (Pruning Measure) Required packages: `rpartScore`, `plyr` A model\-specific variable importance metric is available. **CHi\-squared Automated Interaction Detection** ``` method = 'chaid' ``` Type: Classification Tuning parameters: * `alpha2` (Merging Threshold) * `alpha3` (Splitting former Merged Threshold) * `alpha4` ( Splitting former Merged Threshold) Required packages: `CHAID` **Conditional Inference Random Forest** ``` method = 'cforest' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `party` A model\-specific variable importance metric is available. **Conditional Inference Tree** ``` method = 'ctree' ``` Type: Classification, Regression Tuning parameters: * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Conditional Inference Tree** ``` method = 'ctree2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Continuation Ratio Model for Ordinal Data** ``` method = 'vglmContRatio' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. 
**Cost\-Sensitive CART** ``` method = 'rpartCost' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `Cost` (Cost) Required packages: `rpart`, `plyr` **Cumulative Probability Model for Ordinal Data** ``` method = 'vglmCumulative' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **eXtreme Gradient Boosting** ``` method = 'xgbDART' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `subsample` (Subsample Percentage) * `colsample_bytree` (Subsample Ratio of Columns) * `rate_drop` (Fraction of Trees Dropped) * `skip_drop` (Prob. of Skipping Drop\-out) * `min_child_weight` (Minimum Sum of Instance Weight) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbTree' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `colsample_bytree` (Subsample Ratio of Columns) * `min_child_weight` (Minimum Sum of Instance Weight) * `subsample` (Subsample Percentage) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **Flexible Discriminant Analysis** ``` method = 'fda' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Generalized Linear Model** ``` method = 'glm' ``` Type: Regression, Classification No tuning parameters for this model A model\-specific variable importance metric is available. **Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **Linear Regression** ``` method = 'lm' ``` Type: Regression Tuning parameters: * `intercept` (intercept) A model\-specific variable importance metric is available. **Linear Regression with Stepwise Selection** ``` method = 'lmStepAIC' ``` Type: Regression No tuning parameters for this model Required packages: `MASS` **Model Averaged Neural Network** ``` method = 'avNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) * `bag` (Bagging) Required packages: `nnet` **Multivariate Adaptive Regression Spline** ``` method = 'earth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Multivariate Adaptive Regression Splines** ``` method = 'gcvEarth' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. 
Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Negative Binomial Generalized Linear Model** ``` method = 'glm.nb' ``` Type: Regression Tuning parameters: * `link` (Link Function) Required packages: `MASS` A model\-specific variable importance metric is available. **Neural Network** ``` method = 'nnet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Neural Networks with Feature Extraction** ``` method = 'pcaNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` **Ordered Logistic or Probit Regression** ``` method = 'polr' ``` Type: Classification Tuning parameters: * `method` (parameter) Required packages: `MASS` A model\-specific variable importance metric is available. **Penalized Discriminant Analysis** ``` method = 'pda' ``` Type: Classification Tuning parameters: * `lambda` (Shrinkage Penalty Coefficient) Required packages: `mda` **Penalized Discriminant Analysis** ``` method = 'pda2' ``` Type: Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `mda` **Penalized Multinomial Regression** ``` method = 'multinom' ``` Type: Classification Tuning parameters: * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Projection Pursuit Regression** ``` method = 'ppr' ``` Type: Regression Tuning parameters: * `nterms` (\# Terms) **Random Forest** ``` method = 'ranger' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `splitrule` (Splitting Rule) * `min.node.size` (Minimal Node Size) Required packages: `e1071`, `ranger`, `dplyr` A model\-specific variable importance metric is available. **Robust Linear Model** ``` method = 'rlm' ``` Type: Regression Tuning parameters: * `intercept` (intercept) * `psi` (psi) Required packages: `MASS` A model\-specific variable importance metric is available. **Single C5\.0 Ruleset** ``` method = 'C5.0Rules' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single C5\.0 Tree** ``` method = 'C5.0Tree' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Stochastic Gradient Boosting** ``` method = 'gbm' ``` Type: Regression, Classification Tuning parameters: * `n.trees` (\# Boosting Iterations) * `interaction.depth` (Max Tree Depth) * `shrinkage` (Shrinkage) * `n.minobsinnode` (Min. Terminal Node Size) Required packages: `gbm`, `plyr` A model\-specific variable importance metric is available. **Tree Models from Genetic Algorithms** ``` method = 'evtree' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Complexity Parameter) Required packages: `evtree` ### 7\.0\.2 Bagging (back to [contents](train-models-by-tag.html#top)) **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. 
**Bagged CART** ``` method = 'treebag' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `ipred`, `plyr`, `e1071` A model\-specific variable importance metric is available. **Bagged Flexible Discriminant Analysis** ``` method = 'bagFDA' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Bagged MARS** ``` method = 'bagEarth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS using gCV Pruning** ``` method = 'bagEarthGCV' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged Model** ``` method = 'bag' ``` Type: Regression, Classification Tuning parameters: * `vars` (\#Randomly Selected Predictors) Required packages: `caret` **Conditional Inference Random Forest** ``` method = 'cforest' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `party` A model\-specific variable importance metric is available. **Ensembles of Generalized Linear Models** ``` method = 'randomGLM' ``` Type: Regression, Classification Tuning parameters: * `maxInteractionOrder` (Interaction Order) Required packages: `randomGLM` Notes: Unlike other packages used by `train`, the `randomGLM` package is fully loaded when this model is used. **Model Averaged Neural Network** ``` method = 'avNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) * `bag` (Bagging) Required packages: `nnet` **Parallel Random Forest** ``` method = 'parRF' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `e1071`, `randomForest`, `foreach`, `import` A model\-specific variable importance metric is available. **Quantile Random Forest** ``` method = 'qrf' ``` Type: Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `quantregForest` **Quantile Regression Neural Network** ``` method = 'qrnn' ``` Type: Regression Tuning parameters: * `n.hidden` (\#Hidden Units) * `penalty` ( Weight Decay) * `bag` (Bagged Models?) 
Required packages: `qrnn` **Random Ferns** ``` method = 'rFerns' ``` Type: Classification Tuning parameters: * `depth` (Fern Depth) Required packages: `rFerns` **Random Forest** ``` method = 'ordinalRF' ``` Type: Classification Tuning parameters: * `nsets` (\# score sets tried prior to the approximation) * `ntreeperdiv` (\# of trees (small RFs)) * `ntreefinal` (\# of trees (final RF)) Required packages: `e1071`, `ranger`, `dplyr`, `ordinalForest` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'ranger' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `splitrule` (Splitting Rule) * `min.node.size` (Minimal Node Size) Required packages: `e1071`, `ranger`, `dplyr` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'Rborist' ``` Type: Classification, Regression Tuning parameters: * `predFixed` (\#Randomly Selected Predictors) * `minNode` (Minimal Node Size) Required packages: `Rborist` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'rf' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `randomForest` A model\-specific variable importance metric is available. **Random Forest by Randomization** ``` method = 'extraTrees' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\# Randomly Selected Predictors) * `numRandomCuts` (\# Random Cuts) Required packages: `extraTrees` **Random Forest Rule\-Based Model** ``` method = 'rfRules' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `maxdepth` (Maximum Rule Depth) Required packages: `randomForest`, `inTrees`, `plyr` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRF' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) * `coefImp` (Importance Coefficient) Required packages: `randomForest`, `RRF` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRFglobal' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) Required packages: `RRF` A model\-specific variable importance metric is available. **Weighted Subspace Random Forest** ``` method = 'wsrf' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `wsrf` ### 7\.0\.3 Bayesian Model (back to [contents](train-models-by-tag.html#top)) **Bayesian Additive Regression Trees** ``` method = 'bartMachine' ``` Type: Classification, Regression Tuning parameters: * `num_trees` (\#Trees) * `k` (Prior Boundary) * `alpha` (Base Terminal Node Hyperparameter) * `beta` (Power Terminal Node Hyperparameter) * `nu` (Degrees of Freedom) Required packages: `bartMachine` A model\-specific variable importance metric is available. 
**Bayesian Generalized Linear Model** ``` method = 'bayesglm' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `arm` **Bayesian Regularized Neural Networks** ``` method = 'brnn' ``` Type: Regression Tuning parameters: * `neurons` (\# Neurons) Required packages: `brnn` **Bayesian Ridge Regression** ``` method = 'bridge' ``` Type: Regression No tuning parameters for this model Required packages: `monomvn` **Bayesian Ridge Regression (Model Averaged)** ``` method = 'blassoAveraged' ``` Type: Regression No tuning parameters for this model Required packages: `monomvn` Notes: This model makes predictions by averaging the predictions based on the posterior estimates of the regression coefficients. While it is possible that some of these posterior estimates are zero for non\-informative predictors, the final predicted value may be a function of many (or even all) predictors. **Model Averaged Naive Bayes Classifier** ``` method = 'manb' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) * `prior` (Prior Probability) Required packages: `bnclassify` **Naive Bayes** ``` method = 'naive_bayes' ``` Type: Classification Tuning parameters: * `laplace` (Laplace Correction) * `usekernel` (Distribution Type) * `adjust` (Bandwidth Adjustment) Required packages: `naivebayes` **Naive Bayes** ``` method = 'nb' ``` Type: Classification Tuning parameters: * `fL` (Laplace Correction) * `usekernel` (Distribution Type) * `adjust` (Bandwidth Adjustment) Required packages: `klaR` **Naive Bayes Classifier** ``` method = 'nbDiscrete' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Naive Bayes Classifier with Attribute Weighting** ``` method = 'awnb' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Semi\-Naive Structure Learner Wrapper** ``` method = 'nbSearch' ``` Type: Classification Tuning parameters: * `k` (\#Folds) * `epsilon` (Minimum Absolute Improvement) * `smooth` (Smoothing Parameter) * `final_smooth` (Final Smoothing Parameter) * `direction` (Search Direction) Required packages: `bnclassify` **Spike and Slab Regression** ``` method = 'spikeslab' ``` Type: Regression Tuning parameters: * `vars` (Variables Retained) Required packages: `spikeslab`, `plyr` Notes: Unlike other packages used by `train`, the `spikeslab` package is fully loaded when this model is used. **The Bayesian lasso** ``` method = 'blasso' ``` Type: Regression Tuning parameters: * `sparsity` (Sparsity Threshold) Required packages: `monomvn` Notes: This model creates predictions using the mean of the posterior distributions but sets some parameters specifically to zero based on the tuning parameter `sparsity`. For example, when `sparsity = .5`, only coefficients where at least half the posterior estimates are nonzero are used. 
**Tree Augmented Naive Bayes Classifier** ``` method = 'tan' ``` Type: Classification Tuning parameters: * `score` (Score Function) * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier Structure Learner Wrapper** ``` method = 'tanSearch' ``` Type: Classification Tuning parameters: * `k` (\#Folds) * `epsilon` (Minimum Absolute Improvement) * `smooth` (Smoothing Parameter) * `final_smooth` (Final Smoothing Parameter) * `sp` (Super\-Parent) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier with Attribute Weighting** ``` method = 'awtan' ``` Type: Classification Tuning parameters: * `score` (Score Function) * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Variational Bayesian Multinomial Probit Regression** ``` method = 'vbmpRadial' ``` Type: Classification Tuning parameters: * `estimateTheta` (Theta Estimated) Required packages: `vbmp` ### 7\.0\.4 Binary Predictors Only (back to [contents](train-models-by-tag.html#top)) **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Binary Discriminant Analysis** ``` method = 'binda' ``` Type: Classification Tuning parameters: * `lambda.freqs` (Shrinkage Intensity) Required packages: `binda` **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` ### 7\.0\.5 Boosting (back to [contents](train-models-by-tag.html#top)) **AdaBoost Classification Trees** ``` method = 'adaboost' ``` Type: Classification Tuning parameters: * `nIter` (\#Trees) * `method` (Method) Required packages: `fastAdaboost` **AdaBoost.M1** ``` method = 'AdaBoost.M1' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) * `coeflearn` (Coefficient Type) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available. 
Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Linear Model** ``` method = 'BstLm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **Boosted Logistic Regression** ``` method = 'LogitBoost' ``` Type: Classification Tuning parameters: * `nIter` (\# Boosting Iterations) Required packages: `caTools` **Boosted Smoothing Spline** ``` method = 'bstSm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **Boosted Tree** ``` method = 'blackboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `party`, `mboost`, `plyr`, `partykit` **Boosted Tree** ``` method = 'bstTree' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `maxdepth` (Max Tree Depth) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **eXtreme Gradient Boosting** ``` method = 'xgbDART' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `subsample` (Subsample Percentage) * `colsample_bytree` (Subsample Ratio of Columns) * `rate_drop` (Fraction of Trees Dropped) * `skip_drop` (Prob. of Skipping Drop\-out) * `min_child_weight` (Minimum Sum of Instance Weight) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbLinear' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `lambda` (L2 Regularization) * `alpha` (L1 Regularization) * `eta` (Learning Rate) Required packages: `xgboost` A model\-specific variable importance metric is available. 
**eXtreme Gradient Boosting** ``` method = 'xgbTree' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `colsample_bytree` (Subsample Ratio of Columns) * `min_child_weight` (Minimum Sum of Instance Weight) * `subsample` (Subsample Percentage) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **Gradient Boosting Machines** ``` method = 'gbm_h2o' ``` Type: Regression, Classification Tuning parameters: * `ntrees` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `min_rows` (Min. Terminal Node Size) * `learn_rate` (Shrinkage) * `col_sample_rate` (\#Randomly Selected Predictors) Required packages: `h2o` A model\-specific variable importance metric is available. **Stochastic Gradient Boosting** ``` method = 'gbm' ``` Type: Regression, Classification Tuning parameters: * `n.trees` (\# Boosting Iterations) * `interaction.depth` (Max Tree Depth) * `shrinkage` (Shrinkage) * `n.minobsinnode` (Min. Terminal Node Size) Required packages: `gbm`, `plyr` A model\-specific variable importance metric is available. ### 7\.0\.6 Categorical Predictors Only (back to [contents](train-models-by-tag.html#top)) **Model Averaged Naive Bayes Classifier** ``` method = 'manb' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) * `prior` (Prior Probability) Required packages: `bnclassify` **Naive Bayes Classifier** ``` method = 'nbDiscrete' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Naive Bayes Classifier with Attribute Weighting** ``` method = 'awnb' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Semi\-Naive Structure Learner Wrapper** ``` method = 'nbSearch' ``` Type: Classification Tuning parameters: * `k` (\#Folds) * `epsilon` (Minimum Absolute Improvement) * `smooth` (Smoothing Parameter) * `final_smooth` (Final Smoothing Parameter) * `direction` (Search Direction) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier** ``` method = 'tan' ``` Type: Classification Tuning parameters: * `score` (Score Function) * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier Structure Learner Wrapper** ``` method = 'tanSearch' ``` Type: Classification Tuning parameters: * `k` (\#Folds) * `epsilon` (Minimum Absolute Improvement) * `smooth` (Smoothing Parameter) * `final_smooth` (Final Smoothing Parameter) * `sp` (Super\-Parent) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier with Attribute Weighting** ``` method = 'awtan' ``` Type: Classification Tuning parameters: * `score` (Score Function) * `smooth` (Smoothing Parameter) Required packages: `bnclassify` ### 7\.0\.7 Cost Sensitive Learning (back to [contents](train-models-by-tag.html#top)) **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. 
**Cost\-Sensitive CART** ``` method = 'rpartCost' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `Cost` (Cost) Required packages: `rpart`, `plyr` **L2 Regularized Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights2' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) * `weight` (Class Weight) Required packages: `LiblineaR` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Multilayer Perceptron Network with Dropout** ``` method = 'mlpKerasDropoutCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `dropout` (Dropout Rate) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecayCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. 
**Support Vector Machines with Class Weights** ``` method = 'svmRadialWeights' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) * `Weight` (Weight) Required packages: `kernlab` ### 7\.0\.8 Discriminant Analysis (back to [contents](train-models-by-tag.html#top)) **Adaptive Mixture Discriminant Analysis** ``` method = 'amdai' ``` Type: Classification Tuning parameters: * `model` (Model Type) Required packages: `adaptDA` **Binary Discriminant Analysis** ``` method = 'binda' ``` Type: Classification Tuning parameters: * `lambda.freqs` (Shrinkage Intensity) Required packages: `binda` **Diagonal Discriminant Analysis** ``` method = 'dda' ``` Type: Classification Tuning parameters: * `model` (Model) * `shrinkage` (Shrinkage Type) Required packages: `sparsediscrim` **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **Factor\-Based Linear Discriminant Analysis** ``` method = 'RFlda' ``` Type: Classification Tuning parameters: * `q` (\# Factors) Required packages: `HiDimDA` **Heteroscedastic Discriminant Analysis** ``` method = 'hda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `newdim` (Dimension of the Discriminative Subspace) Required packages: `hda` **High Dimensional Discriminant Analysis** ``` method = 'hdda' ``` Type: Classification Tuning parameters: * `threshold` (Threshold) * `model` (Model Type) Required packages: `HDclassif` **High\-Dimensional Regularized Discriminant Analysis** ``` method = 'hdrda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `shrinkage_type` (Shrinkage Type) Required packages: `sparsediscrim` **Linear Discriminant Analysis** ``` method = 'lda' ``` Type: Classification No tuning parameters for this model Required packages: `MASS` **Linear Discriminant Analysis** ``` method = 'lda2' ``` Type: Classification Tuning parameters: * `dimen` (\#Discriminant Functions) Required packages: `MASS` **Linear Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepLDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Localized Linear Discriminant Analysis** ``` method = 'loclda' ``` Type: Classification Tuning parameters: * `k` (\#Nearest Neighbors) Required packages: `klaR` **Maximum Uncertainty Linear Discriminant Analysis** ``` method = 'Mlda' ``` Type: Classification No tuning parameters for this model Required packages: `HiDimDA` **Mixture Discriminant Analysis** ``` method = 'mda' ``` Type: Classification Tuning parameters: * `subclasses` (\#Subclasses Per Class) Required packages: `mda` **Penalized Discriminant Analysis** ``` method = 'pda' ``` Type: Classification Tuning parameters: * `lambda` (Shrinkage Penalty Coefficient) Required packages: `mda` **Penalized Discriminant Analysis** ``` method = 'pda2' ``` Type: 
Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `mda` **Penalized Linear Discriminant Analysis** ``` method = 'PenalizedLDA' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `K` (\#Discriminant Functions) Required packages: `penalizedLDA`, `plyr` **Quadratic Discriminant Analysis** ``` method = 'qda' ``` Type: Classification No tuning parameters for this model Required packages: `MASS` **Quadratic Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepQDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Regularized Discriminant Analysis** ``` method = 'rda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) Required packages: `klaR` **Regularized Linear Discriminant Analysis** ``` method = 'rlda' ``` Type: Classification Tuning parameters: * `estimator` (Regularization Method) Required packages: `sparsediscrim` **Robust Linear Discriminant Analysis** ``` method = 'Linda' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Robust Mixture Discriminant Analysis** ``` method = 'rmda' ``` Type: Classification Tuning parameters: * `K` (\#Subclasses Per Class) * `model` (Model) Required packages: `robustDA` **Robust Quadratic Discriminant Analysis** ``` method = 'QdaCov' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Robust Regularized Linear Discriminant Analysis** ``` method = 'rrlda' ``` Type: Classification Tuning parameters: * `lambda` (Penalty Parameter) * `hp` (Robustness Parameter) * `penalty` (Penalty Type) Required packages: `rrlda` Notes: Unlike other packages used by `train`, the `rrlda` package is fully loaded when this model is used. 
**Shrinkage Discriminant Analysis** ``` method = 'sda' ``` Type: Classification Tuning parameters: * `diagonal` (Diagonalize) * `lambda` (shrinkage) Required packages: `sda` **Sparse Linear Discriminant Analysis** ``` method = 'sparseLDA' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) Required packages: `sparseLDA` **Sparse Mixture Discriminant Analysis** ``` method = 'smda' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) * `R` (\# Subclasses) Required packages: `sparseLDA` **Stabilized Linear Discriminant Analysis** ``` method = 'slda' ``` Type: Classification No tuning parameters for this model Required packages: `ipred` ### 7\.0\.9 Distance Weighted Discrimination (back to [contents](train-models-by-tag.html#top)) **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Sparse Distance Weighted Discrimination** ``` method = 'sdwd' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `sdwd` A model\-specific variable importance metric is available. ### 7\.0\.10 Ensemble Model (back to [contents](train-models-by-tag.html#top)) **AdaBoost Classification Trees** ``` method = 'adaboost' ``` Type: Classification Tuning parameters: * `nIter` (\#Trees) * `method` (Method) Required packages: `fastAdaboost` **AdaBoost.M1** ``` method = 'AdaBoost.M1' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) * `coeflearn` (Coefficient Type) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged CART** ``` method = 'treebag' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `ipred`, `plyr`, `e1071` A model\-specific variable importance metric is available. **Bagged Flexible Discriminant Analysis** ``` method = 'bagFDA' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. 
**Bagged MARS** ``` method = 'bagEarth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS using gCV Pruning** ``` method = 'bagEarthGCV' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged Model** ``` method = 'bag' ``` Type: Regression, Classification Tuning parameters: * `vars` (\#Randomly Selected Predictors) Required packages: `caret` **Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available. Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Linear Model** ``` method = 'BstLm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **Boosted Logistic Regression** ``` method = 'LogitBoost' ``` Type: Classification Tuning parameters: * `nIter` (\# Boosting Iterations) Required packages: `caTools` **Boosted Smoothing Spline** ``` method = 'bstSm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **Boosted Tree** ``` method = 'blackboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `party`, `mboost`, `plyr`, `partykit` **Boosted Tree** ``` method = 'bstTree' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `maxdepth` (Max Tree Depth) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. 
**Conditional Inference Random Forest** ``` method = 'cforest' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `party` A model\-specific variable importance metric is available. **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **Ensembles of Generalized Linear Models** ``` method = 'randomGLM' ``` Type: Regression, Classification Tuning parameters: * `maxInteractionOrder` (Interaction Order) Required packages: `randomGLM` Notes: Unlike other packages used by `train`, the `randomGLM` package is fully loaded when this model is used. **eXtreme Gradient Boosting** ``` method = 'xgbDART' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `subsample` (Subsample Percentage) * `colsample_bytree` (Subsample Ratio of Columns) * `rate_drop` (Fraction of Trees Dropped) * `skip_drop` (Prob. of Skipping Drop\-out) * `min_child_weight` (Minimum Sum of Instance Weight) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbLinear' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `lambda` (L2 Regularization) * `alpha` (L1 Regularization) * `eta` (Learning Rate) Required packages: `xgboost` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbTree' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `colsample_bytree` (Subsample Ratio of Columns) * `min_child_weight` (Minimum Sum of Instance Weight) * `subsample` (Subsample Percentage) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **Gradient Boosting Machines** ``` method = 'gbm_h2o' ``` Type: Regression, Classification Tuning parameters: * `ntrees` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `min_rows` (Min. Terminal Node Size) * `learn_rate` (Shrinkage) * `col_sample_rate` (\#Randomly Selected Predictors) Required packages: `h2o` A model\-specific variable importance metric is available. **Model Averaged Neural Network** ``` method = 'avNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) * `bag` (Bagging) Required packages: `nnet` **Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. 
**Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Parallel Random Forest** ``` method = 'parRF' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `e1071`, `randomForest`, `foreach`, `import` A model\-specific variable importance metric is available. **Quantile Random Forest** ``` method = 'qrf' ``` Type: Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `quantregForest` **Quantile Regression Neural Network** ``` method = 'qrnn' ``` Type: Regression Tuning parameters: * `n.hidden` (\#Hidden Units) * `penalty` ( Weight Decay) * `bag` (Bagged Models?) Required packages: `qrnn` **Random Ferns** ``` method = 'rFerns' ``` Type: Classification Tuning parameters: * `depth` (Fern Depth) Required packages: `rFerns` **Random Forest** ``` method = 'ordinalRF' ``` Type: Classification Tuning parameters: * `nsets` (\# score sets tried prior to the approximation) * `ntreeperdiv` (\# of trees (small RFs)) * `ntreefinal` (\# of trees (final RF)) Required packages: `e1071`, `ranger`, `dplyr`, `ordinalForest` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'ranger' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `splitrule` (Splitting Rule) * `min.node.size` (Minimal Node Size) Required packages: `e1071`, `ranger`, `dplyr` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'Rborist' ``` Type: Classification, Regression Tuning parameters: * `predFixed` (\#Randomly Selected Predictors) * `minNode` (Minimal Node Size) Required packages: `Rborist` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'rf' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `randomForest` A model\-specific variable importance metric is available. **Random Forest by Randomization** ``` method = 'extraTrees' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\# Randomly Selected Predictors) * `numRandomCuts` (\# Random Cuts) Required packages: `extraTrees` **Random Forest Rule\-Based Model** ``` method = 'rfRules' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `maxdepth` (Maximum Rule Depth) Required packages: `randomForest`, `inTrees`, `plyr` A model\-specific variable importance metric is available. 
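The random forest entries above all expose `mtry`; the `ranger` method additionally tunes `splitrule` and `min.node.size`, and all three must appear as columns in a custom grid. The sketch below is one possible call, with `iris`, the grid values, and the resampling settings chosen purely for illustration (the `ranger`, `e1071`, and `dplyr` packages are assumed to be installed).

```
library(caret)

# All three ranger tuning parameters must be present in the grid
rf_grid <- expand.grid(mtry          = 2:4,
                       splitrule     = "gini",
                       min.node.size = c(1, 5, 10))

set.seed(1)
rf_fit <- train(Species ~ ., data = iris,
                method = "ranger",
                tuneGrid = rf_grid,
                trControl = trainControl(method = "cv", number = 5))
rf_fit$bestTune
```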
**Regularized Random Forest** ``` method = 'RRF' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) * `coefImp` (Importance Coefficient) Required packages: `randomForest`, `RRF` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRFglobal' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) Required packages: `RRF` A model\-specific variable importance metric is available. **Rotation Forest** ``` method = 'rotationForest' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) Required packages: `rotationForest` A model\-specific variable importance metric is available. **Rotation Forest** ``` method = 'rotationForestCp' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) * `cp` (Complexity Parameter) Required packages: `rpart`, `plyr`, `rotationForest` A model\-specific variable importance metric is available. **Stochastic Gradient Boosting** ``` method = 'gbm' ``` Type: Regression, Classification Tuning parameters: * `n.trees` (\# Boosting Iterations) * `interaction.depth` (Max Tree Depth) * `shrinkage` (Shrinkage) * `n.minobsinnode` (Min. Terminal Node Size) Required packages: `gbm`, `plyr` A model\-specific variable importance metric is available. **Tree\-Based Ensembles** ``` method = 'nodeHarvest' ``` Type: Regression, Classification Tuning parameters: * `maxinter` (Maximum Interaction Depth) * `mode` (Prediction Mode) Required packages: `nodeHarvest` **Weighted Subspace Random Forest** ``` method = 'wsrf' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `wsrf` ### 7\.0\.11 Feature Extraction (back to [contents](train-models-by-tag.html#top)) **Independent Component Regression** ``` method = 'icr' ``` Type: Regression Tuning parameters: * `n.comp` (\#Components) Required packages: `fastICA` **Neural Networks with Feature Extraction** ``` method = 'pcaNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` **Partial Least Squares** ``` method = 'kernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'pls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'simpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'widekernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. 
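For the partial least squares entries that close this tag, the only tuning parameter is `ncomp`. The sketch below also centers and scales the predictors, which is a common (but not required) choice for PLS; the data set and the number of components tried are illustrative assumptions, and the `pls` package is assumed to be installed.

```
library(caret)

# Try 1 to 5 PLS components on centered and scaled predictors
set.seed(1)
pls_fit <- train(mpg ~ ., data = mtcars,
                 method = "pls",
                 preProcess = c("center", "scale"),
                 tuneGrid = data.frame(ncomp = 1:5),
                 trControl = trainControl(method = "cv", number = 5))
pls_fit$results
```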
**Principal Component Analysis** ``` method = 'pcr' ``` Type: Regression Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` **Projection Pursuit Regression** ``` method = 'ppr' ``` Type: Regression Tuning parameters: * `nterms` (\# Terms) **Sparse Partial Least Squares** ``` method = 'spls' ``` Type: Regression, Classification Tuning parameters: * `K` (\#Components) * `eta` (Threshold) * `kappa` (Kappa) Required packages: `spls` **Supervised Principal Component Analysis** ``` method = 'superpc' ``` Type: Regression Tuning parameters: * `threshold` (Threshold) * `n.components` (\#Components) Required packages: `superpc` ### 7\.0\.12 Feature Selection Wrapper (back to [contents](train-models-by-tag.html#top)) **Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **Linear Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepLDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Linear Regression with Backwards Selection** ``` method = 'leapBackward' ``` Type: Regression Tuning parameters: * `nvmax` (Maximum Number of Predictors) Required packages: `leaps` **Linear Regression with Forward Selection** ``` method = 'leapForward' ``` Type: Regression Tuning parameters: * `nvmax` (Maximum Number of Predictors) Required packages: `leaps` **Linear Regression with Stepwise Selection** ``` method = 'leapSeq' ``` Type: Regression Tuning parameters: * `nvmax` (Maximum Number of Predictors) Required packages: `leaps` **Linear Regression with Stepwise Selection** ``` method = 'lmStepAIC' ``` Type: Regression No tuning parameters for this model Required packages: `MASS` **Quadratic Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepQDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Ridge Regression with Variable Selection** ``` method = 'foba' ``` Type: Regression Tuning parameters: * `k` (\#Variables Retained) * `lambda` (L2 Penalty) Required packages: `foba` ### 7\.0\.13 Gaussian Process (back to [contents](train-models-by-tag.html#top)) **Gaussian Process** ``` method = 'gaussprLinear' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `kernlab` **Gaussian Process with Polynomial Kernel** ``` method = 'gaussprPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kernlab` **Gaussian Process with Radial Basis Function Kernel** ``` method = 'gaussprRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **Variational Bayesian Multinomial Probit Regression** ``` method = 'vbmpRadial' ``` Type: Classification Tuning parameters: * `estimateTheta` (Theta Estimated) Required packages: `vbmp` ### 7\.0\.14 Generalized Additive Model (back to [contents](train-models-by-tag.html#top)) **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. 
See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Generalized Additive Model using LOESS** ``` method = 'gamLoess' ``` Type: Regression, Classification Tuning parameters: * `span` (Span) * `degree` (Degree) Required packages: `gam` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `gam` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'bam' ``` Type: Regression, Classification Tuning parameters: * `select` (Feature Selection) * `method` (Method) Required packages: `mgcv` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `mgcv` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'gam' ``` Type: Regression, Classification Tuning parameters: * `select` (Feature Selection) * `method` (Method) Required packages: `mgcv` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `mgcv` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'gamSpline' ``` Type: Regression, Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `gam` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `gam` package is fully loaded when this model is used. ### 7\.0\.15 Generalized Linear Model (back to [contents](train-models-by-tag.html#top)) **Bayesian Generalized Linear Model** ``` method = 'bayesglm' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `arm` **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available.
Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Ensembles of Generalized Linear Models** ``` method = 'randomGLM' ``` Type: Regression, Classification Tuning parameters: * `maxInteractionOrder` (Interaction Order) Required packages: `randomGLM` Notes: Unlike other packages used by `train`, the `randomGLM` package is fully loaded when this model is used. **Generalized Additive Model using LOESS** ``` method = 'gamLoess' ``` Type: Regression, Classification Tuning parameters: * `span` (Span) * `degree` (Degree) Required packages: `gam` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `gam` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'bam' ``` Type: Regression, Classification Tuning parameters: * `select` (Feature Selection) * `method` (Method) Required packages: `mgcv` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `mgcv` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'gam' ``` Type: Regression, Classification Tuning parameters: * `select` (Feature Selection) * `method` (Method) Required packages: `mgcv` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `mgcv` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'gamSpline' ``` Type: Regression, Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `gam` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `gam` package is fully loaded when this model is used.
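Because the note above says a predictor needs at least 10 unique values before a nonlinear basis expansion is used, the sketch below tunes `method = 'gam'` on simulated continuous predictors rather than a small factor-heavy data set. The simulated data, the grid over `select` and `method`, and the resampling choice are assumptions made only for illustration; the `mgcv` package is assumed to be installed.

```
library(caret)

# Simulated regression data with smooth, nonlinear signal
set.seed(1)
sim <- data.frame(x1 = runif(200), x2 = runif(200), x3 = runif(200))
sim$y <- sin(2 * pi * sim$x1) + 4 * (sim$x2 - 0.5)^2 + rnorm(200, sd = 0.2)

# Grid over the two listed tuning parameters
gam_grid <- expand.grid(select = c(TRUE, FALSE), method = "GCV.Cp")

gam_fit <- train(y ~ ., data = sim,
                 method = "gam",
                 tuneGrid = gam_grid,
                 trControl = trainControl(method = "cv", number = 5))
gam_fit$bestTune
```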
**Generalized Linear Model** ``` method = 'glm' ``` Type: Regression, Classification No tuning parameters for this model A model\-specific variable importance metric is available. **Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages: `msaenet` A model\-specific variable importance metric is available. **Negative Binomial Generalized Linear Model** ``` method = 'glm.nb' ``` Type: Regression Tuning parameters: * `link` (Link Function) Required packages: `MASS` A model\-specific variable importance metric is available. **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 ### 7\.0\.16 Handle Missing Predictor Data (back to [contents](train-models-by-tag.html#top)) **AdaBoost.M1** ``` method = 'AdaBoost.M1' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) * `coeflearn` (Coefficient Type) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart' ``` Type: Regression, Classification Tuning parameters: * `cp` (Complexity Parameter) Required packages: `rpart` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart1SE' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `rpart` A model\-specific variable importance metric is available. Notes: This CART model replicates the same process used by the `rpart` function where the model complexity is determined using the one\-standard error method. This procedure is replicated inside of the resampling done by `train` so that an external resampling estimate can be obtained. 
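A common way to use the `glmnet` entry above is to tune `alpha` (the ridge/lasso mixing percentage) jointly with `lambda`. The sketch below is one such grid; the data set, grid values, and preprocessing are illustrative choices, and the `glmnet` and `Matrix` packages are assumed to be installed.

```
library(caret)

# Mix between ridge (alpha = 0) and lasso (alpha = 1) over a lambda path
glmnet_grid <- expand.grid(alpha  = c(0, 0.5, 1),
                           lambda = 10^seq(-3, 0, length.out = 10))

set.seed(1)
glmnet_fit <- train(mpg ~ ., data = mtcars,
                    method = "glmnet",
                    preProcess = c("center", "scale"),
                    tuneGrid = glmnet_grid,
                    trControl = trainControl(method = "cv", number = 5))
glmnet_fit$bestTune
```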
**CART** ``` method = 'rpart2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) Required packages: `rpart` A model\-specific variable importance metric is available. **CART or Ordinal Responses** ``` method = 'rpartScore' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `split` (Split Function) * `prune` (Pruning Measure) Required packages: `rpartScore`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive CART** ``` method = 'rpartCost' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `Cost` (Cost) Required packages: `rpart`, `plyr` **Single C5\.0 Ruleset** ``` method = 'C5.0Rules' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single C5\.0 Tree** ``` method = 'C5.0Tree' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. ### 7\.0\.17 Implicit Feature Selection (back to [contents](train-models-by-tag.html#top)) **AdaBoost Classification Trees** ``` method = 'adaboost' ``` Type: Classification Tuning parameters: * `nIter` (\#Trees) * `method` (Method) Required packages: `fastAdaboost` **AdaBoost.M1** ``` method = 'AdaBoost.M1' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) * `coeflearn` (Coefficient Type) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged Flexible Discriminant Analysis** ``` method = 'bagFDA' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS** ``` method = 'bagEarth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS using gCV Pruning** ``` method = 'bagEarthGCV' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bayesian Additive Regression Trees** ``` method = 'bartMachine' ``` Type: Classification, Regression Tuning parameters: * `num_trees` (\#Trees) * `k` (Prior Boundary) * `alpha` (Base Terminal Node Hyperparameter) * `beta` (Power Terminal Node Hyperparameter) * `nu` (Degrees of Freedom) Required packages: `bartMachine` A model\-specific variable importance metric is available. 
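The CART entries listed here differ mainly in how the tree is controlled: `cp` for `rpart`, the one-standard-error rule for `rpart1SE`, and `maxdepth` for `rpart2`. The sketch below tunes `rpart` over a grid of complexity parameters; the data set and grid are illustrative and the `rpart` package is assumed to be installed.

```
library(caret)

# Tune the complexity parameter of a single classification tree
rpart_grid <- data.frame(cp = seq(0, 0.1, by = 0.01))

set.seed(1)
rpart_fit <- train(Species ~ ., data = iris,
                   method = "rpart",
                   tuneGrid = rpart_grid,
                   trControl = trainControl(method = "cv", number = 5))
rpart_fit$bestTune
```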
**Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Linear Model** ``` method = 'BstLm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **Boosted Logistic Regression** ``` method = 'LogitBoost' ``` Type: Classification Tuning parameters: * `nIter` (\# Boosting Iterations) Required packages: `caTools` **Boosted Smoothing Spline** ``` method = 'bstSm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **C4\.5\-like Trees** ``` method = 'J48' ``` Type: Classification Tuning parameters: * `C` (Confidence Threshold) * `M` (Minimum Instances Per Leaf) Required packages: `RWeka` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart' ``` Type: Regression, Classification Tuning parameters: * `cp` (Complexity Parameter) Required packages: `rpart` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart1SE' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `rpart` A model\-specific variable importance metric is available. Notes: This CART model replicates the same process used by the `rpart` function where the model complexity is determined using the one\-standard error method. This procedure is replicated inside of the resampling done by `train` so that an external resampling estimate can be obtained. **CART** ``` method = 'rpart2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) Required packages: `rpart` A model\-specific variable importance metric is available. **CART or Ordinal Responses** ``` method = 'rpartScore' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `split` (Split Function) * `prune` (Pruning Measure) Required packages: `rpartScore`, `plyr` A model\-specific variable importance metric is available. **CHi\-squared Automated Interaction Detection** ``` method = 'chaid' ``` Type: Classification Tuning parameters: * `alpha2` (Merging Threshold) * `alpha3` (Splitting former Merged Threshold) * `alpha4` ( Splitting former Merged Threshold) Required packages: `CHAID` **Conditional Inference Random Forest** ``` method = 'cforest' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `party` A model\-specific variable importance metric is available. 
**Conditional Inference Tree** ``` method = 'ctree' ``` Type: Classification, Regression Tuning parameters: * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Conditional Inference Tree** ``` method = 'ctree2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive CART** ``` method = 'rpartCost' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `Cost` (Cost) Required packages: `rpart`, `plyr` **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **Elasticnet** ``` method = 'enet' ``` Type: Regression Tuning parameters: * `fraction` (Fraction of Full Solution) * `lambda` (Weight Decay) Required packages: `elasticnet` **eXtreme Gradient Boosting** ``` method = 'xgbDART' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `subsample` (Subsample Percentage) * `colsample_bytree` (Subsample Ratio of Columns) * `rate_drop` (Fraction of Trees Dropped) * `skip_drop` (Prob. of Skipping Drop\-out) * `min_child_weight` (Minimum Sum of Instance Weight) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbLinear' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `lambda` (L2 Regularization) * `alpha` (L1 Regularization) * `eta` (Learning Rate) Required packages: `xgboost` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbTree' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `colsample_bytree` (Subsample Ratio of Columns) * `min_child_weight` (Minimum Sum of Instance Weight) * `subsample` (Subsample Percentage) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **Flexible Discriminant Analysis** ``` method = 'fda' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. 
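The `xgbTree` entry above has seven tuning parameters, and all of them must appear as columns in a custom `tuneGrid`. The sketch below fixes most of them and varies only the number of boosting iterations and the tree depth; every value shown, and the use of `mtcars`, is an illustrative assumption, with the `xgboost` and `plyr` packages assumed to be installed.

```
library(caret)

# All seven xgbTree parameters must be present in the grid
xgb_grid <- expand.grid(nrounds          = c(100, 200),
                        max_depth        = c(2, 4),
                        eta              = 0.1,
                        gamma            = 0,
                        colsample_bytree = 0.8,
                        min_child_weight = 1,
                        subsample        = 0.8)

set.seed(1)
xgb_fit <- train(mpg ~ ., data = mtcars,
                 method = "xgbTree",
                 tuneGrid = xgb_grid,
                 trControl = trainControl(method = "cv", number = 5))
xgb_fit$bestTune
```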
**Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. **Gradient Boosting Machines** ``` method = 'gbm_h2o' ``` Type: Regression, Classification Tuning parameters: * `ntrees` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `min_rows` (Min. Terminal Node Size) * `learn_rate` (Shrinkage) * `col_sample_rate` (\#Randomly Selected Predictors) Required packages: `h2o` A model\-specific variable importance metric is available. **Least Angle Regression** ``` method = 'lars' ``` Type: Regression Tuning parameters: * `fraction` (Fraction) Required packages: `lars` **Least Angle Regression** ``` method = 'lars2' ``` Type: Regression Tuning parameters: * `step` (\#Steps) Required packages: `lars` **Logistic Model Trees** ``` method = 'LMT' ``` Type: Classification Tuning parameters: * `iter` (\# Iteratons) Required packages: `RWeka` **Model Rules** ``` method = 'M5Rules' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) Required packages: `RWeka` **Model Tree** ``` method = 'M5' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) * `rules` (Rules) Required packages: `RWeka` **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages: `msaenet` A model\-specific variable importance metric is available. **Multivariate Adaptive Regression Spline** ``` method = 'earth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Multivariate Adaptive Regression Splines** ``` method = 'gcvEarth' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Nearest Shrunken Centroids** ``` method = 'pam' ``` Type: Classification Tuning parameters: * `threshold` (Shrinkage Threshold) Required packages: `pamr` A model\-specific variable importance metric is available. **Non\-Convex Penalized Quantile Regression** ``` method = 'rqnc' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) * `penalty` (Penalty Type) Required packages: `rqPen` **Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. 
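Several entries in this tag note that "a model-specific variable importance metric is available"; the sketch below shows what that looks like for the `earth` (MARS) entry, tuning `nprune` and `degree` and then calling `varImp()`. The data set and grid are illustrative, and the `earth` package is assumed to be installed.

```
library(caret)

# Tune the number of retained terms and the interaction degree
mars_grid <- expand.grid(nprune = 2:10, degree = 1:2)

set.seed(1)
mars_fit <- train(mpg ~ ., data = mtcars,
                  method = "earth",
                  tuneGrid = mars_grid,
                  trControl = trainControl(method = "cv", number = 5))

# Model-specific variable importance, as mentioned in the listing
varImp(mars_fit)
```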
**Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Parallel Random Forest** ``` method = 'parRF' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `e1071`, `randomForest`, `foreach`, `import` A model\-specific variable importance metric is available. **Penalized Linear Discriminant Analysis** ``` method = 'PenalizedLDA' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `K` (\#Discriminant Functions) Required packages: `penalizedLDA`, `plyr` **Penalized Linear Regression** ``` method = 'penalized' ``` Type: Regression Tuning parameters: * `lambda1` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `penalized` **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Quantile Random Forest** ``` method = 'qrf' ``` Type: Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `quantregForest` **Quantile Regression with LASSO penalty** ``` method = 'rqlasso' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) Required packages: `rqPen` **Random Ferns** ``` method = 'rFerns' ``` Type: Classification Tuning parameters: * `depth` (Fern Depth) Required packages: `rFerns` **Random Forest** ``` method = 'ordinalRF' ``` Type: Classification Tuning parameters: * `nsets` (\# score sets tried prior to the approximation) * `ntreeperdiv` (\# of trees (small RFs)) * `ntreefinal` (\# of trees (final RF)) Required packages: `e1071`, `ranger`, `dplyr`, `ordinalForest` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'ranger' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `splitrule` (Splitting Rule) * `min.node.size` (Minimal Node Size) Required packages: `e1071`, `ranger`, `dplyr` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'Rborist' ``` Type: Classification, Regression Tuning parameters: * `predFixed` (\#Randomly Selected Predictors) * `minNode` (Minimal Node Size) Required packages: `Rborist` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'rf' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `randomForest` A model\-specific variable importance metric is available. 
**Random Forest by Randomization** ``` method = 'extraTrees' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\# Randomly Selected Predictors) * `numRandomCuts` (\# Random Cuts) Required packages: `extraTrees` **Random Forest Rule\-Based Model** ``` method = 'rfRules' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `maxdepth` (Maximum Rule Depth) Required packages: `randomForest`, `inTrees`, `plyr` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRF' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) * `coefImp` (Importance Coefficient) Required packages: `randomForest`, `RRF` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRFglobal' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) Required packages: `RRF` A model\-specific variable importance metric is available. **Relaxed Lasso** ``` method = 'relaxo' ``` Type: Regression Tuning parameters: * `lambda` (Penalty Parameter) * `phi` (Relaxation Parameter) Required packages: `relaxo`, `plyr` **Rotation Forest** ``` method = 'rotationForest' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) Required packages: `rotationForest` A model\-specific variable importance metric is available. **Rotation Forest** ``` method = 'rotationForestCp' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) * `cp` (Complexity Parameter) Required packages: `rpart`, `plyr`, `rotationForest` A model\-specific variable importance metric is available. **Rule\-Based Classifier** ``` method = 'JRip' ``` Type: Classification Tuning parameters: * `NumOpt` (\# Optimizations) * `NumFolds` (\# Folds) * `MinWeights` (Min Weights) Required packages: `RWeka` A model\-specific variable importance metric is available. **Rule\-Based Classifier** ``` method = 'PART' ``` Type: Classification Tuning parameters: * `threshold` (Confidence Threshold) * `pruned` (Pruning) Required packages: `RWeka` A model\-specific variable importance metric is available. **Single C5\.0 Ruleset** ``` method = 'C5.0Rules' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single C5\.0 Tree** ``` method = 'C5.0Tree' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single Rule Classification** ``` method = 'OneR' ``` Type: Classification No tuning parameters for this model Required packages: `RWeka` **Sparse Distance Weighted Discrimination** ``` method = 'sdwd' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `sdwd` A model\-specific variable importance metric is available. 
**Sparse Linear Discriminant Analysis** ``` method = 'sparseLDA' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) Required packages: `sparseLDA` **Sparse Mixture Discriminant Analysis** ``` method = 'smda' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) * `R` (\# Subclasses) Required packages: `sparseLDA` **Spike and Slab Regression** ``` method = 'spikeslab' ``` Type: Regression Tuning parameters: * `vars` (Variables Retained) Required packages: `spikeslab`, `plyr` Notes: Unlike other packages used by `train`, the `spikeslab` package is fully loaded when this model is used. **Stochastic Gradient Boosting** ``` method = 'gbm' ``` Type: Regression, Classification Tuning parameters: * `n.trees` (\# Boosting Iterations) * `interaction.depth` (Max Tree Depth) * `shrinkage` (Shrinkage) * `n.minobsinnode` (Min. Terminal Node Size) Required packages: `gbm`, `plyr` A model\-specific variable importance metric is available. **The Bayesian lasso** ``` method = 'blasso' ``` Type: Regression Tuning parameters: * `sparsity` (Sparsity Threshold) Required packages: `monomvn` Notes: This model creates predictions using the mean of the posterior distributions but sets some parameters specifically to zero based on the tuning parameter `sparsity`. For example, when `sparsity = .5`, only coefficients where at least half the posterior estimates are nonzero are used. **The lasso** ``` method = 'lasso' ``` Type: Regression Tuning parameters: * `fraction` (Fraction of Full Solution) Required packages: `elasticnet` **Tree Models from Genetic Algorithms** ``` method = 'evtree' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Complexity Parameter) Required packages: `evtree` **Tree\-Based Ensembles** ``` method = 'nodeHarvest' ``` Type: Regression, Classification Tuning parameters: * `maxinter` (Maximum Interaction Depth) * `mode` (Prediction Mode) Required packages: `nodeHarvest` **Weighted Subspace Random Forest** ``` method = 'wsrf' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `wsrf` ### 7\.0\.18 Kernel Method (back to [contents](train-models-by-tag.html#top)) **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **Gaussian Process** ``` method = 'gaussprLinear' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `kernlab` **Gaussian Process with Polynomial Kernel** ``` method = 'gaussprPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kernlab` **Gaussian Process with Radial Basis Function Kernel** ``` method = 'gaussprRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **L2 Regularized Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights2' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) * `weight` (Class Weight) Required packages: `LiblineaR` **L2 Regularized 
Support Vector Machine (dual) with Linear Kernel** ``` method = 'svmLinear3' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) Required packages: `LiblineaR` **Least Squares Support Vector Machine** ``` method = 'lssvmLinear' ``` Type: Classification Tuning parameters: * `tau` (Regularization Parameter) Required packages: `kernlab` **Least Squares Support Vector Machine with Polynomial Kernel** ``` method = 'lssvmPoly' ``` Type: Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `tau` (Regularization Parameter) Required packages: `kernlab` **Least Squares Support Vector Machine with Radial Basis Function Kernel** ``` method = 'lssvmRadial' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `tau` (Regularization Parameter) Required packages: `kernlab` **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Partial Least Squares** ``` method = 'kernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Polynomial Kernel Regularized Least Squares** ``` method = 'krlsPoly' ``` Type: Regression Tuning parameters: * `lambda` (Regularization Parameter) * `degree` (Polynomial Degree) Required packages: `KRLS` **Radial Basis Function Kernel Regularized Least Squares** ``` method = 'krlsRadial' ``` Type: Regression Tuning parameters: * `lambda` (Regularization Parameter) * `sigma` (Sigma) Required packages: `KRLS`, `kernlab` **Relevance Vector Machines with Linear Kernel** ``` method = 'rvmLinear' ``` Type: Regression No tuning parameters for this model Required packages: `kernlab` **Relevance Vector Machines with Polynomial Kernel** ``` method = 'rvmPoly' ``` Type: Regression Tuning parameters: * `scale` (Scale) * `degree` (Polynomial Degree) Required packages: `kernlab` **Relevance Vector Machines with Radial Basis Function Kernel** ``` method = 'rvmRadial' ``` Type: Regression Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **Support Vector Machines with Boundrange String Kernel** ``` method = 'svmBoundrangeString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Class Weights** ``` method = 'svmRadialWeights' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) * `Weight` (Weight) Required packages: `kernlab` **Support Vector Machines with Exponential String Kernel** ``` method = 'svmExpoString' ``` Type: Regression, Classification Tuning parameters: * `lambda` (lambda) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 
'svmLinear2' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) Required packages: `e1071` **Support Vector Machines with Polynomial Kernel** ``` method = 'svmPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialCost' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialSigma' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` Notes: This SVM model tunes over the cost parameter and the RBF kernel parameter sigma. In the latter case, using `tuneLength` will, at most, evaluate six values of the kernel parameter. This enables a broad search over the cost parameter and a relatively narrow search over `sigma` **Support Vector Machines with Spectrum String Kernel** ``` method = 'svmSpectrumString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` ### 7\.0\.19 L1 Regularization (back to [contents](train-models-by-tag.html#top)) **Bayesian Ridge Regression (Model Averaged)** ``` method = 'blassoAveraged' ``` Type: Regression No tuning parameters for this model Required packages: `monomvn` Notes: This model makes predictions by averaging the predictions based on the posterior estimates of the regression coefficients. While it is possible that some of these posterior estimates are zero for non\-informative predictors, the final predicted value may be a function of many (or even all) predictors. **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **Elasticnet** ``` method = 'enet' ``` Type: Regression Tuning parameters: * `fraction` (Fraction of Full Solution) * `lambda` (Weight Decay) Required packages: `elasticnet` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. **Least Angle Regression** ``` method = 'lars' ``` Type: Regression Tuning parameters: * `fraction` (Fraction) Required packages: `lars` **Least Angle Regression** ``` method = 'lars2' ``` Type: Regression Tuning parameters: * `step` (\#Steps) Required packages: `lars` **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages: `msaenet` A model\-specific variable importance metric is available. 
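Instead of supplying a grid, the kernel-method entries above (for example `svmRadial`) can be tuned with `tuneLength`; in that case `train()` typically estimates a single value of `sigma` and searches over that many cost values, which keeps the kernel-parameter search narrow, as the `svmRadialSigma` note describes. The sketch below is an illustrative call with `iris`, `tuneLength = 8`, and 5-fold cross-validation; the `kernlab` package is assumed to be installed.

```
library(caret)

# Let train() estimate sigma and search over 8 cost values
set.seed(1)
svm_fit <- train(Species ~ ., data = iris,
                 method = "svmRadial",
                 preProcess = c("center", "scale"),
                 tuneLength = 8,
                 trControl = trainControl(method = "cv", number = 5))
svm_fit$results
```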
**Non\-Convex Penalized Quantile Regression** ``` method = 'rqnc' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) * `penalty` (Penalty Type) Required packages: `rqPen` **Penalized Linear Discriminant Analysis** ``` method = 'PenalizedLDA' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `K` (\#Discriminant Functions) Required packages: `penalizedLDA`, `plyr` **Penalized Linear Regression** ``` method = 'penalized' ``` Type: Regression Tuning parameters: * `lambda1` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `penalized` **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Quantile Regression with LASSO penalty** ``` method = 'rqlasso' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) Required packages: `rqPen` **Regularized Logistic Regression** ``` method = 'regLogistic' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `loss` (Loss Function) * `epsilon` (Tolerance) Required packages: `LiblineaR` **Relaxed Lasso** ``` method = 'relaxo' ``` Type: Regression Tuning parameters: * `lambda` (Penalty Parameter) * `phi` (Relaxation Parameter) Required packages: `relaxo`, `plyr` **Sparse Distance Weighted Discrimination** ``` method = 'sdwd' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `sdwd` A model\-specific variable importance metric is available. **Sparse Linear Discriminant Analysis** ``` method = 'sparseLDA' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) Required packages: `sparseLDA` **Sparse Mixture Discriminant Analysis** ``` method = 'smda' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) * `R` (\# Subclasses) Required packages: `sparseLDA` **Sparse Partial Least Squares** ``` method = 'spls' ``` Type: Regression, Classification Tuning parameters: * `K` (\#Components) * `eta` (Threshold) * `kappa` (Kappa) Required packages: `spls` **The Bayesian lasso** ``` method = 'blasso' ``` Type: Regression Tuning parameters: * `sparsity` (Sparsity Threshold) Required packages: `monomvn` Notes: This model creates predictions using the mean of the posterior distributions but sets some parameters specifically to zero based on the tuning parameter `sparsity`. For example, when `sparsity = .5`, only coefficients where at least half the posterior estimates are nonzero are used. 
**The lasso** ``` method = 'lasso' ``` Type: Regression Tuning parameters: * `fraction` (Fraction of Full Solution) Required packages: `elasticnet` ### 7\.0\.20 L2 Regularization (back to [contents](train-models-by-tag.html#top)) **Bayesian Ridge Regression** ``` method = 'bridge' ``` Type: Regression No tuning parameters for this model Required packages: `monomvn` **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Model Averaged Neural Network** ``` method = 'avNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) * `bag` (Bagging) Required packages: `nnet` **Multi\-Layer Perceptron** ``` method = 'mlpWeightDecay' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `RSNNS` **Multi\-Layer Perceptron, multiple layers** ``` method = 'mlpWeightDecayML' ``` Type: Regression, Classification Tuning parameters: * `layer1` (\#Hidden Units layer1\) * `layer2` (\#Hidden Units layer2\) * `layer3` (\#Hidden Units layer3\) * `decay` (Weight Decay) Required packages: `RSNNS` **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages: `msaenet` A model\-specific variable importance metric is available. **Multilayer Perceptron Network by Stochastic Gradient Descent** ``` method = 'mlpSGD' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `l2reg` (L2 Regularization) * `lambda` (RMSE Gradient Scaling) * `learn_rate` (Learning Rate) * `momentum` (Momentum) * `gamma` (Learning Rate Decay) * `minibatchsz` (Batch Size) * `repeats` (\#Models) Required packages: `FCNN4R`, `plyr` A model\-specific variable importance metric is available. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecay' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R session. 
When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that this operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecayCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that this operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Neural Network** ``` method = 'nnet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Neural Networks with Feature Extraction** ``` method = 'pcaNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Penalized Linear Regression** ``` method = 'penalized' ``` Type: Regression Tuning parameters: * `lambda1` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `penalized` **Penalized Logistic Regression** ``` method = 'plr' ``` Type: Classification Tuning parameters: * `lambda` (L2 Penalty) * `cp` (Complexity Parameter) Required packages: `stepPlr` **Penalized Multinomial Regression** ``` method = 'multinom' ``` Type: Classification Tuning parameters: * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Polynomial Kernel Regularized Least Squares** ``` method = 'krlsPoly' ``` Type: Regression Tuning parameters: * `lambda` (Regularization Parameter) * `degree` (Polynomial Degree) Required packages: `KRLS` **Quantile Regression Neural Network** ``` method = 'qrnn' ``` Type: Regression Tuning parameters: * `n.hidden` (\#Hidden Units) * `penalty` (Weight Decay) * `bag` (Bagged Models?) 
Required packages: `qrnn` **Radial Basis Function Kernel Regularized Least Squares** ``` method = 'krlsRadial' ``` Type: Regression Tuning parameters: * `lambda` (Regularization Parameter) * `sigma` (Sigma) Required packages: `KRLS`, `kernlab` **Radial Basis Function Network** ``` method = 'rbf' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) Required packages: `RSNNS` **Radial Basis Function Network** ``` method = 'rbfDDA' ``` Type: Regression, Classification Tuning parameters: * `negativeThreshold` (Activation Limit for Conflicting Classes) Required packages: `RSNNS` **Regularized Logistic Regression** ``` method = 'regLogistic' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `loss` (Loss Function) * `epsilon` (Tolerance) Required packages: `LiblineaR` **Relaxed Lasso** ``` method = 'relaxo' ``` Type: Regression Tuning parameters: * `lambda` (Penalty Parameter) * `phi` (Relaxation Parameter) Required packages: `relaxo`, `plyr` **Ridge Regression** ``` method = 'ridge' ``` Type: Regression Tuning parameters: * `lambda` (Weight Decay) Required packages: `elasticnet` **Ridge Regression with Variable Selection** ``` method = 'foba' ``` Type: Regression Tuning parameters: * `k` (\#Variables Retained) * `lambda` (L2 Penalty) Required packages: `foba` **Sparse Distance Weighted Discrimination** ``` method = 'sdwd' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `sdwd` A model\-specific variable importance metric is available. ### 7\.0\.21 Linear Classifier (back to [contents](train-models-by-tag.html#top)) **Adjacent Categories Probability Model for Ordinal Data** ``` method = 'vglmAdjCat' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Bayesian Generalized Linear Model** ``` method = 'bayesglm' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `arm` **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available. Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. 
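A hedged sketch of how the `mstop` and `prune` parameters described in the `glmboost` note might be supplied to `train()`. The `mtcars` data, the `mstop` grid, and the `"yes"`/`"no"` coding of `prune` are assumptions for illustration.

```
library(caret)
# Hedged sketch: boosted GLM with and without AIC-based pruning.
# mtcars, the mstop grid, and the "yes"/"no" coding of prune are assumptions.
set.seed(1)
glmboost_fit <- train(
  mpg ~ ., data = mtcars,
  method = "glmboost",
  tuneGrid = expand.grid(mstop = c(50, 100, 150),
                         prune = c("yes", "no")),
  trControl = trainControl(method = "cv", number = 5)
)
glmboost_fit$bestTune
```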
**Continuation Ratio Model for Ordinal Data** ``` method = 'vglmContRatio' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Cumulative Probability Model for Ordinal Data** ``` method = 'vglmCumulative' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Diagonal Discriminant Analysis** ``` method = 'dda' ``` Type: Classification Tuning parameters: * `model` (Model) * `shrinkage` (Shrinkage Type) Required packages: `sparsediscrim` **Ensembles of Generalized Linear Models** ``` method = 'randomGLM' ``` Type: Regression, Classification Tuning parameters: * `maxInteractionOrder` (Interaction Order) Required packages: `randomGLM` Notes: Unlike other packages used by `train`, the `randomGLM` package is fully loaded when this model is used. **Factor\-Based Linear Discriminant Analysis** ``` method = 'RFlda' ``` Type: Classification Tuning parameters: * `q` (\# Factors) Required packages: `HiDimDA` **Gaussian Process** ``` method = 'gaussprLinear' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `kernlab` **Generalized Linear Model** ``` method = 'glm' ``` Type: Regression, Classification No tuning parameters for this model A model\-specific variable importance metric is available. **Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **Generalized Partial Least Squares** ``` method = 'gpls' ``` Type: Classification Tuning parameters: * `K.prov` (\#Components) Required packages: `gpls` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. 
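Because `glmnet` recurs under several tags, one illustrative call may be useful. This sketch tunes `alpha` and `lambda` jointly; the `mtcars` data and the grid values are assumptions, not recommendations.

```
library(caret)
# Hedged sketch: tune the elastic-net mixing (alpha) and penalty (lambda).
# The mtcars data and the grid values are illustrative assumptions.
set.seed(1)
glmnet_fit <- train(
  mpg ~ ., data = mtcars,
  method = "glmnet",
  preProcess = c("center", "scale"),
  tuneGrid = expand.grid(alpha  = c(0, 0.5, 1),
                         lambda = 10^seq(-3, 0, length.out = 10)),
  trControl = trainControl(method = "cv", number = 5)
)
glmnet_fit$bestTune
```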
**Heteroscedastic Discriminant Analysis** ``` method = 'hda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `newdim` (Dimension of the Discriminative Subspace) Required packages: `hda` **High Dimensional Discriminant Analysis** ``` method = 'hdda' ``` Type: Classification Tuning parameters: * `threshold` (Threshold) * `model` (Model Type) Required packages: `HDclassif` **High\-Dimensional Regularized Discriminant Analysis** ``` method = 'hdrda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `shrinkage_type` (Shrinkage Type) Required packages: `sparsediscrim` **L2 Regularized Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights2' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) * `weight` (Class Weight) Required packages: `LiblineaR` **L2 Regularized Support Vector Machine (dual) with Linear Kernel** ``` method = 'svmLinear3' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) Required packages: `LiblineaR` **Least Squares Support Vector Machine** ``` method = 'lssvmLinear' ``` Type: Classification Tuning parameters: * `tau` (Regularization Parameter) Required packages: `kernlab` **Linear Discriminant Analysis** ``` method = 'lda' ``` Type: Classification No tuning parameters for this model Required packages: `MASS` **Linear Discriminant Analysis** ``` method = 'lda2' ``` Type: Classification Tuning parameters: * `dimen` (\#Discriminant Functions) Required packages: `MASS` **Linear Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepLDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Localized Linear Discriminant Analysis** ``` method = 'loclda' ``` Type: Classification Tuning parameters: * `k` (\#Nearest Neighbors) Required packages: `klaR` **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` **Logistic Model Trees** ``` method = 'LMT' ``` Type: Classification Tuning parameters: * `iter` (\# Iteratons) Required packages: `RWeka` **Maximum Uncertainty Linear Discriminant Analysis** ``` method = 'Mlda' ``` Type: Classification No tuning parameters for this model Required packages: `HiDimDA` **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages: `msaenet` A model\-specific variable importance metric is available. **Nearest Shrunken Centroids** ``` method = 'pam' ``` Type: Classification Tuning parameters: * `threshold` (Shrinkage Threshold) Required packages: `pamr` A model\-specific variable importance metric is available. 
**Ordered Logistic or Probit Regression** ``` method = 'polr' ``` Type: Classification Tuning parameters: * `method` (parameter) Required packages: `MASS` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'kernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'pls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'simpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'widekernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Penalized Linear Discriminant Analysis** ``` method = 'PenalizedLDA' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `K` (\#Discriminant Functions) Required packages: `penalizedLDA`, `plyr` **Penalized Logistic Regression** ``` method = 'plr' ``` Type: Classification Tuning parameters: * `lambda` (L2 Penalty) * `cp` (Complexity Parameter) Required packages: `stepPlr` **Penalized Multinomial Regression** ``` method = 'multinom' ``` Type: Classification Tuning parameters: * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Regularized Discriminant Analysis** ``` method = 'rda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) Required packages: `klaR` **Regularized Linear Discriminant Analysis** ``` method = 'rlda' ``` Type: Classification Tuning parameters: * `estimator` (Regularization Method) Required packages: `sparsediscrim` **Regularized Logistic Regression** ``` method = 'regLogistic' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `loss` (Loss Function) * `epsilon` (Tolerance) Required packages: `LiblineaR` **Robust Linear Discriminant Analysis** ``` method = 'Linda' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Robust Regularized Linear Discriminant Analysis** ``` method = 'rrlda' ``` Type: Classification Tuning parameters: * `lambda` (Penalty Parameter) * `hp` (Robustness Parameter) * `penalty` (Penalty Type) Required packages: `rrlda` Notes: Unlike other packages used by `train`, the `rrlda` package is fully loaded when this model is used. **Robust SIMCA** ``` method = 'RSimca' ``` Type: Classification No tuning parameters for this model Required packages: `rrcovHD` Notes: Unlike other packages used by `train`, the `rrcovHD` package is fully loaded when this model is used. 
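For the partial least squares entries above, a minimal classification sketch; the `iris` data and the `ncomp` range are illustrative assumptions.

```
library(caret)
# Hedged sketch: PLS classification, tuning only ncomp.
# The iris data and the 1:4 component range are illustrative assumptions.
set.seed(1)
pls_fit <- train(
  Species ~ ., data = iris,
  method = "pls",
  preProcess = c("center", "scale"),
  tuneGrid = data.frame(ncomp = 1:4),
  trControl = trainControl(method = "cv", number = 5)
)
pls_fit$bestTune
```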
**Shrinkage Discriminant Analysis** ``` method = 'sda' ``` Type: Classification Tuning parameters: * `diagonal` (Diagonalize) * `lambda` (shrinkage) Required packages: `sda` **Sparse Distance Weighted Discrimination** ``` method = 'sdwd' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `sdwd` A model\-specific variable importance metric is available. **Sparse Linear Discriminant Analysis** ``` method = 'sparseLDA' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) Required packages: `sparseLDA` **Sparse Partial Least Squares** ``` method = 'spls' ``` Type: Regression, Classification Tuning parameters: * `K` (\#Components) * `eta` (Threshold) * `kappa` (Kappa) Required packages: `spls` **Stabilized Linear Discriminant Analysis** ``` method = 'slda' ``` Type: Classification No tuning parameters for this model Required packages: `ipred` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear2' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) Required packages: `e1071` ### 7\.0\.22 Linear Regression (back to [contents](train-models-by-tag.html#top)) **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Bayesian Ridge Regression** ``` method = 'bridge' ``` Type: Regression No tuning parameters for this model Required packages: `monomvn` **Bayesian Ridge Regression (Model Averaged)** ``` method = 'blassoAveraged' ``` Type: Regression No tuning parameters for this model Required packages: `monomvn` Notes: This model makes predictions by averaging the predictions based on the posterior estimates of the regression coefficients. While it is possible that some of these posterior estimates are zero for non\-informative predictors, the final predicted value may be a function of many (or even all) predictors. **Boosted Linear Model** ``` method = 'BstLm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **Elasticnet** ``` method = 'enet' ``` Type: Regression Tuning parameters: * `fraction` (Fraction of Full Solution) * `lambda` (Weight Decay) Required packages: `elasticnet` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. 
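As an example of a rule-based regression model from this group, here is a hedged `cubist` sketch; the `committees`/`neighbors` values and the `mtcars` data are assumptions made only so the example is self-contained.

```
library(caret)
# Hedged sketch: Cubist over a small committees/neighbors grid.
# mtcars and the grid values are illustrative assumptions.
set.seed(1)
cubist_fit <- train(
  mpg ~ ., data = mtcars,
  method = "cubist",
  tuneGrid = expand.grid(committees = c(1, 10, 50),
                         neighbors  = c(0, 5, 9)),
  trControl = trainControl(method = "cv", number = 5)
)
cubist_fit$bestTune
```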
**Independent Component Regression** ``` method = 'icr' ``` Type: Regression Tuning parameters: * `n.comp` (\#Components) Required packages: `fastICA` **L2 Regularized Support Vector Machine (dual) with Linear Kernel** ``` method = 'svmLinear3' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) Required packages: `LiblineaR` **Least Angle Regression** ``` method = 'lars' ``` Type: Regression Tuning parameters: * `fraction` (Fraction) Required packages: `lars` **Least Angle Regression** ``` method = 'lars2' ``` Type: Regression Tuning parameters: * `step` (\#Steps) Required packages: `lars` **Linear Regression** ``` method = 'lm' ``` Type: Regression Tuning parameters: * `intercept` (intercept) A model\-specific variable importance metric is available. **Linear Regression with Backwards Selection** ``` method = 'leapBackward' ``` Type: Regression Tuning parameters: * `nvmax` (Maximum Number of Predictors) Required packages: `leaps` **Linear Regression with Forward Selection** ``` method = 'leapForward' ``` Type: Regression Tuning parameters: * `nvmax` (Maximum Number of Predictors) Required packages: `leaps` **Linear Regression with Stepwise Selection** ``` method = 'leapSeq' ``` Type: Regression Tuning parameters: * `nvmax` (Maximum Number of Predictors) Required packages: `leaps` **Linear Regression with Stepwise Selection** ``` method = 'lmStepAIC' ``` Type: Regression No tuning parameters for this model Required packages: `MASS` **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` **Model Rules** ``` method = 'M5Rules' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) Required packages: `RWeka` **Model Tree** ``` method = 'M5' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) * `rules` (Rules) Required packages: `RWeka` **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages: `msaenet` A model\-specific variable importance metric is available. **Non\-Convex Penalized Quantile Regression** ``` method = 'rqnc' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) * `penalty` (Penalty Type) Required packages: `rqPen` **Non\-Negative Least Squares** ``` method = 'nnls' ``` Type: Regression No tuning parameters for this model Required packages: `nnls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'kernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'pls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'simpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'widekernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. 
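A brief sketch of subset selection via `leapBackward`, which tunes only the maximum model size; the `nvmax` range and the `mtcars` data are illustrative assumptions.

```
library(caret)
# Hedged sketch: backwards subset selection, tuning the maximum model size.
# mtcars and the nvmax range are illustrative assumptions.
set.seed(1)
leap_fit <- train(
  mpg ~ ., data = mtcars,
  method = "leapBackward",
  tuneGrid = data.frame(nvmax = 2:8),
  trControl = trainControl(method = "cv", number = 5)
)
leap_fit$bestTune
```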
**Penalized Linear Regression** ``` method = 'penalized' ``` Type: Regression Tuning parameters: * `lambda1` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `penalized` **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Principal Component Analysis** ``` method = 'pcr' ``` Type: Regression Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` **Quantile Regression with LASSO penalty** ``` method = 'rqlasso' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) Required packages: `rqPen` **Relaxed Lasso** ``` method = 'relaxo' ``` Type: Regression Tuning parameters: * `lambda` (Penalty Parameter) * `phi` (Relaxation Parameter) Required packages: `relaxo`, `plyr` **Relevance Vector Machines with Linear Kernel** ``` method = 'rvmLinear' ``` Type: Regression No tuning parameters for this model Required packages: `kernlab` **Ridge Regression** ``` method = 'ridge' ``` Type: Regression Tuning parameters: * `lambda` (Weight Decay) Required packages: `elasticnet` **Ridge Regression with Variable Selection** ``` method = 'foba' ``` Type: Regression Tuning parameters: * `k` (\#Variables Retained) * `lambda` (L2 Penalty) Required packages: `foba` **Robust Linear Model** ``` method = 'rlm' ``` Type: Regression Tuning parameters: * `intercept` (intercept) * `psi` (psi) Required packages: `MASS` A model\-specific variable importance metric is available. **Sparse Partial Least Squares** ``` method = 'spls' ``` Type: Regression, Classification Tuning parameters: * `K` (\#Components) * `eta` (Threshold) * `kappa` (Kappa) Required packages: `spls` **Spike and Slab Regression** ``` method = 'spikeslab' ``` Type: Regression Tuning parameters: * `vars` (Variables Retained) Required packages: `spikeslab`, `plyr` Notes: Unlike other packages used by `train`, the `spikeslab` package is fully loaded when this model is used. **Supervised Principal Component Analysis** ``` method = 'superpc' ``` Type: Regression Tuning parameters: * `threshold` (Threshold) * `n.components` (\#Components) Required packages: `superpc` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear2' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) Required packages: `e1071` **The Bayesian lasso** ``` method = 'blasso' ``` Type: Regression Tuning parameters: * `sparsity` (Sparsity Threshold) Required packages: `monomvn` Notes: This model creates predictions using the mean of the posterior distributions but sets some parameters specifically to zero based on the tuning parameter `sparsity`. For example, when `sparsity = .5`, only coefficients where at least half the posterior estimates are nonzero are used. 
**The lasso** ``` method = 'lasso' ``` Type: Regression Tuning parameters: * `fraction` (Fraction of Full Solution) Required packages: `elasticnet` ### 7\.0\.23 Logic Regression (back to [contents](train-models-by-tag.html#top)) **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` ### 7\.0\.24 Logistic Regression (back to [contents](train-models-by-tag.html#top)) **Adjacent Categories Probability Model for Ordinal Data** ``` method = 'vglmAdjCat' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Bayesian Generalized Linear Model** ``` method = 'bayesglm' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `arm` **Boosted Logistic Regression** ``` method = 'LogitBoost' ``` Type: Classification Tuning parameters: * `nIter` (\# Boosting Iterations) Required packages: `caTools` **Continuation Ratio Model for Ordinal Data** ``` method = 'vglmContRatio' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Cumulative Probability Model for Ordinal Data** ``` method = 'vglmCumulative' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Generalized Partial Least Squares** ``` method = 'gpls' ``` Type: Classification Tuning parameters: * `K.prov` (\#Components) Required packages: `gpls` **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` **Logistic Model Trees** ``` method = 'LMT' ``` Type: Classification Tuning parameters: * `iter` (\# Iteratons) Required packages: `RWeka` **Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Ordered Logistic or Probit Regression** ``` method = 'polr' ``` Type: Classification Tuning parameters: * `method` (parameter) Required packages: `MASS` A model\-specific variable importance metric is available. **Penalized Logistic Regression** ``` method = 'plr' ``` Type: Classification Tuning parameters: * `lambda` (L2 Penalty) * `cp` (Complexity Parameter) Required packages: `stepPlr` **Penalized Multinomial Regression** ``` method = 'multinom' ``` Type: Classification Tuning parameters: * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. 
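Before moving on, a hedged sketch for `LogitBoost` from the list above; the two-class subset of `iris` and the `nIter` grid are assumptions made only so the example is self-contained.

```
library(caret)
# Hedged sketch: boosted logistic regression on a two-class problem.
# The two-class subset of iris and the nIter grid are illustrative assumptions.
iris2 <- droplevels(subset(iris, Species != "setosa"))
set.seed(1)
lb_fit <- train(
  Species ~ ., data = iris2,
  method = "LogitBoost",
  tuneGrid = data.frame(nIter = seq(10, 50, by = 10)),
  trControl = trainControl(method = "cv", number = 5)
)
lb_fit$bestTune
```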
### 7\.0\.25 Mixture Model (back to [contents](train-models-by-tag.html#top)) **Adaptive Mixture Discriminant Analysis** ``` method = 'amdai' ``` Type: Classification Tuning parameters: * `model` (Model Type) Required packages: `adaptDA` **Mixture Discriminant Analysis** ``` method = 'mda' ``` Type: Classification Tuning parameters: * `subclasses` (\#Subclasses Per Class) Required packages: `mda` **Robust Mixture Discriminant Analysis** ``` method = 'rmda' ``` Type: Classification Tuning parameters: * `K` (\#Subclasses Per Class) * `model` (Model) Required packages: `robustDA` **Sparse Mixture Discriminant Analysis** ``` method = 'smda' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) * `R` (\# Subclasses) Required packages: `sparseLDA` ### 7\.0\.26 Model Tree (back to [contents](train-models-by-tag.html#top)) **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **Logistic Model Trees** ``` method = 'LMT' ``` Type: Classification Tuning parameters: * `iter` (\# Iteratons) Required packages: `RWeka` **Model Rules** ``` method = 'M5Rules' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) Required packages: `RWeka` **Model Tree** ``` method = 'M5' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) * `rules` (Rules) Required packages: `RWeka` ### 7\.0\.27 Multivariate Adaptive Regression Splines (back to [contents](train-models-by-tag.html#top)) **Bagged Flexible Discriminant Analysis** ``` method = 'bagFDA' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS** ``` method = 'bagEarth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS using gCV Pruning** ``` method = 'bagEarthGCV' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Flexible Discriminant Analysis** ``` method = 'fda' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Multivariate Adaptive Regression Spline** ``` method = 'earth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. 
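A minimal sketch for the `earth` entry above, tuning the number of retained terms and the product degree; the grid and the `mtcars` data are assumptions.

```
library(caret)
# Hedged sketch: MARS, tuning the number of retained terms and product degree.
# mtcars and the grid values are illustrative assumptions.
set.seed(1)
mars_fit <- train(
  mpg ~ ., data = mtcars,
  method = "earth",
  tuneGrid = expand.grid(nprune = 2:8, degree = 1:2),
  trControl = trainControl(method = "cv", number = 5)
)
mars_fit$bestTune
```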
**Multivariate Adaptive Regression Splines** ``` method = 'gcvEarth' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. ### 7\.0\.28 Neural Network (back to [contents](train-models-by-tag.html#top)) **Bayesian Regularized Neural Networks** ``` method = 'brnn' ``` Type: Regression Tuning parameters: * `neurons` (\# Neurons) Required packages: `brnn` **Extreme Learning Machine** ``` method = 'elm' ``` Type: Classification, Regression Tuning parameters: * `nhid` (\#Hidden Units) * `actfun` (Activation Function) Required packages: `elmNN` Notes: The package is no longer on CRAN but can be installed from the archive at [https://cran.r\-project.org/src/contrib/Archive/elmNN/](https://cran.r-project.org/src/contrib/Archive/elmNN/) **Model Averaged Neural Network** ``` method = 'avNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) * `bag` (Bagging) Required packages: `nnet` **Monotone Multi\-Layer Perceptron Neural Network** ``` method = 'monmlp' ``` Type: Classification, Regression Tuning parameters: * `hidden1` (\#Hidden Units) * `n.ensemble` (\#Models) Required packages: `monmlp` **Multi\-Layer Perceptron** ``` method = 'mlp' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) Required packages: `RSNNS` **Multi\-Layer Perceptron** ``` method = 'mlpWeightDecay' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `RSNNS` **Multi\-Layer Perceptron, multiple layers** ``` method = 'mlpWeightDecayML' ``` Type: Regression, Classification Tuning parameters: * `layer1` (\#Hidden Units layer1\) * `layer2` (\#Hidden Units layer2\) * `layer3` (\#Hidden Units layer3\) * `decay` (Weight Decay) Required packages: `RSNNS` **Multi\-Layer Perceptron, with multiple layers** ``` method = 'mlpML' ``` Type: Regression, Classification Tuning parameters: * `layer1` (\#Hidden Units layer1\) * `layer2` (\#Hidden Units layer2\) * `layer3` (\#Hidden Units layer3\) Required packages: `RSNNS` **Multilayer Perceptron Network by Stochastic Gradient Descent** ``` method = 'mlpSGD' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `l2reg` (L2 Regularization) * `lambda` (RMSE Gradient Scaling) * `learn_rate` (Learning Rate) * `momentum` (Momentum) * `gamma` (Learning Rate Decay) * `minibatchsz` (Batch Size) * `repeats` (\#Models) Required packages: `FCNN4R`, `plyr` A model\-specific variable importance metric is available. **Multilayer Perceptron Network with Dropout** ``` method = 'mlpKerasDropout' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `dropout` (Dropout Rate) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R session. When predicting, the code will temporarily unsearalize the object. To make the predictions more efficient, the user might want to use `keras::unsearlize_model(object$finalModel$object)` in the current R session so that that operation is only done once. 
Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Dropout** ``` method = 'mlpKerasDropoutCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `dropout` (Dropout Rate) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that this operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecay' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that this operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecayCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that this operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Neural Network** ``` method = 'mxnet' ``` Type: Classification, Regression Tuning parameters: * `layer1` (\#Hidden Units in Layer 1\) * `layer2` (\#Hidden Units in Layer 2\) * `layer3` (\#Hidden Units in Layer 3\) * `learning.rate` (Learning Rate) * `momentum` (Momentum) * `dropout` (Dropout Rate) * `activation` (Activation Function) Required packages: `mxnet` Notes: The `mxnet` package is not yet on CRAN. 

See <http://mxnet.io> for installation instructions. **Neural Network** ``` method = 'mxnetAdam' ``` Type: Classification, Regression Tuning parameters: * `layer1` (\#Hidden Units in Layer 1\) * `layer2` (\#Hidden Units in Layer 2\) * `layer3` (\#Hidden Units in Layer 3\) * `dropout` (Dropout Rate) * `beta1` (beta1\) * `beta2` (beta2\) * `learningrate` (Learning Rate) * `activation` (Activation Function) Required packages: `mxnet` Notes: The `mxnet` package is not yet on CRAN. See <http://mxnet.io> for installation instructions. Users are strongly advised to define `num.round` themselves. **Neural Network** ``` method = 'neuralnet' ``` Type: Regression Tuning parameters: * `layer1` (\#Hidden Units in Layer 1\) * `layer2` (\#Hidden Units in Layer 2\) * `layer3` (\#Hidden Units in Layer 3\) Required packages: `neuralnet` **Neural Network** ``` method = 'nnet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Neural Networks with Feature Extraction** ``` method = 'pcaNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` **Penalized Multinomial Regression** ``` method = 'multinom' ``` Type: Classification Tuning parameters: * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Quantile Regression Neural Network** ``` method = 'qrnn' ``` Type: Regression Tuning parameters: * `n.hidden` (\#Hidden Units) * `penalty` ( Weight Decay) * `bag` (Bagged Models?) Required packages: `qrnn` **Radial Basis Function Network** ``` method = 'rbf' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) Required packages: `RSNNS` **Radial Basis Function Network** ``` method = 'rbfDDA' ``` Type: Regression, Classification Tuning parameters: * `negativeThreshold` (Activation Limit for Conflicting Classes) Required packages: `RSNNS` **Stacked AutoEncoder Deep Neural Network** ``` method = 'dnn' ``` Type: Classification, Regression Tuning parameters: * `layer1` (Hidden Layer 1\) * `layer2` (Hidden Layer 2\) * `layer3` (Hidden Layer 3\) * `hidden_dropout` (Hidden Dropouts) * `visible_dropout` (Visible Dropout) Required packages: `deepnet` ### 7\.0\.29 Oblique Tree (back to [contents](train-models-by-tag.html#top)) **Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. 
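Returning to the `nnet` entry listed in the neural network group above, a short hedged sketch; the size/decay grid, the `iris` data, and passing `trace = FALSE` through to `nnet()` are illustrative choices.

```
library(caret)
# Hedged sketch: single-hidden-layer network, tuning size and decay.
# iris, the grid values, and trace = FALSE (passed through to nnet()) are
# illustrative assumptions.
set.seed(1)
nnet_fit <- train(
  Species ~ ., data = iris,
  method = "nnet",
  preProcess = c("center", "scale"),
  tuneGrid = expand.grid(size = c(3, 5), decay = c(0, 0.01, 0.1)),
  trControl = trainControl(method = "cv", number = 5),
  trace = FALSE
)
nnet_fit$bestTune
```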
### 7\.0\.30 Ordinal Outcomes (back to [contents](train-models-by-tag.html#top)) **Adjacent Categories Probability Model for Ordinal Data** ``` method = 'vglmAdjCat' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **CART or Ordinal Responses** ``` method = 'rpartScore' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `split` (Split Function) * `prune` (Pruning Measure) Required packages: `rpartScore`, `plyr` A model\-specific variable importance metric is available. **Continuation Ratio Model for Ordinal Data** ``` method = 'vglmContRatio' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Cumulative Probability Model for Ordinal Data** ``` method = 'vglmCumulative' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Ordered Logistic or Probit Regression** ``` method = 'polr' ``` Type: Classification Tuning parameters: * `method` (parameter) Required packages: `MASS` A model\-specific variable importance metric is available. **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Random Forest** ``` method = 'ordinalRF' ``` Type: Classification Tuning parameters: * `nsets` (\# score sets tried prior to the approximation) * `ntreeperdiv` (\# of trees (small RFs)) * `ntreefinal` (\# of trees (final RF)) Required packages: `e1071`, `ranger`, `dplyr`, `ordinalForest` A model\-specific variable importance metric is available. ### 7\.0\.31 Partial Least Squares (back to [contents](train-models-by-tag.html#top)) **Generalized Partial Least Squares** ``` method = 'gpls' ``` Type: Classification Tuning parameters: * `K.prov` (\#Components) Required packages: `gpls` **Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Partial Least Squares** ``` method = 'kernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'pls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'simpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'widekernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. 
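A sketch for the `polr` entry in the ordinal outcomes group above; the ordered outcome manufactured from `mtcars`, the cut points, the predictors, and the two `method` values in the grid are all assumptions for illustration.

```
library(caret)
# Hedged sketch: ordered logistic/probit regression on a manufactured
# ordinal outcome. The cut points, predictors, and method values are
# illustrative assumptions.
dat <- mtcars
dat$mpg_cat <- cut(dat$mpg,
                   breaks = quantile(dat$mpg, c(0, 1/3, 2/3, 1)),
                   labels = c("low", "mid", "high"),
                   include.lowest = TRUE, ordered_result = TRUE)
set.seed(1)
polr_fit <- train(
  mpg_cat ~ wt + hp + disp, data = dat,
  method = "polr",
  tuneGrid = data.frame(method = c("logistic", "probit")),
  trControl = trainControl(method = "cv", number = 5)
)
polr_fit$bestTune
```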
**Partial Least Squares Generalized Linear Models** ``` method = 'plsRglm' ``` Type: Classification, Regression Tuning parameters: * `nt` (\#PLS Components) * `alpha.pvals.expli` (p\-Value threshold) Required packages: `plsRglm` Notes: Unlike other packages used by `train`, the `plsRglm` package is fully loaded when this model is used. **Sparse Partial Least Squares** ``` method = 'spls' ``` Type: Regression, Classification Tuning parameters: * `K` (\#Components) * `eta` (Threshold) * `kappa` (Kappa) Required packages: `spls` ### 7\.0\.32 Patient Rule Induction Method (back to [contents](train-models-by-tag.html#top)) **Patient Rule Induction Method** ``` method = 'PRIM' ``` Type: Classification Tuning parameters: * `peel.alpha` (peeling quantile) * `paste.alpha` (pasting quantile) * `mass.min` (minimum mass) Required packages: `supervisedPRIM` ### 7\.0\.33 Polynomial Model (back to [contents](train-models-by-tag.html#top)) **Diagonal Discriminant Analysis** ``` method = 'dda' ``` Type: Classification Tuning parameters: * `model` (Model) * `shrinkage` (Shrinkage Type) Required packages: `sparsediscrim` **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Gaussian Process with Polynomial Kernel** ``` method = 'gaussprPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kernlab` **High\-Dimensional Regularized Discriminant Analysis** ``` method = 'hdrda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `shrinkage_type` (Shrinkage Type) Required packages: `sparsediscrim` **Least Squares Support Vector Machine with Polynomial Kernel** ``` method = 'lssvmPoly' ``` Type: Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `tau` (Regularization Parameter) Required packages: `kernlab` **Penalized Discriminant Analysis** ``` method = 'pda' ``` Type: Classification Tuning parameters: * `lambda` (Shrinkage Penalty Coefficient) Required packages: `mda` **Penalized Discriminant Analysis** ``` method = 'pda2' ``` Type: Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `mda` **Polynomial Kernel Regularized Least Squares** ``` method = 'krlsPoly' ``` Type: Regression Tuning parameters: * `lambda` (Regularization Parameter) * `degree` (Polynomial Degree) Required packages: `KRLS` **Quadratic Discriminant Analysis** ``` method = 'qda' ``` Type: Classification No tuning parameters for this model Required packages: `MASS` **Quadratic Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepQDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Regularized Discriminant Analysis** ``` method = 'rda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) Required packages: `klaR` **Regularized Linear Discriminant Analysis** ``` method = 'rlda' ``` Type: Classification Tuning parameters: * `estimator` (Regularization Method) Required packages: `sparsediscrim` **Relevance Vector Machines with Polynomial Kernel** ``` method = 'rvmPoly' ``` Type: Regression Tuning parameters: * `scale` (Scale) * `degree` (Polynomial Degree) Required packages: `kernlab` **Robust Quadratic Discriminant Analysis** ``` 
method = 'QdaCov' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Support Vector Machines with Polynomial Kernel** ``` method = 'svmPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `C` (Cost) Required packages: `kernlab` ### 7\.0\.34 Prototype Models (back to [contents](train-models-by-tag.html#top)) **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **Greedy Prototype Selection** ``` method = 'protoclass' ``` Type: Classification Tuning parameters: * `eps` (Ball Size) * `Minkowski` (Distance Order) Required packages: `proxy`, `protoclass` **k\-Nearest Neighbors** ``` method = 'kknn' ``` Type: Regression, Classification Tuning parameters: * `kmax` (Max. \#Neighbors) * `distance` (Distance) * `kernel` (Kernel) Required packages: `kknn` **k\-Nearest Neighbors** ``` method = 'knn' ``` Type: Classification, Regression Tuning parameters: * `k` (\#Neighbors) **Learning Vector Quantization** ``` method = 'lvq' ``` Type: Classification Tuning parameters: * `size` (Codebook Size) * `k` (\#Prototypes) Required packages: `class` **Nearest Shrunken Centroids** ``` method = 'pam' ``` Type: Classification Tuning parameters: * `threshold` (Shrinkage Threshold) Required packages: `pamr` A model\-specific variable importance metric is available. **Optimal Weighted Nearest Neighbor Classifier** ``` method = 'ownn' ``` Type: Classification Tuning parameters: * `K` (\#Neighbors) Required packages: `snn` **Stabilized Nearest Neighbor Classifier** ``` method = 'snn' ``` Type: Classification Tuning parameters: * `lambda` (Stabilization Parameter) Required packages: `snn` ### 7\.0\.35 Quantile Regression (back to [contents](train-models-by-tag.html#top)) **Non\-Convex Penalized Quantile Regression** ``` method = 'rqnc' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) * `penalty` (Penalty Type) Required packages: `rqPen` **Quantile Random Forest** ``` method = 'qrf' ``` Type: Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `quantregForest` **Quantile Regression Neural Network** ``` method = 'qrnn' ``` Type: Regression Tuning parameters: * `n.hidden` (\#Hidden Units) * `penalty` ( Weight Decay) * `bag` (Bagged Models?) 
Required packages: `qrnn` **Quantile Regression with LASSO penalty** ``` method = 'rqlasso' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) Required packages: `rqPen` ### 7\.0\.36 Radial Basis Function (back to [contents](train-models-by-tag.html#top)) **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **Gaussian Process with Radial Basis Function Kernel** ``` method = 'gaussprRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **Least Squares Support Vector Machine with Radial Basis Function Kernel** ``` method = 'lssvmRadial' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `tau` (Regularization Parameter) Required packages: `kernlab` **Radial Basis Function Kernel Regularized Least Squares** ``` method = 'krlsRadial' ``` Type: Regression Tuning parameters: * `lambda` (Regularization Parameter) * `sigma` (Sigma) Required packages: `KRLS`, `kernlab` **Radial Basis Function Network** ``` method = 'rbf' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) Required packages: `RSNNS` **Radial Basis Function Network** ``` method = 'rbfDDA' ``` Type: Regression, Classification Tuning parameters: * `negativeThreshold` (Activation Limit for Conflicting Classes) Required packages: `RSNNS` **Relevance Vector Machines with Radial Basis Function Kernel** ``` method = 'rvmRadial' ``` Type: Regression Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **Support Vector Machines with Class Weights** ``` method = 'svmRadialWeights' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) * `Weight` (Weight) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialCost' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialSigma' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` Notes: This SVM model tunes over the cost parameter and the RBF kernel parameter sigma. In the latter case, using `tuneLength` will, at most, evaluate six values of the kernel parameter. This enables a broad search over the cost parameter and a relatively narrow search over `sigma` **Variational Bayesian Multinomial Probit Regression** ``` method = 'vbmpRadial' ``` Type: Classification Tuning parameters: * `estimateTheta` (Theta Estimated) Required packages: `vbmp` ### 7\.0\.37 Random Forest (back to [contents](train-models-by-tag.html#top)) **Conditional Inference Random Forest** ``` method = 'cforest' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `party` A model\-specific variable importance metric is available. 
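For the RBF-kernel SVMs listed above, a minimal sketch using `tuneLength` so that `train()` chooses the `sigma`/`C` grid; the `iris` data and `tuneLength = 5` are assumptions.

```
library(caret)
# Hedged sketch: RBF-kernel SVM; tuneLength lets train() pick the sigma/C grid.
# iris and tuneLength = 5 are illustrative assumptions.
set.seed(1)
svm_fit <- train(
  Species ~ ., data = iris,
  method = "svmRadial",
  preProcess = c("center", "scale"),
  tuneLength = 5,
  trControl = trainControl(method = "cv", number = 5)
)
svm_fit$bestTune
```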
**Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Parallel Random Forest** ``` method = 'parRF' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `e1071`, `randomForest`, `foreach`, `import` A model\-specific variable importance metric is available. **Quantile Random Forest** ``` method = 'qrf' ``` Type: Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `quantregForest` **Random Ferns** ``` method = 'rFerns' ``` Type: Classification Tuning parameters: * `depth` (Fern Depth) Required packages: `rFerns` **Random Forest** ``` method = 'ordinalRF' ``` Type: Classification Tuning parameters: * `nsets` (\# score sets tried prior to the approximation) * `ntreeperdiv` (\# of trees (small RFs)) * `ntreefinal` (\# of trees (final RF)) Required packages: `e1071`, `ranger`, `dplyr`, `ordinalForest` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'ranger' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `splitrule` (Splitting Rule) * `min.node.size` (Minimal Node Size) Required packages: `e1071`, `ranger`, `dplyr` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'Rborist' ``` Type: Classification, Regression Tuning parameters: * `predFixed` (\#Randomly Selected Predictors) * `minNode` (Minimal Node Size) Required packages: `Rborist` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'rf' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `randomForest` A model\-specific variable importance metric is available. **Random Forest by Randomization** ``` method = 'extraTrees' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\# Randomly Selected Predictors) * `numRandomCuts` (\# Random Cuts) Required packages: `extraTrees` **Random Forest Rule\-Based Model** ``` method = 'rfRules' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `maxdepth` (Maximum Rule Depth) Required packages: `randomForest`, `inTrees`, `plyr` A model\-specific variable importance metric is available. 
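A hedged sketch for the `ranger` random forest, covering its three listed tuning parameters; the `iris` data and the grid values are assumptions.

```
library(caret)
# Hedged sketch: ranger random forest over its three listed tuning parameters.
# iris and the grid values are illustrative assumptions.
set.seed(1)
ranger_fit <- train(
  Species ~ ., data = iris,
  method = "ranger",
  tuneGrid = expand.grid(mtry = c(2, 3, 4),
                         splitrule = "gini",
                         min.node.size = c(1, 5)),
  trControl = trainControl(method = "cv", number = 5)
)
ranger_fit$bestTune
```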
**Regularized Random Forest** ``` method = 'RRF' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) * `coefImp` (Importance Coefficient) Required packages: `randomForest`, `RRF` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRFglobal' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) Required packages: `RRF` A model\-specific variable importance metric is available. **Weighted Subspace Random Forest** ``` method = 'wsrf' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `wsrf` ### 7\.0\.38 Regularization (back to [contents](train-models-by-tag.html#top)) **Bayesian Regularized Neural Networks** ``` method = 'brnn' ``` Type: Regression Tuning parameters: * `neurons` (\# Neurons) Required packages: `brnn` **Diagonal Discriminant Analysis** ``` method = 'dda' ``` Type: Classification Tuning parameters: * `model` (Model) * `shrinkage` (Shrinkage Type) Required packages: `sparsediscrim` **Heteroscedastic Discriminant Analysis** ``` method = 'hda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `newdim` (Dimension of the Discriminative Subspace) Required packages: `hda` **High\-Dimensional Regularized Discriminant Analysis** ``` method = 'hdrda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `shrinkage_type` (Shrinkage Type) Required packages: `sparsediscrim` **Regularized Discriminant Analysis** ``` method = 'rda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) Required packages: `klaR` **Regularized Linear Discriminant Analysis** ``` method = 'rlda' ``` Type: Classification Tuning parameters: * `estimator` (Regularization Method) Required packages: `sparsediscrim` **Regularized Random Forest** ``` method = 'RRF' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) * `coefImp` (Importance Coefficient) Required packages: `randomForest`, `RRF` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRFglobal' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) Required packages: `RRF` A model\-specific variable importance metric is available. **Robust Regularized Linear Discriminant Analysis** ``` method = 'rrlda' ``` Type: Classification Tuning parameters: * `lambda` (Penalty Parameter) * `hp` (Robustness Parameter) * `penalty` (Penalty Type) Required packages: `rrlda` Notes: Unlike other packages used by `train`, the `rrlda` package is fully loaded when this model is used. 
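Several entries above note that a model\-specific variable importance metric is available. A minimal sketch of retrieving it from a fitted `train` object; the choice of `method = 'RRF'` and the `iris` data are illustrative assumptions.

```
library(caret)

# Fit one of the models above that supplies its own importance metric,
# then extract it with varImp().
set.seed(1)
rrf_fit <- train(Species ~ ., data = iris, method = "RRF",
                 trControl = trainControl(method = "cv", number = 3))
varImp(rrf_fit)   # model-specific importance scores
```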
**Shrinkage Discriminant Analysis** ``` method = 'sda' ``` Type: Classification Tuning parameters: * `diagonal` (Diagonalize) * `lambda` (shrinkage) Required packages: `sda` ### 7\.0\.39 Relevance Vector Machines (back to [contents](train-models-by-tag.html#top)) **Relevance Vector Machines with Linear Kernel** ``` method = 'rvmLinear' ``` Type: Regression No tuning parameters for this model Required packages: `kernlab` **Relevance Vector Machines with Polynomial Kernel** ``` method = 'rvmPoly' ``` Type: Regression Tuning parameters: * `scale` (Scale) * `degree` (Polynomial Degree) Required packages: `kernlab` **Relevance Vector Machines with Radial Basis Function Kernel** ``` method = 'rvmRadial' ``` Type: Regression Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` ### 7\.0\.40 Ridge Regression (back to [contents](train-models-by-tag.html#top)) **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Ridge Regression with Variable Selection** ``` method = 'foba' ``` Type: Regression Tuning parameters: * `k` (\#Variables Retained) * `lambda` (L2 Penalty) Required packages: `foba` ### 7\.0\.41 Robust Methods (back to [contents](train-models-by-tag.html#top)) **L2 Regularized Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights2' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) * `weight` (Class Weight) Required packages: `LiblineaR` **L2 Regularized Support Vector Machine (dual) with Linear Kernel** ``` method = 'svmLinear3' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) Required packages: `LiblineaR` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Regularized Logistic Regression** ``` method = 'regLogistic' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `loss` (Loss Function) * `epsilon` (Tolerance) Required packages: `LiblineaR` **Relevance Vector Machines with Linear Kernel** ``` method = 'rvmLinear' ``` Type: Regression No tuning parameters for this model Required packages: `kernlab` **Relevance Vector Machines with Polynomial Kernel** ``` method = 'rvmPoly' ``` Type: Regression Tuning parameters: * `scale` (Scale) * `degree` (Polynomial Degree) Required packages: `kernlab` **Relevance Vector Machines with Radial Basis Function Kernel** ``` method = 'rvmRadial' ``` Type: Regression Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **Robust Mixture Discriminant Analysis** ``` method = 'rmda' ``` Type: Classification Tuning parameters: * `K` (\#Subclasses Per Class) * `model` (Model) Required packages: `robustDA` **Support Vector Machines with Boundrange String Kernel** ``` method = 'svmBoundrangeString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Exponential String Kernel** ``` method = 'svmExpoString' ``` Type: Regression, Classification Tuning parameters: * `lambda` (lambda) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear' ``` Type: Regression, Classification Tuning parameters: * `C` 
(Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear2' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) Required packages: `e1071` **Support Vector Machines with Polynomial Kernel** ``` method = 'svmPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialSigma' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` Notes: This SVM model tunes over the cost parameter and the RBF kernel parameter sigma. In the latter case, using `tuneLength` will, at most, evaluate six values of the kernel parameter. This enables a broad search over the cost parameter and a relatively narrow search over `sigma` **Support Vector Machines with Spectrum String Kernel** ``` method = 'svmSpectrumString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` ### 7\.0\.42 Robust Model (back to [contents](train-models-by-tag.html#top)) **Quantile Random Forest** ``` method = 'qrf' ``` Type: Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `quantregForest` **Quantile Regression Neural Network** ``` method = 'qrnn' ``` Type: Regression Tuning parameters: * `n.hidden` (\#Hidden Units) * `penalty` ( Weight Decay) * `bag` (Bagged Models?) Required packages: `qrnn` **Robust Linear Discriminant Analysis** ``` method = 'Linda' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Robust Linear Model** ``` method = 'rlm' ``` Type: Regression Tuning parameters: * `intercept` (intercept) * `psi` (psi) Required packages: `MASS` A model\-specific variable importance metric is available. **Robust Regularized Linear Discriminant Analysis** ``` method = 'rrlda' ``` Type: Classification Tuning parameters: * `lambda` (Penalty Parameter) * `hp` (Robustness Parameter) * `penalty` (Penalty Type) Required packages: `rrlda` Notes: Unlike other packages used by `train`, the `rrlda` package is fully loaded when this model is used. **Robust SIMCA** ``` method = 'RSimca' ``` Type: Classification No tuning parameters for this model Required packages: `rrcovHD` Notes: Unlike other packages used by `train`, the `rrcovHD` package is fully loaded when this model is used. **SIMCA** ``` method = 'CSimca' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov`, `rrcovHD` ### 7\.0\.43 ROC Curves (back to [contents](train-models-by-tag.html#top)) **ROC\-Based Classifier** ``` method = 'rocc' ``` Type: Classification Tuning parameters: * `xgenes` (\#Variables Retained) Required packages: `rocc` ### 7\.0\.44 Rule\-Based Model (back to [contents](train-models-by-tag.html#top)) **Adaptive\-Network\-Based Fuzzy Inference System** ``` method = 'ANFIS' ``` Type: Regression Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `max.iter` (Max. 
Iterations) Required packages: `frbs` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **Dynamic Evolving Neural\-Fuzzy Inference System** ``` method = 'DENFIS' ``` Type: Regression Tuning parameters: * `Dthr` (Threshold) * `max.iter` (Max. Iterations) Required packages: `frbs` **Fuzzy Inference Rules by Descent Method** ``` method = 'FIR.DM' ``` Type: Regression Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `max.iter` (Max. Iterations) Required packages: `frbs` **Fuzzy Rules Using Chi’s Method** ``` method = 'FRBCS.CHI' ``` Type: Classification Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `type.mf` (Membership Function) Required packages: `frbs` **Fuzzy Rules Using Genetic Cooperative\-Competitive Learning and Pittsburgh** ``` method = 'FH.GBML' ``` Type: Classification Tuning parameters: * `max.num.rule` (Max. \#Rules) * `popu.size` (Population Size) * `max.gen` (Max. Generations) Required packages: `frbs` **Fuzzy Rules Using the Structural Learning Algorithm on Vague Environment** ``` method = 'SLAVE' ``` Type: Classification Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `max.iter` (Max. Iterations) * `max.gen` (Max. Generations) Required packages: `frbs` **Fuzzy Rules via MOGUL** ``` method = 'GFS.FR.MOGUL' ``` Type: Regression Tuning parameters: * `max.gen` (Max. Generations) * `max.iter` (Max. Iterations) * `max.tune` (Max. Tuning Iterations) Required packages: `frbs` **Fuzzy Rules via Thrift** ``` method = 'GFS.THRIFT' ``` Type: Regression Tuning parameters: * `popu.size` (Population Size) * `num.labels` (\# Fuzzy Labels) * `max.gen` (Max. Generations) Required packages: `frbs` **Fuzzy Rules with Weight Factor** ``` method = 'FRBCS.W' ``` Type: Classification Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `type.mf` (Membership Function) Required packages: `frbs` **Genetic Lateral Tuning and Rule Selection of Linguistic Fuzzy Systems** ``` method = 'GFS.LT.RS' ``` Type: Regression Tuning parameters: * `popu.size` (Population Size) * `num.labels` (\# Fuzzy Labels) * `max.gen` (Max. Generations) Required packages: `frbs` **Hybrid Neural Fuzzy Inference System** ``` method = 'HYFIS' ``` Type: Regression Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `max.iter` (Max. 
Iterations) Required packages: `frbs` **Model Rules** ``` method = 'M5Rules' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) Required packages: `RWeka` **Model Tree** ``` method = 'M5' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) * `rules` (Rules) Required packages: `RWeka` **Patient Rule Induction Method** ``` method = 'PRIM' ``` Type: Classification Tuning parameters: * `peel.alpha` (peeling quantile) * `paste.alpha` (pasting quantile) * `mass.min` (minimum mass) Required packages: `supervisedPRIM` **Random Forest Rule\-Based Model** ``` method = 'rfRules' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `maxdepth` (Maximum Rule Depth) Required packages: `randomForest`, `inTrees`, `plyr` A model\-specific variable importance metric is available. **Rule\-Based Classifier** ``` method = 'JRip' ``` Type: Classification Tuning parameters: * `NumOpt` (\# Optimizations) * `NumFolds` (\# Folds) * `MinWeights` (Min Weights) Required packages: `RWeka` A model\-specific variable importance metric is available. **Rule\-Based Classifier** ``` method = 'PART' ``` Type: Classification Tuning parameters: * `threshold` (Confidence Threshold) * `pruned` (Pruning) Required packages: `RWeka` A model\-specific variable importance metric is available. **Simplified TSK Fuzzy Rules** ``` method = 'FS.HGD' ``` Type: Regression Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `max.iter` (Max. Iterations) Required packages: `frbs` **Single C5\.0 Ruleset** ``` method = 'C5.0Rules' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single Rule Classification** ``` method = 'OneR' ``` Type: Classification No tuning parameters for this model Required packages: `RWeka` **Subtractive Clustering and Fuzzy c\-Means Rules** ``` method = 'SBC' ``` Type: Regression Tuning parameters: * `r.a` (Radius) * `eps.high` (Upper Threshold) * `eps.low` (Lower Threshold) Required packages: `frbs` **Wang and Mendel Fuzzy Rules** ``` method = 'WM' ``` Type: Regression Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `type.mf` (Membership Function) Required packages: `frbs` ### 7\.0\.45 Self\-Organising Maps (back to [contents](train-models-by-tag.html#top)) **Self\-Organizing Maps** ``` method = 'xyf' ``` Type: Classification, Regression Tuning parameters: * `xdim` (Rows) * `ydim` (Columns) * `user.weights` (Layer Weight) * `topo` (Topology) Required packages: `kohonen` Notes: As of version 3\.0\.0 of the kohonen package, the argument `user.weights` replaces the old `alpha` parameter. `user.weights` is usually a vector of relative weights such as `c(1, 3)` but is parameterized here as a proportion such as `c(1-.75, .75)` where the .75 is the value of the tuning parameter passed to `train` and indicates that the outcome layer has 3 times the weight as the predictor layer. 
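To make the `user.weights` parameterization described in the note above concrete, here is a hedged sketch for `method = 'xyf'`; the grid values, data set, and resampling scheme are illustrative assumptions.

```
library(caret)

# user.weights is supplied as a proportion: 0.75 corresponds to kohonen
# layer weights c(1 - 0.75, 0.75), i.e. the outcome layer gets three times
# the weight of the predictor layer.
som_grid <- expand.grid(
  xdim = 5, ydim = 5,
  user.weights = c(0.5, 0.75),
  topo = "hexagonal"
)
set.seed(1)
som_fit <- train(Species ~ ., data = iris, method = "xyf",
                 tuneGrid  = som_grid,
                 trControl = trainControl(method = "cv", number = 5))
```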
### 7\.0\.46 String Kernel (back to [contents](train-models-by-tag.html#top)) **Support Vector Machines with Boundrange String Kernel** ``` method = 'svmBoundrangeString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Exponential String Kernel** ``` method = 'svmExpoString' ``` Type: Regression, Classification Tuning parameters: * `lambda` (lambda) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Spectrum String Kernel** ``` method = 'svmSpectrumString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` ### 7\.0\.47 Support Vector Machines (back to [contents](train-models-by-tag.html#top)) **L2 Regularized Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights2' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) * `weight` (Class Weight) Required packages: `LiblineaR` **L2 Regularized Support Vector Machine (dual) with Linear Kernel** ``` method = 'svmLinear3' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) Required packages: `LiblineaR` **Least Squares Support Vector Machine** ``` method = 'lssvmLinear' ``` Type: Classification Tuning parameters: * `tau` (Regularization Parameter) Required packages: `kernlab` **Least Squares Support Vector Machine with Polynomial Kernel** ``` method = 'lssvmPoly' ``` Type: Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `tau` (Regularization Parameter) Required packages: `kernlab` **Least Squares Support Vector Machine with Radial Basis Function Kernel** ``` method = 'lssvmRadial' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `tau` (Regularization Parameter) Required packages: `kernlab` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Support Vector Machines with Boundrange String Kernel** ``` method = 'svmBoundrangeString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Class Weights** ``` method = 'svmRadialWeights' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) * `Weight` (Weight) Required packages: `kernlab` **Support Vector Machines with Exponential String Kernel** ``` method = 'svmExpoString' ``` Type: Regression, Classification Tuning parameters: * `lambda` (lambda) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear2' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) Required packages: `e1071` **Support Vector Machines with Polynomial Kernel** ``` method = 'svmPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** 
``` method = 'svmRadialCost' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialSigma' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` Notes: This SVM model tunes over the cost parameter and the RBF kernel parameter sigma. In the latter case, using `tuneLength` will, at most, evaluate six values of the kernel parameter. This enables a broad search over the cost parameter and a relatively narrow search over `sigma` **Support Vector Machines with Spectrum String Kernel** ``` method = 'svmSpectrumString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` ### 7\.0\.48 Supports Class Probabilities (back to [contents](train-models-by-tag.html#top)) **AdaBoost Classification Trees** ``` method = 'adaboost' ``` Type: Classification Tuning parameters: * `nIter` (\#Trees) * `method` (Method) Required packages: `fastAdaboost` **AdaBoost.M1** ``` method = 'AdaBoost.M1' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) * `coeflearn` (Coefficient Type) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Adaptive Mixture Discriminant Analysis** ``` method = 'amdai' ``` Type: Classification Tuning parameters: * `model` (Model Type) Required packages: `adaptDA` **Adjacent Categories Probability Model for Ordinal Data** ``` method = 'vglmAdjCat' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged CART** ``` method = 'treebag' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `ipred`, `plyr`, `e1071` A model\-specific variable importance metric is available. **Bagged Flexible Discriminant Analysis** ``` method = 'bagFDA' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Bagged MARS** ``` method = 'bagEarth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS using gCV Pruning** ``` method = 'bagEarthGCV' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. 
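The note above for `method = 'svmRadialSigma'` describes how `tuneLength` is interpreted for that model. A minimal sketch of that usage; the data set, preprocessing, and resampling scheme are illustrative assumptions.

```
library(caret)

# tuneLength = 12 asks for a larger default grid; per the note above, the
# grid will contain at most six distinct values of sigma, with the broader
# search happening over the cost C.
set.seed(1)
svm_fit <- train(Species ~ ., data = iris,
                 method     = "svmRadialSigma",
                 tuneLength = 12,
                 preProcess = c("center", "scale"),
                 trControl  = trainControl(method = "cv", number = 5))
head(svm_fit$results[, c("sigma", "C", "Accuracy")])
```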
**Bagged Model** ``` method = 'bag' ``` Type: Regression, Classification Tuning parameters: * `vars` (\#Randomly Selected Predictors) Required packages: `caret` **Bayesian Additive Regression Trees** ``` method = 'bartMachine' ``` Type: Classification, Regression Tuning parameters: * `num_trees` (\#Trees) * `k` (Prior Boundary) * `alpha` (Base Terminal Node Hyperparameter) * `beta` (Power Terminal Node Hyperparameter) * `nu` (Degrees of Freedom) Required packages: `bartMachine` A model\-specific variable importance metric is available. **Bayesian Generalized Linear Model** ``` method = 'bayesglm' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `arm` **Binary Discriminant Analysis** ``` method = 'binda' ``` Type: Classification Tuning parameters: * `lambda.freqs` (Shrinkage Intensity) Required packages: `binda` **Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available. Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Logistic Regression** ``` method = 'LogitBoost' ``` Type: Classification Tuning parameters: * `nIter` (\# Boosting Iterations) Required packages: `caTools` **Boosted Tree** ``` method = 'blackboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `party`, `mboost`, `plyr`, `partykit` **C4\.5\-like Trees** ``` method = 'J48' ``` Type: Classification Tuning parameters: * `C` (Confidence Threshold) * `M` (Minimum Instances Per Leaf) Required packages: `RWeka` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart' ``` Type: Regression, Classification Tuning parameters: * `cp` (Complexity Parameter) Required packages: `rpart` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart1SE' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `rpart` A model\-specific variable importance metric is available. Notes: This CART model replicates the same process used by the `rpart` function where the model complexity is determined using the one\-standard error method. 
This procedure is replicated inside of the resampling done by `train` so that an external resampling estimate can be obtained. **CART** ``` method = 'rpart2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) Required packages: `rpart` A model\-specific variable importance metric is available. **CHi\-squared Automated Interaction Detection** ``` method = 'chaid' ``` Type: Classification Tuning parameters: * `alpha2` (Merging Threshold) * `alpha3` (Splitting former Merged Threshold) * `alpha4` ( Splitting former Merged Threshold) Required packages: `CHAID` **Conditional Inference Random Forest** ``` method = 'cforest' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `party` A model\-specific variable importance metric is available. **Conditional Inference Tree** ``` method = 'ctree' ``` Type: Classification, Regression Tuning parameters: * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Conditional Inference Tree** ``` method = 'ctree2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Continuation Ratio Model for Ordinal Data** ``` method = 'vglmContRatio' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Cumulative Probability Model for Ordinal Data** ``` method = 'vglmCumulative' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Diagonal Discriminant Analysis** ``` method = 'dda' ``` Type: Classification Tuning parameters: * `model` (Model) * `shrinkage` (Shrinkage Type) Required packages: `sparsediscrim` **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **Ensembles of Generalized Linear Models** ``` method = 'randomGLM' ``` Type: Regression, Classification Tuning parameters: * `maxInteractionOrder` (Interaction Order) Required packages: `randomGLM` Notes: Unlike other packages used by `train`, the `randomGLM` package is fully loaded when this model is used. **eXtreme Gradient Boosting** ``` method = 'xgbDART' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `subsample` (Subsample Percentage) * `colsample_bytree` (Subsample Ratio of Columns) * `rate_drop` (Fraction of Trees Dropped) * `skip_drop` (Prob. of Skipping Drop\-out) * `min_child_weight` (Minimum Sum of Instance Weight) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbLinear' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `lambda` (L2 Regularization) * `alpha` (L1 Regularization) * `eta` (Learning Rate) Required packages: `xgboost` A model\-specific variable importance metric is available. 
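All of the models in this Supports Class Probabilities section can return class probabilities from `predict`. A hedged sketch of requesting them along with an ROC\-based summary; the simulated two\-class data and the choice of `method = 'glm'` (listed further below in this section) are illustrative assumptions.

```
library(caret)

# classProbs must be turned on in trainControl(); twoClassSummary then lets
# train() select the model on the area under the ROC curve.
set.seed(1)
dat  <- twoClassSim(200)
ctrl <- trainControl(method = "cv", number = 5,
                     classProbs = TRUE, summaryFunction = twoClassSummary)
glm_fit <- train(Class ~ ., data = dat, method = "glm",
                 metric = "ROC", trControl = ctrl)
predict(glm_fit, head(dat), type = "prob")   # one probability column per class
```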
**eXtreme Gradient Boosting** ``` method = 'xgbTree' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `colsample_bytree` (Subsample Ratio of Columns) * `min_child_weight` (Minimum Sum of Instance Weight) * `subsample` (Subsample Percentage) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **Flexible Discriminant Analysis** ``` method = 'fda' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Gaussian Process** ``` method = 'gaussprLinear' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `kernlab` **Gaussian Process with Polynomial Kernel** ``` method = 'gaussprPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kernlab` **Gaussian Process with Radial Basis Function Kernel** ``` method = 'gaussprRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **Generalized Additive Model using LOESS** ``` method = 'gamLoess' ``` Type: Regression, Classification Tuning parameters: * `span` (Span) * `degree` (Degree) Required packages: `gam` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `gam` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'bam' ``` Type: Regression, Classification Tuning parameters: * `select` (Feature Selection) * `method` (Method) Required packages: `mgcv` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `mgcv` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'gam' ``` Type: Regression, Classification Tuning parameters: * `select` (Feature Selection) * `method` (Method) Required packages: `mgcv` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `mgcv` package is fully loaded when this model is used.
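When a `tuneGrid` is supplied for a model such as `method = 'xgbTree'` above, the grid must contain a column for every tuning parameter that the entry lists. A hedged sketch; the particular values are illustrative assumptions.

```
library(caret)

# One column per xgbTree tuning parameter listed above; parameters that
# should stay fixed can be given a single value.
xgb_grid <- expand.grid(
  nrounds          = c(50, 100),
  max_depth        = c(2, 3),
  eta              = 0.1,
  gamma            = 0,
  colsample_bytree = 0.8,
  min_child_weight = 1,
  subsample        = 0.75
)
set.seed(1)
xgb_fit <- train(Species ~ ., data = iris, method = "xgbTree",
                 tuneGrid  = xgb_grid,
                 trControl = trainControl(method = "cv", number = 5))
```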
**Generalized Additive Model using Splines** ``` method = 'gamSpline' ``` Type: Regression, Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `gam` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `gam` package is fully loaded when this model is used. **Generalized Linear Model** ``` method = 'glm' ``` Type: Regression, Classification No tuning parameters for this model A model\-specific variable importance metric is available. **Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **Generalized Partial Least Squares** ``` method = 'gpls' ``` Type: Classification Tuning parameters: * `K.prov` (\#Components) Required packages: `gpls` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. **Gradient Boosting Machines** ``` method = 'gbm_h2o' ``` Type: Regression, Classification Tuning parameters: * `ntrees` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `min_rows` (Min. Terminal Node Size) * `learn_rate` (Shrinkage) * `col_sample_rate` (\#Randomly Selected Predictors) Required packages: `h2o` A model\-specific variable importance metric is available. **Heteroscedastic Discriminant Analysis** ``` method = 'hda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `newdim` (Dimension of the Discriminative Subspace) Required packages: `hda` **High Dimensional Discriminant Analysis** ``` method = 'hdda' ``` Type: Classification Tuning parameters: * `threshold` (Threshold) * `model` (Model Type) Required packages: `HDclassif` **High\-Dimensional Regularized Discriminant Analysis** ``` method = 'hdrda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `shrinkage_type` (Shrinkage Type) Required packages: `sparsediscrim` **k\-Nearest Neighbors** ``` method = 'kknn' ``` Type: Regression, Classification Tuning parameters: * `kmax` (Max.
\#Neighbors) * `distance` (Distance) * `kernel` (Kernel) Required packages: `kknn` **k\-Nearest Neighbors** ``` method = 'knn' ``` Type: Classification, Regression Tuning parameters: * `k` (\#Neighbors) **Linear Discriminant Analysis** ``` method = 'lda' ``` Type: Classification No tuning parameters for this model Required packages: `MASS` **Linear Discriminant Analysis** ``` method = 'lda2' ``` Type: Classification Tuning parameters: * `dimen` (\#Discriminant Functions) Required packages: `MASS` **Linear Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepLDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Localized Linear Discriminant Analysis** ``` method = 'loclda' ``` Type: Classification Tuning parameters: * `k` (\#Nearest Neighbors) Required packages: `klaR` **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` **Logistic Model Trees** ``` method = 'LMT' ``` Type: Classification Tuning parameters: * `iter` (\# Iterations) Required packages: `RWeka` **Mixture Discriminant Analysis** ``` method = 'mda' ``` Type: Classification Tuning parameters: * `subclasses` (\#Subclasses Per Class) Required packages: `mda` **Model Averaged Naive Bayes Classifier** ``` method = 'manb' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) * `prior` (Prior Probability) Required packages: `bnclassify` **Model Averaged Neural Network** ``` method = 'avNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) * `bag` (Bagging) Required packages: `nnet` **Monotone Multi\-Layer Perceptron Neural Network** ``` method = 'monmlp' ``` Type: Classification, Regression Tuning parameters: * `hidden1` (\#Hidden Units) * `n.ensemble` (\#Models) Required packages: `monmlp` **Multi\-Layer Perceptron** ``` method = 'mlp' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) Required packages: `RSNNS` **Multi\-Layer Perceptron** ``` method = 'mlpWeightDecay' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `RSNNS` **Multi\-Layer Perceptron, multiple layers** ``` method = 'mlpWeightDecayML' ``` Type: Regression, Classification Tuning parameters: * `layer1` (\#Hidden Units layer1\) * `layer2` (\#Hidden Units layer2\) * `layer3` (\#Hidden Units layer3\) * `decay` (Weight Decay) Required packages: `RSNNS` **Multi\-Layer Perceptron, with multiple layers** ``` method = 'mlpML' ``` Type: Regression, Classification Tuning parameters: * `layer1` (\#Hidden Units layer1\) * `layer2` (\#Hidden Units layer2\) * `layer3` (\#Hidden Units layer3\) Required packages: `RSNNS` **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages:
`msaenet` A model\-specific variable importance metric is available. **Multilayer Perceptron Network by Stochastic Gradient Descent** ``` method = 'mlpSGD' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `l2reg` (L2 Regularization) * `lambda` (RMSE Gradient Scaling) * `learn_rate` (Learning Rate) * `momentum` (Momentum) * `gamma` (Learning Rate Decay) * `minibatchsz` (Batch Size) * `repeats` (\#Models) Required packages: `FCNN4R`, `plyr` A model\-specific variable importance metric is available. **Multilayer Perceptron Network with Dropout** ``` method = 'mlpKerasDropout' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `dropout` (Dropout Rate) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Dropout** ``` method = 'mlpKerasDropoutCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `dropout` (Dropout Rate) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecay' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used.
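The keras notes above suggest unserializing the stored network once per session so that repeated predictions do not pay that cost every time. One hedged way to do this, assuming a fitted `train` object called `keras_fit` (from one of the `mlpKeras*` methods above) and new data in `new_dat`; both names are hypothetical.

```
library(keras)

# The fitted network is stored in serialized (raw) form inside the train
# object; unserialize it once and keep the live model, as the note above
# recommends. Sketch only -- the finalModel$object slot follows the note.
if (is.raw(keras_fit$finalModel$object)) {
  keras_fit$finalModel$object <-
    keras::unserialize_model(keras_fit$finalModel$object)
}
preds <- predict(keras_fit, newdata = new_dat)
```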
**Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecayCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multivariate Adaptive Regression Spline** ``` method = 'earth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Multivariate Adaptive Regression Splines** ``` method = 'gcvEarth' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Naive Bayes** ``` method = 'naive_bayes' ``` Type: Classification Tuning parameters: * `laplace` (Laplace Correction) * `usekernel` (Distribution Type) * `adjust` (Bandwidth Adjustment) Required packages: `naivebayes` **Naive Bayes** ``` method = 'nb' ``` Type: Classification Tuning parameters: * `fL` (Laplace Correction) * `usekernel` (Distribution Type) * `adjust` (Bandwidth Adjustment) Required packages: `klaR` **Naive Bayes Classifier** ``` method = 'nbDiscrete' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Naive Bayes Classifier with Attribute Weighting** ``` method = 'awnb' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Nearest Shrunken Centroids** ``` method = 'pam' ``` Type: Classification Tuning parameters: * `threshold` (Shrinkage Threshold) Required packages: `pamr` A model\-specific variable importance metric is available. **Neural Network** ``` method = 'mxnet' ``` Type: Classification, Regression Tuning parameters: * `layer1` (\#Hidden Units in Layer 1\) * `layer2` (\#Hidden Units in Layer 2\) * `layer3` (\#Hidden Units in Layer 3\) * `learning.rate` (Learning Rate) * `momentum` (Momentum) * `dropout` (Dropout Rate) * `activation` (Activation Function) Required packages: `mxnet` Notes: The `mxnet` package is not yet on CRAN. See <http://mxnet.io> for installation instructions.
**Neural Network** ``` method = 'mxnetAdam' ``` Type: Classification, Regression Tuning parameters: * `layer1` (\#Hidden Units in Layer 1\) * `layer2` (\#Hidden Units in Layer 2\) * `layer3` (\#Hidden Units in Layer 3\) * `dropout` (Dropout Rate) * `beta1` (beta1\) * `beta2` (beta2\) * `learningrate` (Learning Rate) * `activation` (Activation Function) Required packages: `mxnet` Notes: The `mxnet` package is not yet on CRAN. See <http://mxnet.io> for installation instructions. Users are strongly advised to define `num.round` themselves. **Neural Network** ``` method = 'nnet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Neural Networks with Feature Extraction** ``` method = 'pcaNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` **Non\-Informative Model** ``` method = 'null' ``` Type: Classification, Regression No tuning parameters for this model Notes: Since this model always predicts the same value, R\-squared values will always be estimated to be NA. **Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Ordered Logistic or Probit Regression** ``` method = 'polr' ``` Type: Classification Tuning parameters: * `method` (parameter) Required packages: `MASS` A model\-specific variable importance metric is available. **Parallel Random Forest** ``` method = 'parRF' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `e1071`, `randomForest`, `foreach`, `import` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'kernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'pls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'simpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. 
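For `method = 'nnet'` above, arguments that are not tuning parameters can be passed through `train`'s `...` straight to `nnet()`. A hedged sketch; the `iris` data, the grid values, and the `trace`/`maxit` settings are illustrative assumptions.

```
library(caret)

# size and decay are the tuning parameters listed above; trace and maxit
# are ordinary nnet() arguments passed through train()'s '...'.
set.seed(1)
nnet_fit <- train(Species ~ ., data = iris, method = "nnet",
                  tuneGrid  = expand.grid(size = c(3, 5), decay = c(0, 0.1)),
                  trControl = trainControl(method = "cv", number = 5),
                  trace = FALSE, maxit = 200)
```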
**Partial Least Squares** ``` method = 'widekernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares Generalized Linear Models** ``` method = 'plsRglm' ``` Type: Classification, Regression Tuning parameters: * `nt` (\#PLS Components) * `alpha.pvals.expli` (p\-Value threshold) Required packages: `plsRglm` Notes: Unlike other packages used by `train`, the `plsRglm` package is fully loaded when this model is used. **Patient Rule Induction Method** ``` method = 'PRIM' ``` Type: Classification Tuning parameters: * `peel.alpha` (peeling quantile) * `paste.alpha` (pasting quantile) * `mass.min` (minimum mass) Required packages: `supervisedPRIM` **Penalized Discriminant Analysis** ``` method = 'pda' ``` Type: Classification Tuning parameters: * `lambda` (Shrinkage Penalty Coefficient) Required packages: `mda` **Penalized Discriminant Analysis** ``` method = 'pda2' ``` Type: Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `mda` **Penalized Logistic Regression** ``` method = 'plr' ``` Type: Classification Tuning parameters: * `lambda` (L2 Penalty) * `cp` (Complexity Parameter) Required packages: `stepPlr` **Penalized Multinomial Regression** ``` method = 'multinom' ``` Type: Classification Tuning parameters: * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Quadratic Discriminant Analysis** ``` method = 'qda' ``` Type: Classification No tuning parameters for this model Required packages: `MASS` **Quadratic Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepQDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Radial Basis Function Network** ``` method = 'rbf' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) Required packages: `RSNNS` **Radial Basis Function Network** ``` method = 'rbfDDA' ``` Type: Regression, Classification Tuning parameters: * `negativeThreshold` (Activation Limit for Conflicting Classes) Required packages: `RSNNS` **Random Forest** ``` method = 'ordinalRF' ``` Type: Classification Tuning parameters: * `nsets` (\# score sets tried prior to the approximation) * `ntreeperdiv` (\# of trees (small RFs)) * `ntreefinal` (\# of trees (final RF)) Required packages: `e1071`, `ranger`, `dplyr`, `ordinalForest` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'ranger' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `splitrule` (Splitting Rule) * `min.node.size` (Minimal Node Size) Required packages: `e1071`, `ranger`, `dplyr` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'Rborist' ``` Type: Classification, Regression Tuning parameters: * `predFixed` (\#Randomly Selected Predictors) * `minNode` (Minimal Node Size) Required packages: `Rborist` A model\-specific variable importance metric is available. 
**Random Forest** ``` method = 'rf' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `randomForest` A model\-specific variable importance metric is available. **Random Forest by Randomization** ``` method = 'extraTrees' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\# Randomly Selected Predictors) * `numRandomCuts` (\# Random Cuts) Required packages: `extraTrees` **Regularized Discriminant Analysis** ``` method = 'rda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) Required packages: `klaR` **Regularized Linear Discriminant Analysis** ``` method = 'rlda' ``` Type: Classification Tuning parameters: * `estimator` (Regularization Method) Required packages: `sparsediscrim` **Regularized Logistic Regression** ``` method = 'regLogistic' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `loss` (Loss Function) * `epsilon` (Tolerance) Required packages: `LiblineaR` **Regularized Random Forest** ``` method = 'RRF' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) * `coefImp` (Importance Coefficient) Required packages: `randomForest`, `RRF` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRFglobal' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) Required packages: `RRF` A model\-specific variable importance metric is available. **Robust Linear Discriminant Analysis** ``` method = 'Linda' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Robust Mixture Discriminant Analysis** ``` method = 'rmda' ``` Type: Classification Tuning parameters: * `K` (\#Subclasses Per Class) * `model` (Model) Required packages: `robustDA` **Robust Quadratic Discriminant Analysis** ``` method = 'QdaCov' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Robust Regularized Linear Discriminant Analysis** ``` method = 'rrlda' ``` Type: Classification Tuning parameters: * `lambda` (Penalty Parameter) * `hp` (Robustness Parameter) * `penalty` (Penalty Type) Required packages: `rrlda` Notes: Unlike other packages used by `train`, the `rrlda` package is fully loaded when this model is used. **Rotation Forest** ``` method = 'rotationForest' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) Required packages: `rotationForest` A model\-specific variable importance metric is available. **Rotation Forest** ``` method = 'rotationForestCp' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) * `cp` (Complexity Parameter) Required packages: `rpart`, `plyr`, `rotationForest` A model\-specific variable importance metric is available. **Rule\-Based Classifier** ``` method = 'JRip' ``` Type: Classification Tuning parameters: * `NumOpt` (\# Optimizations) * `NumFolds` (\# Folds) * `MinWeights` (Min Weights) Required packages: `RWeka` A model\-specific variable importance metric is available. **Rule\-Based Classifier** ``` method = 'PART' ``` Type: Classification Tuning parameters: * `threshold` (Confidence Threshold) * `pruned` (Pruning) Required packages: `RWeka` A model\-specific variable importance metric is available. 
**Self\-Organizing Maps** ``` method = 'xyf' ``` Type: Classification, Regression Tuning parameters: * `xdim` (Rows) * `ydim` (Columns) * `user.weights` (Layer Weight) * `topo` (Topology) Required packages: `kohonen` Notes: As of version 3\.0\.0 of the kohonen package, the argument `user.weights` replaces the old `alpha` parameter. `user.weights` is usually a vector of relative weights such as `c(1, 3)` but is parameterized here as a proportion such as `c(1-.75, .75)` where the .75 is the value of the tuning parameter passed to `train` and indicates that the outcome layer has 3 times the weight as the predictor layer. **Semi\-Naive Structure Learner Wrapper** ``` method = 'nbSearch' ``` Type: Classification Tuning parameters: * `k` (\#Folds) * `epsilon` (Minimum Absolute Improvement) * `smooth` (Smoothing Parameter) * `final_smooth` (Final Smoothing Parameter) * `direction` (Search Direction) Required packages: `bnclassify` **Shrinkage Discriminant Analysis** ``` method = 'sda' ``` Type: Classification Tuning parameters: * `diagonal` (Diagonalize) * `lambda` (shrinkage) Required packages: `sda` **Single C5\.0 Ruleset** ``` method = 'C5.0Rules' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single C5\.0 Tree** ``` method = 'C5.0Tree' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single Rule Classification** ``` method = 'OneR' ``` Type: Classification No tuning parameters for this model Required packages: `RWeka` **Sparse Distance Weighted Discrimination** ``` method = 'sdwd' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `sdwd` A model\-specific variable importance metric is available. **Sparse Linear Discriminant Analysis** ``` method = 'sparseLDA' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) Required packages: `sparseLDA` **Sparse Partial Least Squares** ``` method = 'spls' ``` Type: Regression, Classification Tuning parameters: * `K` (\#Components) * `eta` (Threshold) * `kappa` (Kappa) Required packages: `spls` **Stabilized Linear Discriminant Analysis** ``` method = 'slda' ``` Type: Classification No tuning parameters for this model Required packages: `ipred` **Stacked AutoEncoder Deep Neural Network** ``` method = 'dnn' ``` Type: Classification, Regression Tuning parameters: * `layer1` (Hidden Layer 1\) * `layer2` (Hidden Layer 2\) * `layer3` (Hidden Layer 3\) * `hidden_dropout` (Hidden Dropouts) * `visible_dropout` (Visible Dropout) Required packages: `deepnet` **Stochastic Gradient Boosting** ``` method = 'gbm' ``` Type: Regression, Classification Tuning parameters: * `n.trees` (\# Boosting Iterations) * `interaction.depth` (Max Tree Depth) * `shrinkage` (Shrinkage) * `n.minobsinnode` (Min. Terminal Node Size) Required packages: `gbm`, `plyr` A model\-specific variable importance metric is available. 
**Support Vector Machines with Boundrange String Kernel** ``` method = 'svmBoundrangeString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Class Weights** ``` method = 'svmRadialWeights' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) * `Weight` (Weight) Required packages: `kernlab` **Support Vector Machines with Exponential String Kernel** ``` method = 'svmExpoString' ``` Type: Regression, Classification Tuning parameters: * `lambda` (lambda) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear2' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) Required packages: `e1071` **Support Vector Machines with Polynomial Kernel** ``` method = 'svmPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialCost' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialSigma' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` Notes: This SVM model tunes over the cost parameter and the RBF kernel parameter sigma. In the latter case, using `tuneLength` will, at most, evaluate six values of the kernel parameter. 
This enables a broad search over the cost parameter and a relatively narrow search over `sigma` **Support Vector Machines with Spectrum String Kernel** ``` method = 'svmSpectrumString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Tree Augmented Naive Bayes Classifier** ``` method = 'tan' ``` Type: Classification Tuning parameters: * `score` (Score Function) * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier Structure Learner Wrapper** ``` method = 'tanSearch' ``` Type: Classification Tuning parameters: * `k` (\#Folds) * `epsilon` (Minimum Absolute Improvement) * `smooth` (Smoothing Parameter) * `final_smooth` (Final Smoothing Parameter) * `sp` (Super\-Parent) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier with Attribute Weighting** ``` method = 'awtan' ``` Type: Classification Tuning parameters: * `score` (Score Function) * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Tree Models from Genetic Algorithms** ``` method = 'evtree' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Complexity Parameter) Required packages: `evtree` **Tree\-Based Ensembles** ``` method = 'nodeHarvest' ``` Type: Regression, Classification Tuning parameters: * `maxinter` (Maximum Interaction Depth) * `mode` (Prediction Mode) Required packages: `nodeHarvest` **Variational Bayesian Multinomial Probit Regression** ``` method = 'vbmpRadial' ``` Type: Classification Tuning parameters: * `estimateTheta` (Theta Estimated) Required packages: `vbmp` **Weighted Subspace Random Forest** ``` method = 'wsrf' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `wsrf` ### 7\.0\.49 Text Mining (back to [contents](train-models-by-tag.html#top)) **Support Vector Machines with Boundrange String Kernel** ``` method = 'svmBoundrangeString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Exponential String Kernel** ``` method = 'svmExpoString' ``` Type: Regression, Classification Tuning parameters: * `lambda` (lambda) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Spectrum String Kernel** ``` method = 'svmSpectrumString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` ### 7\.0\.50 Tree\-Based Model (back to [contents](train-models-by-tag.html#top)) **AdaBoost Classification Trees** ``` method = 'adaboost' ``` Type: Classification Tuning parameters: * `nIter` (\#Trees) * `method` (Method) Required packages: `fastAdaboost` **AdaBoost.M1** ``` method = 'AdaBoost.M1' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) * `coeflearn` (Coefficient Type) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged CART** ``` method = 'treebag' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `ipred`, `plyr`, `e1071` A model\-specific variable importance metric is available. 
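Entries such as `treebag` above report "No tuning parameters for this model"; for these, `train` is called without a grid and only the resampling scheme matters. A minimal sketch, assuming the `ipred`, `plyr`, and `e1071` packages are installed; `iris` is illustrative:

```
library(caret)

# treebag has nothing to tune, so no tuneGrid or tuneLength is needed;
# resampling still estimates its performance
bag_fit <- train(Species ~ ., data = iris,
                 method    = "treebag",
                 trControl = trainControl(method = "cv", number = 5))

bag_fit$results   # resampled accuracy and kappa
varImp(bag_fit)   # model-specific importance, as noted above
```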
**Bayesian Additive Regression Trees** ``` method = 'bartMachine' ``` Type: Classification, Regression Tuning parameters: * `num_trees` (\#Trees) * `k` (Prior Boundary) * `alpha` (Base Terminal Node Hyperparameter) * `beta` (Power Terminal Node Hyperparameter) * `nu` (Degrees of Freedom) Required packages: `bartMachine` A model\-specific variable importance metric is available. **Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **Boosted Logistic Regression** ``` method = 'LogitBoost' ``` Type: Classification Tuning parameters: * `nIter` (\# Boosting Iterations) Required packages: `caTools` **Boosted Tree** ``` method = 'blackboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `party`, `mboost`, `plyr`, `partykit` **Boosted Tree** ``` method = 'bstTree' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `maxdepth` (Max Tree Depth) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **C4\.5\-like Trees** ``` method = 'J48' ``` Type: Classification Tuning parameters: * `C` (Confidence Threshold) * `M` (Minimum Instances Per Leaf) Required packages: `RWeka` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart' ``` Type: Regression, Classification Tuning parameters: * `cp` (Complexity Parameter) Required packages: `rpart` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart1SE' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `rpart` A model\-specific variable importance metric is available. Notes: This CART model replicates the same process used by the `rpart` function where the model complexity is determined using the one\-standard error method. This procedure is replicated inside of the resampling done by `train` so that an external resampling estimate can be obtained. **CART** ``` method = 'rpart2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) Required packages: `rpart` A model\-specific variable importance metric is available. **CART or Ordinal Responses** ``` method = 'rpartScore' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `split` (Split Function) * `prune` (Pruning Measure) Required packages: `rpartScore`, `plyr` A model\-specific variable importance metric is available. 
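The three CART entries above differ only in what is tuned: `rpart` tunes the complexity parameter, `rpart2` tunes the maximum depth, and `rpart1SE` has no grid because the one-standard-error rule selects the tree. A minimal sketch of the three calls, assuming the `rpart` package is installed; `iris` and the grid values are illustrative:

```
library(caret)
ctrl <- trainControl(method = "cv", number = 5)

# 'rpart' tunes the complexity parameter cp ...
cart_cp    <- train(Species ~ ., data = iris, method = "rpart",
                    tuneGrid = data.frame(cp = c(0.001, 0.01, 0.1)),
                    trControl = ctrl)

# ... while 'rpart2' tunes the maximum tree depth instead
cart_depth <- train(Species ~ ., data = iris, method = "rpart2",
                    tuneGrid = data.frame(maxdepth = 1:5),
                    trControl = ctrl)

# 'rpart1SE' has no tuneGrid: the one-standard-error rule picks the tree
cart_1se   <- train(Species ~ ., data = iris, method = "rpart1SE",
                    trControl = ctrl)
```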
**CHi\-squared Automated Interaction Detection** ``` method = 'chaid' ``` Type: Classification Tuning parameters: * `alpha2` (Merging Threshold) * `alpha3` (Splitting former Merged Threshold) * `alpha4` ( Splitting former Merged Threshold) Required packages: `CHAID` **Conditional Inference Tree** ``` method = 'ctree' ``` Type: Classification, Regression Tuning parameters: * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Conditional Inference Tree** ``` method = 'ctree2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive CART** ``` method = 'rpartCost' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `Cost` (Cost) Required packages: `rpart`, `plyr` **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **eXtreme Gradient Boosting** ``` method = 'xgbDART' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `subsample` (Subsample Percentage) * `colsample_bytree` (Subsample Ratio of Columns) * `rate_drop` (Fraction of Trees Dropped) * `skip_drop` (Prob. of Skipping Drop\-out) * `min_child_weight` (Minimum Sum of Instance Weight) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbTree' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `colsample_bytree` (Subsample Ratio of Columns) * `min_child_weight` (Minimum Sum of Instance Weight) * `subsample` (Subsample Percentage) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **Gradient Boosting Machines** ``` method = 'gbm_h2o' ``` Type: Regression, Classification Tuning parameters: * `ntrees` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `min_rows` (Min. Terminal Node Size) * `learn_rate` (Shrinkage) * `col_sample_rate` (\#Randomly Selected Predictors) Required packages: `h2o` A model\-specific variable importance metric is available. **Model Tree** ``` method = 'M5' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) * `rules` (Rules) Required packages: `RWeka` **Rotation Forest** ``` method = 'rotationForest' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) Required packages: `rotationForest` A model\-specific variable importance metric is available. **Rotation Forest** ``` method = 'rotationForestCp' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) * `cp` (Complexity Parameter) Required packages: `rpart`, `plyr`, `rotationForest` A model\-specific variable importance metric is available. 
**Single C5\.0 Tree** ``` method = 'C5.0Tree' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Stochastic Gradient Boosting** ``` method = 'gbm' ``` Type: Regression, Classification Tuning parameters: * `n.trees` (\# Boosting Iterations) * `interaction.depth` (Max Tree Depth) * `shrinkage` (Shrinkage) * `n.minobsinnode` (Min. Terminal Node Size) Required packages: `gbm`, `plyr` A model\-specific variable importance metric is available. **Tree Models from Genetic Algorithms** ``` method = 'evtree' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Complexity Parameter) Required packages: `evtree` **Tree\-Based Ensembles** ``` method = 'nodeHarvest' ``` Type: Regression, Classification Tuning parameters: * `maxinter` (Maximum Interaction Depth) * `mode` (Prediction Mode) Required packages: `nodeHarvest` ### 7\.0\.51 Two Class Only (back to [contents](train-models-by-tag.html#top)) **AdaBoost Classification Trees** ``` method = 'adaboost' ``` Type: Classification Tuning parameters: * `nIter` (\#Trees) * `method` (Method) Required packages: `fastAdaboost` **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Bayesian Additive Regression Trees** ``` method = 'bartMachine' ``` Type: Classification, Regression Tuning parameters: * `num_trees` (\#Trees) * `k` (Prior Boundary) * `alpha` (Base Terminal Node Hyperparameter) * `beta` (Power Terminal Node Hyperparameter) * `nu` (Degrees of Freedom) Required packages: `bartMachine` A model\-specific variable importance metric is available. **Binary Discriminant Analysis** ``` method = 'binda' ``` Type: Classification Tuning parameters: * `lambda.freqs` (Shrinkage Intensity) Required packages: `binda` **Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available. Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. 
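For `glmboost` (and `gamboost`), the note above describes how the `prune` flag switches between using the exact `mstop` value and AIC-based selection. A hedged sketch of a regression fit that tunes `mstop` with pruning turned off; `mtcars` is illustrative, the `mboost` and `plyr` packages are assumed installed, and the `"no"` coding of the prune flag is an assumption worth confirming in the model registry:

```
library(caret)

# The registry lists each tuning parameter and how it is labelled
getModelInfo("glmboost", regex = FALSE)$glmboost$parameters

glmb_fit <- train(mpg ~ ., data = mtcars,
                  method    = "glmboost",
                  # "no" is assumed to be the non-pruning level of the prune flag
                  tuneGrid  = expand.grid(mstop = c(50, 100, 150),
                                          prune = "no"),
                  trControl = trainControl(method = "cv", number = 5))
```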
**Chi\-squared Automated Interaction Detection** ``` method = 'chaid' ``` Type: Classification Tuning parameters: * `alpha2` (Merging Threshold) * `alpha3` (Splitting former Merged Threshold) * `alpha4` (Splitting former Merged Threshold) Required packages: `CHAID` **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive CART** ``` method = 'rpartCost' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `Cost` (Cost) Required packages: `rpart`, `plyr` **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **Generalized Linear Model** ``` method = 'glm' ``` Type: Regression, Classification No tuning parameters for this model A model\-specific variable importance metric is available. **Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **L2 Regularized Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights2' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) * `weight` (Class Weight) Required packages: `LiblineaR` **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` **Multilayer Perceptron Network with Dropout** ``` method = 'mlpKerasDropoutCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `dropout` (Dropout Rate) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. 
To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecayCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Partial Least Squares Generalized Linear Models** ``` method = 'plsRglm' ``` Type: Classification, Regression Tuning parameters: * `nt` (\#PLS Components) * `alpha.pvals.expli` (p\-Value threshold) Required packages: `plsRglm` Notes: Unlike other packages used by `train`, the `plsRglm` package is fully loaded when this model is used. **Rotation Forest** ``` method = 'rotationForest' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) Required packages: `rotationForest` A model\-specific variable importance metric is available. **Rotation Forest** ``` method = 'rotationForestCp' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) * `cp` (Complexity Parameter) Required packages: `rpart`, `plyr`, `rotationForest` A model\-specific variable importance metric is available. 
**Support Vector Machines with Class Weights** ``` method = 'svmRadialWeights' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) * `Weight` (Weight) Required packages: `kernlab` **Tree\-Based Ensembles** ``` method = 'nodeHarvest' ``` Type: Regression, Classification Tuning parameters: * `maxinter` (Maximum Interaction Depth) * `mode` (Prediction Mode) Required packages: `nodeHarvest` ### 7\.0\.1 Accepts Case Weights (back to [contents](train-models-by-tag.html#top)) **Adjacent Categories Probability Model for Ordinal Data** ``` method = 'vglmAdjCat' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Bagged CART** ``` method = 'treebag' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `ipred`, `plyr`, `e1071` A model\-specific variable importance metric is available. **Bagged Flexible Discriminant Analysis** ``` method = 'bagFDA' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS** ``` method = 'bagEarth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS using gCV Pruning** ``` method = 'bagEarthGCV' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bayesian Generalized Linear Model** ``` method = 'bayesglm' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `arm` **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available. Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. 
**Boosted Tree** ``` method = 'blackboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `party`, `mboost`, `plyr`, `partykit` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart' ``` Type: Regression, Classification Tuning parameters: * `cp` (Complexity Parameter) Required packages: `rpart` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart1SE' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `rpart` A model\-specific variable importance metric is available. Notes: This CART model replicates the same process used by the `rpart` function where the model complexity is determined using the one\-standard error method. This procedure is replicated inside of the resampling done by `train` so that an external resampling estimate can be obtained. **CART** ``` method = 'rpart2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) Required packages: `rpart` A model\-specific variable importance metric is available. **CART or Ordinal Responses** ``` method = 'rpartScore' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `split` (Split Function) * `prune` (Pruning Measure) Required packages: `rpartScore`, `plyr` A model\-specific variable importance metric is available. **CHi\-squared Automated Interaction Detection** ``` method = 'chaid' ``` Type: Classification Tuning parameters: * `alpha2` (Merging Threshold) * `alpha3` (Splitting former Merged Threshold) * `alpha4` ( Splitting former Merged Threshold) Required packages: `CHAID` **Conditional Inference Random Forest** ``` method = 'cforest' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `party` A model\-specific variable importance metric is available. **Conditional Inference Tree** ``` method = 'ctree' ``` Type: Classification, Regression Tuning parameters: * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Conditional Inference Tree** ``` method = 'ctree2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Continuation Ratio Model for Ordinal Data** ``` method = 'vglmContRatio' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. 
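Every model tagged in this section accepts observation weights through the `weights` argument of `train`. A minimal sketch using the `rpart` entry above; the `mtcars` data and the weighting rule are purely illustrative:

```
library(caret)

# Hypothetical case weights: upweight the heavier cars
w <- ifelse(mtcars$wt > 3, 2, 1)

wt_fit <- train(mpg ~ ., data = mtcars,
                method    = "rpart",
                weights   = w,
                tuneGrid  = data.frame(cp = c(0.01, 0.1)),
                trControl = trainControl(method = "cv", number = 5))
```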
**Cost\-Sensitive CART** ``` method = 'rpartCost' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `Cost` (Cost) Required packages: `rpart`, `plyr` **Cumulative Probability Model for Ordinal Data** ``` method = 'vglmCumulative' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **eXtreme Gradient Boosting** ``` method = 'xgbDART' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `subsample` (Subsample Percentage) * `colsample_bytree` (Subsample Ratio of Columns) * `rate_drop` (Fraction of Trees Dropped) * `skip_drop` (Prob. of Skipping Drop\-out) * `min_child_weight` (Minimum Sum of Instance Weight) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbTree' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `colsample_bytree` (Subsample Ratio of Columns) * `min_child_weight` (Minimum Sum of Instance Weight) * `subsample` (Subsample Percentage) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **Flexible Discriminant Analysis** ``` method = 'fda' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Generalized Linear Model** ``` method = 'glm' ``` Type: Regression, Classification No tuning parameters for this model A model\-specific variable importance metric is available. **Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **Linear Regression** ``` method = 'lm' ``` Type: Regression Tuning parameters: * `intercept` (intercept) A model\-specific variable importance metric is available. **Linear Regression with Stepwise Selection** ``` method = 'lmStepAIC' ``` Type: Regression No tuning parameters for this model Required packages: `MASS` **Model Averaged Neural Network** ``` method = 'avNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) * `bag` (Bagging) Required packages: `nnet` **Multivariate Adaptive Regression Spline** ``` method = 'earth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Multivariate Adaptive Regression Splines** ``` method = 'gcvEarth' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. 
Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Negative Binomial Generalized Linear Model** ``` method = 'glm.nb' ``` Type: Regression Tuning parameters: * `link` (Link Function) Required packages: `MASS` A model\-specific variable importance metric is available. **Neural Network** ``` method = 'nnet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Neural Networks with Feature Extraction** ``` method = 'pcaNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` **Ordered Logistic or Probit Regression** ``` method = 'polr' ``` Type: Classification Tuning parameters: * `method` (parameter) Required packages: `MASS` A model\-specific variable importance metric is available. **Penalized Discriminant Analysis** ``` method = 'pda' ``` Type: Classification Tuning parameters: * `lambda` (Shrinkage Penalty Coefficient) Required packages: `mda` **Penalized Discriminant Analysis** ``` method = 'pda2' ``` Type: Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `mda` **Penalized Multinomial Regression** ``` method = 'multinom' ``` Type: Classification Tuning parameters: * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Projection Pursuit Regression** ``` method = 'ppr' ``` Type: Regression Tuning parameters: * `nterms` (\# Terms) **Random Forest** ``` method = 'ranger' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `splitrule` (Splitting Rule) * `min.node.size` (Minimal Node Size) Required packages: `e1071`, `ranger`, `dplyr` A model\-specific variable importance metric is available. **Robust Linear Model** ``` method = 'rlm' ``` Type: Regression Tuning parameters: * `intercept` (intercept) * `psi` (psi) Required packages: `MASS` A model\-specific variable importance metric is available. **Single C5\.0 Ruleset** ``` method = 'C5.0Rules' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single C5\.0 Tree** ``` method = 'C5.0Tree' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Stochastic Gradient Boosting** ``` method = 'gbm' ``` Type: Regression, Classification Tuning parameters: * `n.trees` (\# Boosting Iterations) * `interaction.depth` (Max Tree Depth) * `shrinkage` (Shrinkage) * `n.minobsinnode` (Min. Terminal Node Size) Required packages: `gbm`, `plyr` A model\-specific variable importance metric is available. **Tree Models from Genetic Algorithms** ``` method = 'evtree' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Complexity Parameter) Required packages: `evtree` ### 7\.0\.2 Bagging (back to [contents](train-models-by-tag.html#top)) **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. 
**Bagged CART** ``` method = 'treebag' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `ipred`, `plyr`, `e1071` A model\-specific variable importance metric is available. **Bagged Flexible Discriminant Analysis** ``` method = 'bagFDA' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Bagged MARS** ``` method = 'bagEarth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS using gCV Pruning** ``` method = 'bagEarthGCV' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged Model** ``` method = 'bag' ``` Type: Regression, Classification Tuning parameters: * `vars` (\#Randomly Selected Predictors) Required packages: `caret` **Conditional Inference Random Forest** ``` method = 'cforest' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `party` A model\-specific variable importance metric is available. **Ensembles of Generalized Linear Models** ``` method = 'randomGLM' ``` Type: Regression, Classification Tuning parameters: * `maxInteractionOrder` (Interaction Order) Required packages: `randomGLM` Notes: Unlike other packages used by `train`, the `randomGLM` package is fully loaded when this model is used. **Model Averaged Neural Network** ``` method = 'avNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) * `bag` (Bagging) Required packages: `nnet` **Parallel Random Forest** ``` method = 'parRF' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `e1071`, `randomForest`, `foreach`, `import` A model\-specific variable importance metric is available. **Quantile Random Forest** ``` method = 'qrf' ``` Type: Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `quantregForest` **Quantile Regression Neural Network** ``` method = 'qrnn' ``` Type: Regression Tuning parameters: * `n.hidden` (\#Hidden Units) * `penalty` ( Weight Decay) * `bag` (Bagged Models?) 
Required packages: `qrnn` **Random Ferns** ``` method = 'rFerns' ``` Type: Classification Tuning parameters: * `depth` (Fern Depth) Required packages: `rFerns` **Random Forest** ``` method = 'ordinalRF' ``` Type: Classification Tuning parameters: * `nsets` (\# score sets tried prior to the approximation) * `ntreeperdiv` (\# of trees (small RFs)) * `ntreefinal` (\# of trees (final RF)) Required packages: `e1071`, `ranger`, `dplyr`, `ordinalForest` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'ranger' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `splitrule` (Splitting Rule) * `min.node.size` (Minimal Node Size) Required packages: `e1071`, `ranger`, `dplyr` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'Rborist' ``` Type: Classification, Regression Tuning parameters: * `predFixed` (\#Randomly Selected Predictors) * `minNode` (Minimal Node Size) Required packages: `Rborist` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'rf' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `randomForest` A model\-specific variable importance metric is available. **Random Forest by Randomization** ``` method = 'extraTrees' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\# Randomly Selected Predictors) * `numRandomCuts` (\# Random Cuts) Required packages: `extraTrees` **Random Forest Rule\-Based Model** ``` method = 'rfRules' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `maxdepth` (Maximum Rule Depth) Required packages: `randomForest`, `inTrees`, `plyr` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRF' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) * `coefImp` (Importance Coefficient) Required packages: `randomForest`, `RRF` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRFglobal' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) Required packages: `RRF` A model\-specific variable importance metric is available. **Weighted Subspace Random Forest** ``` method = 'wsrf' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `wsrf` ### 7\.0\.3 Bayesian Model (back to [contents](train-models-by-tag.html#top)) **Bayesian Additive Regression Trees** ``` method = 'bartMachine' ``` Type: Classification, Regression Tuning parameters: * `num_trees` (\#Trees) * `k` (Prior Boundary) * `alpha` (Base Terminal Node Hyperparameter) * `beta` (Power Terminal Node Hyperparameter) * `nu` (Degrees of Freedom) Required packages: `bartMachine` A model\-specific variable importance metric is available. 
**Bayesian Generalized Linear Model** ``` method = 'bayesglm' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `arm` **Bayesian Regularized Neural Networks** ``` method = 'brnn' ``` Type: Regression Tuning parameters: * `neurons` (\# Neurons) Required packages: `brnn` **Bayesian Ridge Regression** ``` method = 'bridge' ``` Type: Regression No tuning parameters for this model Required packages: `monomvn` **Bayesian Ridge Regression (Model Averaged)** ``` method = 'blassoAveraged' ``` Type: Regression No tuning parameters for this model Required packages: `monomvn` Notes: This model makes predictions by averaging the predictions based on the posterior estimates of the regression coefficients. While it is possible that some of these posterior estimates are zero for non\-informative predictors, the final predicted value may be a function of many (or even all) predictors. **Model Averaged Naive Bayes Classifier** ``` method = 'manb' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) * `prior` (Prior Probability) Required packages: `bnclassify` **Naive Bayes** ``` method = 'naive_bayes' ``` Type: Classification Tuning parameters: * `laplace` (Laplace Correction) * `usekernel` (Distribution Type) * `adjust` (Bandwidth Adjustment) Required packages: `naivebayes` **Naive Bayes** ``` method = 'nb' ``` Type: Classification Tuning parameters: * `fL` (Laplace Correction) * `usekernel` (Distribution Type) * `adjust` (Bandwidth Adjustment) Required packages: `klaR` **Naive Bayes Classifier** ``` method = 'nbDiscrete' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Naive Bayes Classifier with Attribute Weighting** ``` method = 'awnb' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Semi\-Naive Structure Learner Wrapper** ``` method = 'nbSearch' ``` Type: Classification Tuning parameters: * `k` (\#Folds) * `epsilon` (Minimum Absolute Improvement) * `smooth` (Smoothing Parameter) * `final_smooth` (Final Smoothing Parameter) * `direction` (Search Direction) Required packages: `bnclassify` **Spike and Slab Regression** ``` method = 'spikeslab' ``` Type: Regression Tuning parameters: * `vars` (Variables Retained) Required packages: `spikeslab`, `plyr` Notes: Unlike other packages used by `train`, the `spikeslab` package is fully loaded when this model is used. **The Bayesian lasso** ``` method = 'blasso' ``` Type: Regression Tuning parameters: * `sparsity` (Sparsity Threshold) Required packages: `monomvn` Notes: This model creates predictions using the mean of the posterior distributions but sets some parameters specifically to zero based on the tuning parameter `sparsity`. For example, when `sparsity = .5`, only coefficients where at least half the posterior estimates are nonzero are used. 
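The `blasso` note above defines `sparsity` as the fraction of posterior samples in which a coefficient must be nonzero before it is retained. A minimal sketch of tuning over that threshold, assuming the `monomvn` package is installed; `mtcars` and the grid values are illustrative, and the MCMC fit can be slow:

```
library(caret)

blasso_fit <- train(mpg ~ ., data = mtcars,
                    method    = "blasso",
                    # e.g. sparsity = 0.5 keeps coefficients that are nonzero
                    # in at least half of the posterior samples (per the note)
                    tuneGrid  = data.frame(sparsity = c(0.3, 0.5, 0.7)),
                    trControl = trainControl(method = "cv", number = 5))
```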
**Tree Augmented Naive Bayes Classifier** ``` method = 'tan' ``` Type: Classification Tuning parameters: * `score` (Score Function) * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier Structure Learner Wrapper** ``` method = 'tanSearch' ``` Type: Classification Tuning parameters: * `k` (\#Folds) * `epsilon` (Minimum Absolute Improvement) * `smooth` (Smoothing Parameter) * `final_smooth` (Final Smoothing Parameter) * `sp` (Super\-Parent) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier with Attribute Weighting** ``` method = 'awtan' ``` Type: Classification Tuning parameters: * `score` (Score Function) * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Variational Bayesian Multinomial Probit Regression** ``` method = 'vbmpRadial' ``` Type: Classification Tuning parameters: * `estimateTheta` (Theta Estimated) Required packages: `vbmp` ### 7\.0\.4 Binary Predictors Only (back to [contents](train-models-by-tag.html#top)) **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Binary Discriminant Analysis** ``` method = 'binda' ``` Type: Classification Tuning parameters: * `lambda.freqs` (Shrinkage Intensity) Required packages: `binda` **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` ### 7\.0\.5 Boosting (back to [contents](train-models-by-tag.html#top)) **AdaBoost Classification Trees** ``` method = 'adaboost' ``` Type: Classification Tuning parameters: * `nIter` (\#Trees) * `method` (Method) Required packages: `fastAdaboost` **AdaBoost.M1** ``` method = 'AdaBoost.M1' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) * `coeflearn` (Coefficient Type) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available. 
Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Linear Model** ``` method = 'BstLm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **Boosted Logistic Regression** ``` method = 'LogitBoost' ``` Type: Classification Tuning parameters: * `nIter` (\# Boosting Iterations) Required packages: `caTools` **Boosted Smoothing Spline** ``` method = 'bstSm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **Boosted Tree** ``` method = 'blackboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `party`, `mboost`, `plyr`, `partykit` **Boosted Tree** ``` method = 'bstTree' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `maxdepth` (Max Tree Depth) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **eXtreme Gradient Boosting** ``` method = 'xgbDART' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `subsample` (Subsample Percentage) * `colsample_bytree` (Subsample Ratio of Columns) * `rate_drop` (Fraction of Trees Dropped) * `skip_drop` (Prob. of Skipping Drop\-out) * `min_child_weight` (Minimum Sum of Instance Weight) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbLinear' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `lambda` (L2 Regularization) * `alpha` (L1 Regularization) * `eta` (Learning Rate) Required packages: `xgboost` A model\-specific variable importance metric is available. 
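The `xgbLinear` entry above lists four tuning parameters; all of them must appear as columns when an explicit grid is supplied. A minimal regression sketch, assuming the `xgboost` package is installed; `mtcars` and the grid values are illustrative:

```
library(caret)

xgb_grid <- expand.grid(nrounds = c(50, 100),   # boosting iterations
                        lambda  = c(0, 0.1),    # L2 regularization
                        alpha   = c(0, 0.1),    # L1 regularization
                        eta     = 0.3)          # learning rate

xgb_fit <- train(mpg ~ ., data = mtcars,
                 method    = "xgbLinear",
                 tuneGrid  = xgb_grid,
                 trControl = trainControl(method = "cv", number = 5))
```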
**eXtreme Gradient Boosting** ``` method = 'xgbTree' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `colsample_bytree` (Subsample Ratio of Columns) * `min_child_weight` (Minimum Sum of Instance Weight) * `subsample` (Subsample Percentage) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **Gradient Boosting Machines** ``` method = 'gbm_h2o' ``` Type: Regression, Classification Tuning parameters: * `ntrees` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `min_rows` (Min. Terminal Node Size) * `learn_rate` (Shrinkage) * `col_sample_rate` (\#Randomly Selected Predictors) Required packages: `h2o` A model\-specific variable importance metric is available. **Stochastic Gradient Boosting** ``` method = 'gbm' ``` Type: Regression, Classification Tuning parameters: * `n.trees` (\# Boosting Iterations) * `interaction.depth` (Max Tree Depth) * `shrinkage` (Shrinkage) * `n.minobsinnode` (Min. Terminal Node Size) Required packages: `gbm`, `plyr` A model\-specific variable importance metric is available. ### 7\.0\.6 Categorical Predictors Only (back to [contents](train-models-by-tag.html#top)) **Model Averaged Naive Bayes Classifier** ``` method = 'manb' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) * `prior` (Prior Probability) Required packages: `bnclassify` **Naive Bayes Classifier** ``` method = 'nbDiscrete' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Naive Bayes Classifier with Attribute Weighting** ``` method = 'awnb' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Semi\-Naive Structure Learner Wrapper** ``` method = 'nbSearch' ``` Type: Classification Tuning parameters: * `k` (\#Folds) * `epsilon` (Minimum Absolute Improvement) * `smooth` (Smoothing Parameter) * `final_smooth` (Final Smoothing Parameter) * `direction` (Search Direction) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier** ``` method = 'tan' ``` Type: Classification Tuning parameters: * `score` (Score Function) * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier Structure Learner Wrapper** ``` method = 'tanSearch' ``` Type: Classification Tuning parameters: * `k` (\#Folds) * `epsilon` (Minimum Absolute Improvement) * `smooth` (Smoothing Parameter) * `final_smooth` (Final Smoothing Parameter) * `sp` (Super\-Parent) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier with Attribute Weighting** ``` method = 'awtan' ``` Type: Classification Tuning parameters: * `score` (Score Function) * `smooth` (Smoothing Parameter) Required packages: `bnclassify` ### 7\.0\.7 Cost Sensitive Learning (back to [contents](train-models-by-tag.html#top)) **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. 
**Cost\-Sensitive CART** ``` method = 'rpartCost' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `Cost` (Cost) Required packages: `rpart`, `plyr` **L2 Regularized Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights2' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) * `weight` (Class Weight) Required packages: `LiblineaR` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Multilayer Perceptron Network with Dropout** ``` method = 'mlpKerasDropoutCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `dropout` (Dropout Rate) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecayCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. 
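For the class-weighted SVMs in this section (`svmLinearWeights`, `svmLinearWeights2`, `svmRadialWeights`), the class weight itself is just another tuning parameter. A minimal two-class sketch with `svmLinearWeights`, assuming the `e1071` package is installed; the subset of `iris` and the grid values are illustrative:

```
library(caret)

# Two-class toy problem: versicolor vs. virginica
iris2 <- droplevels(subset(iris, Species != "setosa"))

svmw_fit <- train(Species ~ ., data = iris2,
                  method     = "svmLinearWeights",
                  preProcess = c("center", "scale"),
                  # weight rescales one class's penalty relative to the other
                  tuneGrid   = expand.grid(cost   = c(0.25, 1, 4),
                                           weight = c(1, 2, 3)),
                  trControl  = trainControl(method = "cv", number = 5))
```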
**Support Vector Machines with Class Weights** ``` method = 'svmRadialWeights' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) * `Weight` (Weight) Required packages: `kernlab` ### 7\.0\.8 Discriminant Analysis (back to [contents](train-models-by-tag.html#top)) **Adaptive Mixture Discriminant Analysis** ``` method = 'amdai' ``` Type: Classification Tuning parameters: * `model` (Model Type) Required packages: `adaptDA` **Binary Discriminant Analysis** ``` method = 'binda' ``` Type: Classification Tuning parameters: * `lambda.freqs` (Shrinkage Intensity) Required packages: `binda` **Diagonal Discriminant Analysis** ``` method = 'dda' ``` Type: Classification Tuning parameters: * `model` (Model) * `shrinkage` (Shrinkage Type) Required packages: `sparsediscrim` **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **Factor\-Based Linear Discriminant Analysis** ``` method = 'RFlda' ``` Type: Classification Tuning parameters: * `q` (\# Factors) Required packages: `HiDimDA` **Heteroscedastic Discriminant Analysis** ``` method = 'hda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `newdim` (Dimension of the Discriminative Subspace) Required packages: `hda` **High Dimensional Discriminant Analysis** ``` method = 'hdda' ``` Type: Classification Tuning parameters: * `threshold` (Threshold) * `model` (Model Type) Required packages: `HDclassif` **High\-Dimensional Regularized Discriminant Analysis** ``` method = 'hdrda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `shrinkage_type` (Shrinkage Type) Required packages: `sparsediscrim` **Linear Discriminant Analysis** ``` method = 'lda' ``` Type: Classification No tuning parameters for this model Required packages: `MASS` **Linear Discriminant Analysis** ``` method = 'lda2' ``` Type: Classification Tuning parameters: * `dimen` (\#Discriminant Functions) Required packages: `MASS` **Linear Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepLDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Localized Linear Discriminant Analysis** ``` method = 'loclda' ``` Type: Classification Tuning parameters: * `k` (\#Nearest Neighbors) Required packages: `klaR` **Maximum Uncertainty Linear Discriminant Analysis** ``` method = 'Mlda' ``` Type: Classification No tuning parameters for this model Required packages: `HiDimDA` **Mixture Discriminant Analysis** ``` method = 'mda' ``` Type: Classification Tuning parameters: * `subclasses` (\#Subclasses Per Class) Required packages: `mda` **Penalized Discriminant Analysis** ``` method = 'pda' ``` Type: Classification Tuning parameters: * `lambda` (Shrinkage Penalty Coefficient) Required packages: `mda` **Penalized Discriminant Analysis** ``` method = 'pda2' ``` Type: 
Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `mda` **Penalized Linear Discriminant Analysis** ``` method = 'PenalizedLDA' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `K` (\#Discriminant Functions) Required packages: `penalizedLDA`, `plyr` **Quadratic Discriminant Analysis** ``` method = 'qda' ``` Type: Classification No tuning parameters for this model Required packages: `MASS` **Quadratic Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepQDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Regularized Discriminant Analysis** ``` method = 'rda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) Required packages: `klaR` **Regularized Linear Discriminant Analysis** ``` method = 'rlda' ``` Type: Classification Tuning parameters: * `estimator` (Regularization Method) Required packages: `sparsediscrim` **Robust Linear Discriminant Analysis** ``` method = 'Linda' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Robust Mixture Discriminant Analysis** ``` method = 'rmda' ``` Type: Classification Tuning parameters: * `K` (\#Subclasses Per Class) * `model` (Model) Required packages: `robustDA` **Robust Quadratic Discriminant Analysis** ``` method = 'QdaCov' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Robust Regularized Linear Discriminant Analysis** ``` method = 'rrlda' ``` Type: Classification Tuning parameters: * `lambda` (Penalty Parameter) * `hp` (Robustness Parameter) * `penalty` (Penalty Type) Required packages: `rrlda` Notes: Unlike other packages used by `train`, the `rrlda` package is fully loaded when this model is used. 
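Most of the discriminant analysis variants above are tuned the same way. As a hedged sketch, `rda` could be searched over its `gamma` and `lambda` parameters (both lie between 0 and 1); the `iris` data and the grid spacing are illustrative assumptions:

```
library(caret)

rda_grid <- expand.grid(gamma  = seq(0, 1, by = 0.25),
                        lambda = seq(0, 1, by = 0.25))

set.seed(1)
fit_rda <- train(Species ~ ., data = iris,
                 method    = "rda",
                 tuneGrid  = rda_grid,
                 trControl = trainControl(method = "repeatedcv",
                                          number = 5, repeats = 3))
fit_rda$bestTune
```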
**Shrinkage Discriminant Analysis** ``` method = 'sda' ``` Type: Classification Tuning parameters: * `diagonal` (Diagonalize) * `lambda` (shrinkage) Required packages: `sda` **Sparse Linear Discriminant Analysis** ``` method = 'sparseLDA' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) Required packages: `sparseLDA` **Sparse Mixture Discriminant Analysis** ``` method = 'smda' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) * `R` (\# Subclasses) Required packages: `sparseLDA` **Stabilized Linear Discriminant Analysis** ``` method = 'slda' ``` Type: Classification No tuning parameters for this model Required packages: `ipred` ### 7\.0\.9 Distance Weighted Discrimination (back to [contents](train-models-by-tag.html#top)) **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Sparse Distance Weighted Discrimination** ``` method = 'sdwd' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `sdwd` A model\-specific variable importance metric is available. ### 7\.0\.10 Ensemble Model (back to [contents](train-models-by-tag.html#top)) **AdaBoost Classification Trees** ``` method = 'adaboost' ``` Type: Classification Tuning parameters: * `nIter` (\#Trees) * `method` (Method) Required packages: `fastAdaboost` **AdaBoost.M1** ``` method = 'AdaBoost.M1' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) * `coeflearn` (Coefficient Type) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged CART** ``` method = 'treebag' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `ipred`, `plyr`, `e1071` A model\-specific variable importance metric is available. **Bagged Flexible Discriminant Analysis** ``` method = 'bagFDA' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. 
**Bagged MARS** ``` method = 'bagEarth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS using gCV Pruning** ``` method = 'bagEarthGCV' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged Model** ``` method = 'bag' ``` Type: Regression, Classification Tuning parameters: * `vars` (\#Randomly Selected Predictors) Required packages: `caret` **Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available. Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Linear Model** ``` method = 'BstLm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **Boosted Logistic Regression** ``` method = 'LogitBoost' ``` Type: Classification Tuning parameters: * `nIter` (\# Boosting Iterations) Required packages: `caTools` **Boosted Smoothing Spline** ``` method = 'bstSm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **Boosted Tree** ``` method = 'blackboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `party`, `mboost`, `plyr`, `partykit` **Boosted Tree** ``` method = 'bstTree' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `maxdepth` (Max Tree Depth) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. 
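The boosted tree models above use the same grid idea. As an illustrative sketch (the data set and grid values are assumptions), `C5.0` can be tuned over its three parameters, with `model` taking the values `"tree"` or `"rules"`:

```
library(caret)
library(mlbench)
data(Sonar)

c50_grid <- expand.grid(trials = c(1, 10, 20),   # number of boosting iterations
                        model  = c("tree", "rules"),
                        winnow = c(TRUE, FALSE))

set.seed(1)
fit_c50 <- train(Class ~ ., data = Sonar,
                 method    = "C5.0",
                 tuneGrid  = c50_grid,
                 trControl = trainControl(method = "cv", number = 5))
fit_c50$bestTune
```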
**Conditional Inference Random Forest** ``` method = 'cforest' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `party` A model\-specific variable importance metric is available. **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **Ensembles of Generalized Linear Models** ``` method = 'randomGLM' ``` Type: Regression, Classification Tuning parameters: * `maxInteractionOrder` (Interaction Order) Required packages: `randomGLM` Notes: Unlike other packages used by `train`, the `randomGLM` package is fully loaded when this model is used. **eXtreme Gradient Boosting** ``` method = 'xgbDART' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `subsample` (Subsample Percentage) * `colsample_bytree` (Subsample Ratio of Columns) * `rate_drop` (Fraction of Trees Dropped) * `skip_drop` (Prob. of Skipping Drop\-out) * `min_child_weight` (Minimum Sum of Instance Weight) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbLinear' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `lambda` (L2 Regularization) * `alpha` (L1 Regularization) * `eta` (Learning Rate) Required packages: `xgboost` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbTree' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `colsample_bytree` (Subsample Ratio of Columns) * `min_child_weight` (Minimum Sum of Instance Weight) * `subsample` (Subsample Percentage) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **Gradient Boosting Machines** ``` method = 'gbm_h2o' ``` Type: Regression, Classification Tuning parameters: * `ntrees` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `min_rows` (Min. Terminal Node Size) * `learn_rate` (Shrinkage) * `col_sample_rate` (\#Randomly Selected Predictors) Required packages: `h2o` A model\-specific variable importance metric is available. **Model Averaged Neural Network** ``` method = 'avNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) * `bag` (Bagging) Required packages: `nnet` **Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. 
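The `xgbTree` entry above lists seven tuning parameters, and `train()` expects every one of them to appear as a column in `tuneGrid`; parameters you do not want to tune can simply be held at a single value. A hedged sketch on the built\-in `mtcars` data (grid values are assumptions):

```
library(caret)

xgb_grid <- expand.grid(nrounds          = c(100, 200),
                        max_depth        = c(2, 4, 6),
                        eta              = c(0.05, 0.1),
                        gamma            = 0,     # held fixed
                        colsample_bytree = 0.8,   # held fixed
                        min_child_weight = 1,     # held fixed
                        subsample        = 0.8)   # held fixed

set.seed(1)
fit_xgb <- train(mpg ~ ., data = mtcars,
                 method    = "xgbTree",
                 tuneGrid  = xgb_grid,
                 trControl = trainControl(method = "cv", number = 5))
fit_xgb$bestTune
```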
**Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Parallel Random Forest** ``` method = 'parRF' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `e1071`, `randomForest`, `foreach`, `import` A model\-specific variable importance metric is available. **Quantile Random Forest** ``` method = 'qrf' ``` Type: Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `quantregForest` **Quantile Regression Neural Network** ``` method = 'qrnn' ``` Type: Regression Tuning parameters: * `n.hidden` (\#Hidden Units) * `penalty` ( Weight Decay) * `bag` (Bagged Models?) Required packages: `qrnn` **Random Ferns** ``` method = 'rFerns' ``` Type: Classification Tuning parameters: * `depth` (Fern Depth) Required packages: `rFerns` **Random Forest** ``` method = 'ordinalRF' ``` Type: Classification Tuning parameters: * `nsets` (\# score sets tried prior to the approximation) * `ntreeperdiv` (\# of trees (small RFs)) * `ntreefinal` (\# of trees (final RF)) Required packages: `e1071`, `ranger`, `dplyr`, `ordinalForest` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'ranger' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `splitrule` (Splitting Rule) * `min.node.size` (Minimal Node Size) Required packages: `e1071`, `ranger`, `dplyr` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'Rborist' ``` Type: Classification, Regression Tuning parameters: * `predFixed` (\#Randomly Selected Predictors) * `minNode` (Minimal Node Size) Required packages: `Rborist` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'rf' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `randomForest` A model\-specific variable importance metric is available. **Random Forest by Randomization** ``` method = 'extraTrees' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\# Randomly Selected Predictors) * `numRandomCuts` (\# Random Cuts) Required packages: `extraTrees` **Random Forest Rule\-Based Model** ``` method = 'rfRules' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `maxdepth` (Maximum Rule Depth) Required packages: `randomForest`, `inTrees`, `plyr` A model\-specific variable importance metric is available. 
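For the random forest variants above, `mtry` is often the only tuned parameter; other arguments such as `ntree` can be passed through `train()`'s `...` to the underlying fitting function. A minimal sketch (the data and grid values are illustrative) that uses out\-of\-bag resampling, which `rf` supports:

```
library(caret)
library(mlbench)
data(Sonar)

set.seed(1)
fit_rf <- train(Class ~ ., data = Sonar,
                method    = "rf",
                tuneGrid  = expand.grid(mtry = c(2, 4, 8, 16)),
                ntree     = 500,   # passed through to randomForest()
                trControl = trainControl(method = "oob"))
fit_rf
```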
**Regularized Random Forest** ``` method = 'RRF' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) * `coefImp` (Importance Coefficient) Required packages: `randomForest`, `RRF` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRFglobal' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) Required packages: `RRF` A model\-specific variable importance metric is available. **Rotation Forest** ``` method = 'rotationForest' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) Required packages: `rotationForest` A model\-specific variable importance metric is available. **Rotation Forest** ``` method = 'rotationForestCp' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) * `cp` (Complexity Parameter) Required packages: `rpart`, `plyr`, `rotationForest` A model\-specific variable importance metric is available. **Stochastic Gradient Boosting** ``` method = 'gbm' ``` Type: Regression, Classification Tuning parameters: * `n.trees` (\# Boosting Iterations) * `interaction.depth` (Max Tree Depth) * `shrinkage` (Shrinkage) * `n.minobsinnode` (Min. Terminal Node Size) Required packages: `gbm`, `plyr` A model\-specific variable importance metric is available. **Tree\-Based Ensembles** ``` method = 'nodeHarvest' ``` Type: Regression, Classification Tuning parameters: * `maxinter` (Maximum Interaction Depth) * `mode` (Prediction Mode) Required packages: `nodeHarvest` **Weighted Subspace Random Forest** ``` method = 'wsrf' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `wsrf` ### 7\.0\.11 Feature Extraction (back to [contents](train-models-by-tag.html#top)) **Independent Component Regression** ``` method = 'icr' ``` Type: Regression Tuning parameters: * `n.comp` (\#Components) Required packages: `fastICA` **Neural Networks with Feature Extraction** ``` method = 'pcaNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` **Partial Least Squares** ``` method = 'kernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'pls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'simpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'widekernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. 
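For the partial least squares entries above, `tuneLength` is a convenient alternative to an explicit grid: it evaluates `ncomp = 1, 2, ..., tuneLength`. A hedged sketch on `mtcars` (centering and scaling are a common, but assumed, preprocessing choice for PLS):

```
library(caret)

set.seed(1)
fit_pls <- train(mpg ~ ., data = mtcars,
                 method     = "pls",
                 tuneLength = 8,                     # tries ncomp = 1..8
                 preProcess = c("center", "scale"),
                 trControl  = trainControl(method = "cv", number = 5))
fit_pls$bestTune
```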
**Principal Component Analysis** ``` method = 'pcr' ``` Type: Regression Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` **Projection Pursuit Regression** ``` method = 'ppr' ``` Type: Regression Tuning parameters: * `nterms` (\# Terms) **Sparse Partial Least Squares** ``` method = 'spls' ``` Type: Regression, Classification Tuning parameters: * `K` (\#Components) * `eta` (Threshold) * `kappa` (Kappa) Required packages: `spls` **Supervised Principal Component Analysis** ``` method = 'superpc' ``` Type: Regression Tuning parameters: * `threshold` (Threshold) * `n.components` (\#Components) Required packages: `superpc` ### 7\.0\.12 Feature Selection Wrapper (back to [contents](train-models-by-tag.html#top)) **Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **Linear Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepLDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Linear Regression with Backwards Selection** ``` method = 'leapBackward' ``` Type: Regression Tuning parameters: * `nvmax` (Maximum Number of Predictors) Required packages: `leaps` **Linear Regression with Forward Selection** ``` method = 'leapForward' ``` Type: Regression Tuning parameters: * `nvmax` (Maximum Number of Predictors) Required packages: `leaps` **Linear Regression with Stepwise Selection** ``` method = 'leapSeq' ``` Type: Regression Tuning parameters: * `nvmax` (Maximum Number of Predictors) Required packages: `leaps` **Linear Regression with Stepwise Selection** ``` method = 'lmStepAIC' ``` Type: Regression No tuning parameters for this model Required packages: `MASS` **Quadratic Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepQDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Ridge Regression with Variable Selection** ``` method = 'foba' ``` Type: Regression Tuning parameters: * `k` (\#Variables Retained) * `lambda` (L2 Penalty) Required packages: `foba` ### 7\.0\.13 Gaussian Process (back to [contents](train-models-by-tag.html#top)) **Gaussian Process** ``` method = 'gaussprLinear' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `kernlab` **Gaussian Process with Polynomial Kernel** ``` method = 'gaussprPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kernlab` **Gaussian Process with Radial Basis Function Kernel** ``` method = 'gaussprRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **Variational Bayesian Multinomial Probit Regression** ``` method = 'vbmpRadial' ``` Type: Classification Tuning parameters: * `estimateTheta` (Theta Estimated) Required packages: `vbmp` ### 7\.0\.14 Generalized Additive Model (back to [contents](train-models-by-tag.html#top)) **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. 
See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Generalized Additive Model using LOESS** ``` method = 'gamLoess' ``` Type: Regression, Classification Tuning parameters: * `span` (Span) * `degree` (Degree) Required packages: `gam` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `gam` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'bam' ``` Type: Regression, Classification Tuning parameters: * `select` (Feature Selection) * `method` (Method) Required packages: `mgcv` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `mgcv` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'gam' ``` Type: Regression, Classification Tuning parameters: * `select` (Feature Selection) * `method` (Method) Required packages: `mgcv` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `mgcv` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'gamSpline' ``` Type: Regression, Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `gam` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `gam` package is fully loaded when this model is used. ### 7\.0\.15 Generalized Linear Model (back to [contents](train-models-by-tag.html#top)) **Bayesian Generalized Linear Model** ``` method = 'bayesglm' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `arm` **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available. 
Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Ensembles of Generalized Linear Models** ``` method = 'randomGLM' ``` Type: Regression, Classification Tuning parameters: * `maxInteractionOrder` (Interaction Order) Required packages: `randomGLM` Notes: Unlike other packages used by `train`, the `randomGLM` package is fully loaded when this model is used. **Generalized Additive Model using LOESS** ``` method = 'gamLoess' ``` Type: Regression, Classification Tuning parameters: * `span` (Span) * `degree` (Degree) Required packages: `gam` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `gam` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'bam' ``` Type: Regression, Classification Tuning parameters: * `select` (Feature Selection) * `method` (Method) Required packages: `mgcv` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `mgcv` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'gam' ``` Type: Regression, Classification Tuning parameters: * `select` (Feature Selection) * `method` (Method) Required packages: `mgcv` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `mgcv` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'gamSpline' ``` Type: Regression, Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `gam` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `gam` package is fully loaded when this model is used. 
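As the notes above explain, the spline\-based GAMs decide per predictor whether a smooth or a linear term is used, so only `select` and `method` are tuned. A hedged sketch (the `BostonHousing` data from the `mlbench` package and the grid are illustrative stand\-ins):

```
library(caret)
library(mlbench)
data(BostonHousing)

gam_grid <- expand.grid(select = c(TRUE, FALSE),
                        method = c("GCV.Cp", "REML"))

set.seed(1)
fit_gam <- train(medv ~ ., data = BostonHousing,
                 method    = "gam",        # the mgcv-based GAM listed above
                 tuneGrid  = gam_grid,
                 trControl = trainControl(method = "cv", number = 5))
fit_gam$bestTune
```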
**Generalized Linear Model** ``` method = 'glm' ``` Type: Regression, Classification No tuning parameters for this model A model\-specific variable importance metric is available. **Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages: `msaenet` A model\-specific variable importance metric is available. **Negative Binomial Generalized Linear Model** ``` method = 'glm.nb' ``` Type: Regression Tuning parameters: * `link` (Link Function) Required packages: `MASS` A model\-specific variable importance metric is available. **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 ### 7\.0\.16 Handle Missing Predictor Data (back to [contents](train-models-by-tag.html#top)) **AdaBoost.M1** ``` method = 'AdaBoost.M1' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) * `coeflearn` (Coefficient Type) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart' ``` Type: Regression, Classification Tuning parameters: * `cp` (Complexity Parameter) Required packages: `rpart` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart1SE' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `rpart` A model\-specific variable importance metric is available. Notes: This CART model replicates the same process used by the `rpart` function where the model complexity is determined using the one\-standard error method. This procedure is replicated inside of the resampling done by `train` so that an external resampling estimate can be obtained. 
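The `glmnet` entry above tunes `alpha` (the ridge/lasso mixing proportion) together with `lambda` (the overall penalty). A minimal sketch with assumed grid values and the built\-in `mtcars` data:

```
library(caret)

glmnet_grid <- expand.grid(alpha  = c(0, 0.5, 1),  # ridge, elastic net, lasso
                           lambda = 10^seq(-4, 0, length.out = 20))

set.seed(1)
fit_glmnet <- train(mpg ~ ., data = mtcars,
                    method     = "glmnet",
                    tuneGrid   = glmnet_grid,
                    preProcess = c("center", "scale"),
                    trControl  = trainControl(method = "cv", number = 5))
fit_glmnet$bestTune
```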
**CART** ``` method = 'rpart2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) Required packages: `rpart` A model\-specific variable importance metric is available. **CART or Ordinal Responses** ``` method = 'rpartScore' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `split` (Split Function) * `prune` (Pruning Measure) Required packages: `rpartScore`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive CART** ``` method = 'rpartCost' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `Cost` (Cost) Required packages: `rpart`, `plyr` **Single C5\.0 Ruleset** ``` method = 'C5.0Rules' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single C5\.0 Tree** ``` method = 'C5.0Tree' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. ### 7\.0\.17 Implicit Feature Selection (back to [contents](train-models-by-tag.html#top)) **AdaBoost Classification Trees** ``` method = 'adaboost' ``` Type: Classification Tuning parameters: * `nIter` (\#Trees) * `method` (Method) Required packages: `fastAdaboost` **AdaBoost.M1** ``` method = 'AdaBoost.M1' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) * `coeflearn` (Coefficient Type) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged Flexible Discriminant Analysis** ``` method = 'bagFDA' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS** ``` method = 'bagEarth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS using gCV Pruning** ``` method = 'bagEarthGCV' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bayesian Additive Regression Trees** ``` method = 'bartMachine' ``` Type: Classification, Regression Tuning parameters: * `num_trees` (\#Trees) * `k` (Prior Boundary) * `alpha` (Base Terminal Node Hyperparameter) * `beta` (Power Terminal Node Hyperparameter) * `nu` (Degrees of Freedom) Required packages: `bartMachine` A model\-specific variable importance metric is available. 
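Many of the entries above note that "a model\-specific variable importance metric is available"; after fitting, that metric is reached through `varImp()`. A hedged sketch with a CART fit (the data set and `tuneLength` value are illustrative):

```
library(caret)
library(mlbench)
data(Sonar)

set.seed(1)
fit_cart <- train(Class ~ ., data = Sonar,
                  method     = "rpart",
                  tuneLength = 10,   # evaluates 10 values of cp
                  trControl  = trainControl(method = "cv", number = 5))

# Model-specific importance when available; otherwise a filter-based fallback
varImp(fit_cart)
```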
**Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Linear Model** ``` method = 'BstLm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **Boosted Logistic Regression** ``` method = 'LogitBoost' ``` Type: Classification Tuning parameters: * `nIter` (\# Boosting Iterations) Required packages: `caTools` **Boosted Smoothing Spline** ``` method = 'bstSm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **C4\.5\-like Trees** ``` method = 'J48' ``` Type: Classification Tuning parameters: * `C` (Confidence Threshold) * `M` (Minimum Instances Per Leaf) Required packages: `RWeka` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart' ``` Type: Regression, Classification Tuning parameters: * `cp` (Complexity Parameter) Required packages: `rpart` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart1SE' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `rpart` A model\-specific variable importance metric is available. Notes: This CART model replicates the same process used by the `rpart` function where the model complexity is determined using the one\-standard error method. This procedure is replicated inside of the resampling done by `train` so that an external resampling estimate can be obtained. **CART** ``` method = 'rpart2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) Required packages: `rpart` A model\-specific variable importance metric is available. **CART or Ordinal Responses** ``` method = 'rpartScore' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `split` (Split Function) * `prune` (Pruning Measure) Required packages: `rpartScore`, `plyr` A model\-specific variable importance metric is available. **CHi\-squared Automated Interaction Detection** ``` method = 'chaid' ``` Type: Classification Tuning parameters: * `alpha2` (Merging Threshold) * `alpha3` (Splitting former Merged Threshold) * `alpha4` ( Splitting former Merged Threshold) Required packages: `CHAID` **Conditional Inference Random Forest** ``` method = 'cforest' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `party` A model\-specific variable importance metric is available. 
**Conditional Inference Tree** ``` method = 'ctree' ``` Type: Classification, Regression Tuning parameters: * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Conditional Inference Tree** ``` method = 'ctree2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive CART** ``` method = 'rpartCost' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `Cost` (Cost) Required packages: `rpart`, `plyr` **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **Elasticnet** ``` method = 'enet' ``` Type: Regression Tuning parameters: * `fraction` (Fraction of Full Solution) * `lambda` (Weight Decay) Required packages: `elasticnet` **eXtreme Gradient Boosting** ``` method = 'xgbDART' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `subsample` (Subsample Percentage) * `colsample_bytree` (Subsample Ratio of Columns) * `rate_drop` (Fraction of Trees Dropped) * `skip_drop` (Prob. of Skipping Drop\-out) * `min_child_weight` (Minimum Sum of Instance Weight) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbLinear' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `lambda` (L2 Regularization) * `alpha` (L1 Regularization) * `eta` (Learning Rate) Required packages: `xgboost` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbTree' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `colsample_bytree` (Subsample Ratio of Columns) * `min_child_weight` (Minimum Sum of Instance Weight) * `subsample` (Subsample Percentage) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **Flexible Discriminant Analysis** ``` method = 'fda' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. 
**Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. **Gradient Boosting Machines** ``` method = 'gbm_h2o' ``` Type: Regression, Classification Tuning parameters: * `ntrees` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `min_rows` (Min. Terminal Node Size) * `learn_rate` (Shrinkage) * `col_sample_rate` (\#Randomly Selected Predictors) Required packages: `h2o` A model\-specific variable importance metric is available. **Least Angle Regression** ``` method = 'lars' ``` Type: Regression Tuning parameters: * `fraction` (Fraction) Required packages: `lars` **Least Angle Regression** ``` method = 'lars2' ``` Type: Regression Tuning parameters: * `step` (\#Steps) Required packages: `lars` **Logistic Model Trees** ``` method = 'LMT' ``` Type: Classification Tuning parameters: * `iter` (\# Iterations) Required packages: `RWeka` **Model Rules** ``` method = 'M5Rules' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) Required packages: `RWeka` **Model Tree** ``` method = 'M5' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) * `rules` (Rules) Required packages: `RWeka` **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages: `msaenet` A model\-specific variable importance metric is available. **Multivariate Adaptive Regression Spline** ``` method = 'earth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Multivariate Adaptive Regression Splines** ``` method = 'gcvEarth' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Nearest Shrunken Centroids** ``` method = 'pam' ``` Type: Classification Tuning parameters: * `threshold` (Shrinkage Threshold) Required packages: `pamr` A model\-specific variable importance metric is available. **Non\-Convex Penalized Quantile Regression** ``` method = 'rqnc' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) * `penalty` (Penalty Type) Required packages: `rqPen` **Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. 
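For the MARS entries above (`earth` and `gcvEarth`), a small grid over the interaction degree and the number of retained terms is a typical choice, and `varImp()` then reports the model\-specific importance mentioned above. A minimal sketch with assumed values:

```
library(caret)

mars_grid <- expand.grid(degree = 1:2,              # additive vs. two-way interactions
                         nprune = seq(2, 10, by = 2))

set.seed(1)
fit_mars <- train(mpg ~ ., data = mtcars,
                  method    = "earth",
                  tuneGrid  = mars_grid,
                  trControl = trainControl(method = "cv", number = 5))
varImp(fit_mars)
```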
**Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Parallel Random Forest** ``` method = 'parRF' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `e1071`, `randomForest`, `foreach`, `import` A model\-specific variable importance metric is available. **Penalized Linear Discriminant Analysis** ``` method = 'PenalizedLDA' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `K` (\#Discriminant Functions) Required packages: `penalizedLDA`, `plyr` **Penalized Linear Regression** ``` method = 'penalized' ``` Type: Regression Tuning parameters: * `lambda1` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `penalized` **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Quantile Random Forest** ``` method = 'qrf' ``` Type: Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `quantregForest` **Quantile Regression with LASSO penalty** ``` method = 'rqlasso' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) Required packages: `rqPen` **Random Ferns** ``` method = 'rFerns' ``` Type: Classification Tuning parameters: * `depth` (Fern Depth) Required packages: `rFerns` **Random Forest** ``` method = 'ordinalRF' ``` Type: Classification Tuning parameters: * `nsets` (\# score sets tried prior to the approximation) * `ntreeperdiv` (\# of trees (small RFs)) * `ntreefinal` (\# of trees (final RF)) Required packages: `e1071`, `ranger`, `dplyr`, `ordinalForest` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'ranger' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `splitrule` (Splitting Rule) * `min.node.size` (Minimal Node Size) Required packages: `e1071`, `ranger`, `dplyr` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'Rborist' ``` Type: Classification, Regression Tuning parameters: * `predFixed` (\#Randomly Selected Predictors) * `minNode` (Minimal Node Size) Required packages: `Rborist` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'rf' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `randomForest` A model\-specific variable importance metric is available. 
**Random Forest by Randomization** ``` method = 'extraTrees' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\# Randomly Selected Predictors) * `numRandomCuts` (\# Random Cuts) Required packages: `extraTrees` **Random Forest Rule\-Based Model** ``` method = 'rfRules' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `maxdepth` (Maximum Rule Depth) Required packages: `randomForest`, `inTrees`, `plyr` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRF' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) * `coefImp` (Importance Coefficient) Required packages: `randomForest`, `RRF` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRFglobal' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) Required packages: `RRF` A model\-specific variable importance metric is available. **Relaxed Lasso** ``` method = 'relaxo' ``` Type: Regression Tuning parameters: * `lambda` (Penalty Parameter) * `phi` (Relaxation Parameter) Required packages: `relaxo`, `plyr` **Rotation Forest** ``` method = 'rotationForest' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) Required packages: `rotationForest` A model\-specific variable importance metric is available. **Rotation Forest** ``` method = 'rotationForestCp' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) * `cp` (Complexity Parameter) Required packages: `rpart`, `plyr`, `rotationForest` A model\-specific variable importance metric is available. **Rule\-Based Classifier** ``` method = 'JRip' ``` Type: Classification Tuning parameters: * `NumOpt` (\# Optimizations) * `NumFolds` (\# Folds) * `MinWeights` (Min Weights) Required packages: `RWeka` A model\-specific variable importance metric is available. **Rule\-Based Classifier** ``` method = 'PART' ``` Type: Classification Tuning parameters: * `threshold` (Confidence Threshold) * `pruned` (Pruning) Required packages: `RWeka` A model\-specific variable importance metric is available. **Single C5\.0 Ruleset** ``` method = 'C5.0Rules' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single C5\.0 Tree** ``` method = 'C5.0Tree' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single Rule Classification** ``` method = 'OneR' ``` Type: Classification No tuning parameters for this model Required packages: `RWeka` **Sparse Distance Weighted Discrimination** ``` method = 'sdwd' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `sdwd` A model\-specific variable importance metric is available. 
**Sparse Linear Discriminant Analysis** ``` method = 'sparseLDA' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) Required packages: `sparseLDA` **Sparse Mixture Discriminant Analysis** ``` method = 'smda' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) * `R` (\# Subclasses) Required packages: `sparseLDA` **Spike and Slab Regression** ``` method = 'spikeslab' ``` Type: Regression Tuning parameters: * `vars` (Variables Retained) Required packages: `spikeslab`, `plyr` Notes: Unlike other packages used by `train`, the `spikeslab` package is fully loaded when this model is used. **Stochastic Gradient Boosting** ``` method = 'gbm' ``` Type: Regression, Classification Tuning parameters: * `n.trees` (\# Boosting Iterations) * `interaction.depth` (Max Tree Depth) * `shrinkage` (Shrinkage) * `n.minobsinnode` (Min. Terminal Node Size) Required packages: `gbm`, `plyr` A model\-specific variable importance metric is available. **The Bayesian lasso** ``` method = 'blasso' ``` Type: Regression Tuning parameters: * `sparsity` (Sparsity Threshold) Required packages: `monomvn` Notes: This model creates predictions using the mean of the posterior distributions but sets some parameters specifically to zero based on the tuning parameter `sparsity`. For example, when `sparsity = .5`, only coefficients where at least half the posterior estimates are nonzero are used. **The lasso** ``` method = 'lasso' ``` Type: Regression Tuning parameters: * `fraction` (Fraction of Full Solution) Required packages: `elasticnet` **Tree Models from Genetic Algorithms** ``` method = 'evtree' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Complexity Parameter) Required packages: `evtree` **Tree\-Based Ensembles** ``` method = 'nodeHarvest' ``` Type: Regression, Classification Tuning parameters: * `maxinter` (Maximum Interaction Depth) * `mode` (Prediction Mode) Required packages: `nodeHarvest` **Weighted Subspace Random Forest** ``` method = 'wsrf' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `wsrf` ### 7\.0\.18 Kernel Method (back to [contents](train-models-by-tag.html#top)) **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **Gaussian Process** ``` method = 'gaussprLinear' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `kernlab` **Gaussian Process with Polynomial Kernel** ``` method = 'gaussprPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kernlab` **Gaussian Process with Radial Basis Function Kernel** ``` method = 'gaussprRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **L2 Regularized Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights2' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) * `weight` (Class Weight) Required packages: `LiblineaR` **L2 Regularized 
Support Vector Machine (dual) with Linear Kernel** ``` method = 'svmLinear3' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) Required packages: `LiblineaR` **Least Squares Support Vector Machine** ``` method = 'lssvmLinear' ``` Type: Classification Tuning parameters: * `tau` (Regularization Parameter) Required packages: `kernlab` **Least Squares Support Vector Machine with Polynomial Kernel** ``` method = 'lssvmPoly' ``` Type: Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `tau` (Regularization Parameter) Required packages: `kernlab` **Least Squares Support Vector Machine with Radial Basis Function Kernel** ``` method = 'lssvmRadial' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `tau` (Regularization Parameter) Required packages: `kernlab` **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Partial Least Squares** ``` method = 'kernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Polynomial Kernel Regularized Least Squares** ``` method = 'krlsPoly' ``` Type: Regression Tuning parameters: * `lambda` (Regularization Parameter) * `degree` (Polynomial Degree) Required packages: `KRLS` **Radial Basis Function Kernel Regularized Least Squares** ``` method = 'krlsRadial' ``` Type: Regression Tuning parameters: * `lambda` (Regularization Parameter) * `sigma` (Sigma) Required packages: `KRLS`, `kernlab` **Relevance Vector Machines with Linear Kernel** ``` method = 'rvmLinear' ``` Type: Regression No tuning parameters for this model Required packages: `kernlab` **Relevance Vector Machines with Polynomial Kernel** ``` method = 'rvmPoly' ``` Type: Regression Tuning parameters: * `scale` (Scale) * `degree` (Polynomial Degree) Required packages: `kernlab` **Relevance Vector Machines with Radial Basis Function Kernel** ``` method = 'rvmRadial' ``` Type: Regression Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **Support Vector Machines with Boundrange String Kernel** ``` method = 'svmBoundrangeString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Class Weights** ``` method = 'svmRadialWeights' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) * `Weight` (Weight) Required packages: `kernlab` **Support Vector Machines with Exponential String Kernel** ``` method = 'svmExpoString' ``` Type: Regression, Classification Tuning parameters: * `lambda` (lambda) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 
'svmLinear2' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) Required packages: `e1071` **Support Vector Machines with Polynomial Kernel** ``` method = 'svmPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialCost' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialSigma' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` Notes: This SVM model tunes over the cost parameter and the RBF kernel parameter sigma. In the latter case, using `tuneLength` will, at most, evaluate six values of the kernel parameter. This enables a broad search over the cost parameter and a relatively narrow search over `sigma` **Support Vector Machines with Spectrum String Kernel** ``` method = 'svmSpectrumString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` ### 7\.0\.19 L1 Regularization (back to [contents](train-models-by-tag.html#top)) **Bayesian Ridge Regression (Model Averaged)** ``` method = 'blassoAveraged' ``` Type: Regression No tuning parameters for this model Required packages: `monomvn` Notes: This model makes predictions by averaging the predictions based on the posterior estimates of the regression coefficients. While it is possible that some of these posterior estimates are zero for non\-informative predictors, the final predicted value may be a function of many (or even all) predictors. **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **Elasticnet** ``` method = 'enet' ``` Type: Regression Tuning parameters: * `fraction` (Fraction of Full Solution) * `lambda` (Weight Decay) Required packages: `elasticnet` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. **Least Angle Regression** ``` method = 'lars' ``` Type: Regression Tuning parameters: * `fraction` (Fraction) Required packages: `lars` **Least Angle Regression** ``` method = 'lars2' ``` Type: Regression Tuning parameters: * `step` (\#Steps) Required packages: `lars` **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages: `msaenet` A model\-specific variable importance metric is available. 
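To make the Kernel Method entries above concrete, here is a minimal sketch of how one of them, `method = 'svmRadial'`, maps onto a `train()` call. The `iris` data and the grid values are purely illustrative and assume that `caret` and `kernlab` are installed.

```
library(caret)
# Radial-basis SVM: tune the kernel width (sigma) and the cost (C),
# the two tuning parameters listed for method = 'svmRadial'.
grid <- expand.grid(sigma = c(0.01, 0.05, 0.1), C = 2^(0:3))
set.seed(1)
fit <- train(Species ~ ., data = iris,
             method = "svmRadial",
             preProcess = c("center", "scale"),
             tuneGrid = grid,
             trControl = trainControl(method = "cv", number = 5))
fit$bestTune   # sigma/C pair chosen by cross-validation
```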
**Non\-Convex Penalized Quantile Regression** ``` method = 'rqnc' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) * `penalty` (Penalty Type) Required packages: `rqPen` **Penalized Linear Discriminant Analysis** ``` method = 'PenalizedLDA' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `K` (\#Discriminant Functions) Required packages: `penalizedLDA`, `plyr` **Penalized Linear Regression** ``` method = 'penalized' ``` Type: Regression Tuning parameters: * `lambda1` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `penalized` **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Quantile Regression with LASSO penalty** ``` method = 'rqlasso' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) Required packages: `rqPen` **Regularized Logistic Regression** ``` method = 'regLogistic' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `loss` (Loss Function) * `epsilon` (Tolerance) Required packages: `LiblineaR` **Relaxed Lasso** ``` method = 'relaxo' ``` Type: Regression Tuning parameters: * `lambda` (Penalty Parameter) * `phi` (Relaxation Parameter) Required packages: `relaxo`, `plyr` **Sparse Distance Weighted Discrimination** ``` method = 'sdwd' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `sdwd` A model\-specific variable importance metric is available. **Sparse Linear Discriminant Analysis** ``` method = 'sparseLDA' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) Required packages: `sparseLDA` **Sparse Mixture Discriminant Analysis** ``` method = 'smda' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) * `R` (\# Subclasses) Required packages: `sparseLDA` **Sparse Partial Least Squares** ``` method = 'spls' ``` Type: Regression, Classification Tuning parameters: * `K` (\#Components) * `eta` (Threshold) * `kappa` (Kappa) Required packages: `spls` **The Bayesian lasso** ``` method = 'blasso' ``` Type: Regression Tuning parameters: * `sparsity` (Sparsity Threshold) Required packages: `monomvn` Notes: This model creates predictions using the mean of the posterior distributions but sets some parameters specifically to zero based on the tuning parameter `sparsity`. For example, when `sparsity = .5`, only coefficients where at least half the posterior estimates are nonzero are used. 
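As one worked example from this L1 Regularization tag, the `glmnet` entry above can be tuned over an `alpha`/`lambda` grid. This is a hedged sketch only: `mtcars` and the grid values are illustrative, not part of the original listing.

```
library(caret)
# glmnet: alpha mixes the L1/L2 penalties, lambda sets the penalty strength.
grid <- expand.grid(alpha  = c(0.1, 0.5, 1),
                    lambda = 10^seq(-3, 0, length.out = 10))
set.seed(1)
fit <- train(mpg ~ ., data = mtcars,
             method = "glmnet",
             tuneGrid = grid,
             trControl = trainControl(method = "cv", number = 5))
fit$bestTune
```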
**The lasso** ``` method = 'lasso' ``` Type: Regression Tuning parameters: * `fraction` (Fraction of Full Solution) Required packages: `elasticnet` ### 7\.0\.20 L2 Regularization (back to [contents](train-models-by-tag.html#top)) **Bayesian Ridge Regression** ``` method = 'bridge' ``` Type: Regression No tuning parameters for this model Required packages: `monomvn` **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Model Averaged Neural Network** ``` method = 'avNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) * `bag` (Bagging) Required packages: `nnet` **Multi\-Layer Perceptron** ``` method = 'mlpWeightDecay' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `RSNNS` **Multi\-Layer Perceptron, multiple layers** ``` method = 'mlpWeightDecayML' ``` Type: Regression, Classification Tuning parameters: * `layer1` (\#Hidden Units layer1\) * `layer2` (\#Hidden Units layer2\) * `layer3` (\#Hidden Units layer3\) * `decay` (Weight Decay) Required packages: `RSNNS` **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages: `msaenet` A model\-specific variable importance metric is available. **Multilayer Perceptron Network by Stochastic Gradient Descent** ``` method = 'mlpSGD' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `l2reg` (L2 Regularization) * `lambda` (RMSE Gradient Scaling) * `learn_rate` (Learning Rate) * `momentum` (Momentum) * `gamma` (Learning Rate Decay) * `minibatchsz` (Batch Size) * `repeats` (\#Models) Required packages: `FCNN4R`, `plyr` A model\-specific variable importance metric is available. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecay' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions.
When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that that operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecayCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that that operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Neural Network** ``` method = 'nnet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Neural Networks with Feature Extraction** ``` method = 'pcaNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Penalized Linear Regression** ``` method = 'penalized' ``` Type: Regression Tuning parameters: * `lambda1` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `penalized` **Penalized Logistic Regression** ``` method = 'plr' ``` Type: Classification Tuning parameters: * `lambda` (L2 Penalty) * `cp` (Complexity Parameter) Required packages: `stepPlr` **Penalized Multinomial Regression** ``` method = 'multinom' ``` Type: Classification Tuning parameters: * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Polynomial Kernel Regularized Least Squares** ``` method = 'krlsPoly' ``` Type: Regression Tuning parameters: * `lambda` (Regularization Parameter) * `degree` (Polynomial Degree) Required packages: `KRLS` **Quantile Regression Neural Network** ``` method = 'qrnn' ``` Type: Regression Tuning parameters: * `n.hidden` (\#Hidden Units) * `penalty` ( Weight Decay) * `bag` (Bagged Models?)
Required packages: `qrnn` **Radial Basis Function Kernel Regularized Least Squares** ``` method = 'krlsRadial' ``` Type: Regression Tuning parameters: * `lambda` (Regularization Parameter) * `sigma` (Sigma) Required packages: `KRLS`, `kernlab` **Radial Basis Function Network** ``` method = 'rbf' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) Required packages: `RSNNS` **Radial Basis Function Network** ``` method = 'rbfDDA' ``` Type: Regression, Classification Tuning parameters: * `negativeThreshold` (Activation Limit for Conflicting Classes) Required packages: `RSNNS` **Regularized Logistic Regression** ``` method = 'regLogistic' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `loss` (Loss Function) * `epsilon` (Tolerance) Required packages: `LiblineaR` **Relaxed Lasso** ``` method = 'relaxo' ``` Type: Regression Tuning parameters: * `lambda` (Penalty Parameter) * `phi` (Relaxation Parameter) Required packages: `relaxo`, `plyr` **Ridge Regression** ``` method = 'ridge' ``` Type: Regression Tuning parameters: * `lambda` (Weight Decay) Required packages: `elasticnet` **Ridge Regression with Variable Selection** ``` method = 'foba' ``` Type: Regression Tuning parameters: * `k` (\#Variables Retained) * `lambda` (L2 Penalty) Required packages: `foba` **Sparse Distance Weighted Discrimination** ``` method = 'sdwd' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `sdwd` A model\-specific variable importance metric is available. ### 7\.0\.21 Linear Classifier (back to [contents](train-models-by-tag.html#top)) **Adjacent Categories Probability Model for Ordinal Data** ``` method = 'vglmAdjCat' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Bayesian Generalized Linear Model** ``` method = 'bayesglm' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `arm` **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available. Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. 
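Referring back to the `ridge` entry in the L2 Regularization list above, its single tuning parameter is the weight-decay `lambda`. A minimal sketch, with `mtcars` and arbitrary lambda values standing in for real data and a real grid:

```
library(caret)
# Ridge regression (method = 'ridge', package elasticnet):
# one tuning parameter, the weight-decay lambda.
set.seed(1)
fit <- train(mpg ~ ., data = mtcars,
             method = "ridge",
             preProcess = c("center", "scale"),
             tuneGrid = data.frame(lambda = c(0.001, 0.01, 0.1, 1)),
             trControl = trainControl(method = "cv", number = 5))
fit$results
```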
**Continuation Ratio Model for Ordinal Data** ``` method = 'vglmContRatio' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Cumulative Probability Model for Ordinal Data** ``` method = 'vglmCumulative' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Diagonal Discriminant Analysis** ``` method = 'dda' ``` Type: Classification Tuning parameters: * `model` (Model) * `shrinkage` (Shrinkage Type) Required packages: `sparsediscrim` **Ensembles of Generalized Linear Models** ``` method = 'randomGLM' ``` Type: Regression, Classification Tuning parameters: * `maxInteractionOrder` (Interaction Order) Required packages: `randomGLM` Notes: Unlike other packages used by `train`, the `randomGLM` package is fully loaded when this model is used. **Factor\-Based Linear Discriminant Analysis** ``` method = 'RFlda' ``` Type: Classification Tuning parameters: * `q` (\# Factors) Required packages: `HiDimDA` **Gaussian Process** ``` method = 'gaussprLinear' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `kernlab` **Generalized Linear Model** ``` method = 'glm' ``` Type: Regression, Classification No tuning parameters for this model A model\-specific variable importance metric is available. **Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **Generalized Partial Least Squares** ``` method = 'gpls' ``` Type: Classification Tuning parameters: * `K.prov` (\#Components) Required packages: `gpls` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. 
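Entries in this tag with no tuning parameters, such as the `glm` entry above, need no `tuneGrid` at all; resampling then only estimates performance. A hedged sketch, assuming a two-class outcome (the two-species subset of `iris` is used purely for illustration):

```
library(caret)
# Generalized linear model (method = 'glm'): no tuning parameters.
iris2 <- droplevels(subset(iris, Species != "setosa"))  # illustrative two-class data
set.seed(1)
fit <- train(Species ~ ., data = iris2,
             method = "glm",
             trControl = trainControl(method = "cv", number = 5))
fit$results
```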
**Heteroscedastic Discriminant Analysis** ``` method = 'hda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `newdim` (Dimension of the Discriminative Subspace) Required packages: `hda` **High Dimensional Discriminant Analysis** ``` method = 'hdda' ``` Type: Classification Tuning parameters: * `threshold` (Threshold) * `model` (Model Type) Required packages: `HDclassif` **High\-Dimensional Regularized Discriminant Analysis** ``` method = 'hdrda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `shrinkage_type` (Shrinkage Type) Required packages: `sparsediscrim` **L2 Regularized Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights2' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) * `weight` (Class Weight) Required packages: `LiblineaR` **L2 Regularized Support Vector Machine (dual) with Linear Kernel** ``` method = 'svmLinear3' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) Required packages: `LiblineaR` **Least Squares Support Vector Machine** ``` method = 'lssvmLinear' ``` Type: Classification Tuning parameters: * `tau` (Regularization Parameter) Required packages: `kernlab` **Linear Discriminant Analysis** ``` method = 'lda' ``` Type: Classification No tuning parameters for this model Required packages: `MASS` **Linear Discriminant Analysis** ``` method = 'lda2' ``` Type: Classification Tuning parameters: * `dimen` (\#Discriminant Functions) Required packages: `MASS` **Linear Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepLDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Localized Linear Discriminant Analysis** ``` method = 'loclda' ``` Type: Classification Tuning parameters: * `k` (\#Nearest Neighbors) Required packages: `klaR` **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` **Logistic Model Trees** ``` method = 'LMT' ``` Type: Classification Tuning parameters: * `iter` (\# Iteratons) Required packages: `RWeka` **Maximum Uncertainty Linear Discriminant Analysis** ``` method = 'Mlda' ``` Type: Classification No tuning parameters for this model Required packages: `HiDimDA` **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages: `msaenet` A model\-specific variable importance metric is available. **Nearest Shrunken Centroids** ``` method = 'pam' ``` Type: Classification Tuning parameters: * `threshold` (Shrinkage Threshold) Required packages: `pamr` A model\-specific variable importance metric is available. 
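The `pam` entry just above tunes a single shrinkage threshold. The sketch below only shows how the `tuneGrid` column name matches the listed parameter; the `iris` data and the threshold values are arbitrary.

```
library(caret)
# Nearest shrunken centroids (method = 'pam', package pamr):
# tune only the shrinkage threshold.
set.seed(1)
fit <- train(Species ~ ., data = iris,
             method = "pam",
             tuneGrid = data.frame(threshold = seq(0, 4, by = 1)),
             trControl = trainControl(method = "cv", number = 5))
fit$bestTune
```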
**Ordered Logistic or Probit Regression** ``` method = 'polr' ``` Type: Classification Tuning parameters: * `method` (parameter) Required packages: `MASS` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'kernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'pls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'simpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'widekernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Penalized Linear Discriminant Analysis** ``` method = 'PenalizedLDA' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `K` (\#Discriminant Functions) Required packages: `penalizedLDA`, `plyr` **Penalized Logistic Regression** ``` method = 'plr' ``` Type: Classification Tuning parameters: * `lambda` (L2 Penalty) * `cp` (Complexity Parameter) Required packages: `stepPlr` **Penalized Multinomial Regression** ``` method = 'multinom' ``` Type: Classification Tuning parameters: * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Regularized Discriminant Analysis** ``` method = 'rda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) Required packages: `klaR` **Regularized Linear Discriminant Analysis** ``` method = 'rlda' ``` Type: Classification Tuning parameters: * `estimator` (Regularization Method) Required packages: `sparsediscrim` **Regularized Logistic Regression** ``` method = 'regLogistic' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `loss` (Loss Function) * `epsilon` (Tolerance) Required packages: `LiblineaR` **Robust Linear Discriminant Analysis** ``` method = 'Linda' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Robust Regularized Linear Discriminant Analysis** ``` method = 'rrlda' ``` Type: Classification Tuning parameters: * `lambda` (Penalty Parameter) * `hp` (Robustness Parameter) * `penalty` (Penalty Type) Required packages: `rrlda` Notes: Unlike other packages used by `train`, the `rrlda` package is fully loaded when this model is used. **Robust SIMCA** ``` method = 'RSimca' ``` Type: Classification No tuning parameters for this model Required packages: `rrcovHD` Notes: Unlike other packages used by `train`, the `rrcovHD` package is fully loaded when this model is used. 
**Shrinkage Discriminant Analysis** ``` method = 'sda' ``` Type: Classification Tuning parameters: * `diagonal` (Diagonalize) * `lambda` (shrinkage) Required packages: `sda` **Sparse Distance Weighted Discrimination** ``` method = 'sdwd' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `sdwd` A model\-specific variable importance metric is available. **Sparse Linear Discriminant Analysis** ``` method = 'sparseLDA' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) Required packages: `sparseLDA` **Sparse Partial Least Squares** ``` method = 'spls' ``` Type: Regression, Classification Tuning parameters: * `K` (\#Components) * `eta` (Threshold) * `kappa` (Kappa) Required packages: `spls` **Stabilized Linear Discriminant Analysis** ``` method = 'slda' ``` Type: Classification No tuning parameters for this model Required packages: `ipred` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear2' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) Required packages: `e1071` ### 7\.0\.22 Linear Regression (back to [contents](train-models-by-tag.html#top)) **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Bayesian Ridge Regression** ``` method = 'bridge' ``` Type: Regression No tuning parameters for this model Required packages: `monomvn` **Bayesian Ridge Regression (Model Averaged)** ``` method = 'blassoAveraged' ``` Type: Regression No tuning parameters for this model Required packages: `monomvn` Notes: This model makes predictions by averaging the predictions based on the posterior estimates of the regression coefficients. While it is possible that some of these posterior estimates are zero for non\-informative predictors, the final predicted value may be a function of many (or even all) predictors. **Boosted Linear Model** ``` method = 'BstLm' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **Elasticnet** ``` method = 'enet' ``` Type: Regression Tuning parameters: * `fraction` (Fraction of Full Solution) * `lambda` (Weight Decay) Required packages: `elasticnet` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. 
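To illustrate one of the regression entries above, here is a hypothetical fit of the Cubist model over its two tuning parameters. The data set and grid are illustrative only, and `Cubist` is assumed to be installed.

```
library(caret)
# Cubist (method = 'cubist'): tune the number of committees and the
# number of instances (0-9) used for nearest-neighbor adjustment.
grid <- expand.grid(committees = c(1, 10, 50), neighbors = c(0, 5, 9))
set.seed(1)
fit <- train(mpg ~ ., data = mtcars,
             method = "cubist",
             tuneGrid = grid,
             trControl = trainControl(method = "cv", number = 5))
fit$bestTune
```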
**Independent Component Regression** ``` method = 'icr' ``` Type: Regression Tuning parameters: * `n.comp` (\#Components) Required packages: `fastICA` **L2 Regularized Support Vector Machine (dual) with Linear Kernel** ``` method = 'svmLinear3' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) Required packages: `LiblineaR` **Least Angle Regression** ``` method = 'lars' ``` Type: Regression Tuning parameters: * `fraction` (Fraction) Required packages: `lars` **Least Angle Regression** ``` method = 'lars2' ``` Type: Regression Tuning parameters: * `step` (\#Steps) Required packages: `lars` **Linear Regression** ``` method = 'lm' ``` Type: Regression Tuning parameters: * `intercept` (intercept) A model\-specific variable importance metric is available. **Linear Regression with Backwards Selection** ``` method = 'leapBackward' ``` Type: Regression Tuning parameters: * `nvmax` (Maximum Number of Predictors) Required packages: `leaps` **Linear Regression with Forward Selection** ``` method = 'leapForward' ``` Type: Regression Tuning parameters: * `nvmax` (Maximum Number of Predictors) Required packages: `leaps` **Linear Regression with Stepwise Selection** ``` method = 'leapSeq' ``` Type: Regression Tuning parameters: * `nvmax` (Maximum Number of Predictors) Required packages: `leaps` **Linear Regression with Stepwise Selection** ``` method = 'lmStepAIC' ``` Type: Regression No tuning parameters for this model Required packages: `MASS` **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` **Model Rules** ``` method = 'M5Rules' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) Required packages: `RWeka` **Model Tree** ``` method = 'M5' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) * `rules` (Rules) Required packages: `RWeka` **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages: `msaenet` A model\-specific variable importance metric is available. **Non\-Convex Penalized Quantile Regression** ``` method = 'rqnc' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) * `penalty` (Penalty Type) Required packages: `rqPen` **Non\-Negative Least Squares** ``` method = 'nnls' ``` Type: Regression No tuning parameters for this model Required packages: `nnls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'kernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'pls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'simpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'widekernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. 
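The subset-selection entries above (`leapBackward`, `leapForward`, `leapSeq`) all tune a single `nvmax`. A minimal sketch with illustrative data and an arbitrary range:

```
library(caret)
# Backwards selection (method = 'leapBackward', package leaps):
# nvmax caps how many predictors the selected model may keep.
set.seed(1)
fit <- train(mpg ~ ., data = mtcars,
             method = "leapBackward",
             tuneGrid = data.frame(nvmax = 2:8),
             trControl = trainControl(method = "cv", number = 5))
fit$bestTune
```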
**Penalized Linear Regression** ``` method = 'penalized' ``` Type: Regression Tuning parameters: * `lambda1` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `penalized` **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Principal Component Analysis** ``` method = 'pcr' ``` Type: Regression Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` **Quantile Regression with LASSO penalty** ``` method = 'rqlasso' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) Required packages: `rqPen` **Relaxed Lasso** ``` method = 'relaxo' ``` Type: Regression Tuning parameters: * `lambda` (Penalty Parameter) * `phi` (Relaxation Parameter) Required packages: `relaxo`, `plyr` **Relevance Vector Machines with Linear Kernel** ``` method = 'rvmLinear' ``` Type: Regression No tuning parameters for this model Required packages: `kernlab` **Ridge Regression** ``` method = 'ridge' ``` Type: Regression Tuning parameters: * `lambda` (Weight Decay) Required packages: `elasticnet` **Ridge Regression with Variable Selection** ``` method = 'foba' ``` Type: Regression Tuning parameters: * `k` (\#Variables Retained) * `lambda` (L2 Penalty) Required packages: `foba` **Robust Linear Model** ``` method = 'rlm' ``` Type: Regression Tuning parameters: * `intercept` (intercept) * `psi` (psi) Required packages: `MASS` A model\-specific variable importance metric is available. **Sparse Partial Least Squares** ``` method = 'spls' ``` Type: Regression, Classification Tuning parameters: * `K` (\#Components) * `eta` (Threshold) * `kappa` (Kappa) Required packages: `spls` **Spike and Slab Regression** ``` method = 'spikeslab' ``` Type: Regression Tuning parameters: * `vars` (Variables Retained) Required packages: `spikeslab`, `plyr` Notes: Unlike other packages used by `train`, the `spikeslab` package is fully loaded when this model is used. **Supervised Principal Component Analysis** ``` method = 'superpc' ``` Type: Regression Tuning parameters: * `threshold` (Threshold) * `n.components` (\#Components) Required packages: `superpc` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear2' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) Required packages: `e1071` **The Bayesian lasso** ``` method = 'blasso' ``` Type: Regression Tuning parameters: * `sparsity` (Sparsity Threshold) Required packages: `monomvn` Notes: This model creates predictions using the mean of the posterior distributions but sets some parameters specifically to zero based on the tuning parameter `sparsity`. For example, when `sparsity = .5`, only coefficients where at least half the posterior estimates are nonzero are used. 
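For the component-based entries above (`pcr`, and likewise the PLS variants), the only tuning parameter is `ncomp`. A sketch with arbitrary settings and `mtcars` as stand-in data:

```
library(caret)
# Principal component regression (method = 'pcr', package pls):
# tune the number of components after centering and scaling.
set.seed(1)
fit <- train(mpg ~ ., data = mtcars,
             method = "pcr",
             preProcess = c("center", "scale"),
             tuneGrid = data.frame(ncomp = 1:5),
             trControl = trainControl(method = "cv", number = 5))
fit$results
```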
**The lasso** ``` method = 'lasso' ``` Type: Regression Tuning parameters: * `fraction` (Fraction of Full Solution) Required packages: `elasticnet` ### 7\.0\.23 Logic Regression (back to [contents](train-models-by-tag.html#top)) **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` ### 7\.0\.24 Logistic Regression (back to [contents](train-models-by-tag.html#top)) **Adjacent Categories Probability Model for Ordinal Data** ``` method = 'vglmAdjCat' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Bayesian Generalized Linear Model** ``` method = 'bayesglm' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `arm` **Boosted Logistic Regression** ``` method = 'LogitBoost' ``` Type: Classification Tuning parameters: * `nIter` (\# Boosting Iterations) Required packages: `caTools` **Continuation Ratio Model for Ordinal Data** ``` method = 'vglmContRatio' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Cumulative Probability Model for Ordinal Data** ``` method = 'vglmCumulative' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Generalized Partial Least Squares** ``` method = 'gpls' ``` Type: Classification Tuning parameters: * `K.prov` (\#Components) Required packages: `gpls` **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` **Logistic Model Trees** ``` method = 'LMT' ``` Type: Classification Tuning parameters: * `iter` (\# Iteratons) Required packages: `RWeka` **Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Ordered Logistic or Probit Regression** ``` method = 'polr' ``` Type: Classification Tuning parameters: * `method` (parameter) Required packages: `MASS` A model\-specific variable importance metric is available. **Penalized Logistic Regression** ``` method = 'plr' ``` Type: Classification Tuning parameters: * `lambda` (L2 Penalty) * `cp` (Complexity Parameter) Required packages: `stepPlr` **Penalized Multinomial Regression** ``` method = 'multinom' ``` Type: Classification Tuning parameters: * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. 
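As one worked example from this Logistic Regression tag, the `LogitBoost` entry above tunes only the number of boosting iterations. The two-class `iris` subset and the `nIter` values below are illustrative assumptions.

```
library(caret)
# Boosted logistic regression (method = 'LogitBoost', package caTools).
iris2 <- droplevels(subset(iris, Species != "setosa"))  # illustrative two-class data
set.seed(1)
fit <- train(Species ~ ., data = iris2,
             method = "LogitBoost",
             tuneGrid = data.frame(nIter = c(11, 21, 31, 51)),
             trControl = trainControl(method = "cv", number = 5))
fit$bestTune
```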
### 7\.0\.25 Mixture Model (back to [contents](train-models-by-tag.html#top)) **Adaptive Mixture Discriminant Analysis** ``` method = 'amdai' ``` Type: Classification Tuning parameters: * `model` (Model Type) Required packages: `adaptDA` **Mixture Discriminant Analysis** ``` method = 'mda' ``` Type: Classification Tuning parameters: * `subclasses` (\#Subclasses Per Class) Required packages: `mda` **Robust Mixture Discriminant Analysis** ``` method = 'rmda' ``` Type: Classification Tuning parameters: * `K` (\#Subclasses Per Class) * `model` (Model) Required packages: `robustDA` **Sparse Mixture Discriminant Analysis** ``` method = 'smda' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) * `R` (\# Subclasses) Required packages: `sparseLDA` ### 7\.0\.26 Model Tree (back to [contents](train-models-by-tag.html#top)) **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **Logistic Model Trees** ``` method = 'LMT' ``` Type: Classification Tuning parameters: * `iter` (\# Iteratons) Required packages: `RWeka` **Model Rules** ``` method = 'M5Rules' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) Required packages: `RWeka` **Model Tree** ``` method = 'M5' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) * `rules` (Rules) Required packages: `RWeka` ### 7\.0\.27 Multivariate Adaptive Regression Splines (back to [contents](train-models-by-tag.html#top)) **Bagged Flexible Discriminant Analysis** ``` method = 'bagFDA' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS** ``` method = 'bagEarth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS using gCV Pruning** ``` method = 'bagEarthGCV' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Flexible Discriminant Analysis** ``` method = 'fda' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Multivariate Adaptive Regression Spline** ``` method = 'earth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. 
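The MARS entries above share the `nprune`/`degree` parameters; a minimal sketch for `method = 'earth'`, with illustrative data and grid values:

```
library(caret)
# MARS (method = 'earth'): tune the maximum number of retained terms
# (nprune) and the product degree.
grid <- expand.grid(nprune = c(5, 10, 15), degree = 1:2)
set.seed(1)
fit <- train(mpg ~ ., data = mtcars,
             method = "earth",
             tuneGrid = grid,
             trControl = trainControl(method = "cv", number = 5))
varImp(fit)   # the entry above notes a model-specific importance metric
```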
**Multivariate Adaptive Regression Splines** ``` method = 'gcvEarth' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. ### 7\.0\.28 Neural Network (back to [contents](train-models-by-tag.html#top)) **Bayesian Regularized Neural Networks** ``` method = 'brnn' ``` Type: Regression Tuning parameters: * `neurons` (\# Neurons) Required packages: `brnn` **Extreme Learning Machine** ``` method = 'elm' ``` Type: Classification, Regression Tuning parameters: * `nhid` (\#Hidden Units) * `actfun` (Activation Function) Required packages: `elmNN` Notes: The package is no longer on CRAN but can be installed from the archive at [https://cran.r\-project.org/src/contrib/Archive/elmNN/](https://cran.r-project.org/src/contrib/Archive/elmNN/) **Model Averaged Neural Network** ``` method = 'avNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) * `bag` (Bagging) Required packages: `nnet` **Monotone Multi\-Layer Perceptron Neural Network** ``` method = 'monmlp' ``` Type: Classification, Regression Tuning parameters: * `hidden1` (\#Hidden Units) * `n.ensemble` (\#Models) Required packages: `monmlp` **Multi\-Layer Perceptron** ``` method = 'mlp' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) Required packages: `RSNNS` **Multi\-Layer Perceptron** ``` method = 'mlpWeightDecay' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `RSNNS` **Multi\-Layer Perceptron, multiple layers** ``` method = 'mlpWeightDecayML' ``` Type: Regression, Classification Tuning parameters: * `layer1` (\#Hidden Units layer1\) * `layer2` (\#Hidden Units layer2\) * `layer3` (\#Hidden Units layer3\) * `decay` (Weight Decay) Required packages: `RSNNS` **Multi\-Layer Perceptron, with multiple layers** ``` method = 'mlpML' ``` Type: Regression, Classification Tuning parameters: * `layer1` (\#Hidden Units layer1\) * `layer2` (\#Hidden Units layer2\) * `layer3` (\#Hidden Units layer3\) Required packages: `RSNNS` **Multilayer Perceptron Network by Stochastic Gradient Descent** ``` method = 'mlpSGD' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `l2reg` (L2 Regularization) * `lambda` (RMSE Gradient Scaling) * `learn_rate` (Learning Rate) * `momentum` (Momentum) * `gamma` (Learning Rate Decay) * `minibatchsz` (Batch Size) * `repeats` (\#Models) Required packages: `FCNN4R`, `plyr` A model\-specific variable importance metric is available. **Multilayer Perceptron Network with Dropout** ``` method = 'mlpKerasDropout' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `dropout` (Dropout Rate) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that that operation is only done once.
Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Dropout** ``` method = 'mlpKerasDropoutCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `dropout` (Dropout Rate) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that that operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecay' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that that operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecayCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that that operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Neural Network** ``` method = 'mxnet' ``` Type: Classification, Regression Tuning parameters: * `layer1` (\#Hidden Units in Layer 1\) * `layer2` (\#Hidden Units in Layer 2\) * `layer3` (\#Hidden Units in Layer 3\) * `learning.rate` (Learning Rate) * `momentum` (Momentum) * `dropout` (Dropout Rate) * `activation` (Activation Function) Required packages: `mxnet` Notes: The `mxnet` package is not yet on CRAN.
See <http://mxnet.io> for installation instructions. **Neural Network** ``` method = 'mxnetAdam' ``` Type: Classification, Regression Tuning parameters: * `layer1` (\#Hidden Units in Layer 1\) * `layer2` (\#Hidden Units in Layer 2\) * `layer3` (\#Hidden Units in Layer 3\) * `dropout` (Dropout Rate) * `beta1` (beta1\) * `beta2` (beta2\) * `learningrate` (Learning Rate) * `activation` (Activation Function) Required packages: `mxnet` Notes: The `mxnet` package is not yet on CRAN. See <http://mxnet.io> for installation instructions. Users are strongly advised to define `num.round` themselves. **Neural Network** ``` method = 'neuralnet' ``` Type: Regression Tuning parameters: * `layer1` (\#Hidden Units in Layer 1\) * `layer2` (\#Hidden Units in Layer 2\) * `layer3` (\#Hidden Units in Layer 3\) Required packages: `neuralnet` **Neural Network** ``` method = 'nnet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Neural Networks with Feature Extraction** ``` method = 'pcaNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` **Penalized Multinomial Regression** ``` method = 'multinom' ``` Type: Classification Tuning parameters: * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Quantile Regression Neural Network** ``` method = 'qrnn' ``` Type: Regression Tuning parameters: * `n.hidden` (\#Hidden Units) * `penalty` ( Weight Decay) * `bag` (Bagged Models?) Required packages: `qrnn` **Radial Basis Function Network** ``` method = 'rbf' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) Required packages: `RSNNS` **Radial Basis Function Network** ``` method = 'rbfDDA' ``` Type: Regression, Classification Tuning parameters: * `negativeThreshold` (Activation Limit for Conflicting Classes) Required packages: `RSNNS` **Stacked AutoEncoder Deep Neural Network** ``` method = 'dnn' ``` Type: Classification, Regression Tuning parameters: * `layer1` (Hidden Layer 1\) * `layer2` (Hidden Layer 2\) * `layer3` (Hidden Layer 3\) * `hidden_dropout` (Hidden Dropouts) * `visible_dropout` (Visible Dropout) Required packages: `deepnet` ### 7\.0\.29 Oblique Tree (back to [contents](train-models-by-tag.html#top)) **Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. 
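Of the neural-network entries listed above, `nnet` is the simplest to tune (hidden units and weight decay). A hedged sketch, with `iris` and the grid chosen only for illustration:

```
library(caret)
# Single-hidden-layer network (method = 'nnet'): tune size and decay.
grid <- expand.grid(size = c(1, 3, 5), decay = c(0, 0.01, 0.1))
set.seed(1)
fit <- train(Species ~ ., data = iris,
             method = "nnet",
             preProcess = c("center", "scale"),
             tuneGrid = grid,
             trace = FALSE,   # passed through to nnet() to silence fitting output
             trControl = trainControl(method = "cv", number = 5))
fit$bestTune
```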
### 7\.0\.30 Ordinal Outcomes (back to [contents](train-models-by-tag.html#top)) **Adjacent Categories Probability Model for Ordinal Data** ``` method = 'vglmAdjCat' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **CART or Ordinal Responses** ``` method = 'rpartScore' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `split` (Split Function) * `prune` (Pruning Measure) Required packages: `rpartScore`, `plyr` A model\-specific variable importance metric is available. **Continuation Ratio Model for Ordinal Data** ``` method = 'vglmContRatio' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Cumulative Probability Model for Ordinal Data** ``` method = 'vglmCumulative' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Ordered Logistic or Probit Regression** ``` method = 'polr' ``` Type: Classification Tuning parameters: * `method` (parameter) Required packages: `MASS` A model\-specific variable importance metric is available. **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Random Forest** ``` method = 'ordinalRF' ``` Type: Classification Tuning parameters: * `nsets` (\# score sets tried prior to the approximation) * `ntreeperdiv` (\# of trees (small RFs)) * `ntreefinal` (\# of trees (final RF)) Required packages: `e1071`, `ranger`, `dplyr`, `ordinalForest` A model\-specific variable importance metric is available. ### 7\.0\.31 Partial Least Squares (back to [contents](train-models-by-tag.html#top)) **Generalized Partial Least Squares** ``` method = 'gpls' ``` Type: Classification Tuning parameters: * `K.prov` (\#Components) Required packages: `gpls` **Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Partial Least Squares** ``` method = 'kernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'pls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'simpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'widekernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. 
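The `kernelpls`, `pls`, `simpls`, and `widekernelpls` entries above all tune the same `ncomp` parameter and differ only in the underlying algorithm. A minimal sketch for `method = 'pls'` with illustrative data:

```
library(caret)
# Partial least squares (method = 'pls', package pls): tune #components.
set.seed(1)
fit <- train(Species ~ ., data = iris,
             method = "pls",
             preProcess = c("center", "scale"),
             tuneGrid = data.frame(ncomp = 1:3),
             trControl = trainControl(method = "cv", number = 5))
fit$results
```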
**Partial Least Squares Generalized Linear Models** ``` method = 'plsRglm' ``` Type: Classification, Regression Tuning parameters: * `nt` (\#PLS Components) * `alpha.pvals.expli` (p\-Value threshold) Required packages: `plsRglm` Notes: Unlike other packages used by `train`, the `plsRglm` package is fully loaded when this model is used. **Sparse Partial Least Squares** ``` method = 'spls' ``` Type: Regression, Classification Tuning parameters: * `K` (\#Components) * `eta` (Threshold) * `kappa` (Kappa) Required packages: `spls` ### 7\.0\.32 Patient Rule Induction Method (back to [contents](train-models-by-tag.html#top)) **Patient Rule Induction Method** ``` method = 'PRIM' ``` Type: Classification Tuning parameters: * `peel.alpha` (peeling quantile) * `paste.alpha` (pasting quantile) * `mass.min` (minimum mass) Required packages: `supervisedPRIM` ### 7\.0\.33 Polynomial Model (back to [contents](train-models-by-tag.html#top)) **Diagonal Discriminant Analysis** ``` method = 'dda' ``` Type: Classification Tuning parameters: * `model` (Model) * `shrinkage` (Shrinkage Type) Required packages: `sparsediscrim` **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Gaussian Process with Polynomial Kernel** ``` method = 'gaussprPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kernlab` **High\-Dimensional Regularized Discriminant Analysis** ``` method = 'hdrda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `shrinkage_type` (Shrinkage Type) Required packages: `sparsediscrim` **Least Squares Support Vector Machine with Polynomial Kernel** ``` method = 'lssvmPoly' ``` Type: Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `tau` (Regularization Parameter) Required packages: `kernlab` **Penalized Discriminant Analysis** ``` method = 'pda' ``` Type: Classification Tuning parameters: * `lambda` (Shrinkage Penalty Coefficient) Required packages: `mda` **Penalized Discriminant Analysis** ``` method = 'pda2' ``` Type: Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `mda` **Polynomial Kernel Regularized Least Squares** ``` method = 'krlsPoly' ``` Type: Regression Tuning parameters: * `lambda` (Regularization Parameter) * `degree` (Polynomial Degree) Required packages: `KRLS` **Quadratic Discriminant Analysis** ``` method = 'qda' ``` Type: Classification No tuning parameters for this model Required packages: `MASS` **Quadratic Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepQDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Regularized Discriminant Analysis** ``` method = 'rda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) Required packages: `klaR` **Regularized Linear Discriminant Analysis** ``` method = 'rlda' ``` Type: Classification Tuning parameters: * `estimator` (Regularization Method) Required packages: `sparsediscrim` **Relevance Vector Machines with Polynomial Kernel** ``` method = 'rvmPoly' ``` Type: Regression Tuning parameters: * `scale` (Scale) * `degree` (Polynomial Degree) Required packages: `kernlab` **Robust Quadratic Discriminant Analysis** ``` 
method = 'QdaCov' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Support Vector Machines with Polynomial Kernel** ``` method = 'svmPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `C` (Cost) Required packages: `kernlab` ### 7\.0\.34 Prototype Models (back to [contents](train-models-by-tag.html#top)) **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **Greedy Prototype Selection** ``` method = 'protoclass' ``` Type: Classification Tuning parameters: * `eps` (Ball Size) * `Minkowski` (Distance Order) Required packages: `proxy`, `protoclass` **k\-Nearest Neighbors** ``` method = 'kknn' ``` Type: Regression, Classification Tuning parameters: * `kmax` (Max. \#Neighbors) * `distance` (Distance) * `kernel` (Kernel) Required packages: `kknn` **k\-Nearest Neighbors** ``` method = 'knn' ``` Type: Classification, Regression Tuning parameters: * `k` (\#Neighbors) **Learning Vector Quantization** ``` method = 'lvq' ``` Type: Classification Tuning parameters: * `size` (Codebook Size) * `k` (\#Prototypes) Required packages: `class` **Nearest Shrunken Centroids** ``` method = 'pam' ``` Type: Classification Tuning parameters: * `threshold` (Shrinkage Threshold) Required packages: `pamr` A model\-specific variable importance metric is available. **Optimal Weighted Nearest Neighbor Classifier** ``` method = 'ownn' ``` Type: Classification Tuning parameters: * `K` (\#Neighbors) Required packages: `snn` **Stabilized Nearest Neighbor Classifier** ``` method = 'snn' ``` Type: Classification Tuning parameters: * `lambda` (Stabilization Parameter) Required packages: `snn` ### 7\.0\.35 Quantile Regression (back to [contents](train-models-by-tag.html#top)) **Non\-Convex Penalized Quantile Regression** ``` method = 'rqnc' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) * `penalty` (Penalty Type) Required packages: `rqPen` **Quantile Random Forest** ``` method = 'qrf' ``` Type: Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `quantregForest` **Quantile Regression Neural Network** ``` method = 'qrnn' ``` Type: Regression Tuning parameters: * `n.hidden` (\#Hidden Units) * `penalty` ( Weight Decay) * `bag` (Bagged Models?) 
Required packages: `qrnn` **Quantile Regression with LASSO penalty** ``` method = 'rqlasso' ``` Type: Regression Tuning parameters: * `lambda` (L1 Penalty) Required packages: `rqPen` ### 7\.0\.36 Radial Basis Function (back to [contents](train-models-by-tag.html#top)) **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **Gaussian Process with Radial Basis Function Kernel** ``` method = 'gaussprRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **Least Squares Support Vector Machine with Radial Basis Function Kernel** ``` method = 'lssvmRadial' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `tau` (Regularization Parameter) Required packages: `kernlab` **Radial Basis Function Kernel Regularized Least Squares** ``` method = 'krlsRadial' ``` Type: Regression Tuning parameters: * `lambda` (Regularization Parameter) * `sigma` (Sigma) Required packages: `KRLS`, `kernlab` **Radial Basis Function Network** ``` method = 'rbf' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) Required packages: `RSNNS` **Radial Basis Function Network** ``` method = 'rbfDDA' ``` Type: Regression, Classification Tuning parameters: * `negativeThreshold` (Activation Limit for Conflicting Classes) Required packages: `RSNNS` **Relevance Vector Machines with Radial Basis Function Kernel** ``` method = 'rvmRadial' ``` Type: Regression Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **Support Vector Machines with Class Weights** ``` method = 'svmRadialWeights' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) * `Weight` (Weight) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialCost' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialSigma' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` Notes: This SVM model tunes over the cost parameter and the RBF kernel parameter sigma. In the latter case, using `tuneLength` will, at most, evaluate six values of the kernel parameter. This enables a broad search over the cost parameter and a relatively narrow search over `sigma` **Variational Bayesian Multinomial Probit Regression** ``` method = 'vbmpRadial' ``` Type: Classification Tuning parameters: * `estimateTheta` (Theta Estimated) Required packages: `vbmp` ### 7\.0\.37 Random Forest (back to [contents](train-models-by-tag.html#top)) **Conditional Inference Random Forest** ``` method = 'cforest' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `party` A model\-specific variable importance metric is available. 
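As a hedged sketch (not from the original page), a conditional inference random forest such as the `cforest` entry above can be tuned over its single `mtry` parameter. This assumes the `party` package is installed and again uses `iris` only as a stand-in data set.

```
# Hedged sketch: tuning mtry for method = 'cforest'
library(caret)
set.seed(1)
cf_fit <- train(Species ~ ., data = iris,
                method    = "cforest",
                tuneGrid  = data.frame(mtry = 2:4),
                trControl = trainControl(method = "cv", number = 5))
cf_fit$results
```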
**Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Parallel Random Forest** ``` method = 'parRF' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `e1071`, `randomForest`, `foreach`, `import` A model\-specific variable importance metric is available. **Quantile Random Forest** ``` method = 'qrf' ``` Type: Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `quantregForest` **Random Ferns** ``` method = 'rFerns' ``` Type: Classification Tuning parameters: * `depth` (Fern Depth) Required packages: `rFerns` **Random Forest** ``` method = 'ordinalRF' ``` Type: Classification Tuning parameters: * `nsets` (\# score sets tried prior to the approximation) * `ntreeperdiv` (\# of trees (small RFs)) * `ntreefinal` (\# of trees (final RF)) Required packages: `e1071`, `ranger`, `dplyr`, `ordinalForest` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'ranger' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `splitrule` (Splitting Rule) * `min.node.size` (Minimal Node Size) Required packages: `e1071`, `ranger`, `dplyr` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'Rborist' ``` Type: Classification, Regression Tuning parameters: * `predFixed` (\#Randomly Selected Predictors) * `minNode` (Minimal Node Size) Required packages: `Rborist` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'rf' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `randomForest` A model\-specific variable importance metric is available. **Random Forest by Randomization** ``` method = 'extraTrees' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\# Randomly Selected Predictors) * `numRandomCuts` (\# Random Cuts) Required packages: `extraTrees` **Random Forest Rule\-Based Model** ``` method = 'rfRules' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `maxdepth` (Maximum Rule Depth) Required packages: `randomForest`, `inTrees`, `plyr` A model\-specific variable importance metric is available. 
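For models with several tuning parameters, such as the `ranger` entry above, `expand.grid()` builds the full grid. A minimal sketch under the assumption that the `ranger` and `e1071` packages are installed; the grid values and data set are illustrative only.

```
# Hedged sketch: a three-parameter grid for method = 'ranger'
library(caret)
set.seed(1)
rf_grid <- expand.grid(mtry          = c(2, 3),
                       splitrule     = c("gini", "extratrees"),
                       min.node.size = c(1, 5))
rf_fit <- train(Species ~ ., data = iris,
                method    = "ranger",
                tuneGrid  = rf_grid,
                trControl = trainControl(method = "cv", number = 5))
```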
**Regularized Random Forest** ``` method = 'RRF' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) * `coefImp` (Importance Coefficient) Required packages: `randomForest`, `RRF` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRFglobal' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) Required packages: `RRF` A model\-specific variable importance metric is available. **Weighted Subspace Random Forest** ``` method = 'wsrf' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `wsrf` ### 7\.0\.38 Regularization (back to [contents](train-models-by-tag.html#top)) **Bayesian Regularized Neural Networks** ``` method = 'brnn' ``` Type: Regression Tuning parameters: * `neurons` (\# Neurons) Required packages: `brnn` **Diagonal Discriminant Analysis** ``` method = 'dda' ``` Type: Classification Tuning parameters: * `model` (Model) * `shrinkage` (Shrinkage Type) Required packages: `sparsediscrim` **Heteroscedastic Discriminant Analysis** ``` method = 'hda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `newdim` (Dimension of the Discriminative Subspace) Required packages: `hda` **High\-Dimensional Regularized Discriminant Analysis** ``` method = 'hdrda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `shrinkage_type` (Shrinkage Type) Required packages: `sparsediscrim` **Regularized Discriminant Analysis** ``` method = 'rda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) Required packages: `klaR` **Regularized Linear Discriminant Analysis** ``` method = 'rlda' ``` Type: Classification Tuning parameters: * `estimator` (Regularization Method) Required packages: `sparsediscrim` **Regularized Random Forest** ``` method = 'RRF' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) * `coefImp` (Importance Coefficient) Required packages: `randomForest`, `RRF` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRFglobal' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) Required packages: `RRF` A model\-specific variable importance metric is available. **Robust Regularized Linear Discriminant Analysis** ``` method = 'rrlda' ``` Type: Classification Tuning parameters: * `lambda` (Penalty Parameter) * `hp` (Robustness Parameter) * `penalty` (Penalty Type) Required packages: `rrlda` Notes: Unlike other packages used by `train`, the `rrlda` package is fully loaded when this model is used. 
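A hedged sketch for the regularized discriminant analysis (`rda`) entry above, assuming the `klaR` package is installed; the grid values are illustrative, not recommendations.

```
# Hedged sketch: tuning gamma and lambda for method = 'rda'
library(caret)
set.seed(1)
rda_fit <- train(Species ~ ., data = iris,
                 method    = "rda",
                 tuneGrid  = expand.grid(gamma  = c(0, 0.5, 1),
                                         lambda = c(0, 0.5, 1)),
                 trControl = trainControl(method = "cv", number = 5))
rda_fit$bestTune
```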
**Shrinkage Discriminant Analysis** ``` method = 'sda' ``` Type: Classification Tuning parameters: * `diagonal` (Diagonalize) * `lambda` (shrinkage) Required packages: `sda` ### 7\.0\.39 Relevance Vector Machines (back to [contents](train-models-by-tag.html#top)) **Relevance Vector Machines with Linear Kernel** ``` method = 'rvmLinear' ``` Type: Regression No tuning parameters for this model Required packages: `kernlab` **Relevance Vector Machines with Polynomial Kernel** ``` method = 'rvmPoly' ``` Type: Regression Tuning parameters: * `scale` (Scale) * `degree` (Polynomial Degree) Required packages: `kernlab` **Relevance Vector Machines with Radial Basis Function Kernel** ``` method = 'rvmRadial' ``` Type: Regression Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` ### 7\.0\.40 Ridge Regression (back to [contents](train-models-by-tag.html#top)) **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Ridge Regression with Variable Selection** ``` method = 'foba' ``` Type: Regression Tuning parameters: * `k` (\#Variables Retained) * `lambda` (L2 Penalty) Required packages: `foba` ### 7\.0\.41 Robust Methods (back to [contents](train-models-by-tag.html#top)) **L2 Regularized Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights2' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) * `weight` (Class Weight) Required packages: `LiblineaR` **L2 Regularized Support Vector Machine (dual) with Linear Kernel** ``` method = 'svmLinear3' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) Required packages: `LiblineaR` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Regularized Logistic Regression** ``` method = 'regLogistic' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `loss` (Loss Function) * `epsilon` (Tolerance) Required packages: `LiblineaR` **Relevance Vector Machines with Linear Kernel** ``` method = 'rvmLinear' ``` Type: Regression No tuning parameters for this model Required packages: `kernlab` **Relevance Vector Machines with Polynomial Kernel** ``` method = 'rvmPoly' ``` Type: Regression Tuning parameters: * `scale` (Scale) * `degree` (Polynomial Degree) Required packages: `kernlab` **Relevance Vector Machines with Radial Basis Function Kernel** ``` method = 'rvmRadial' ``` Type: Regression Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **Robust Mixture Discriminant Analysis** ``` method = 'rmda' ``` Type: Classification Tuning parameters: * `K` (\#Subclasses Per Class) * `model` (Model) Required packages: `robustDA` **Support Vector Machines with Boundrange String Kernel** ``` method = 'svmBoundrangeString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Exponential String Kernel** ``` method = 'svmExpoString' ``` Type: Regression, Classification Tuning parameters: * `lambda` (lambda) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear' ``` Type: Regression, Classification Tuning parameters: * `C` 
(Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear2' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) Required packages: `e1071` **Support Vector Machines with Polynomial Kernel** ``` method = 'svmPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialSigma' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` Notes: This SVM model tunes over the cost parameter and the RBF kernel parameter sigma. In the latter case, using `tuneLength` will, at most, evaluate six values of the kernel parameter. This enables a broad search over the cost parameter and a relatively narrow search over `sigma` **Support Vector Machines with Spectrum String Kernel** ``` method = 'svmSpectrumString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` ### 7\.0\.42 Robust Model (back to [contents](train-models-by-tag.html#top)) **Quantile Random Forest** ``` method = 'qrf' ``` Type: Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `quantregForest` **Quantile Regression Neural Network** ``` method = 'qrnn' ``` Type: Regression Tuning parameters: * `n.hidden` (\#Hidden Units) * `penalty` ( Weight Decay) * `bag` (Bagged Models?) Required packages: `qrnn` **Robust Linear Discriminant Analysis** ``` method = 'Linda' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Robust Linear Model** ``` method = 'rlm' ``` Type: Regression Tuning parameters: * `intercept` (intercept) * `psi` (psi) Required packages: `MASS` A model\-specific variable importance metric is available. **Robust Regularized Linear Discriminant Analysis** ``` method = 'rrlda' ``` Type: Classification Tuning parameters: * `lambda` (Penalty Parameter) * `hp` (Robustness Parameter) * `penalty` (Penalty Type) Required packages: `rrlda` Notes: Unlike other packages used by `train`, the `rrlda` package is fully loaded when this model is used. **Robust SIMCA** ``` method = 'RSimca' ``` Type: Classification No tuning parameters for this model Required packages: `rrcovHD` Notes: Unlike other packages used by `train`, the `rrcovHD` package is fully loaded when this model is used. **SIMCA** ``` method = 'CSimca' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov`, `rrcovHD` ### 7\.0\.43 ROC Curves (back to [contents](train-models-by-tag.html#top)) **ROC\-Based Classifier** ``` method = 'rocc' ``` Type: Classification Tuning parameters: * `xgenes` (\#Variables Retained) Required packages: `rocc` ### 7\.0\.44 Rule\-Based Model (back to [contents](train-models-by-tag.html#top)) **Adaptive\-Network\-Based Fuzzy Inference System** ``` method = 'ANFIS' ``` Type: Regression Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `max.iter` (Max. 
Iterations) Required packages: `frbs` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cubist** ``` method = 'cubist' ``` Type: Regression Tuning parameters: * `committees` (\#Committees) * `neighbors` (\#Instances) Required packages: `Cubist` A model\-specific variable importance metric is available. **Dynamic Evolving Neural\-Fuzzy Inference System** ``` method = 'DENFIS' ``` Type: Regression Tuning parameters: * `Dthr` (Threshold) * `max.iter` (Max. Iterations) Required packages: `frbs` **Fuzzy Inference Rules by Descent Method** ``` method = 'FIR.DM' ``` Type: Regression Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `max.iter` (Max. Iterations) Required packages: `frbs` **Fuzzy Rules Using Chi’s Method** ``` method = 'FRBCS.CHI' ``` Type: Classification Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `type.mf` (Membership Function) Required packages: `frbs` **Fuzzy Rules Using Genetic Cooperative\-Competitive Learning and Pittsburgh** ``` method = 'FH.GBML' ``` Type: Classification Tuning parameters: * `max.num.rule` (Max. \#Rules) * `popu.size` (Population Size) * `max.gen` (Max. Generations) Required packages: `frbs` **Fuzzy Rules Using the Structural Learning Algorithm on Vague Environment** ``` method = 'SLAVE' ``` Type: Classification Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `max.iter` (Max. Iterations) * `max.gen` (Max. Generations) Required packages: `frbs` **Fuzzy Rules via MOGUL** ``` method = 'GFS.FR.MOGUL' ``` Type: Regression Tuning parameters: * `max.gen` (Max. Generations) * `max.iter` (Max. Iterations) * `max.tune` (Max. Tuning Iterations) Required packages: `frbs` **Fuzzy Rules via Thrift** ``` method = 'GFS.THRIFT' ``` Type: Regression Tuning parameters: * `popu.size` (Population Size) * `num.labels` (\# Fuzzy Labels) * `max.gen` (Max. Generations) Required packages: `frbs` **Fuzzy Rules with Weight Factor** ``` method = 'FRBCS.W' ``` Type: Classification Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `type.mf` (Membership Function) Required packages: `frbs` **Genetic Lateral Tuning and Rule Selection of Linguistic Fuzzy Systems** ``` method = 'GFS.LT.RS' ``` Type: Regression Tuning parameters: * `popu.size` (Population Size) * `num.labels` (\# Fuzzy Labels) * `max.gen` (Max. Generations) Required packages: `frbs` **Hybrid Neural Fuzzy Inference System** ``` method = 'HYFIS' ``` Type: Regression Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `max.iter` (Max. 
Iterations) Required packages: `frbs` **Model Rules** ``` method = 'M5Rules' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) Required packages: `RWeka` **Model Tree** ``` method = 'M5' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) * `rules` (Rules) Required packages: `RWeka` **Patient Rule Induction Method** ``` method = 'PRIM' ``` Type: Classification Tuning parameters: * `peel.alpha` (peeling quantile) * `paste.alpha` (pasting quantile) * `mass.min` (minimum mass) Required packages: `supervisedPRIM` **Random Forest Rule\-Based Model** ``` method = 'rfRules' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `maxdepth` (Maximum Rule Depth) Required packages: `randomForest`, `inTrees`, `plyr` A model\-specific variable importance metric is available. **Rule\-Based Classifier** ``` method = 'JRip' ``` Type: Classification Tuning parameters: * `NumOpt` (\# Optimizations) * `NumFolds` (\# Folds) * `MinWeights` (Min Weights) Required packages: `RWeka` A model\-specific variable importance metric is available. **Rule\-Based Classifier** ``` method = 'PART' ``` Type: Classification Tuning parameters: * `threshold` (Confidence Threshold) * `pruned` (Pruning) Required packages: `RWeka` A model\-specific variable importance metric is available. **Simplified TSK Fuzzy Rules** ``` method = 'FS.HGD' ``` Type: Regression Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `max.iter` (Max. Iterations) Required packages: `frbs` **Single C5\.0 Ruleset** ``` method = 'C5.0Rules' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single Rule Classification** ``` method = 'OneR' ``` Type: Classification No tuning parameters for this model Required packages: `RWeka` **Subtractive Clustering and Fuzzy c\-Means Rules** ``` method = 'SBC' ``` Type: Regression Tuning parameters: * `r.a` (Radius) * `eps.high` (Upper Threshold) * `eps.low` (Lower Threshold) Required packages: `frbs` **Wang and Mendel Fuzzy Rules** ``` method = 'WM' ``` Type: Regression Tuning parameters: * `num.labels` (\#Fuzzy Terms) * `type.mf` (Membership Function) Required packages: `frbs` ### 7\.0\.45 Self\-Organising Maps (back to [contents](train-models-by-tag.html#top)) **Self\-Organizing Maps** ``` method = 'xyf' ``` Type: Classification, Regression Tuning parameters: * `xdim` (Rows) * `ydim` (Columns) * `user.weights` (Layer Weight) * `topo` (Topology) Required packages: `kohonen` Notes: As of version 3\.0\.0 of the kohonen package, the argument `user.weights` replaces the old `alpha` parameter. `user.weights` is usually a vector of relative weights such as `c(1, 3)` but is parameterized here as a proportion such as `c(1-.75, .75)` where the .75 is the value of the tuning parameter passed to `train` and indicates that the outcome layer has 3 times the weight of the predictor layer.
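As an example of the rule-based regression models listed in the preceding section, the following hedged sketch tunes the `cubist` entry over committees and neighbors. It assumes the `Cubist` package is installed and uses `mtcars` only for illustration.

```
# Hedged sketch: tuning method = 'cubist' on a small regression data set
library(caret)
set.seed(1)
cub_fit <- train(mpg ~ ., data = mtcars,
                 method    = "cubist",
                 tuneGrid  = expand.grid(committees = c(1, 10),
                                         neighbors  = c(0, 5)),
                 trControl = trainControl(method = "cv", number = 5))
cub_fit$bestTune
```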
### 7\.0\.46 String Kernel (back to [contents](train-models-by-tag.html#top)) **Support Vector Machines with Boundrange String Kernel** ``` method = 'svmBoundrangeString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Exponential String Kernel** ``` method = 'svmExpoString' ``` Type: Regression, Classification Tuning parameters: * `lambda` (lambda) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Spectrum String Kernel** ``` method = 'svmSpectrumString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` ### 7\.0\.47 Support Vector Machines (back to [contents](train-models-by-tag.html#top)) **L2 Regularized Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights2' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) * `weight` (Class Weight) Required packages: `LiblineaR` **L2 Regularized Support Vector Machine (dual) with Linear Kernel** ``` method = 'svmLinear3' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) Required packages: `LiblineaR` **Least Squares Support Vector Machine** ``` method = 'lssvmLinear' ``` Type: Classification Tuning parameters: * `tau` (Regularization Parameter) Required packages: `kernlab` **Least Squares Support Vector Machine with Polynomial Kernel** ``` method = 'lssvmPoly' ``` Type: Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `tau` (Regularization Parameter) Required packages: `kernlab` **Least Squares Support Vector Machine with Radial Basis Function Kernel** ``` method = 'lssvmRadial' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `tau` (Regularization Parameter) Required packages: `kernlab` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Support Vector Machines with Boundrange String Kernel** ``` method = 'svmBoundrangeString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Class Weights** ``` method = 'svmRadialWeights' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) * `Weight` (Weight) Required packages: `kernlab` **Support Vector Machines with Exponential String Kernel** ``` method = 'svmExpoString' ``` Type: Regression, Classification Tuning parameters: * `lambda` (lambda) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear2' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) Required packages: `e1071` **Support Vector Machines with Polynomial Kernel** ``` method = 'svmPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** 
``` method = 'svmRadialCost' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialSigma' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` Notes: This SVM model tunes over the cost parameter and the RBF kernel parameter sigma. In the latter case, using `tuneLength` will, at most, evaluate six values of the kernel parameter. This enables a broad search over the cost parameter and a relatively narrow search over `sigma` **Support Vector Machines with Spectrum String Kernel** ``` method = 'svmSpectrumString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` ### 7\.0\.48 Supports Class Probabilities (back to [contents](train-models-by-tag.html#top)) **AdaBoost Classification Trees** ``` method = 'adaboost' ``` Type: Classification Tuning parameters: * `nIter` (\#Trees) * `method` (Method) Required packages: `fastAdaboost` **AdaBoost.M1** ``` method = 'AdaBoost.M1' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) * `coeflearn` (Coefficient Type) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Adaptive Mixture Discriminant Analysis** ``` method = 'amdai' ``` Type: Classification Tuning parameters: * `model` (Model Type) Required packages: `adaptDA` **Adjacent Categories Probability Model for Ordinal Data** ``` method = 'vglmAdjCat' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged CART** ``` method = 'treebag' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `ipred`, `plyr`, `e1071` A model\-specific variable importance metric is available. **Bagged Flexible Discriminant Analysis** ``` method = 'bagFDA' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Bagged MARS** ``` method = 'bagEarth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Bagged MARS using gCV Pruning** ``` method = 'bagEarthGCV' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. 
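Models carrying this tag can return class probabilities, which is what ROC-based tuning requires. The sketch below is illustrative only: it assumes `kernlab` is installed, builds a hypothetical two-class subset of `iris`, and uses the `svmRadialSigma` model described above with `tuneLength` (which, per the note, evaluates at most six values of `sigma`).

```
# Hedged sketch: class probabilities + ROC-based tuning for 'svmRadialSigma'
library(caret)
iris2 <- droplevels(subset(iris, Species != "setosa"))  # two-class toy data
set.seed(1)
svm_fit <- train(Species ~ ., data = iris2,
                 method     = "svmRadialSigma",
                 metric     = "ROC",
                 tuneLength = 6,
                 trControl  = trainControl(method = "cv", number = 5,
                                           classProbs = TRUE,
                                           summaryFunction = twoClassSummary))
```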
**Bagged Model** ``` method = 'bag' ``` Type: Regression, Classification Tuning parameters: * `vars` (\#Randomly Selected Predictors) Required packages: `caret` **Bayesian Additive Regression Trees** ``` method = 'bartMachine' ``` Type: Classification, Regression Tuning parameters: * `num_trees` (\#Trees) * `k` (Prior Boundary) * `alpha` (Base Terminal Node Hyperparameter) * `beta` (Power Terminal Node Hyperparameter) * `nu` (Degrees of Freedom) Required packages: `bartMachine` A model\-specific variable importance metric is available. **Bayesian Generalized Linear Model** ``` method = 'bayesglm' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `arm` **Binary Discriminant Analysis** ``` method = 'binda' ``` Type: Classification Tuning parameters: * `lambda.freqs` (Shrinkage Intensity) Required packages: `binda` **Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available. Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Logistic Regression** ``` method = 'LogitBoost' ``` Type: Classification Tuning parameters: * `nIter` (\# Boosting Iterations) Required packages: `caTools` **Boosted Tree** ``` method = 'blackboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `party`, `mboost`, `plyr`, `partykit` **C4\.5\-like Trees** ``` method = 'J48' ``` Type: Classification Tuning parameters: * `C` (Confidence Threshold) * `M` (Minimum Instances Per Leaf) Required packages: `RWeka` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart' ``` Type: Regression, Classification Tuning parameters: * `cp` (Complexity Parameter) Required packages: `rpart` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart1SE' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `rpart` A model\-specific variable importance metric is available. Notes: This CART model replicates the same process used by the `rpart` function where the model complexity is determined using the one\-standard error method. 
This procedure is replicated inside of the resampling done by `train` so that an external resampling estimate can be obtained. **CART** ``` method = 'rpart2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) Required packages: `rpart` A model\-specific variable importance metric is available. **CHi\-squared Automated Interaction Detection** ``` method = 'chaid' ``` Type: Classification Tuning parameters: * `alpha2` (Merging Threshold) * `alpha3` (Splitting former Merged Threshold) * `alpha4` ( Splitting former Merged Threshold) Required packages: `CHAID` **Conditional Inference Random Forest** ``` method = 'cforest' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `party` A model\-specific variable importance metric is available. **Conditional Inference Tree** ``` method = 'ctree' ``` Type: Classification, Regression Tuning parameters: * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Conditional Inference Tree** ``` method = 'ctree2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Continuation Ratio Model for Ordinal Data** ``` method = 'vglmContRatio' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Cumulative Probability Model for Ordinal Data** ``` method = 'vglmCumulative' ``` Type: Classification Tuning parameters: * `parallel` (Parallel Curves) * `link` (Link Function) Required packages: `VGAM` **Diagonal Discriminant Analysis** ``` method = 'dda' ``` Type: Classification Tuning parameters: * `model` (Model) * `shrinkage` (Shrinkage Type) Required packages: `sparsediscrim` **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **Ensembles of Generalized Linear Models** ``` method = 'randomGLM' ``` Type: Regression, Classification Tuning parameters: * `maxInteractionOrder` (Interaction Order) Required packages: `randomGLM` Notes: Unlike other packages used by `train`, the `randomGLM` package is fully loaded when this model is used. **eXtreme Gradient Boosting** ``` method = 'xgbDART' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `subsample` (Subsample Percentage) * `colsample_bytree` (Subsample Ratio of Columns) * `rate_drop` (Fraction of Trees Dropped) * `skip_drop` (Prob. of Skipping Drop\-out) * `min_child_weight` (Minimum Sum of Instance Weight) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbLinear' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `lambda` (L2 Regularization) * `alpha` (L1 Regularization) * `eta` (Learning Rate) Required packages: `xgboost` A model\-specific variable importance metric is available. 
**eXtreme Gradient Boosting** ``` method = 'xgbTree' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `colsample_bytree` (Subsample Ratio of Columns) * `min_child_weight` (Minimum Sum of Instance Weight) * `subsample` (Subsample Percentage) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **Flexible Discriminant Analysis** ``` method = 'fda' ``` Type: Classification Tuning parameters: * `degree` (Product Degree) * `nprune` (\#Terms) Required packages: `earth`, `mda` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Gaussian Process** ``` method = 'gaussprLinear' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `kernlab` **Gaussian Process with Polynomial Kernel** ``` method = 'gaussprPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kernlab` **Gaussian Process with Radial Basis Function Kernel** ``` method = 'gaussprRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) Required packages: `kernlab` **Generalized Additive Model using LOESS** ``` method = 'gamLoess' ``` Type: Regression, Classification Tuning parameters: * `span` (Span) * `degree` (Degree) Required packages: `gam` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `gam` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'bam' ``` Type: Regression, Classification Tuning parameters: * `select` (Feature Selection) * `method` (Method) Required packages: `mgcv` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `mgcv` package is fully loaded when this model is used. **Generalized Additive Model using Splines** ``` method = 'gam' ``` Type: Regression, Classification Tuning parameters: * `select` (Feature Selection) * `method` (Method) Required packages: `mgcv` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `mgcv` package is fully loaded when this model is used. 
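A hedged sketch for the `gam` entry above (the `mgcv`-based model), tuning its two listed parameters. It assumes `mgcv` is installed and uses a complete-case version of the built-in `airquality` data, whose continuous predictors easily clear the 10-unique-value rule mentioned in the note.

```
# Hedged sketch: tuning select/method for method = 'gam'
library(caret)
aq <- na.omit(airquality)
set.seed(1)
gam_fit <- train(Ozone ~ Solar.R + Wind + Temp, data = aq,
                 method    = "gam",
                 tuneGrid  = data.frame(select = c(TRUE, FALSE),
                                        method = "GCV.Cp"),
                 trControl = trainControl(method = "cv", number = 5))
```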
**Generalized Additive Model using Splines** ``` method = 'gamSpline' ``` Type: Regression, Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `gam` A model\-specific variable importance metric is available. Notes: Which terms enter the model in a nonlinear manner is determined by the number of unique values for the predictor. For example, if a predictor only has four unique values, most basis expansion methods will fail because there is not enough granularity in the data. By default, a predictor must have at least 10 unique values to be used in a nonlinear basis expansion. Unlike other packages used by `train`, the `gam` package is fully loaded when this model is used. **Generalized Linear Model** ``` method = 'glm' ``` Type: Regression, Classification No tuning parameters for this model A model\-specific variable importance metric is available. **Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **Generalized Partial Least Squares** ``` method = 'gpls' ``` Type: Classification Tuning parameters: * `K.prov` (\#Components) Required packages: `gpls` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **glmnet** ``` method = 'glmnet' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `glmnet`, `Matrix` A model\-specific variable importance metric is available. **Gradient Boosting Machines** ``` method = 'gbm_h2o' ``` Type: Regression, Classification Tuning parameters: * `ntrees` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `min_rows` (Min. Terminal Node Size) * `learn_rate` (Shrinkage) * `col_sample_rate` (\#Randomly Selected Predictors) Required packages: `h2o` A model\-specific variable importance metric is available. **Heteroscedastic Discriminant Analysis** ``` method = 'hda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `newdim` (Dimension of the Discriminative Subspace) Required packages: `hda` **High Dimensional Discriminant Analysis** ``` method = 'hdda' ``` Type: Classification Tuning parameters: * `threshold` (Threshold) * `model` (Model Type) Required packages: `HDclassif` **High\-Dimensional Regularized Discriminant Analysis** ``` method = 'hdrda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) * `shrinkage_type` (Shrinkage Type) Required packages: `sparsediscrim` **k\-Nearest Neighbors** ``` method = 'kknn' ``` Type: Regression, Classification Tuning parameters: * `kmax` (Max. 
\#Neighbors) * `distance` (Distance) * `kernel` (Kernel) Required packages: `kknn` **k\-Nearest Neighbors** ``` method = 'knn' ``` Type: Classification, Regression Tuning parameters: * `k` (\#Neighbors) **Linear Discriminant Analysis** ``` method = 'lda' ``` Type: Classification No tuning parameters for this model Required packages: `MASS` **Linear Discriminant Analysis** ``` method = 'lda2' ``` Type: Classification Tuning parameters: * `dimen` (\#Discriminant Functions) Required packages: `MASS` **Linear Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepLDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Localized Linear Discriminant Analysis** ``` method = 'loclda' ``` Type: Classification Tuning parameters: * `k` (\#Nearest Neighbors) Required packages: `klaR` **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` **Logistic Model Trees** ``` method = 'LMT' ``` Type: Classification Tuning parameters: * `iter` (\# Iterations) Required packages: `RWeka` **Mixture Discriminant Analysis** ``` method = 'mda' ``` Type: Classification Tuning parameters: * `subclasses` (\#Subclasses Per Class) Required packages: `mda` **Model Averaged Naive Bayes Classifier** ``` method = 'manb' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) * `prior` (Prior Probability) Required packages: `bnclassify` **Model Averaged Neural Network** ``` method = 'avNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) * `bag` (Bagging) Required packages: `nnet` **Monotone Multi\-Layer Perceptron Neural Network** ``` method = 'monmlp' ``` Type: Classification, Regression Tuning parameters: * `hidden1` (\#Hidden Units) * `n.ensemble` (\#Models) Required packages: `monmlp` **Multi\-Layer Perceptron** ``` method = 'mlp' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) Required packages: `RSNNS` **Multi\-Layer Perceptron** ``` method = 'mlpWeightDecay' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `RSNNS` **Multi\-Layer Perceptron, multiple layers** ``` method = 'mlpWeightDecayML' ``` Type: Regression, Classification Tuning parameters: * `layer1` (\#Hidden Units layer1\) * `layer2` (\#Hidden Units layer2\) * `layer3` (\#Hidden Units layer3\) * `decay` (Weight Decay) Required packages: `RSNNS` **Multi\-Layer Perceptron, with multiple layers** ``` method = 'mlpML' ``` Type: Regression, Classification Tuning parameters: * `layer1` (\#Hidden Units layer1\) * `layer2` (\#Hidden Units layer2\) * `layer3` (\#Hidden Units layer3\) Required packages: `RSNNS` **Multi\-Step Adaptive MCP\-Net** ``` method = 'msaenet' ``` Type: Regression, Classification Tuning parameters: * `alphas` (Alpha) * `nsteps` (\#Adaptive Estimation Steps) * `scale` (Adaptive Weight Scaling Factor) Required packages: 
`msaenet` A model\-specific variable importance metric is available. **Multilayer Perceptron Network by Stochastic Gradient Descent** ``` method = 'mlpSGD' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `l2reg` (L2 Regularization) * `lambda` (RMSE Gradient Scaling) * `learn_rate` (Learning Rate) * `momentum` (Momentum) * `gamma` (Learning Rate Decay) * `minibatchsz` (Batch Size) * `repeats` (\#Models) Required packages: `FCNN4R`, `plyr` A model\-specific variable importance metric is available. **Multilayer Perceptron Network with Dropout** ``` method = 'mlpKerasDropout' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `dropout` (Dropout Rate) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Dropout** ``` method = 'mlpKerasDropoutCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `dropout` (Dropout Rate) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecay' ``` Type: Regression, Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. 
**Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecayCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multivariate Adaptive Regression Spline** ``` method = 'earth' ``` Type: Regression, Classification Tuning parameters: * `nprune` (\#Terms) * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Multivariate Adaptive Regression Splines** ``` method = 'gcvEarth' ``` Type: Regression, Classification Tuning parameters: * `degree` (Product Degree) Required packages: `earth` A model\-specific variable importance metric is available. Notes: Unlike other packages used by `train`, the `earth` package is fully loaded when this model is used. **Naive Bayes** ``` method = 'naive_bayes' ``` Type: Classification Tuning parameters: * `laplace` (Laplace Correction) * `usekernel` (Distribution Type) * `adjust` (Bandwidth Adjustment) Required packages: `naivebayes` **Naive Bayes** ``` method = 'nb' ``` Type: Classification Tuning parameters: * `fL` (Laplace Correction) * `usekernel` (Distribution Type) * `adjust` (Bandwidth Adjustment) Required packages: `klaR` **Naive Bayes Classifier** ``` method = 'nbDiscrete' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Naive Bayes Classifier with Attribute Weighting** ``` method = 'awnb' ``` Type: Classification Tuning parameters: * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Nearest Shrunken Centroids** ``` method = 'pam' ``` Type: Classification Tuning parameters: * `threshold` (Shrinkage Threshold) Required packages: `pamr` A model\-specific variable importance metric is available. **Neural Network** ``` method = 'mxnet' ``` Type: Classification, Regression Tuning parameters: * `layer1` (\#Hidden Units in Layer 1\) * `layer2` (\#Hidden Units in Layer 2\) * `layer3` (\#Hidden Units in Layer 3\) * `learning.rate` (Learning Rate) * `momentum` (Momentum) * `dropout` (Dropout Rate) * `activation` (Activation Function) Required packages: `mxnet` Notes: The `mxnet` package is not yet on CRAN. See <http://mxnet.io> for installation instructions. 
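A hedged sketch for the klaR-based `nb` entry above, assuming the `klaR` package is installed; the grid simply compares a Gaussian density with a kernel density estimate, and the data set is illustrative.

```
# Hedged sketch: tuning method = 'nb'
library(caret)
set.seed(1)
nb_fit <- train(Species ~ ., data = iris,
                method    = "nb",
                tuneGrid  = expand.grid(fL = 0,
                                        usekernel = c(TRUE, FALSE),
                                        adjust = 1),
                trControl = trainControl(method = "cv", number = 5))
```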
**Neural Network** ``` method = 'mxnetAdam' ``` Type: Classification, Regression Tuning parameters: * `layer1` (\#Hidden Units in Layer 1\) * `layer2` (\#Hidden Units in Layer 2\) * `layer3` (\#Hidden Units in Layer 3\) * `dropout` (Dropout Rate) * `beta1` (beta1\) * `beta2` (beta2\) * `learningrate` (Learning Rate) * `activation` (Activation Function) Required packages: `mxnet` Notes: The `mxnet` package is not yet on CRAN. See <http://mxnet.io> for installation instructions. Users are strongly advised to define `num.round` themselves. **Neural Network** ``` method = 'nnet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Neural Networks with Feature Extraction** ``` method = 'pcaNNet' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) * `decay` (Weight Decay) Required packages: `nnet` **Non\-Informative Model** ``` method = 'null' ``` Type: Classification, Regression No tuning parameters for this model Notes: Since this model always predicts the same value, R\-squared values will always be estimated to be NA. **Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Ordered Logistic or Probit Regression** ``` method = 'polr' ``` Type: Classification Tuning parameters: * `method` (parameter) Required packages: `MASS` A model\-specific variable importance metric is available. **Parallel Random Forest** ``` method = 'parRF' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `e1071`, `randomForest`, `foreach`, `import` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'kernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'pls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares** ``` method = 'simpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. 
**Partial Least Squares** ``` method = 'widekernelpls' ``` Type: Regression, Classification Tuning parameters: * `ncomp` (\#Components) Required packages: `pls` A model\-specific variable importance metric is available. **Partial Least Squares Generalized Linear Models** ``` method = 'plsRglm' ``` Type: Classification, Regression Tuning parameters: * `nt` (\#PLS Components) * `alpha.pvals.expli` (p\-Value threshold) Required packages: `plsRglm` Notes: Unlike other packages used by `train`, the `plsRglm` package is fully loaded when this model is used. **Patient Rule Induction Method** ``` method = 'PRIM' ``` Type: Classification Tuning parameters: * `peel.alpha` (peeling quantile) * `paste.alpha` (pasting quantile) * `mass.min` (minimum mass) Required packages: `supervisedPRIM` **Penalized Discriminant Analysis** ``` method = 'pda' ``` Type: Classification Tuning parameters: * `lambda` (Shrinkage Penalty Coefficient) Required packages: `mda` **Penalized Discriminant Analysis** ``` method = 'pda2' ``` Type: Classification Tuning parameters: * `df` (Degrees of Freedom) Required packages: `mda` **Penalized Logistic Regression** ``` method = 'plr' ``` Type: Classification Tuning parameters: * `lambda` (L2 Penalty) * `cp` (Complexity Parameter) Required packages: `stepPlr` **Penalized Multinomial Regression** ``` method = 'multinom' ``` Type: Classification Tuning parameters: * `decay` (Weight Decay) Required packages: `nnet` A model\-specific variable importance metric is available. **Penalized Ordinal Regression** ``` method = 'ordinalNet' ``` Type: Classification Tuning parameters: * `alpha` (Mixing Percentage) * `criteria` (Selection Criterion) * `link` (Link Function) Required packages: `ordinalNet`, `plyr` A model\-specific variable importance metric is available. Notes: Requires ordinalNet package version \>\= 2\.0 **Quadratic Discriminant Analysis** ``` method = 'qda' ``` Type: Classification No tuning parameters for this model Required packages: `MASS` **Quadratic Discriminant Analysis with Stepwise Feature Selection** ``` method = 'stepQDA' ``` Type: Classification Tuning parameters: * `maxvar` (Maximum \#Variables) * `direction` (Search Direction) Required packages: `klaR`, `MASS` **Radial Basis Function Network** ``` method = 'rbf' ``` Type: Classification, Regression Tuning parameters: * `size` (\#Hidden Units) Required packages: `RSNNS` **Radial Basis Function Network** ``` method = 'rbfDDA' ``` Type: Regression, Classification Tuning parameters: * `negativeThreshold` (Activation Limit for Conflicting Classes) Required packages: `RSNNS` **Random Forest** ``` method = 'ordinalRF' ``` Type: Classification Tuning parameters: * `nsets` (\# score sets tried prior to the approximation) * `ntreeperdiv` (\# of trees (small RFs)) * `ntreefinal` (\# of trees (final RF)) Required packages: `e1071`, `ranger`, `dplyr`, `ordinalForest` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'ranger' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `splitrule` (Splitting Rule) * `min.node.size` (Minimal Node Size) Required packages: `e1071`, `ranger`, `dplyr` A model\-specific variable importance metric is available. **Random Forest** ``` method = 'Rborist' ``` Type: Classification, Regression Tuning parameters: * `predFixed` (\#Randomly Selected Predictors) * `minNode` (Minimal Node Size) Required packages: `Rborist` A model\-specific variable importance metric is available. 
**Random Forest** ``` method = 'rf' ``` Type: Classification, Regression Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `randomForest` A model\-specific variable importance metric is available. **Random Forest by Randomization** ``` method = 'extraTrees' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\# Randomly Selected Predictors) * `numRandomCuts` (\# Random Cuts) Required packages: `extraTrees` **Regularized Discriminant Analysis** ``` method = 'rda' ``` Type: Classification Tuning parameters: * `gamma` (Gamma) * `lambda` (Lambda) Required packages: `klaR` **Regularized Linear Discriminant Analysis** ``` method = 'rlda' ``` Type: Classification Tuning parameters: * `estimator` (Regularization Method) Required packages: `sparsediscrim` **Regularized Logistic Regression** ``` method = 'regLogistic' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `loss` (Loss Function) * `epsilon` (Tolerance) Required packages: `LiblineaR` **Regularized Random Forest** ``` method = 'RRF' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) * `coefImp` (Importance Coefficient) Required packages: `randomForest`, `RRF` A model\-specific variable importance metric is available. **Regularized Random Forest** ``` method = 'RRFglobal' ``` Type: Regression, Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) * `coefReg` (Regularization Value) Required packages: `RRF` A model\-specific variable importance metric is available. **Robust Linear Discriminant Analysis** ``` method = 'Linda' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Robust Mixture Discriminant Analysis** ``` method = 'rmda' ``` Type: Classification Tuning parameters: * `K` (\#Subclasses Per Class) * `model` (Model) Required packages: `robustDA` **Robust Quadratic Discriminant Analysis** ``` method = 'QdaCov' ``` Type: Classification No tuning parameters for this model Required packages: `rrcov` **Robust Regularized Linear Discriminant Analysis** ``` method = 'rrlda' ``` Type: Classification Tuning parameters: * `lambda` (Penalty Parameter) * `hp` (Robustness Parameter) * `penalty` (Penalty Type) Required packages: `rrlda` Notes: Unlike other packages used by `train`, the `rrlda` package is fully loaded when this model is used. **Rotation Forest** ``` method = 'rotationForest' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) Required packages: `rotationForest` A model\-specific variable importance metric is available. **Rotation Forest** ``` method = 'rotationForestCp' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) * `cp` (Complexity Parameter) Required packages: `rpart`, `plyr`, `rotationForest` A model\-specific variable importance metric is available. **Rule\-Based Classifier** ``` method = 'JRip' ``` Type: Classification Tuning parameters: * `NumOpt` (\# Optimizations) * `NumFolds` (\# Folds) * `MinWeights` (Min Weights) Required packages: `RWeka` A model\-specific variable importance metric is available. **Rule\-Based Classifier** ``` method = 'PART' ``` Type: Classification Tuning parameters: * `threshold` (Confidence Threshold) * `pruned` (Pruning) Required packages: `RWeka` A model\-specific variable importance metric is available. 
**Self\-Organizing Maps** ``` method = 'xyf' ``` Type: Classification, Regression Tuning parameters: * `xdim` (Rows) * `ydim` (Columns) * `user.weights` (Layer Weight) * `topo` (Topology) Required packages: `kohonen` Notes: As of version 3\.0\.0 of the kohonen package, the argument `user.weights` replaces the old `alpha` parameter. `user.weights` is usually a vector of relative weights such as `c(1, 3)` but is parameterized here as a proportion such as `c(1-.75, .75)` where the .75 is the value of the tuning parameter passed to `train` and indicates that the outcome layer has 3 times the weight as the predictor layer. **Semi\-Naive Structure Learner Wrapper** ``` method = 'nbSearch' ``` Type: Classification Tuning parameters: * `k` (\#Folds) * `epsilon` (Minimum Absolute Improvement) * `smooth` (Smoothing Parameter) * `final_smooth` (Final Smoothing Parameter) * `direction` (Search Direction) Required packages: `bnclassify` **Shrinkage Discriminant Analysis** ``` method = 'sda' ``` Type: Classification Tuning parameters: * `diagonal` (Diagonalize) * `lambda` (shrinkage) Required packages: `sda` **Single C5\.0 Ruleset** ``` method = 'C5.0Rules' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single C5\.0 Tree** ``` method = 'C5.0Tree' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Single Rule Classification** ``` method = 'OneR' ``` Type: Classification No tuning parameters for this model Required packages: `RWeka` **Sparse Distance Weighted Discrimination** ``` method = 'sdwd' ``` Type: Classification Tuning parameters: * `lambda` (L1 Penalty) * `lambda2` (L2 Penalty) Required packages: `sdwd` A model\-specific variable importance metric is available. **Sparse Linear Discriminant Analysis** ``` method = 'sparseLDA' ``` Type: Classification Tuning parameters: * `NumVars` (\# Predictors) * `lambda` (Lambda) Required packages: `sparseLDA` **Sparse Partial Least Squares** ``` method = 'spls' ``` Type: Regression, Classification Tuning parameters: * `K` (\#Components) * `eta` (Threshold) * `kappa` (Kappa) Required packages: `spls` **Stabilized Linear Discriminant Analysis** ``` method = 'slda' ``` Type: Classification No tuning parameters for this model Required packages: `ipred` **Stacked AutoEncoder Deep Neural Network** ``` method = 'dnn' ``` Type: Classification, Regression Tuning parameters: * `layer1` (Hidden Layer 1\) * `layer2` (Hidden Layer 2\) * `layer3` (Hidden Layer 3\) * `hidden_dropout` (Hidden Dropouts) * `visible_dropout` (Visible Dropout) Required packages: `deepnet` **Stochastic Gradient Boosting** ``` method = 'gbm' ``` Type: Regression, Classification Tuning parameters: * `n.trees` (\# Boosting Iterations) * `interaction.depth` (Max Tree Depth) * `shrinkage` (Shrinkage) * `n.minobsinnode` (Min. Terminal Node Size) Required packages: `gbm`, `plyr` A model\-specific variable importance metric is available. 
**Support Vector Machines with Boundrange String Kernel** ``` method = 'svmBoundrangeString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Class Weights** ``` method = 'svmRadialWeights' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) * `Weight` (Weight) Required packages: `kernlab` **Support Vector Machines with Exponential String Kernel** ``` method = 'svmExpoString' ``` Type: Regression, Classification Tuning parameters: * `lambda` (lambda) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Linear Kernel** ``` method = 'svmLinear2' ``` Type: Regression, Classification Tuning parameters: * `cost` (Cost) Required packages: `e1071` **Support Vector Machines with Polynomial Kernel** ``` method = 'svmPoly' ``` Type: Regression, Classification Tuning parameters: * `degree` (Polynomial Degree) * `scale` (Scale) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadial' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialCost' ``` Type: Regression, Classification Tuning parameters: * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Radial Basis Function Kernel** ``` method = 'svmRadialSigma' ``` Type: Regression, Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) Required packages: `kernlab` Notes: This SVM model tunes over the cost parameter and the RBF kernel parameter sigma. In the latter case, using `tuneLength` will, at most, evaluate six values of the kernel parameter. 
This enables a broad search over the cost parameter and a relatively narrow search over `sigma` **Support Vector Machines with Spectrum String Kernel** ``` method = 'svmSpectrumString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Tree Augmented Naive Bayes Classifier** ``` method = 'tan' ``` Type: Classification Tuning parameters: * `score` (Score Function) * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier Structure Learner Wrapper** ``` method = 'tanSearch' ``` Type: Classification Tuning parameters: * `k` (\#Folds) * `epsilon` (Minimum Absolute Improvement) * `smooth` (Smoothing Parameter) * `final_smooth` (Final Smoothing Parameter) * `sp` (Super\-Parent) Required packages: `bnclassify` **Tree Augmented Naive Bayes Classifier with Attribute Weighting** ``` method = 'awtan' ``` Type: Classification Tuning parameters: * `score` (Score Function) * `smooth` (Smoothing Parameter) Required packages: `bnclassify` **Tree Models from Genetic Algorithms** ``` method = 'evtree' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Complexity Parameter) Required packages: `evtree` **Tree\-Based Ensembles** ``` method = 'nodeHarvest' ``` Type: Regression, Classification Tuning parameters: * `maxinter` (Maximum Interaction Depth) * `mode` (Prediction Mode) Required packages: `nodeHarvest` **Variational Bayesian Multinomial Probit Regression** ``` method = 'vbmpRadial' ``` Type: Classification Tuning parameters: * `estimateTheta` (Theta Estimated) Required packages: `vbmp` **Weighted Subspace Random Forest** ``` method = 'wsrf' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `wsrf` ### 7\.0\.49 Text Mining (back to [contents](train-models-by-tag.html#top)) **Support Vector Machines with Boundrange String Kernel** ``` method = 'svmBoundrangeString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Exponential String Kernel** ``` method = 'svmExpoString' ``` Type: Regression, Classification Tuning parameters: * `lambda` (lambda) * `C` (Cost) Required packages: `kernlab` **Support Vector Machines with Spectrum String Kernel** ``` method = 'svmSpectrumString' ``` Type: Regression, Classification Tuning parameters: * `length` (length) * `C` (Cost) Required packages: `kernlab` ### 7\.0\.50 Tree\-Based Model (back to [contents](train-models-by-tag.html#top)) **AdaBoost Classification Trees** ``` method = 'adaboost' ``` Type: Classification Tuning parameters: * `nIter` (\#Trees) * `method` (Method) Required packages: `fastAdaboost` **AdaBoost.M1** ``` method = 'AdaBoost.M1' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) * `coeflearn` (Coefficient Type) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged AdaBoost** ``` method = 'AdaBag' ``` Type: Classification Tuning parameters: * `mfinal` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `adabag`, `plyr` A model\-specific variable importance metric is available. **Bagged CART** ``` method = 'treebag' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `ipred`, `plyr`, `e1071` A model\-specific variable importance metric is available. 
**Bayesian Additive Regression Trees** ``` method = 'bartMachine' ``` Type: Classification, Regression Tuning parameters: * `num_trees` (\#Trees) * `k` (Prior Boundary) * `alpha` (Base Terminal Node Hyperparameter) * `beta` (Power Terminal Node Hyperparameter) * `nu` (Degrees of Freedom) Required packages: `bartMachine` A model\-specific variable importance metric is available. **Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **Boosted Logistic Regression** ``` method = 'LogitBoost' ``` Type: Classification Tuning parameters: * `nIter` (\# Boosting Iterations) Required packages: `caTools` **Boosted Tree** ``` method = 'blackboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\#Trees) * `maxdepth` (Max Tree Depth) Required packages: `party`, `mboost`, `plyr`, `partykit` **Boosted Tree** ``` method = 'bstTree' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `maxdepth` (Max Tree Depth) * `nu` (Shrinkage) Required packages: `bst`, `plyr` **C4\.5\-like Trees** ``` method = 'J48' ``` Type: Classification Tuning parameters: * `C` (Confidence Threshold) * `M` (Minimum Instances Per Leaf) Required packages: `RWeka` **C5\.0** ``` method = 'C5.0' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart' ``` Type: Regression, Classification Tuning parameters: * `cp` (Complexity Parameter) Required packages: `rpart` A model\-specific variable importance metric is available. **CART** ``` method = 'rpart1SE' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `rpart` A model\-specific variable importance metric is available. Notes: This CART model replicates the same process used by the `rpart` function where the model complexity is determined using the one\-standard error method. This procedure is replicated inside of the resampling done by `train` so that an external resampling estimate can be obtained. **CART** ``` method = 'rpart2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) Required packages: `rpart` A model\-specific variable importance metric is available. **CART or Ordinal Responses** ``` method = 'rpartScore' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `split` (Split Function) * `prune` (Pruning Measure) Required packages: `rpartScore`, `plyr` A model\-specific variable importance metric is available. 
**CHi\-squared Automated Interaction Detection** ``` method = 'chaid' ``` Type: Classification Tuning parameters: * `alpha2` (Merging Threshold) * `alpha3` (Splitting former Merged Threshold) * `alpha4` ( Splitting former Merged Threshold) Required packages: `CHAID` **Conditional Inference Tree** ``` method = 'ctree' ``` Type: Classification, Regression Tuning parameters: * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Conditional Inference Tree** ``` method = 'ctree2' ``` Type: Regression, Classification Tuning parameters: * `maxdepth` (Max Tree Depth) * `mincriterion` (1 \- P\-Value Threshold) Required packages: `party` **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive CART** ``` method = 'rpartCost' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `Cost` (Cost) Required packages: `rpart`, `plyr` **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **eXtreme Gradient Boosting** ``` method = 'xgbDART' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `subsample` (Subsample Percentage) * `colsample_bytree` (Subsample Ratio of Columns) * `rate_drop` (Fraction of Trees Dropped) * `skip_drop` (Prob. of Skipping Drop\-out) * `min_child_weight` (Minimum Sum of Instance Weight) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **eXtreme Gradient Boosting** ``` method = 'xgbTree' ``` Type: Regression, Classification Tuning parameters: * `nrounds` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `eta` (Shrinkage) * `gamma` (Minimum Loss Reduction) * `colsample_bytree` (Subsample Ratio of Columns) * `min_child_weight` (Minimum Sum of Instance Weight) * `subsample` (Subsample Percentage) Required packages: `xgboost`, `plyr` A model\-specific variable importance metric is available. **Gradient Boosting Machines** ``` method = 'gbm_h2o' ``` Type: Regression, Classification Tuning parameters: * `ntrees` (\# Boosting Iterations) * `max_depth` (Max Tree Depth) * `min_rows` (Min. Terminal Node Size) * `learn_rate` (Shrinkage) * `col_sample_rate` (\#Randomly Selected Predictors) Required packages: `h2o` A model\-specific variable importance metric is available. **Model Tree** ``` method = 'M5' ``` Type: Regression Tuning parameters: * `pruned` (Pruned) * `smoothed` (Smoothed) * `rules` (Rules) Required packages: `RWeka` **Rotation Forest** ``` method = 'rotationForest' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) Required packages: `rotationForest` A model\-specific variable importance metric is available. **Rotation Forest** ``` method = 'rotationForestCp' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) * `cp` (Complexity Parameter) Required packages: `rpart`, `plyr`, `rotationForest` A model\-specific variable importance metric is available. 
**Single C5\.0 Tree** ``` method = 'C5.0Tree' ``` Type: Classification No tuning parameters for this model Required packages: `C50` A model\-specific variable importance metric is available. **Stochastic Gradient Boosting** ``` method = 'gbm' ``` Type: Regression, Classification Tuning parameters: * `n.trees` (\# Boosting Iterations) * `interaction.depth` (Max Tree Depth) * `shrinkage` (Shrinkage) * `n.minobsinnode` (Min. Terminal Node Size) Required packages: `gbm`, `plyr` A model\-specific variable importance metric is available. **Tree Models from Genetic Algorithms** ``` method = 'evtree' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Complexity Parameter) Required packages: `evtree` **Tree\-Based Ensembles** ``` method = 'nodeHarvest' ``` Type: Regression, Classification Tuning parameters: * `maxinter` (Maximum Interaction Depth) * `mode` (Prediction Mode) Required packages: `nodeHarvest` ### 7\.0\.51 Two Class Only (back to [contents](train-models-by-tag.html#top)) **AdaBoost Classification Trees** ``` method = 'adaboost' ``` Type: Classification Tuning parameters: * `nIter` (\#Trees) * `method` (Method) Required packages: `fastAdaboost` **Bagged Logic Regression** ``` method = 'logicBag' ``` Type: Regression, Classification Tuning parameters: * `nleaves` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `logicFS` Notes: Unlike other packages used by `train`, the `logicFS` package is fully loaded when this model is used. **Bayesian Additive Regression Trees** ``` method = 'bartMachine' ``` Type: Classification, Regression Tuning parameters: * `num_trees` (\#Trees) * `k` (Prior Boundary) * `alpha` (Base Terminal Node Hyperparameter) * `beta` (Power Terminal Node Hyperparameter) * `nu` (Degrees of Freedom) Required packages: `bartMachine` A model\-specific variable importance metric is available. **Binary Discriminant Analysis** ``` method = 'binda' ``` Type: Classification Tuning parameters: * `lambda.freqs` (Shrinkage Intensity) Required packages: `binda` **Boosted Classification Trees** ``` method = 'ada' ``` Type: Classification Tuning parameters: * `iter` (\#Trees) * `maxdepth` (Max Tree Depth) * `nu` (Learning Rate) Required packages: `ada`, `plyr` **Boosted Generalized Additive Model** ``` method = 'gamboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `mboost`, `plyr`, `import` Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. **Boosted Generalized Linear Model** ``` method = 'glmboost' ``` Type: Regression, Classification Tuning parameters: * `mstop` (\# Boosting Iterations) * `prune` (AIC Prune?) Required packages: `plyr`, `mboost` A model\-specific variable importance metric is available. Notes: The `prune` option for this model enables the number of iterations to be determined by the optimal AIC value across all iterations. See the examples in `?mboost::mstop`. If pruning is not used, the ensemble makes predictions using the exact value of the `mstop` tuning parameter value. 
**CHi\-squared Automated Interaction Detection** ``` method = 'chaid' ``` Type: Classification Tuning parameters: * `alpha2` (Merging Threshold) * `alpha3` (Splitting former Merged Threshold) * `alpha4` (Splitting former Merged Threshold) Required packages: `CHAID` **Cost\-Sensitive C5\.0** ``` method = 'C5.0Cost' ``` Type: Classification Tuning parameters: * `trials` (\# Boosting Iterations) * `model` (Model Type) * `winnow` (Winnow) * `cost` (Cost) Required packages: `C50`, `plyr` A model\-specific variable importance metric is available. **Cost\-Sensitive CART** ``` method = 'rpartCost' ``` Type: Classification Tuning parameters: * `cp` (Complexity Parameter) * `Cost` (Cost) Required packages: `rpart`, `plyr` **DeepBoost** ``` method = 'deepboost' ``` Type: Classification Tuning parameters: * `num_iter` (\# Boosting Iterations) * `tree_depth` (Tree Depth) * `beta` (L1 Regularization) * `lambda` (Tree Depth Regularization) * `loss_type` (Loss) Required packages: `deepboost` **Distance Weighted Discrimination with Polynomial Kernel** ``` method = 'dwdPoly' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `degree` (Polynomial Degree) * `scale` (Scale) Required packages: `kerndwd` **Distance Weighted Discrimination with Radial Basis Function Kernel** ``` method = 'dwdRadial' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) * `sigma` (Sigma) Required packages: `kernlab`, `kerndwd` **Generalized Linear Model** ``` method = 'glm' ``` Type: Regression, Classification No tuning parameters for this model A model\-specific variable importance metric is available. **Generalized Linear Model with Stepwise Feature Selection** ``` method = 'glmStepAIC' ``` Type: Regression, Classification No tuning parameters for this model Required packages: `MASS` **glmnet** ``` method = 'glmnet_h2o' ``` Type: Regression, Classification Tuning parameters: * `alpha` (Mixing Percentage) * `lambda` (Regularization Parameter) Required packages: `h2o` A model\-specific variable importance metric is available. **L2 Regularized Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights2' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `Loss` (Loss Function) * `weight` (Class Weight) Required packages: `LiblineaR` **Linear Distance Weighted Discrimination** ``` method = 'dwdLinear' ``` Type: Classification Tuning parameters: * `lambda` (Regularization Parameter) * `qval` (q) Required packages: `kerndwd` **Linear Support Vector Machines with Class Weights** ``` method = 'svmLinearWeights' ``` Type: Classification Tuning parameters: * `cost` (Cost) * `weight` (Class Weight) Required packages: `e1071` **Logic Regression** ``` method = 'logreg' ``` Type: Regression, Classification Tuning parameters: * `treesize` (Maximum Number of Leaves) * `ntrees` (Number of Trees) Required packages: `LogicReg` **Multilayer Perceptron Network with Dropout** ``` method = 'mlpKerasDropoutCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `dropout` (Dropout Rate) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. 
To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Multilayer Perceptron Network with Weight Decay** ``` method = 'mlpKerasDecayCost' ``` Type: Classification Tuning parameters: * `size` (\#Hidden Units) * `lambda` (L2 Regularization) * `batch_size` (Batch Size) * `lr` (Learning Rate) * `rho` (Rho) * `decay` (Learning Rate Decay) * `cost` (Cost) * `activation` (Activation Function) Required packages: `keras` Notes: After `train` completes, the keras model object is serialized so that it can be used between R sessions. When predicting, the code will temporarily unserialize the object. To make the predictions more efficient, the user might want to use `keras::unserialize_model(object$finalModel$object)` in the current R session so that the operation is only done once. Also, this model cannot be run in parallel due to the nature of how tensorflow does the computations. Finally, the cost parameter weights the first class in the outcome vector. Unlike other packages used by `train`, the `dplyr` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFlog' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFpls' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFridge' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Oblique Random Forest** ``` method = 'ORFsvm' ``` Type: Classification Tuning parameters: * `mtry` (\#Randomly Selected Predictors) Required packages: `obliqueRF` Notes: Unlike other packages used by `train`, the `obliqueRF` package is fully loaded when this model is used. **Partial Least Squares Generalized Linear Models** ``` method = 'plsRglm' ``` Type: Classification, Regression Tuning parameters: * `nt` (\#PLS Components) * `alpha.pvals.expli` (p\-Value threshold) Required packages: `plsRglm` Notes: Unlike other packages used by `train`, the `plsRglm` package is fully loaded when this model is used. **Rotation Forest** ``` method = 'rotationForest' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) Required packages: `rotationForest` A model\-specific variable importance metric is available. **Rotation Forest** ``` method = 'rotationForestCp' ``` Type: Classification Tuning parameters: * `K` (\#Variable Subsets) * `L` (Ensemble Size) * `cp` (Complexity Parameter) Required packages: `rpart`, `plyr`, `rotationForest` A model\-specific variable importance metric is available. 
**Support Vector Machines with Class Weights** ``` method = 'svmRadialWeights' ``` Type: Classification Tuning parameters: * `sigma` (Sigma) * `C` (Cost) * `Weight` (Weight) Required packages: `kernlab` **Tree\-Based Ensembles** ``` method = 'nodeHarvest' ``` Type: Regression, Classification Tuning parameters: * `maxinter` (Maximum Interaction Depth) * `mode` (Prediction Mode) Required packages: `nodeHarvest`
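The listings above can also be queried from within R rather than read off the page. As a small sketch (not part of the original tables), `modelLookup()` returns the tuning parameters and model type for a given `method` code, and `getModelInfo()` returns the full model definition, including its tags:

```
library(caret)

## Tuning parameters plus the regression/classification flags for a method code
modelLookup("gbm")

## The complete model definition (fit/predict functions, tags, etc.)
gbm_info <- getModelInfo("gbm", regex = FALSE)[["gbm"]]
gbm_info$parameters
gbm_info$tags
```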
Machine Learning
topepo.github.io
https://topepo.github.io/caret/models-clustered-by-tag-similarity.html
8 Models Clustered by Tag Similarity ==================================== This page shows a network diagram of all the models that can be accessed by `train`. See the [Revolutions blog](http://blog.revolutionanalytics.com/2014/01/predictive-models-in-r-clustered-by-tag-similarity-1.html) for details about how this visualization was made (and [this page](https://github.com/topepo/caret/blob/master/html/similarity.Rhtml) has updated code using the [`networkD3`](http://cran.r-project.org/web/packages/networkD3/index.html) package). In summary, the package annotates each model by a set of tags (e.g. “Bagging”, “L1 Regularization” etc.). Using this information we can cluster models that are similar to each other. Green circles are models only used for regression, blue is classification only and orange is “dual use”. Hover over a circle to get the model name and the model code used by the [`caret`](http://cran.r-project.org/web/packages/caret/index.html) package; refreshing the screen will re\-configure the layout. You may need to move a node to the left to see the whole name. 43 models without connections are not shown in the graph. The data used to create this graph can be found [here](tag_data.csv). The plot below shows the similarity matrix. Hover over a cell to see the pair of models and their [Jaccard similarity](https://en.wikipedia.org/wiki/Jaccard_index). Darker colors indicate similar models. You can also use it along with maximum *dissimilarity* sampling to pick out a diverse set of models. Suppose you would like to use an SVM model with a radial basis function on some regression data. Based on these tags, what other four models would constitute the most diverse set? ``` tag <- read.csv("tag_data.csv", row.names = 1) tag <- as.matrix(tag) ## Select only models for regression regModels <- tag[tag[,"Regression"] == 1,] all <- 1:nrow(regModels) ## Seed the analysis with the SVM model start <- grep("(svmRadial)", rownames(regModels), fixed = TRUE) pool <- all[all != start] ## Select 4 models by maximizing the Jaccard ## dissimilarity between sets of models nextMods <- maxDissim(regModels[start,,drop = FALSE], regModels[pool, ], method = "Jaccard", n = 4) rownames(regModels)[c(start, nextMods)] ``` ``` ## [1] "Support Vector Machines with Radial Basis Function Kernel (svmRadial)" ## [2] "Cubist (cubist)" ## [3] "Bayesian Regularized Neural Networks (brnn)" ## [4] "Negative Binomial Generalized Linear Model (glm.nb)" ## [5] "Logic Regression (logreg)" ```
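To make the Jaccard measure concrete, here is a small illustrative sketch (reusing the `tag` matrix read in above): the similarity between two models is the number of tags they share divided by the number of tags that either one has, so values near one indicate nearly identical tag sets.

```
## Jaccard similarity between the binary tag vectors of two models
jaccard <- function(a, b) sum(a & b) / sum(a | b)

jaccard(tag["Support Vector Machines with Radial Basis Function Kernel (svmRadial)", ],
        tag["Cubist (cubist)", ])
```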
Machine Learning
topepo.github.io
https://topepo.github.io/caret/parallel-processing.html
9 Parallel Processing ===================== In this package, resampling is the primary approach for optimizing predictive models with tuning parameters. To do this, many alternate versions of the training set are used to train the model and predict a hold\-out set. This process is repeated many times to get performance estimates that generalize to new data sets. Each of the resampled data sets is independent of the others, so there is no formal requirement that the models must be run sequentially. If a computer with multiple processors or cores is available, the computations could be spread across these “workers” to increase the computational efficiency. [`caret`](http://cran.r-project.org/web/packages/caret/index.html) leverages one of the parallel processing frameworks in R to do just this. The [`foreach`](http://cran.r-project.org/web/packages/foreach/index.html) package allows R code to be run either sequentially or in parallel using several different technologies, such as the [`multicore`](http://cran.r-project.org/web/packages/multicore/index.html) or [`Rmpi`](http://cran.r-project.org/web/packages/Rmpi/index.html) packages (see [Schmidberger *et al*, 2009](http://www.jstatsoft.org/v31/i01/paper) for summaries and descriptions of the available options). There are several R packages that work with [`foreach`](http://cran.r-project.org/web/packages/foreach/index.html) to implement these techniques, such as [`doMC`](http://cran.r-project.org/web/packages/doMC/index.html) (for [`multicore`](http://cran.r-project.org/web/packages/multicore/index.html)) or [`doMPI`](http://cran.r-project.org/web/packages/doMPI/index.html) (for [`Rmpi`](http://cran.r-project.org/web/packages/Rmpi/index.html)). A fairly comprehensive study of the benefits of parallel processing can be found in [this blog post](http://appliedpredictivemodeling.com/blog/2018/1/17/parallel-processing). To tune a predictive model using multiple workers, the syntax of the [`caret`](http://cran.r-project.org/web/packages/caret/index.html) package functions (e.g. `train`, `rfe` or `sbf`) does not change. A separate function is used to “register” the parallel processing technique and specify the number of workers to use. For example, to use the [`doParallel`](http://cran.r-project.org/web/packages/doParallel/index.html) package with five cores on the same machine, the package is loaded and then registered: ``` library(doParallel) cl <- makePSOCKcluster(5) registerDoParallel(cl) ## All subsequent models are then run in parallel model <- train(y ~ ., data = training, method = "rf") ## When you are done: stopCluster(cl) ``` The syntax for other packages associated with [`foreach`](http://cran.r-project.org/web/packages/foreach/index.html) is very similar. Note that as the number of workers increases, the memory required also increases. For example, using five workers would keep a total of six versions of the data in memory. If the data are large or the computational model is demanding, performance can be affected if the amount of required memory exceeds the physical amount available. Also, for `rfe` and `sbf`, these functions may call `train` for some models. In this case, registering *M* workers will actually invoke *M*² total processes. Does this help reduce the time to fit models? A moderately sized data set (4331 rows and 8\) was modeled multiple times with different numbers of workers for several models. Random forest was used with 2000 trees and tuned over 10 values of *m*try. 
Variable importance calculations were also conducted during each model fit. Linear discriminant analysis was also run, as was a cost\-sensitive radial basis function support vector machine (tuned over 15 cost values). All models were tuned using five repeats of 10\-fold cross\-validation. The results are shown in the figure below. The y\-axis corresponds to the total execution time (encompassing model tuning and the final model fit) and the x\-axis corresponds to the number of workers. Random forest clearly took the longest to train and the LDA models were very computationally efficient. The total time (in minutes) decreased as the number of workers increased but stabilized around seven workers. The data for this plot were generated in a randomized fashion so that there should be no bias in the run order. The bottom right panel shows the *speed\-up*, which is the sequential time divided by the parallel time. For example, a speed\-up of three indicates that the parallel version was three times faster than the sequential version. At best, parallelization can achieve linear speed\-ups; that is, for *M* workers, the parallel time is 1/*M*. For these models, the speed\-up is close to linear until four or five workers are used. After this, there is only a small improvement in performance. Since LDA is already computationally efficient, the speed\-up levels off more rapidly than for the other models. While not linear, the decrease in execution time is helpful \- a nearly 10 hour model fit was decreased to about 90 minutes. Note that some models, especially those using the [`RWeka`](http://cran.r-project.org/web/packages/RWeka/index.html) package, may not be able to be run in parallel due to the underlying code structure. `train`, `rfe`, `sbf`, `bag` and `avNNet` were given an additional argument in their respective control functions called `allowParallel` that defaults to `TRUE`. When `TRUE`, the code will be executed in parallel if a parallel backend (e.g. **doMC**) is registered. When `allowParallel = FALSE`, the parallel backend is always ignored. The use case is when `rfe` or `sbf` calls `train`. If a parallel backend with *P* processors is being used, the combination of these functions will create *P*² processes. Since some operations benefit more from parallelization than others, the user has the ability to concentrate computing resources for specific functions; a short sketch of this combination is given at the end of this section. One additional “trick” that `train` exploits to increase computational efficiency is to use sub\-models; a single model fit can produce predictions for multiple tuning parameters. For example, in most implementations of boosted models, a model trained on *B* boosting iterations can produce predictions for models with fewer than *B* iterations. Suppose a `gbm` model was tuned over the following grid: ``` gbmGrid <- expand.grid(interaction.depth = c(1, 5, 9), n.trees = (1:15)*100, shrinkage = 0.1, n.minobsinnode = 20) ``` In reality, `train` only created objects for 3 models and derived the other predictions from these objects. This trick is used for the following models: `ada`, `AdaBag`, `AdaBoost.M1`, `bagEarth`, `blackboost`, `blasso`, `BstLm`, `bstSm`, `bstTree`, `C5.0`, `C5.0Cost`, `cubist`, `earth`, `enet`, `foba`, `gamboost`, `gbm`, `glmboost`, `glmnet`, `kernelpls`, `lars`, `lars2`, `lasso`, `lda2`, `leapBackward`, `leapForward`, `leapSeq`, `LogitBoost`, `pam`, `partDSA`, `pcr`, `PenalizedLDA`, `pls`, `relaxo`, `rfRules`, `rotationForest`, `rotationForestCp`, `rpart`, `rpart2`, `rpartCost`, `simpls`, `spikeslab`, `superpc`, `widekernelpls`, `xgbTree`.
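As a concrete illustration of the `allowParallel` combination described above, here is a minimal sketch (on simulated data, not from the original text) that parallelizes the outer `rfe` loop while keeping the inner `train` calls sequential, so that roughly *P* rather than *P*² processes are created:

```
library(caret)
library(doParallel)

cl <- makePSOCKcluster(5)
registerDoParallel(cl)

## Simulated two-class data, just for illustration
set.seed(135)
dat <- twoClassSim(200)

## Parallelize the feature selection loop ...
rfe_ctrl <- rfeControl(functions = caretFuncs,
                       method = "cv", number = 10,
                       allowParallel = TRUE)

## ... but keep the train() calls inside each rfe() iteration sequential
train_ctrl <- trainControl(method = "cv", number = 5,
                           allowParallel = FALSE)

rfe_fit <- rfe(dat[, names(dat) != "Class"], dat$Class,
               sizes = c(5, 10),
               rfeControl = rfe_ctrl,
               ## these arguments are passed through to train()
               method = "rf",
               trControl = train_ctrl)

stopCluster(cl)
```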
Machine Learning
topepo.github.io
https://topepo.github.io/caret/random-hyperparameter-search.html
10 Random Hyperparameter Search =============================== The default method for optimizing tuning parameters in `train` is to use a [grid search](model-training-and-tuning.html#grids). This approach is usually effective but, in cases when there are many tuning parameters, it can be inefficient. An alternative is to use a combination of [grid search and racing](adaptive-resampling.html). Another is to use a [random selection of tuning parameter combinations](http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf) to cover the parameter space to a lesser extent. There are a number of models where this can be beneficial in finding reasonable values of the tuning parameters in a relatively short time. However, there are some models where the efficiency in a small search field can cancel out other optimizations. For example, a number of models in caret utilize the “sub\-model trick” where *M* tuning parameter combinations are evaluated using potentially far fewer than *M* model fits. This approach is best leveraged when a simple grid search is used. For this reason, it may be inefficient to use random search for the following model codes: `ada`, `AdaBag`, `AdaBoost.M1`, `bagEarth`, `blackboost`, `blasso`, `BstLm`, `bstSm`, `bstTree`, `C5.0`, `C5.0Cost`, `cubist`, `earth`, `enet`, `foba`, `gamboost`, `gbm`, `glmboost`, `glmnet`, `kernelpls`, `lars`, `lars2`, `lasso`, `lda2`, `leapBackward`, `leapForward`, `leapSeq`, `LogitBoost`, `pam`, `partDSA`, `pcr`, `PenalizedLDA`, `pls`, `relaxo`, `rfRules`, `rotationForest`, `rotationForestCp`, `rpart`, `rpart2`, `rpartCost`, `simpls`, `spikeslab`, `superpc`, `widekernelpls`, `xgbDART`, `xgbTree`. Finally, many of the models wrapped by `train` have a small number of parameters. The average number of parameters is 2\. To use random search, another option is available in `trainControl` called `search`. Possible values of this argument are `"grid"` and `"random"`. The built\-in models contained in caret contain code to generate random tuning parameter combinations. The total number of unique combinations is specified by the `tuneLength` option to `train`. Again, we will use the Sonar data from the previous training page to demonstrate the method with a regularized discriminant analysis by looking at a total of 30 tuning parameter combinations: ``` library(mlbench) data(Sonar) library(caret) set.seed(998) inTraining <- createDataPartition(Sonar$Class, p = .75, list = FALSE) training <- Sonar[ inTraining,] testing <- Sonar[-inTraining,] fitControl <- trainControl(method = "repeatedcv", number = 10, repeats = 10, classProbs = TRUE, summaryFunction = twoClassSummary, search = "random") set.seed(825) rda_fit <- train(Class ~ ., data = training, method = "rda", metric = "ROC", tuneLength = 30, trControl = fitControl) rda_fit ``` ``` ## Regularized Discriminant Analysis ## ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... 
## Resampling results across tuning parameters: ## ## gamma lambda ROC Sens Spec ## 0.03177874 0.767664044 0.8662029 0.7983333 0.7600000 ## 0.03868192 0.499283304 0.8526513 0.8120833 0.7600000 ## 0.11834801 0.974493793 0.8379266 0.7780556 0.7428571 ## 0.12391186 0.018063038 0.8321825 0.8112500 0.7233929 ## 0.13442487 0.868918547 0.8590501 0.8122222 0.7528571 ## 0.19249104 0.335761243 0.8588070 0.8577778 0.7030357 ## 0.23568481 0.064135040 0.8465402 0.8372222 0.7026786 ## 0.23814584 0.986270274 0.8363070 0.7623611 0.7532143 ## 0.25082994 0.674919744 0.8700918 0.8588889 0.7010714 ## 0.28285931 0.576888058 0.8706250 0.8650000 0.6871429 ## 0.29099029 0.474277013 0.8681548 0.8687500 0.6844643 ## 0.29601805 0.002963208 0.8465476 0.8419444 0.6973214 ## 0.31717364 0.943120266 0.8440030 0.7863889 0.7444643 ## 0.33633553 0.283586169 0.8650794 0.8626389 0.6878571 ## 0.41798776 0.881581948 0.8540253 0.8076389 0.7346429 ## 0.45885413 0.701431940 0.8704588 0.8413889 0.7026786 ## 0.48684373 0.545997273 0.8713442 0.8638889 0.6758929 ## 0.48845661 0.377704420 0.8700818 0.8783333 0.6566071 ## 0.51491517 0.592224877 0.8705903 0.8509722 0.6789286 ## 0.53206420 0.339941226 0.8694320 0.8795833 0.6523214 ## 0.54020648 0.253930177 0.8673239 0.8747222 0.6546429 ## 0.56009903 0.183772303 0.8652059 0.8709722 0.6573214 ## 0.56472058 0.995162379 0.8354911 0.7550000 0.7489286 ## 0.58045730 0.773613530 0.8612922 0.8262500 0.7089286 ## 0.67085142 0.287354882 0.8686062 0.8781944 0.6444643 ## 0.69503284 0.348973440 0.8694742 0.8805556 0.6417857 ## 0.72206263 0.653406920 0.8635937 0.8331944 0.6735714 ## 0.76035804 0.183676074 0.8642560 0.8769444 0.6303571 ## 0.86234436 0.272931617 0.8545412 0.8588889 0.6030357 ## 0.98847635 0.580160726 0.7383358 0.7097222 0.6169643 ## ## ROC was used to select the optimal model using the largest value. ## The final values used for the model were gamma = 0.4868437 and lambda ## = 0.5459973. ``` There is currently only a `ggplot` method (instead of a basic `plot` method). The results of this function with random searching depends on the number and type of tuning parameters. In this case, it produces a scatter plot of the continuous parameters. ``` ggplot(rda_fit) + theme(legend.position = "top") ```
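Beyond the plot, the random candidates that were evaluated and the selected combination can be inspected directly from the fitted object; a short sketch (continuing with the `rda_fit` object above):

```
## All 30 random candidates, ordered by their resampled ROC
head(rda_fit$results[order(rda_fit$results$ROC, decreasing = TRUE), ], 5)

## The single combination used for the final model
rda_fit$bestTune
```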
Machine Learning
topepo.github.io
https://topepo.github.io/caret/subsampling-for-class-imbalances.html
11 Subsampling For Class Imbalances =================================== Contents * [Subsampling Techniques](subsampling-for-class-imbalances.html#methods) * [Subsampling During Resampling](subsampling-for-class-imbalances.html#resampling) * [Complications](subsampling-for-class-imbalances.html#complications) * [Using Custom Subsampling Techniques](subsampling-for-class-imbalances.html#custom-subsamp) In classification problems, a disparity in the frequencies of the observed classes can have a significant negative impact on model fitting. One technique for resolving such a class imbalance is to subsample the training data in a manner that mitigates the issues. Examples of sampling methods for this purpose are: * *down\-sampling*: randomly subset all the classes in the training set so that their class frequencies match the least prevalent class. For example, suppose that 80% of the training set samples are the first class and the remaining 20% are in the second class. Down\-sampling would randomly sample the first class to be the same size as the second class (so that only 40% of the total training set is used to fit the model). **caret** contains a function (`downSample`) to do this. * *up\-sampling*: randomly sample (with replacement) the minority class to be the same size as the majority class. **caret** contains a function (`upSample`) to do this. * *hybrid methods*: techniques such as [SMOTE](https://scholar.google.com/scholar?hl=en&q=SMOTE&btnG=&as_sdt=1%2C33&as_sdtp=) and [ROSE](https://scholar.google.com/scholar?q=%22Training+and+assessing+classification+rules+with+imbalanced+data%22&btnG=&hl=en&as_sdt=0%2C33) down\-sample the majority class and synthesize new data points in the minority class. There are two packages (**DMwR** and **ROSE**) that implement these procedures. Note that this type of sampling is different from splitting the data into a training and test set. You would never want to artificially balance the test set; its class frequencies should be in\-line with what one would see “in the wild”. Also, the above procedures are independent of resampling methods such as cross\-validation and the bootstrap. In practice, one could take the training set and, before model fitting, sample the data. There are two issues with this approach: * Firstly, during model tuning, the holdout samples generated during resampling are also drawn from the artificially balanced data and may not reflect the class imbalance that future predictions would encounter. This is likely to lead to overly optimistic estimates of performance. * Secondly, the subsampling process will probably induce more model uncertainty. Would the model results differ under a different subsample? As above, the resampling statistics are more likely to make the model appear more effective than it actually is. The alternative is to include the subsampling inside of the usual resampling procedure. This is also advocated for pre\-processing and feature selection steps. The two disadvantages are that it might increase computational times and that it might also complicate the analysis in other ways (see the [section below](subsampling-for-class-imbalances.html#complications) about the pitfalls). 11\.1 Subsampling Techniques ---------------------------- To illustrate these methods, let’s simulate some data with a class imbalance using caret’s `twoClassSim` function. 
We will simulate a training and test set where each contains 10000 samples and a minority class rate of about 5\.9%: ``` library(caret) set.seed(2969) imbal_train <- twoClassSim(10000, intercept = -20, linearVars = 20) imbal_test <- twoClassSim(10000, intercept = -20, linearVars = 20) table(imbal_train$Class) ``` ``` ## ## Class1 Class2 ## 9411 589 ``` Let’s create different versions of the training set prior to model tuning: ``` set.seed(9560) down_train <- downSample(x = imbal_train[, -ncol(imbal_train)], y = imbal_train$Class) table(down_train$Class) ``` ``` ## ## Class1 Class2 ## 589 589 ``` ``` set.seed(9560) up_train <- upSample(x = imbal_train[, -ncol(imbal_train)], y = imbal_train$Class) table(up_train$Class) ``` ``` ## ## Class1 Class2 ## 9411 9411 ``` ``` library(DMwR) set.seed(9560) smote_train <- SMOTE(Class ~ ., data = imbal_train) table(smote_train$Class) ``` ``` ## ## Class1 Class2 ## 2356 1767 ``` ``` library(ROSE) set.seed(9560) rose_train <- ROSE(Class ~ ., data = imbal_train)$data table(rose_train$Class) ``` ``` ## ## Class1 Class2 ## 4939 5061 ``` For these data, we’ll use a bagged classification and estimate the area under the ROC curve using five repeats of 10\-fold CV. ``` ctrl <- trainControl(method = "repeatedcv", repeats = 5, classProbs = TRUE, summaryFunction = twoClassSummary) set.seed(5627) orig_fit <- train(Class ~ ., data = imbal_train, method = "treebag", nbagg = 50, metric = "ROC", trControl = ctrl) set.seed(5627) down_outside <- train(Class ~ ., data = down_train, method = "treebag", nbagg = 50, metric = "ROC", trControl = ctrl) set.seed(5627) up_outside <- train(Class ~ ., data = up_train, method = "treebag", nbagg = 50, metric = "ROC", trControl = ctrl) set.seed(5627) rose_outside <- train(Class ~ ., data = rose_train, method = "treebag", nbagg = 50, metric = "ROC", trControl = ctrl) set.seed(5627) smote_outside <- train(Class ~ ., data = smote_train, method = "treebag", nbagg = 50, metric = "ROC", trControl = ctrl) ``` We will collate the resampling results and create a wrapper to estimate the test set performance: ``` outside_models <- list(original = orig_fit, down = down_outside, up = up_outside, SMOTE = smote_outside, ROSE = rose_outside) outside_resampling <- resamples(outside_models) test_roc <- function(model, data) { library(pROC) roc_obj <- roc(data$Class, predict(model, data, type = "prob")[, "Class1"], levels = c("Class2", "Class1")) ci(roc_obj) } outside_test <- lapply(outside_models, test_roc, data = imbal_test) outside_test <- lapply(outside_test, as.vector) outside_test <- do.call("rbind", outside_test) colnames(outside_test) <- c("lower", "ROC", "upper") outside_test <- as.data.frame(outside_test) summary(outside_resampling, metric = "ROC") ``` ``` ## ## Call: ## summary.resamples(object = outside_resampling, metric = "ROC") ## ## Models: original, down, up, SMOTE, ROSE ## Number of resamples: 50 ## ## ROC ## Min. 1st Qu. Median Mean 3rd Qu. Max. 
NA's ## original 0.9098237 0.9298348 0.9386021 0.9394130 0.9493394 0.9685873 0 ## down 0.9095558 0.9282175 0.9453907 0.9438384 0.9596021 0.9836254 0 ## up 0.9989350 0.9999980 1.0000000 0.9998402 1.0000000 1.0000000 0 ## SMOTE 0.9697171 0.9782214 0.9834234 0.9817476 0.9857071 0.9928255 0 ## ROSE 0.8782985 0.8941488 0.8980313 0.8993135 0.9056404 0.9203092 0 ``` ``` outside_test ``` ``` ## lower ROC upper ## original 0.9130010 0.9247957 0.9365905 ## down 0.9286964 0.9368361 0.9449758 ## up 0.9244128 0.9338499 0.9432869 ## SMOTE 0.9429536 0.9490585 0.9551634 ## ROSE 0.9383809 0.9459729 0.9535649 ``` The training and test set estimates for the area under the ROC curve do not appear to correlate. Based on the resampling results, one would infer that up\-sampling is nearly perfect and that ROSE does relatively poorly. The reason that up\-sampling appears to perform so well is that the samples in the minority class are replicated and have a large potential to be in both the model building and hold\-out sets. In essence, the hold\-outs here are not truly independent samples. In reality, all of the sampling methods do about the same (based on the test set). The statistics for the basic model fit with no sampling are fairly in\-line with one another (0\.939 via resampling and 0\.925 for the test set). 11\.2 Subsampling During Resampling ----------------------------------- Recent versions of **caret** allow the user to specify subsampling when using `train` so that it is conducted inside of resampling. All four methods shown above can be accessed with the basic package using simple syntax. If you want to use your own technique, or want to change some of the parameters for `SMOTE` or `ROSE`, the last section below shows how to use custom subsampling. The way to enable subsampling is to use yet another option in `trainControl` called `sampling`. The most basic syntax is to use a character string with the name of the sampling method, either `"down"`, `"up"`, `"smote"`, or `"rose"`. Note that you will need to have the **DMwR** and **ROSE** packages installed to use SMOTE and ROSE, respectively. One complication is related to pre\-processing. Should the subsampling occur before or after the pre\-processing? For example, if you down\-sample the data and use PCA for signal extraction, should the loadings be estimated from the entire training set? The estimate is potentially better since the entire training set is being used but the subsample may happen to capture a small portion of the PCA space. There isn’t any obvious answer. The default behavior is to subsample the data prior to pre\-processing. This can be easily changed and an example is given below. 
Now let’s re\-run our bagged tree models while sampling inside of cross\-validation: ``` ctrl <- trainControl(method = "repeatedcv", repeats = 5, classProbs = TRUE, summaryFunction = twoClassSummary, ## new option here: sampling = "down") set.seed(5627) down_inside <- train(Class ~ ., data = imbal_train, method = "treebag", nbagg = 50, metric = "ROC", trControl = ctrl) ## now just change that option ctrl$sampling <- "up" set.seed(5627) up_inside <- train(Class ~ ., data = imbal_train, method = "treebag", nbagg = 50, metric = "ROC", trControl = ctrl) ctrl$sampling <- "rose" set.seed(5627) rose_inside <- train(Class ~ ., data = imbal_train, method = "treebag", nbagg = 50, metric = "ROC", trControl = ctrl) ctrl$sampling <- "smote" set.seed(5627) smote_inside <- train(Class ~ ., data = imbal_train, method = "treebag", nbagg = 50, metric = "ROC", trControl = ctrl) ``` Here are the resampling and test set results: ``` inside_models <- list(original = orig_fit, down = down_inside, up = up_inside, SMOTE = smote_inside, ROSE = rose_inside) inside_resampling <- resamples(inside_models) inside_test <- lapply(inside_models, test_roc, data = imbal_test) inside_test <- lapply(inside_test, as.vector) inside_test <- do.call("rbind", inside_test) colnames(inside_test) <- c("lower", "ROC", "upper") inside_test <- as.data.frame(inside_test) summary(inside_resampling, metric = "ROC") ``` ``` ## ## Call: ## summary.resamples(object = inside_resampling, metric = "ROC") ## ## Models: original, down, up, SMOTE, ROSE ## Number of resamples: 50 ## ## ROC ## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's ## original 0.9098237 0.9298348 0.9386021 0.9394130 0.9493394 0.9685873 0 ## down 0.9140294 0.9381766 0.9453610 0.9438490 0.9492917 0.9684522 0 ## up 0.8887678 0.9308075 0.9393226 0.9392084 0.9517913 0.9679569 0 ## SMOTE 0.9203876 0.9453453 0.9520074 0.9508721 0.9596354 0.9746933 0 ## ROSE 0.9305013 0.9442821 0.9489859 0.9511117 0.9572416 0.9756750 0 ``` ``` inside_test ``` ``` ## lower ROC upper ## original 0.9130010 0.9247957 0.9365905 ## down 0.9354534 0.9419704 0.9484875 ## up 0.9353945 0.9431074 0.9508202 ## SMOTE 0.9465262 0.9524213 0.9583164 ## ROSE 0.9369170 0.9448367 0.9527563 ``` The figure below shows the difference in the area under the ROC curve and the test set results for the approaches shown here. Repeating the subsampling procedures for every resample produces results that are more consistent with the test set. 11\.3 Complications ------------------- The user should be aware that there are a few things that can happen when subsampling that can cause issues in their code. As previously mentioned, the point at which sampling occurs relative to pre\-processing is one such issue. Others are: * Sparsely represented categories in factor variables may turn into zero\-variance predictors or may be completely sampled out of the model. * The underlying functions that do the sampling (e.g. `SMOTE`, `downSample`, etc.) operate in very different ways and this can affect your results. For example, `SMOTE` and `ROSE` will convert your predictor input argument into a data frame (even if you start with a matrix). * Currently, sample weights are not supported with sub\-sampling. * If you use `tuneLength` to specify the search grid, understand that the data that is used to determine the grid has not been sampled. In most cases, this will not matter but if the grid creation process is affected by the sample size, you may end up using a sub\-optimal tuning grid. 
* For some models that require more samples than parameters, a reduction in the sample size may prevent you from being able to fit the model. 11\.4 Using Custom Subsampling Techniques ----------------------------------------- Users have the ability to create their own type of subsampling procedure. To do this, alternative syntax is used with the `sampling` argument of `trainControl`. Previously, we used a simple string as the value of this argument. Another way to specify the argument is to use a list with three (named) elements: * The `name` value is a character string used when the `train` object is printed. It can be any string. * The `func` element is a function that does the subsampling. It should have arguments called `x` and `y` that will contain the predictors and outcome data, respectively. The function should return a list with elements of the same name. * The `first` element is a single logical value that indicates whether the subsampling should occur first, relative to pre\-processing. A value of `FALSE` means that the subsampling function will receive the pre\-processed versions of `x` and `y`. For example, here is what the list version of the `sampling` argument looks like when simple down\-sampling is used: ``` down_inside$control$sampling ``` ``` ## $name ## [1] "down" ## ## $func ## function(x, y) ## downSample(x, y, list = TRUE) ## ## $first ## [1] TRUE ``` As another example, suppose we want to use SMOTE but with 10 nearest neighbors instead of the default of 5\. To do this, we can create a simple wrapper around the `SMOTE` function and call this instead: ``` smotest <- list(name = "SMOTE with more neighbors!", func = function (x, y) { library(DMwR) dat <- if (is.data.frame(x)) x else as.data.frame(x) dat$.y <- y dat <- SMOTE(.y ~ ., data = dat, k = 10) list(x = dat[, !grepl(".y", colnames(dat), fixed = TRUE)], y = dat$.y) }, first = TRUE) ``` The control object would then be: ``` ctrl <- trainControl(method = "repeatedcv", repeats = 5, classProbs = TRUE, summaryFunction = twoClassSummary, sampling = smotest) ```
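As a usage sketch (assumed for illustration; this particular fit is not shown in the original text), the control object can then be handed to `train` exactly like the built\-in string options, reusing the `imbal_train` data from earlier in the chapter. The object name `smote_k10` is hypothetical.

```
## requires the DMwR package, as noted above
set.seed(5627)
smote_k10 <- train(Class ~ ., data = imbal_train,
                   method = "treebag",
                   nbagg = 50,
                   metric = "ROC",
                   trControl = ctrl)
```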
Machine Learning
topepo.github.io
https://topepo.github.io/caret/using-recipes-with-train.html
12 Using Recipes with train =========================== Modeling functions in R let you specify a model using a formula, the `x`/`y` interface, or both. Formulas are good because they will handle a lot of minutiae for you (e.g. dummy variables, interactions, etc) so you don’t have to get your hands dirty. They [work pretty well](https://rviews.rstudio.com/2017/02/01/the-r-formula-method-the-good-parts/) but have [limitations too](https://rviews.rstudio.com/2017/03/01/the-r-formula-method-the-bad-parts/). Their biggest issue is that not all modeling functions have a formula interface (although `train` helps solve that). Recipes are a third method for specifying model terms but also allow for a broad set of preprocessing options for encoding, manipulating, and transforming data. They cover a lot of techniques that formulas cannot do naturally. Recipes can be built incrementally in a way similar to how `dplyr` or `ggplot2` calls are created. The package [website](https://topepo.github.io/recipes/) has examples of how to use the package and lists the possible techniques (called *steps*). A recipe can then be handed to `train` *in lieu* of a formula. 12\.1 Why Should you learn this? -------------------------------- Here are two reasons: ### 12\.1\.1 More versatile tools for preprocessing data `caret`’s preprocessing tools have a lot of options but the list is not exhaustive and they will only be called in a specific order. If you would like * a broader set of options, * the ability to write your own preprocessing tools, or * to call them in the order that you desire then you can use a recipe to do that. ### 12\.1\.2 Using additional data to measure performance In most modeling functions, including `train`, most variables are consigned to be either predictors or outcomes. For recipes, there are more options. For example, you might want to have specific columns of your data set be available when you compute how well the model is performing, such as: * if different stratification variables (e.g. patients, ZIP codes, etc) are required to do correct summaries or * ancillary data might be needed to compute the expected profit or loss based on the model results. To get these data properly, they need to be made available and handled the same way as all of the other data. This means they should be sub\- or resampled as all of the other data. Recipes let you do that. 12\.2 An Example ---------------- The `QSARdata` package contains several chemistry data sets. These data sets have rows for different potential drugs (called “compounds” here). For each compound, some important characteristic is measured. This illustration will use the `AquaticTox` data. The outcome, called “Activity”, is a measure of how harmful the compound might be to people. We want to predict this during the drug discovery phase in R\&D. To do this, a set of *molecular descriptors* is computed based on the compound’s formula. There are a lot of different types of these and we will use the 2\-dimensional MOE descriptor set. First, let’s load the package and get the data together: ``` library(caret) library(recipes) library(dplyr) library(QSARdata) data(AquaticTox) tox <- AquaticTox_moe2D ncol(tox) ``` ``` ## [1] 221 ``` ``` ## Add the outcome variable to the data frame tox$Activity <- AquaticTox_Outcome$Activity ``` We will build a model on these data to predict the activity. Some notes: * A common aspect to chemical descriptors is that they are *highly correlated*. Many descriptors often measure some variation of the same thing.
For example, in these data, there are 56 potential predictors that measure different flavors of surface area. It might be a good idea to reduce the dimensionality of these data by pre\-filtering the predictors and/or using a dimension reduction technique. * Other descriptors are counts of certain types of aspects of the molecule. For example, one predictor is the number of Bromine atoms. The vast majority of compounds lack Bromine and this leads to a near\-zero variance situation discussed previously. It might be a good idea to pre\-filter these. Also, to demonstrate the utility of recipes, suppose that we could score potential drugs on the basis of how manufacturable they might be. We might want to build a model on the entire data set but only evaluate it on compounds that could be reasonably manufactured. For illustration, we’ll assume that, as a compound’s molecular weight increases, its manufacturability *decreases*. For this purpose, we create a new variable (`manufacturability`) that is neither an outcome nor a predictor but will be needed to compute performance. ``` tox <- tox %>% select(-Molecule) %>% ## Suppose the ease of manufacturability is ## related to the molecular weight of the compound mutate(manufacturability = 1/moe2D_Weight) %>% mutate(manufacturability = manufacturability/sum(manufacturability)) ``` For this analysis, we will compute the RMSE using weights based on the manufacturability column such that a difficult compound has less impact on the RMSE. ``` model_stats <- function(data, lev = NULL, model = NULL) { stats <- defaultSummary(data, lev = lev, model = model) wt_rmse <- function (pred, obs, wts, na.rm = TRUE) sqrt(weighted.mean((pred - obs)^2, wts, na.rm = na.rm)) res <- wt_rmse(pred = data$pred, obs = data$obs, wts = data$manufacturability) c(wRMSE = res, stats) } ``` There is no way to include this extra variable using the default `train` method or using `train.formula`. Now, let’s create a recipe incrementally. First, we will use the formula method to declare the outcome and predictors but change the analysis role of the `manufacturability` variable so that it will only be available when summarizing the model fit. ``` tox_recipe <- recipe(Activity ~ ., data = tox) %>% add_role(manufacturability, new_role = "performance var") tox_recipe ``` ``` ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## performance var 1 ## predictor 221 ``` Using this new role, the `manufacturability` column will be available when the summary function is executed and the appropriate rows of the data set will be exposed during resampling. For example, if one were to debug the `model_stats` function during execution of a model, the `data` object might look like this: ``` Browse[1]> head(data) obs manufacturability rowIndex pred 1 3.40 0.002770707 3 3.376488 2 3.75 0.002621364 27 3.945456 3 3.57 0.002697900 33 3.389999 4 3.84 0.002919528 39 4.023662 5 4.41 0.002561416 53 4.482736 6 3.98 0.002838804 54 3.965465 ``` More than one variable can have this role so that multiple columns can be made available. Now let’s add some steps to the recipe. First, we remove sparse and unbalanced predictors: ``` tox_recipe <- tox_recipe %>% step_nzv(all_predictors()) tox_recipe ``` ``` ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## performance var 1 ## predictor 221 ## ## Operations: ## ## Sparse, unbalanced variable filter on all_predictors() ``` Note that we have only specified what *will happen* once the recipe is executed.
This is only a specification that uses a generic declaration of `all_predictors`. As mentioned above, there are a lot of different surface area predictors and they tend to have very high correlations with one another. We’ll add one or more predictors to the model in place of these predictors using principal component analysis. The step will retain the number of components required to capture 95% of the information contained in these 56 predictors. We’ll name these new predictors `surf_area_1`, `surf_area_2` etc. ``` tox_recipe <- tox_recipe %>% step_pca(contains("VSA"), prefix = "surf_area_", threshold = .95) ``` Now, let’s specify that the third step in the recipe is to reduce the number of predictors so that no pair has an absolute correlation greater than 0\.90\. However, we might want to keep the surface area principal components so we *exclude* these from the filter (using the minus sign). ``` tox_recipe <- tox_recipe %>% step_corr(all_predictors(), -starts_with("surf_area_"), threshold = .90) ``` Finally, we can center and scale all of the predictors that are available at the end of the recipe: ``` tox_recipe <- tox_recipe %>% step_center(all_predictors()) %>% step_scale(all_predictors()) tox_recipe ``` ``` ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## performance var 1 ## predictor 221 ## ## Operations: ## ## Sparse, unbalanced variable filter on all_predictors() ## PCA extraction with contains("VSA") ## Correlation filter on 2 items ## Centering for all_predictors() ## Scaling for all_predictors() ``` Let’s use this recipe to fit an SVM model and pick the tuning parameters that minimize the weighted RMSE value: ``` tox_ctrl <- trainControl(method = "cv", summaryFunction = model_stats) set.seed(888) tox_svm <- train(tox_recipe, tox, method = "svmRadial", metric = "wRMSE", maximize = FALSE, tuneLength = 10, trControl = tox_ctrl) ``` ``` ## Warning in train_rec(rec = x, dat = data, info = trainInfo, method = ## models, : There were missing values in resampled performance measures. ``` ``` tox_svm ``` ``` ## Support Vector Machines with Radial Basis Function Kernel ## ## 322 samples ## 221 predictors ## ## Recipe steps: nzv, pca, corr, center, scale ## Resampling: Cross-Validated (10 fold) ## Summary of sample sizes: 290, 290, 289, 290, 290, 290, ... ## Resampling results across tuning parameters: ## ## C wRMSE RMSE Rsquared MAE ## 0.25 1.786725 0.7665516 0.6640118 0.5299520 ## 0.50 1.672121 0.7164854 0.6958983 0.4900911 ## 1.00 1.526568 0.6833307 0.7168690 0.4617431 ## 2.00 1.536196 0.6571988 0.7374416 0.4374691 ## 4.00 1.520765 0.6490274 0.7446164 0.4312956 ## 8.00 1.313955 0.6350230 0.7585357 0.4229098 ## 16.00 1.231053 0.6316081 0.7622038 0.4229104 ## 32.00 1.357506 0.6478135 0.7468737 0.4362607 ## 64.00 1.448316 0.6765142 0.7179122 0.4527874 ## 128.00 1.598975 0.7331978 0.6789173 0.4761214 ## ## Tuning parameter 'sigma' was held constant at a value of 0.01150696 ## wRMSE was used to select the optimal model using the smallest value. ## The final values used for the model were sigma = 0.01150696 and C = 16. ``` What variables were generated by the recipe?
``` ## originally: ncol(tox) - 2 ``` ``` ## [1] 220 ``` ``` ## after the recipe was executed: predictors(tox_svm) ``` ``` ## [1] "moeGao_Abra_R" "moeGao_Abra_acidity" "moeGao_Abra_basicity" ## [4] "moeGao_Abra_pi" "moe2D_BCUT_PEOE_3" "moe2D_BCUT_SLOGP_0" ## [7] "moe2D_BCUT_SLOGP_1" "moe2D_BCUT_SLOGP_3" "moe2D_GCUT_PEOE_0" ## [10] "moe2D_GCUT_PEOE_1" "moe2D_GCUT_PEOE_2" "moe2D_GCUT_SLOGP_0" ## [13] "moe2D_GCUT_SLOGP_1" "moe2D_GCUT_SLOGP_2" "moe2D_GCUT_SLOGP_3" ## [16] "moe2D_GCUT_SMR_0" "moe2D_Kier3" "moe2D_KierA1" ## [19] "moe2D_KierA2" "moe2D_KierA3" "moe2D_KierFlex" ## [22] "moe2D_PEOE_PC..1" "moe2D_PEOE_RPC." "moe2D_PEOE_RPC..1" ## [25] "moe2D_Q_PC." "moe2D_Q_RPC." "moe2D_Q_RPC..1" ## [28] "moe2D_SlogP" "moe2D_TPSA" "moe2D_Weight" ## [31] "moe2D_a_ICM" "moe2D_a_acc" "moe2D_a_hyd" ## [34] "moe2D_a_nH" "moe2D_a_nN" "moe2D_a_nO" ## [37] "moe2D_b_1rotN" "moe2D_b_1rotR" "moe2D_b_double" ## [40] "moe2D_balabanJ" "moe2D_chi0v" "moeGao_chi3cv" ## [43] "moeGao_chi3cv_C" "moeGao_chi3pv" "moeGao_chi4ca_C" ## [46] "moeGao_chi4cav_C" "moeGao_chi4pc" "moeGao_chi4pcv" ## [49] "moeGao_chi4pcv_C" "moe2D_density" "moe2D_kS_aaCH" ## [52] "moe2D_kS_aaaC" "moe2D_kS_aasC" "moe2D_kS_dO" ## [55] "moe2D_kS_dsCH" "moe2D_kS_dssC" "moe2D_kS_sCH3" ## [58] "moe2D_kS_sCl" "moe2D_kS_sNH2" "moe2D_kS_sOH" ## [61] "moe2D_kS_ssCH2" "moe2D_kS_ssO" "moe2D_kS_sssCH" ## [64] "moe2D_lip_don" "moe2D_petitjean" "moe2D_radius" ## [67] "moe2D_reactive" "moe2D_rings" "moe2D_weinerPath" ## [70] "moe2D_weinerPol" "manufacturability" "surf_area_1" ## [73] "surf_area_2" "surf_area_3" "surf_area_4" ``` The trained recipe is available in the `train` object and now shows specific variables involved in each step: ``` tox_svm$recipe ``` ``` ## Data Recipe ## ## Inputs: ## ## role #variables ## outcome 1 ## performance var 1 ## predictor 221 ## ## Training data contained 322 data points and no missing data. ## ## Operations: ## ## Sparse, unbalanced variable filter removed moe2D_PEOE_VSA.3, ... [trained] ## PCA extraction with moe2D_PEOE_VSA.0, ... [trained] ## Correlation filter removed moe2D_BCUT_SMR_0, ... [trained] ## Centering for moeGao_Abra_R, ... [trained] ## Scaling for moeGao_Abra_R, ... [trained] ``` 12\.3 Case Weights ------------------ For [models that accept them](https://topepo.github.io/caret/train-models-by-tag.html#Accepts_Case_Weights), case weights can be passed to the model fitting routines using a role of `"case weight"`.
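A brief, hedged sketch of what that could look like for these data (the weight column `assay_wt` and its values are hypothetical and not part of the `AquaticTox` data; the role is assigned in the same way the `"performance var"` role was assigned above):

```
## add an arbitrary, purely illustrative weight column
tox$assay_wt <- runif(nrow(tox))

## declare it as a case weight so that models supporting weights can use it
weighted_recipe <- recipe(Activity ~ ., data = tox) %>%
  add_role(assay_wt, new_role = "case weight")
```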
Machine Learning
topepo.github.io
https://topepo.github.io/caret/using-your-own-model-in-train.html
13 Using Your Own Model in `train` ================================== Contents * [Introduction](using-your-own-model-in-train.html#Introduction) * [Illustrative Example 1: SVMs with Laplacian Kernels](using-your-own-model-in-train.html#Illustration1) * [Model Components](using-your-own-model-in-train.html#Components) * [Illustrative Example 2: Something More Complicated `LogitBoost`](using-your-own-model-in-train.html#Illustration2) * [Illustrative Example 3: Nonstandard Formulas](using-your-own-model-in-train.html#Illustration3) * [Illustrative Example 4: PLS Feature Extraction Pre\-Processing](using-your-own-model-in-train.html#Illustration4) * [Illustrative Example 5: Optimizing probability thresholds for class imbalances](using-your-own-model-in-train.html#Illustration5) * [Illustrative Example 6: Offsets in Generalized Linear Models](using-your-own-model-in-train.html#Illustration6) 13\.1 Introduction ------------------ The package contains a large number of predictive model interfaces. However, you may want to create your own because: * you are testing out a novel model or the package doesn’t have a model that you are interested in * you would like to run an existing model in the package your own way * there are pre\-processing or sampling steps not contained in the package or you just don’t like the way the package does things You can still get the benefits of the [`caret`](http://cran.r-project.org/web/packages/caret/index.html) infrastructure by creating your own model. Currently, when you specify the type of model that you are interested in (e.g. `method = "lda"`), the `train` function runs another function called `getModelInfo` to retrieve the specifics of that model from the existing catalog. For example: ``` ldaModelInfo <- getModelInfo(model = "lda", regex = FALSE)[[1]] ## Model components names(ldaModelInfo) ``` ``` ## [1] "label" "library" "loop" "type" "parameters" ## [6] "grid" "fit" "predict" "prob" "predictors" ## [11] "tags" "levels" "sort" ``` To use your own model, you can pass a list of these components to the `method` argument of `train`. This page will describe those components in detail. 13\.2 Illustrative Example 1: SVMs with Laplacian Kernels --------------------------------------------------------- The package currently contains support vector machine (SVM) models using linear, polynomial and radial basis function kernels. The [`kernlab`](http://cran.r-project.org/web/packages/kernlab/index.html) package has other functions, including the Laplacian kernel. We will illustrate the model components for this model, which has two parameters: the standard cost parameter for SVMs and one kernel parameter (`sigma`). 13\.3 Model Components ---------------------- You can pass a list of information to the `method` argument in `train`. For models that are built\-in to the package, you can just pass the method name as before. There are some basic components of the list for custom models. A brief description of each is below; then, after setting up an example, each will be described in detail. The list should have the following elements: * `library` is a character vector of package names that will be needed to fit the model or calculate predictions. `NULL` can also be used. * `type` is a simple character vector with values `"Classification"`, `"Regression"` or both. * `parameters` is a data frame with three simple attributes for each tuning parameter (if any): the argument name (e.g. `mtry`), the type of data in the parameter grid and textual labels for the parameter.
* `grid` is a function that is used to create the tuning grid (unless the user gives the exact values of the parameters via `tuneGrid`) * `fit` is a function that fits the model * `predict` is the function that creates predictions * `prob` is a function that can be used to create class probabilities (if applicable) * `sort` is a function that sorts the parameters from most complex to least * `loop` is an **optional** function for advanced users for models that can create multiple submodel predictions from the same object. * `levels` is an **optional** function, primarily for classification models using `S4` methods, to return the factor levels of the outcome. * `tags` is an **optional** character vector that has subjects associated with the model, such as `Tree-Based Model` or `Embedded Feature Selection`. This string is used by the package to create additional documentation pages on the package website. * `label` is an **optional** character string that names the model (e.g. “Linear Discriminant Analysis”). * `predictors` is an **optional** function that returns a character vector that contains the names of the predictors that were used in the prediction equation. * `varImp` is an **optional** function that calculates variable importance metrics for the model (if any). * `oob` is another **optional** function that calculates out\-of\-bag performance estimates from the model object. Most models do not have this capability but some (e.g. random forests, bagged models) do. * `notes` is an **optional** character vector that can be used to document non\-obvious aspects of the model. For example, there are two Bayesian lasso models ([`blasso`](https://github.com/topepo/caret/blob/master/models/files/blasso.R) and [`blassoAveraged`](https://github.com/topepo/caret/blob/master/models/files/blassoAveraged.R)) and this field is used to describe the differences between the two models. * `check` is an **optional** function that can be used to check the system/install to make sure that any atypical software requirements are available to the user. The input is `pkg`, which is the same character string given by the `library`. This function is run *after* the checking function to see if the packages specified in `library` are installed. As an example, the model [`pythonKnnReg`](https://github.com/topepo/caret/blob/master/models/files/pythonKnnReg.R) uses certain python libraries and the user should have python and these libraries installed. The [model file](https://github.com/topepo/caret/blob/master/models/files/pythonKnnReg.R) demonstrates how to check for python libraries prior to running the R model. In the [`caret`](http://cran.r-project.org/web/packages/caret/index.html) package, the subdirectory `models` has all the code for each model that `train` interfaces with and these can be used as prototypes for your model. Let’s create a new model for a classification support vector machine using the Laplacian kernel function. We will use the `kernlab` package’s `ksvm` function. The model has two parameters: the standard cost parameter for SVMs and one kernel parameter (`sigma`). To start, we’ll create a new list: ``` lpSVM <- list(type = "Classification", library = "kernlab", loop = NULL) ``` This model can also be used for regression but we will constrain things here for simplicity. For other SVM models, the type value would be `c("Classification", "Regression")`. The `library` value is used to check that this package is installed and to make it available whenever it is needed (e.g. before modeling or prediction).
**Note**: `caret` will check to see if these packages are installed but will *not* explicitly load them. As such, functions that are used from the package should be referenced by namespace. This is discussed more below when describing the `fit` function. ### 13\.3\.1 The `parameters` Element We have to create some basic information for the parameters in the form of a data frame. The first column is the name of the parameter. The convention is to use the argument name in the model function (e.g. the `ksvm` function here). Those values are `C` and `sigma`. Each is a number and we can give them labels of `"Cost"` and `"Sigma"`, respectively. The `parameters` element would then be: ``` prm <- data.frame(parameter = c("C", "sigma"), class = rep("numeric", 2), label = c("Cost", "Sigma")) ``` Now we assign it to the model list: ``` lpSVM$parameters <- prm ``` Values in the `class` column can indicate numeric, character or logical data types. ### 13\.3\.2 The `grid` Element This should be a function that takes parameters: `x` and `y` (for the predictors and outcome data), `len` (the number of values per tuning parameter) as well as `search`. `len` is the value of `tuneLength` that is potentially passed in through `train`. `search` can be either `"grid"` or `"random"`. This can be used to set up a grid for searching or random values for random search. The output should be a data frame of tuning parameter combinations with a column for each parameter. The column names should be the parameter name (e.g. the values of `prm$parameter`). In our case, let’s vary the cost parameter on the log 2 scale. For the sigma parameter, we can use the `kernlab` function `sigest` to pre\-estimate the value. Following `ksvm` we take the average of the low and high estimates. Here is a function we could use: ``` svmGrid <- function(x, y, len = NULL, search = "grid") { library(kernlab) ## This produces low, middle and high values for sigma ## (i.e. a vector with 3 elements). sigmas <- kernlab::sigest(as.matrix(x), na.action = na.omit, scaled = TRUE) ## To use grid search: if(search == "grid") { out <- expand.grid(sigma = mean(as.vector(sigmas[-2])), C = 2 ^((1:len) - 3)) } else { ## For random search, define ranges for the parameters then ## generate random values for them rng <- extendrange(log(sigmas), f = .75) out <- data.frame(sigma = exp(runif(len, min = rng[1], max = rng[2])), C = 2^runif(len, min = -5, max = 8)) } out } ``` Why did we use `kernlab::sigest` instead of `sigest`? As previously mentioned, `caret` will not execute `library(kernlab)` unless you explicitly code it in these functions. Since it is not explicitly loaded, you have to call it *using the namespace operator* `::`. Again, the user can pass their own grid via `train`’s `tuneGrid` option or they can use this code to create a default grid. We assign this function to the overall model list: ``` lpSVM$grid <- svmGrid ``` ### 13\.3\.3 The `fit` Element Here is where we fit the model. This `fit` function has several arguments: * `x`, `y`: the current data used to fit the model * `wts`: optional instance weights (not applicable for this particular model) * `param`: the current tuning parameter values * `lev`: the class levels of the outcome (or `NULL` in regression) * `last`: a logical for whether the current fit is the final fit * `weights` * `classProbs`: a logical for whether class probabilities should be computed. Here is something we could use for this model: ``` svmFit <- function(x, y, wts, param, lev, last, weights, classProbs, ...)
{ kernlab::ksvm( x = as.matrix(x), y = y, kernel = "rbfdot", kpar = list(sigma = param$sigma), C = param$C, prob.model = classProbs, ... ) } lpSVM$fit <- svmFit ``` A few notes about this: * Notice that the package is not loaded in the code. It is loaded prior to this function being called so it won’t hurt if you load it again (but that’s not needed). * The `ksvm` function requires a *matrix* of predictors. If the original data were a data frame, this would throw an error. * The tuning parameters are referenced in the `param` data frame. There is always a single row in this data frame. * The probability model is fit based on the value of `classProbs`. This value is determined by the value given in `trainControl`. * The three dots allow the user to pass options in from `train` to, in this case, the `ksvm` function. For example, if the user wanted to set the cache size for the function, they could list `cache = 80` and this argument will be passed from `train` to `ksvm`. * Any pre\-processing that was requested in the call to `train` has been done. For example, if `preProc = "center"` was originally requested, the columns of `x` seen within this function are mean centered. * Again, the namespace operator `::` is used for `rbfdot` and `ksvm` to ensure that the function can be found. ### 13\.3\.4 The `predict` Element This is a function that produces a vector of predictions. In our case these are class predictions but they could be numbers for regression models. The arguments are: * `modelFit`: the model produced by the `fit` code shown above. * `newdata`: the predictor values of the instances being predicted (e.g. out\-of\-bag samples) * `preProc` * `submodels`: this is an optional list of tuning parameters only used with the `loop` element discussed below. In most cases, it will be `NULL`. Our function will be very simple: ``` svmPred <- function(modelFit, newdata, preProc = NULL, submodels = NULL) kernlab::predict(modelFit, newdata) lpSVM$predict <- svmPred ``` The function `predict.ksvm` will automatically create a factor vector as output. The function could also produce character values. Either way, the innards of `train` will make them factors and ensure that the same levels as the original data are used. ### 13\.3\.5 The `prob` Element If a regression model is being used or if the classification model does not create class probabilities, a value of `NULL` can be used here instead of a function. Otherwise, the function arguments are the same as the `predict` function. The output should be a matrix or data frame of class probabilities with a column for each class. The column names should be the class levels. We can use: ``` svmProb <- function(modelFit, newdata, preProc = NULL, submodels = NULL) kernlab::predict(modelFit, newdata, type = "probabilities") lpSVM$prob <- svmProb ``` If you look at some of the SVM examples in the `models` directory, the real functions used by `train` are much more complicated so that they can deal with model failures, probabilities that do not sum to 1 etc. 13\.4 The sort Element ---------------------- This is an optional function that sorts the tuning parameters from the simplest model to the most complex. There are times where this ordering is not obvious. This information is used when the performance values are tied across multiple parameters. We would probably want to choose the least complex model in those cases. Here, we will sort by the cost value.
Smaller values of `C` produce smoother class boundaries than larger values: ``` svmSort <- function(x) x[order(x$C),] lpSVM$sort <- svmSort ``` ### 13\.4\.1 The `levels` Element `train` ensures that classification models always predict factors with the same levels. To do this at prediction time, the package needs to know the levels from the model object (specifically, the `finalModel` slot of the `train` object). For model functions using `S3` methods, `train` automatically attaches a character vector called `obsLevels` to the object and the package code uses this value. However, this strategy does not work for `S4` methods. In these cases, the package will use the code found in the `levels` slot of the model list. For example, the `ksvm` function uses `S4` methods but, unlike most model functions, has a built\-in function called `lev` that will extract the class levels (if any). In this case, our levels code would be: ``` lpSVM$levels <- function(x) kernlab::lev(x) ``` In most other cases, the levels will need to be extracted from data contained in the fitted model object. As another example, objects created using the `ctree` function in the `party` package would need to use: ``` function(x) levels(x@data@get("response")[,1]) ``` Again, this slot is only used for classification models using `S4` methods. We should now be ready to fit our model. ``` library(mlbench) data(Sonar) library(caret) set.seed(998) inTraining <- createDataPartition(Sonar$Class, p = .75, list = FALSE) training <- Sonar[ inTraining,] testing <- Sonar[-inTraining,] fitControl <- trainControl(method = "repeatedcv", ## 10-fold CV... number = 10, ## repeated ten times repeats = 10) set.seed(825) Laplacian <- train(Class ~ ., data = training, method = lpSVM, preProc = c("center", "scale"), tuneLength = 8, trControl = fitControl) Laplacian ``` ``` ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## Pre-processing: centered (60), scaled (60) ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## C Accuracy Kappa ## 0.25 0.7344118 0.4506090 ## 0.50 0.7576716 0.5056691 ## 1.00 0.7820245 0.5617124 ## 2.00 0.8146348 0.6270944 ## 4.00 0.8357745 0.6691484 ## 8.00 0.8508824 0.6985281 ## 16.00 0.8537108 0.7044561 ## 32.00 0.8537108 0.7044561 ## ## Tuning parameter 'sigma' was held constant at a value of 0.01181293 ## Accuracy was used to select the optimal model using the largest value. ## The final values used for the model were C = 16 and sigma = 0.01181293. ``` A plot of the data shows that the model doesn’t change when the cost value is above 16\. ``` ggplot(Laplacian) + scale_x_log10() ``` 13\.5 Illustrative Example 2: Something More Complicated \- `LogitBoost` ------------------------------------------------------------------------ ### The `loop` Element This function can be used to create custom loops for models to tune over. In most cases, the function can just return the existing tuning grid. For example, a `LogitBoost` model can be trained over the number of boosting iterations. In the [`caTools`](http://cran.r-project.org/web/packages/caTools/index.html) package, the `LogitBoost` function can be used to fit this model.
For example: ``` mod <- LogitBoost(as.matrix(x), y, nIter = 51) ``` If we were to tune the model evaluating models where the number of iterations was 11, 21, 31, 41 and 51, the grid could be: ``` lbGrid <- data.frame(nIter = seq(11, 51, by = 10)) ``` During resampling, `train` could loop over all five rows in `lbGrid` and fit five models. However, the `predict.LogitBoost` function has an argument called `nIter` that can produce, in this case, predictions from `mod` for all five models. Instead of `train` fitting five models, we could fit a single model with `nIter = 51` and derive predictions for all five models using only `mod`. The terminology used here is that `nIter` is a *sequential* tuning parameter (and the other parameters would be considered *fixed*). The `loop` argument for models is used to produce two objects: * `loop`: this is the actual loop that is used by `train`. * `submodels` is a *list* that has as many elements as there are rows in `loop`. The list has all the “extra” parameter settings that can be derived for each model. Going back to the `LogitBoost` example, we could have: ``` loop <- data.frame(.nIter = 51) loop ``` ``` ## .nIter ## 1 51 ``` ``` submodels <- list(data.frame(nIter = seq(11, 41, by = 10))) submodels ``` ``` ## [[1]] ## nIter ## 1 11 ## 2 21 ## 3 31 ## 4 41 ``` For this case, `train` first fits the `nIter = 51` model. When the model is predicted, that code has a `for` loop that iterates over the elements of `submodels[[1]]` to get the predictions for the other 4 models. In the end, predictions for all five models (for `nIter = seq(11, 51, by = 10)`) are obtained with a single model fit. There are other models built\-in to [`caret`](http://cran.r-project.org/web/packages/caret/index.html) that are used this way. There are a number of models that have multiple sequential tuning parameters. If the `loop` argument is left `NULL`, the results of `tuneGrid` are used as the simple loop; this is recommended for most situations. Note that the machinery that is used to “derive” the extra predictions is up to the user to create, typically in the `predict` and `prob` elements of the custom model object. For the `LogitBoost` model, some simple code to create these objects would be: ``` fullGrid <- data.frame(nIter = seq(11, 51, by = 10)) ## Get the largest value of nIter to fit the "full" model loop <- fullGrid[which.max(fullGrid$nIter),,drop = FALSE] loop ``` ``` ## nIter ## 5 51 ``` ``` submodels <- fullGrid[-which.max(fullGrid$nIter),,drop = FALSE] ## This needs to be encased in a list in case there is more ## than one tuning parameter submodels <- list(submodels) submodels ``` ``` ## [[1]] ## nIter ## 1 11 ## 2 21 ## 3 31 ## 4 41 ``` For the `LogitBoost` custom model object, we could use this code in the `predict` slot: ``` lbPred <- function(modelFit, newdata, preProc = NULL, submodels = NULL) { ## This model was fit with the maximum value of nIter out <- caTools::predict.LogitBoost(modelFit, newdata, type="class") ## In this case, 'submodels' is a data frame with the other values of ## nIter. We loop over these to get the other predictions. if(!is.null(submodels)) { ## Save _all_ the predictions in a list tmp <- out out <- vector(mode = "list", length = nrow(submodels) + 1) out[[1]] <- tmp for(j in seq(along = submodels$nIter)) { out[[j+1]] <- caTools::predict.LogitBoost( modelFit, newdata, nIter = submodels$nIter[j]) } } out } ``` A few more notes: * The code in the `fit` element does not have to change. * The `prob` slot works in the same way.
The only difference is that the values saved in the outgoing lists are matrices or data frames of probabilities for each class. * After model training (i.e. predicting new samples), the value of `submodels` is set to `NULL` and the code produces a single set of predictions. * If the model had one sequential parameter and one fixed parameter, the `loop` data frame would have two columns (one for each parameter). If the model is tuned over more than one value of the fixed parameter, the `submodels` list would have more than one element. If `loop` had 10 rows, then `length(submodels)` would be `10` and `loop[i,]` would be linked to `submodels[[i]]`. * In this case, the prediction function was called by namespace too (i.e. `caTools::predict.LogitBoost`). This may not seem necessary but what functions are available can vary depending on what parallel processing technology is being used. For example, the nature of forking used by `doMC` and `doParallel` tends to have easier access to functions while PSOCK methods in `doParallel` do not. It may be easier to take the safe path of using the namespace operator wherever possible to avoid errors that are difficult to track down. Here is a slimmed down version of the logitBoost code already in the package: ``` lbFuncs <- list(library = "caTools", loop = function(grid) { loop <- grid[which.max(grid$nIter),,drop = FALSE] submodels <- grid[-which.max(grid$nIter),,drop = FALSE] submodels <- list(submodels) list(loop = loop, submodels = submodels) }, type = "Classification", parameters = data.frame(parameter = 'nIter', class = 'numeric', label = '# Boosting Iterations'), grid = function(x, y, len = NULL, search = "grid") { out <- if(search == "grid") data.frame(nIter = 1 + ((1:len)*10)) else data.frame(nIter = sample(1:500, size = len)) out }, fit = function(x, y, wts, param, lev, last, weights, classProbs, ...) { caTools::LogitBoost(as.matrix(x), y, nIter = param$nIter) }, predict = function(modelFit, newdata, preProc = NULL, submodels = NULL) { out <- caTools::predict.LogitBoost(modelFit, newdata, type="class") if(!is.null(submodels)) { tmp <- out out <- vector(mode = "list", length = nrow(submodels) + 1) out[[1]] <- tmp for(j in seq(along = submodels$nIter)) { out[[j+1]] <- caTools::predict.LogitBoost( modelFit, newdata, nIter = submodels$nIter[j] ) } } out }, prob = NULL, sort = function(x) x) ``` Should you care about this? Let’s tune the model over the same data set used for the SVM model above and see how long it takes: ``` set.seed(825) lb1 <- system.time(train(Class ~ ., data = training, method = lbFuncs, tuneLength = 3, trControl = fitControl)) lb1 ``` ``` ## user system elapsed ## 7.337 5.560 1.397 ``` ``` ## Now get rid of the submodel parts lbFuncs2 <- lbFuncs lbFuncs2$predict <- function(modelFit, newdata, preProc = NULL, submodels = NULL) caTools::predict.LogitBoost(modelFit, newdata, type = "class") lbFuncs2$loop <- NULL set.seed(825) lb2 <- system.time(train(Class ~ ., data = training, method = lbFuncs2, tuneLength = 3, trControl = fitControl)) lb2 ``` ``` ## user system elapsed ## 14.767 12.421 2.193 ``` On a data set with 157 instances and 60 predictors and a model that is tuned over only 3 parameter values, there is a 1\.57\-fold speed\-up. If the model were more computationally taxing or the data set were larger or the number of tune parameters that were evaluated was larger, the speed\-up would increase. 
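Because the `prob` element of `lbFuncs` above was left as `NULL`, class probabilities are not produced. A submodel\-aware probability module would follow the same pattern as the `predict` code. Here is a minimal sketch: it assumes (per the `caTools` documentation) that `predict.LogitBoost` with `type = "raw"` returns one probability column per class, and it omits the failure handling that the packaged `caret` version carries:

```
lbProb <- function(modelFit, newdata, preProc = NULL, submodels = NULL) {
  ## Probabilities for the model fit with the largest nIter; renormalize so
  ## that each row sums to one
  out <- caTools::predict.LogitBoost(modelFit, newdata, type = "raw")
  out <- t(apply(out, 1, function(x) x / sum(x)))
  if(!is.null(submodels)) {
    tmp <- vector(mode = "list", length = nrow(submodels) + 1)
    tmp[[1]] <- out
    for(j in seq(along = submodels$nIter)) {
      raw <- caTools::predict.LogitBoost(modelFit, newdata,
                                         nIter = submodels$nIter[j],
                                         type = "raw")
      tmp[[j + 1]] <- t(apply(raw, 1, function(x) x / sum(x)))
    }
    out <- tmp
  }
  out
}

lbFuncs$prob <- lbProb
```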
Here is a plot of the speed\-up for a few more values of `tuneLength`: ``` bigGrid <- data.frame(nIter = seq(1, 151, by = 10)) results <- bigGrid results$SpeedUp <- NA for(i in 2:nrow(bigGrid)){ rm(lb1, lb2) set.seed(825) lb1 <- system.time(train(Class ~ ., data = training, method = lbFuncs, tuneGrid = bigGrid[1:i,,drop = FALSE], trControl = fitControl)) set.seed(825) lb2 <- system.time(train(Class ~ ., data = training, method = lbFuncs2, tuneGrid = bigGrid[1:i,,drop = FALSE], trControl = fitControl)) results$SpeedUp[i] <- lb2[3]/lb1[3] } ggplot(results, aes(x = nIter, y = SpeedUp)) + geom_point() + geom_smooth(method = "lm") + xlab("LogitBoost Iterations") + ylab("Speed-Up") ``` The speed\-ups show a significant decrease in training time using this method. **Note:** The previous examples were run using parallel processing. The remainder in this chapter are run sequentially and, for simplicity, the namespace operator is not used in the custom code modules below. 13\.6 Illustrative Example 3: Nonstandard Formulas -------------------------------------------------- (Note: the previous third illustration (“SMOTE During Resampling”) is no longer needed due to the inclusion of subsampling via `train`.) One limitation of `train` is that it requires the use of basic model formulas. There are several functions that use special formulas or operators on predictors that won’t (and perhaps should not) work in the top level call to `train`. However, we can still fit these models. Here is an example using the `mboost` function in the **mboost** package from the help page. ``` library(mboost) data("bodyfat", package = "TH.data") mod <- mboost(DEXfat ~ btree(age) + bols(waistcirc) + bbs(hipcirc), data = bodyfat) mod ``` ``` ## ## Model-based Boosting ## ## Call: ## mboost(formula = DEXfat ~ btree(age) + bols(waistcirc) + bbs(hipcirc), data = bodyfat) ## ## ## Squared Error (Regression) ## ## Loss function: (y - f)^2 ## ## ## Number of boosting iterations: mstop = 100 ## Step size: 0.1 ## Offset: 30.78282 ## Number of baselearners: 3 ``` We can create a custom model that mimics this code so that we can obtain resampling estimates for this specific model: ``` modelInfo <- list(label = "Model-based Gradient Boosting", library = "mboost", type = "Regression", parameters = data.frame(parameter = "parameter", class = "character", label = "parameter"), grid = function(x, y, len = NULL, search = "grid") data.frame(parameter = "none"), loop = NULL, fit = function(x, y, wts, param, lev, last, classProbs, ...) { ## mboost requires a data frame with predictors and response dat <- if(is.data.frame(x)) x else as.data.frame(x) dat$DEXfat <- y mod <- mboost( DEXfat ~ btree(age) + bols(waistcirc) + bbs(hipcirc), data = dat ) }, predict = function(modelFit, newdata, submodels = NULL) { if(!is.data.frame(newdata)) newdata <- as.data.frame(newdata) ## By default a matrix is returned; we convert it to a vector predict(modelFit, newdata)[,1] }, prob = NULL, predictors = function(x, ...) { unique(as.vector(variable.names(x))) }, tags = c("Ensemble Model", "Boosting", "Implicit Feature Selection"), levels = NULL, sort = function(x) x) ## Just use the basic formula method so that these predictors ## are passed 'as-is' into the model fitting and prediction ## functions. 
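Stripped of the `train` machinery, the two\-step idea looks something like the following sketch. The objects `X`, `y`, and `X_new` are placeholders and the number of PLS components is chosen arbitrarily:

```
library(pls)
library(randomForest)

## Placeholders: 'X' is a matrix of correlated numeric predictors, 'y' is a
## numeric outcome, and 'X_new' holds new samples to predict
dat <- as.data.frame(X)
dat$y <- y

## Step 1: supervised feature extraction via PLS (five components, arbitrarily)
pls_fit <- plsr(y ~ ., data = dat, ncomp = 5)
train_scores <- predict(pls_fit, as.data.frame(X), type = "scores")
colnames(train_scores) <- paste0("score", 1:ncol(train_scores))

## Step 2: model the outcome using the PLS scores as predictors
rf_fit <- randomForest(train_scores, y)

## New samples must be converted to scores with the *same* PLS fit before
## the forest can predict them
new_scores <- predict(pls_fit, as.data.frame(X_new), type = "scores")
colnames(new_scores) <- colnames(train_scores)
predict(rf_fit, new_scores)
```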
set.seed(307) mboost_resamp <- train(DEXfat ~ age + waistcirc + hipcirc, data = bodyfat, method = modelInfo, trControl = trainControl(method = "repeatedcv", repeats = 5)) mboost_resamp ``` ``` ## Model-based Gradient Boosting ## ## 71 samples ## 3 predictor ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 5 times) ## Summary of sample sizes: 65, 64, 63, 63, 65, 63, ... ## Resampling results: ## ## RMSE Rsquared MAE ## 4.031102 0.9011156 3.172689 ``` 13\.7 Illustrative Example 4: PLS Feature Extraction Pre\-Processing -------------------------------------------------------------------- PCA is a common tool for feature extraction prior to modeling but is *unsupervised*. Partial Least Squares (PLS) is essentially a supervised version of PCA. For some data sets, there may be some benefit to using PLS to generate new features from the original data (the PLS scores) then use those as an input into a different predictive model. PLS requires parameter tuning. In the example below, we use PLS on a data set with highly correlated predictors then use the PLS scores in a random forest model. The “trick” here is to save the PLS loadings along with the random forest model fit so that the loadings can be used on future samples for prediction. Also, the PLS and random forest models are *jointly* tuned instead of an initial modeling process that finalizes the PLS model, then builds the random forest model separately. In this was we optimize both at once. Another important point is that the resampling results reflect the variability in the random forest *and* PLS models. If we did PLS up\-front then resampled the random forest model, we would under\-estimate the noise in the modeling process. The tecator spectroscopy data are used: ``` data(tecator) set.seed(930) colnames(absorp) <- paste("x", 1:ncol(absorp)) ## We will model the protein content data trainMeats <- createDataPartition(endpoints[,3], p = 3/4) absorpTrain <- absorp[trainMeats[[1]], ] proteinTrain <- endpoints[trainMeats[[1]], 3] absorpTest <- absorp[-trainMeats[[1]], ] proteinTest <- endpoints[-trainMeats[[1]], 3] ``` Here is the model code: ``` pls_rf <- list(label = "PLS-RF", library = c("pls", "randomForest"), type = "Regression", ## Tune over both parameters at the same time parameters = data.frame(parameter = c('ncomp', 'mtry'), class = c("numeric", 'numeric'), label = c('#Components', '#Randomly Selected Predictors')), grid = function(x, y, len = NULL, search = "grid") { if(search == "grid") { grid <- expand.grid(ncomp = seq(1, min(ncol(x) - 1, len), by = 1), mtry = 1:len) } else { grid <- expand.grid(ncomp = sample(1:ncol(x), size = len), mtry = sample(1:ncol(x), size = len)) } ## We can't have mtry > ncomp grid <- subset(grid, mtry <= ncomp) }, loop = NULL, fit = function(x, y, wts, param, lev, last, classProbs, ...) { ## First fit the pls model, generate the training set scores, ## then attach what is needed to the random forest object to ## be used later ## plsr only has a formula interface so create one data frame dat <- x dat$y <- y pre <- plsr(y~ ., data = dat, ncomp = param$ncomp) scores <- predict(pre, x, type = "scores") colnames(scores) <- paste("score", 1:param$ncomp, sep = "") mod <- randomForest(scores, y, mtry = param$mtry, ...) 
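                 ## Keep the PLS projection (loadings) with the random forest
                 ## fit: the 'predict' module below only receives this object,
                 ## so the projection must travel with it to convert new
                 ## samples into scores.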
                 mod$projection <- pre$projection
                 mod
               },
               predict = function(modelFit, newdata, submodels = NULL) {
                 ## Now apply the same scaling to the new samples
                 scores <- as.matrix(newdata) %*% modelFit$projection
                 colnames(scores) <- paste("score", 1:ncol(scores), sep = "")
                 scores <- as.data.frame(scores)
                 ## Predict the random forest model
                 predict(modelFit, scores)
               },
               prob = NULL,
               varImp = NULL,
               predictors = function(x, ...) rownames(x$projection),
               levels = function(x) x$obsLevels,
               sort = function(x) x[order(x[,1]),])
```

We fit the models and look at the resampling results for the joint model:

```
meatCtrl <- trainControl(method = "repeatedcv", repeats = 5)

## These will take a while for these data
set.seed(184)
plsrf <- train(x = as.data.frame(absorpTrain), y = proteinTrain,
               method = pls_rf,
               preProc = c("center", "scale"),
               tuneLength = 10,
               ntree = 1000,
               trControl = meatCtrl)
ggplot(plsrf, plotType = "level")
```

```
## How does random forest do on its own?
set.seed(184)
rfOnly <- train(absorpTrain, proteinTrain,
                method = "rf",
                tuneLength = 10,
                ntree = 1000,
                trControl = meatCtrl)
getTrainPerf(rfOnly)
```

```
##   TrainRMSE TrainRsquared TrainMAE method
## 1  2.167941      0.516604 1.714846     rf
```

```
## How does PLS do on its own?
set.seed(184)
plsOnly <- train(absorpTrain, proteinTrain,
                 method = "pls",
                 tuneLength = 20,
                 preProc = c("center", "scale"),
                 trControl = meatCtrl)
getTrainPerf(plsOnly)
```

```
##   TrainRMSE TrainRsquared  TrainMAE method
## 1 0.6980342     0.9541472 0.5446974    pls
```

The test set results indicate that these data favor the linear model more than anything else:

```
postResample(predict(plsrf, absorpTest), proteinTest)
```

```
##      RMSE  Rsquared       MAE 
## 1.0964463 0.8840342 0.8509050
```

```
postResample(predict(rfOnly, absorpTest), proteinTest)
```

```
##      RMSE  Rsquared       MAE 
## 2.2414327 0.4566869 1.8422873
```

```
postResample(predict(plsOnly, absorpTest), proteinTest)
```

```
##      RMSE  Rsquared       MAE 
## 0.5587882 0.9692432 0.4373753
```

13\.8 Illustrative Example 5: Optimizing probability thresholds for class imbalances
------------------------------------------------------------------------------------

This description was originally posted on [this blog.](http://appliedpredictivemodeling.com/blog/) One of the toughest problems in predictive modeling occurs when the classes have a severe imbalance. In [our book](http://appliedpredictivemodeling.com/), we spend [an entire chapter](http://rd.springer.com/chapter/10.1007/978-1-4614-6849-3_16) on this subject. One consequence of the imbalance is that performance is generally very biased against the class with the smallest frequencies. For example, if the data have a majority of samples belonging to the first class and very few in the second class, most predictive models will maximize accuracy by predicting everything to be the first class. As a result, there’s usually great sensitivity but poor specificity.

As a demonstration, we will use a simulation system [described here](http://appliedpredictivemodeling.com/blog/2013/4/11/a-classification-simulation-system). By default it has about a 50\-50 class frequency, but we can change this by altering the function argument called `intercept`:

```
library(caret)

set.seed(442)
trainingSet <- twoClassSim(n = 500, intercept = -16)
testingSet  <- twoClassSim(n = 500, intercept = -16)

## Class frequencies
table(trainingSet$Class)
```

```
## 
## Class1 Class2 
##    450     50
```

There is almost a 9:1 imbalance in these data. Let’s use a standard random forest model with these data using the default value of `mtry`.
We’ll also use repeated 10\-fold cross validation to get a sense of performance:

```
set.seed(949)
mod0 <- train(Class ~ ., data = trainingSet,
              method = "rf",
              metric = "ROC",
              tuneGrid = data.frame(mtry = 3),
              ntree = 1000,
              trControl = trainControl(method = "repeatedcv",
                                       repeats = 5,
                                       classProbs = TRUE,
                                       summaryFunction = twoClassSummary))
getTrainPerf(mod0)
```

```
##    TrainROC TrainSens TrainSpec method
## 1 0.9602222 0.9977778     0.324     rf
```

```
## Get the ROC curve
roc0 <- roc(testingSet$Class,
            predict(mod0, testingSet, type = "prob")[,1],
            levels = rev(levels(testingSet$Class)))
roc0
```

```
## 
## Call:
## roc.default(response = testingSet$Class, predictor = predict(mod0, testingSet, type = "prob")[, 1], levels = rev(levels(testingSet$Class)))
## 
## Data: predict(mod0, testingSet, type = "prob")[, 1] in 34 controls (testingSet$Class Class2) < 466 cases (testingSet$Class Class1).
## Area under the curve: 0.9301
```

```
## Now plot
plot(roc0, print.thres = c(.5), type = "S",
     print.thres.pattern = "%.3f (Spec = %.2f, Sens = %.2f)",
     print.thres.cex = .8,
     legacy.axes = TRUE)
```

The area under the ROC curve is very high, indicating that the model has very good predictive power for these data. The plot shows the default probability cut off value of 50%. The sensitivity and specificity values associated with this point indicate that performance is not that good when an actual call needs to be made on a sample.

One of the most common ways to deal with this is to determine an alternate probability cut off using the ROC curve. But to do this well, another set of data (not the test set) is needed to set the cut off, and the test set is then used to validate it. Since we don’t have a lot of data, this is difficult: we would be spending some of our data just to derive a single cut off value.

Alternatively, the model can be tuned, using resampling, to determine any model tuning parameters as well as an appropriate cut off for the probabilities. Suppose the model has one tuning parameter and we want to look at four candidate values for tuning. Suppose we also want to tune the probability cut off over 20 different thresholds. Now we have to look at 20 × 4 = 80 different models (and that is for each resample).

One other feature that has been opened up is the ability to use sequential parameters: these are tuning parameters that don’t require a completely new model fit to produce predictions. In this case, we can fit one random forest model, get its predicted class probabilities, and evaluate the candidate probability cutoffs using the same hold\-out samples. Here is what the model code looks like:

```
## Get the model code for the original random forest method:
thresh_code <- getModelInfo("rf", regex = FALSE)[[1]]
thresh_code$type <- c("Classification")
## Add the threshold as another tuning parameter
thresh_code$parameters <- data.frame(parameter = c("mtry", "threshold"),
                                     class = c("numeric", "numeric"),
                                     label = c("#Randomly Selected Predictors",
                                               "Probability Cutoff"))
## The default tuning grid code:
thresh_code$grid <- function(x, y, len = NULL, search = "grid") {
  p <- ncol(x)
  if(search == "grid") {
    grid <- expand.grid(mtry = floor(sqrt(p)),
                        threshold = seq(.01, .99, length = len))
  } else {
    grid <- expand.grid(mtry = sample(1:p, size = len),
                        threshold = runif(len, min = 0, max = 1))
  }
  grid
}

## Here we fit a single random forest model (with a fixed mtry)
## and loop over the threshold values to get predictions from the same
## randomForest model.
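## The 'loop' module below keeps, for each value of mtry, only the row with
## the largest threshold (that is the model that actually gets fit); the
## remaining candidate thresholds go into 'submodels' and are scored from
## the same forest's class probabilities.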
thresh_code$loop = function(grid) { library(plyr) loop <- ddply(grid, c("mtry"), function(x) c(threshold = max(x$threshold))) submodels <- vector(mode = "list", length = nrow(loop)) for(i in seq(along = loop$threshold)) { index <- which(grid$mtry == loop$mtry[i]) cuts <- grid[index, "threshold"] submodels[[i]] <- data.frame(threshold = cuts[cuts != loop$threshold[i]]) } list(loop = loop, submodels = submodels) } ## Fit the model independent of the threshold parameter thresh_code$fit = function(x, y, wts, param, lev, last, classProbs, ...) { if(length(levels(y)) != 2) stop("This works only for 2-class problems") randomForest(x, y, mtry = param$mtry, ...) } ## Now get a probability prediction and use different thresholds to ## get the predicted class thresh_code$predict = function(modelFit, newdata, submodels = NULL) { class1Prob <- predict(modelFit, newdata, type = "prob")[, modelFit$obsLevels[1]] ## Raise the threshold for class #1 and a higher level of ## evidence is needed to call it class 1 so it should ## decrease sensitivity and increase specificity out <- ifelse(class1Prob >= modelFit$tuneValue$threshold, modelFit$obsLevels[1], modelFit$obsLevels[2]) if(!is.null(submodels)) { tmp2 <- out out <- vector(mode = "list", length = length(submodels$threshold)) out[[1]] <- tmp2 for(i in seq(along = submodels$threshold)) { out[[i+1]] <- ifelse(class1Prob >= submodels$threshold[[i]], modelFit$obsLevels[1], modelFit$obsLevels[2]) } } out } ## The probabilities are always the same but we have to create ## mulitple versions of the probs to evaluate the data across ## thresholds thresh_code$prob = function(modelFit, newdata, submodels = NULL) { out <- as.data.frame(predict(modelFit, newdata, type = "prob")) if(!is.null(submodels)) { probs <- out out <- vector(mode = "list", length = length(submodels$threshold)+1) out <- lapply(out, function(x) probs) } out } ``` Basically, we define a list of model components (such as the fitting code, the prediction code, etc.) and feed this into the train function instead of using a pre\-listed model string (such as `method = "rf"`). For this model and these data, there was an 8% increase in training time to evaluate 20 additional values of the probability cut off. How do we optimize this model? Normally we might look at the area under the ROC curve as a metric to choose our final values. In this case the ROC curve is independent of the probability threshold so we have to use something else. A common technique to evaluate a candidate threshold is see how close it is to the perfect model where sensitivity and specificity are one. Our code will use the distance between the current model’s performance and the best possible performance and then have train minimize this distance when choosing it’s parameters. Here is the code that we use to calculate this: ``` fourStats <- function (data, lev = levels(data$obs), model = NULL) { ## This code will get use the area under the ROC curve and the ## sensitivity and specificity values using the current candidate ## value of the probability threshold. out <- c(twoClassSummary(data, lev = levels(data$obs), model = NULL)) ## The best possible model has sensitivity of 1 and specificity of 1. ## How far are we from that value? 
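  ## Measure that as the Euclidean distance between the current
  ## (Spec, Sens) pair and the ideal point (1, 1); 'train' is told to
  ## minimize this value below via metric = "Dist" and maximize = FALSE.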
coords <- matrix(c(1, 1, out["Spec"], out["Sens"]), ncol = 2, byrow = TRUE) colnames(coords) <- c("Spec", "Sens") rownames(coords) <- c("Best", "Current") c(out, Dist = dist(coords)[1]) } set.seed(949) mod1 <- train(Class ~ ., data = trainingSet, method = thresh_code, ## Minimize the distance to the perfect model metric = "Dist", maximize = FALSE, tuneLength = 20, ntree = 1000, trControl = trainControl(method = "repeatedcv", repeats = 5, classProbs = TRUE, summaryFunction = fourStats)) mod1 ``` ``` ## Random Forest ## ## 500 samples ## 15 predictor ## 2 classes: 'Class1', 'Class2' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 5 times) ## Summary of sample sizes: 450, 450, 450, 450, 450, 450, ... ## Resampling results across tuning parameters: ## ## threshold ROC Sens Spec Dist ## 0.01000000 0.9602222 1.0000000 0.000 1.0000000 ## 0.06157895 0.9602222 1.0000000 0.000 1.0000000 ## 0.11315789 0.9602222 1.0000000 0.000 1.0000000 ## 0.16473684 0.9602222 1.0000000 0.000 1.0000000 ## 0.21631579 0.9602222 1.0000000 0.000 1.0000000 ## 0.26789474 0.9602222 1.0000000 0.000 1.0000000 ## 0.31947368 0.9602222 1.0000000 0.020 0.9800000 ## 0.37105263 0.9602222 1.0000000 0.064 0.9360000 ## 0.42263158 0.9602222 0.9991111 0.132 0.8680329 ## 0.47421053 0.9602222 0.9991111 0.240 0.7600976 ## 0.52578947 0.9602222 0.9973333 0.420 0.5802431 ## 0.57736842 0.9602222 0.9880000 0.552 0.4494847 ## 0.62894737 0.9602222 0.9742222 0.612 0.3941985 ## 0.68052632 0.9602222 0.9644444 0.668 0.3436329 ## 0.73210526 0.9602222 0.9524444 0.700 0.3184533 ## 0.78368421 0.9602222 0.9346667 0.736 0.2915366 ## 0.83526316 0.9602222 0.8995556 0.828 0.2278799 ## 0.88684211 0.9602222 0.8337778 0.952 0.1927598 ## 0.93842105 0.9602222 0.6817778 0.996 0.3192700 ## 0.99000000 0.9602222 0.1844444 1.000 0.8155556 ## ## Tuning parameter 'mtry' was held constant at a value of 3 ## Dist was used to select the optimal model using the smallest value. ## The final values used for the model were mtry = 3 and threshold ## = 0.8868421. ``` Using `ggplot(mod1)` will show the performance profile. Instead here is a plot of the sensitivity, specificity, and distance to the perfect model: ``` library(reshape2) metrics <- mod1$results[, c(2, 4:6)] metrics <- melt(metrics, id.vars = "threshold", variable.name = "Resampled", value.name = "Data") ggplot(metrics, aes(x = threshold, y = Data, color = Resampled)) + geom_line() + ylab("") + xlab("Probability Cutoff") + theme(legend.position = "top") ``` You can see that as we increase the probability cut off for the first class it takes more and more evidence for a sample to be predicted as the first class. As a result the sensitivity goes down when the threshold becomes very large. The upside is that we can increase specificity in the same way. The blue curve shows the distance to the perfect model. The value of 0\.89 was found to be optimal. Now we can use the test set ROC curve to validate the cut off we chose by resampling. Here the cut off closest to the perfect model is 0\.89\. We were able to find a good probability cut off value without setting aside another set of data for tuning the cut off. One great thing about this code is that it will automatically apply the optimized probability threshold when predicting new samples. 
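As a quick check of that last point, class predictions for new samples already use the tuned cutoff, so a confusion matrix on the simulated test set can be computed directly. A small illustrative snippet (the object name `test_pred` is arbitrary):

```
## Predictions from mod1 apply the selected threshold (about 0.89), not 0.50
test_pred <- predict(mod1, newdata = testingSet)
confusionMatrix(test_pred, testingSet$Class)
```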
13\.9 Illustrative Example 6: Offsets in Generalized Linear Models
------------------------------------------------------------------

Like the `mboost` example [above](using-your-own-model-in-train.html#Illustration3), a custom method is required since a formula element is used to set the offset variable. Here is an example from `?glm`:

```
## (Intercept)       Prewt   TreatCont     TreatFT 
##  49.7711090  -0.5655388  -4.0970655   4.5630627
```

We can write a small custom method to duplicate this model. Two details of note:

* If we have factors in the data and do not want `train` to convert them to dummy variables, the formula method for `train` should be avoided. We can let `glm` do that inside the custom method. This would help `glm` understand that the dummy variable columns came from the same original factor. This will avoid errors in other functions used with `glm` (e.g. `anova`).
* The slot for `x` should include any variables that are on the right\-hand side of the model formula, including the offset column.

Here is the custom model:

```
offset_mod <- getModelInfo("glm", regex = FALSE)[[1]]
offset_mod$fit <- function(x, y, wts, param, lev, last, classProbs, ...) {
  dat <- if(is.data.frame(x)) x else as.data.frame(x)
  dat$Postwt <- y
  glm(Postwt ~ Prewt + Treat + offset(Prewt), family = gaussian, data = dat)
}

mod <- train(x = anorexia[, 1:2], y = anorexia$Postwt,
             method = offset_mod)
coef(mod$finalModel)
```

```
## (Intercept)       Prewt   TreatCont     TreatFT 
##  49.7711090  -0.5655388  -4.0970655   4.5630627
```
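For reference, the coefficients shown at the top of this section come from the offset example in `?glm`. A minimal sketch of that call, assuming the `anorexia` data from the **MASS** package (the object name `anorex_glm` is just illustrative):

```
data(anorexia, package = "MASS")

## Prewt enters both as a predictor and as an offset, as in ?glm
anorex_glm <- glm(Postwt ~ Prewt + Treat + offset(Prewt),
                  family = gaussian, data = anorexia)
coef(anorex_glm)
```

The custom method above reproduces these coefficients while also giving access to `train`’s resampling machinery.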
Here, we will sort by the cost value. Smaller values of `C` produce smoother class boundaries than larger values: ``` svmSort <- function(x) x[order(x$C),] lpSVM$sort <- svmSort ``` ### 13\.4\.1 The `levels` Element `train` ensures that classification models always predict factors with the same levels. To do this at prediction time, the package needs to know the levels from the model object (specifically, the `finalModels` slot of the `train` object). For model functions using `S3` methods, `train` automatically attaches a character vector called `obsLevels` to the object and the package code uses this value. However, this strategy does not work for `S4` methods. In these cases, the package will use the code found in the `levels` slot of the model list. For example, the `ksvm` function uses `S4` methods but, unlike most model functions, has a built–in function called `lev` that will extract the class levels (if any). In this case, our levels code would be: ``` lpSVM$levels <- function(x) kernlab::lev(x) ``` In most other cases, the levels will beed to be extracted from data contained in the fitted model object. As another example, objects created using the `ctree` function in the `party` package would need to use: ``` function(x) levels(x@data@get("response")[,1]) ``` Again, this slot is only used for classification models using `S4` methods. We should now be ready to fit our model. ``` library(mlbench) data(Sonar) library(caret) set.seed(998) inTraining <- createDataPartition(Sonar$Class, p = .75, list = FALSE) training <- Sonar[ inTraining,] testing <- Sonar[-inTraining,] fitControl <- trainControl(method = "repeatedcv", ## 10-fold CV... number = 10, ## repeated ten times repeats = 10) set.seed(825) Laplacian <- train(Class ~ ., data = training, method = lpSVM, preProc = c("center", "scale"), tuneLength = 8, trControl = fitControl) Laplacian ``` ``` ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## Pre-processing: centered (60), scaled (60) ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## C Accuracy Kappa ## 0.25 0.7344118 0.4506090 ## 0.50 0.7576716 0.5056691 ## 1.00 0.7820245 0.5617124 ## 2.00 0.8146348 0.6270944 ## 4.00 0.8357745 0.6691484 ## 8.00 0.8508824 0.6985281 ## 16.00 0.8537108 0.7044561 ## 32.00 0.8537108 0.7044561 ## ## Tuning parameter 'sigma' was held constant at a value of 0.01181293 ## Accuracy was used to select the optimal model using the largest value. ## The final values used for the model were C = 16 and sigma = 0.01181293. ``` A plot of the data shows that the model doesn’t change when the cost value is above 16\. ``` ggplot(Laplacian) + scale_x_log10() ``` ### 13\.4\.1 The `levels` Element `train` ensures that classification models always predict factors with the same levels. To do this at prediction time, the package needs to know the levels from the model object (specifically, the `finalModels` slot of the `train` object). For model functions using `S3` methods, `train` automatically attaches a character vector called `obsLevels` to the object and the package code uses this value. However, this strategy does not work for `S4` methods. In these cases, the package will use the code found in the `levels` slot of the model list. For example, the `ksvm` function uses `S4` methods but, unlike most model functions, has a built–in function called `lev` that will extract the class levels (if any). 
In this case, our levels code would be: ``` lpSVM$levels <- function(x) kernlab::lev(x) ``` In most other cases, the levels will beed to be extracted from data contained in the fitted model object. As another example, objects created using the `ctree` function in the `party` package would need to use: ``` function(x) levels(x@data@get("response")[,1]) ``` Again, this slot is only used for classification models using `S4` methods. We should now be ready to fit our model. ``` library(mlbench) data(Sonar) library(caret) set.seed(998) inTraining <- createDataPartition(Sonar$Class, p = .75, list = FALSE) training <- Sonar[ inTraining,] testing <- Sonar[-inTraining,] fitControl <- trainControl(method = "repeatedcv", ## 10-fold CV... number = 10, ## repeated ten times repeats = 10) set.seed(825) Laplacian <- train(Class ~ ., data = training, method = lpSVM, preProc = c("center", "scale"), tuneLength = 8, trControl = fitControl) Laplacian ``` ``` ## 157 samples ## 60 predictor ## 2 classes: 'M', 'R' ## ## Pre-processing: centered (60), scaled (60) ## Resampling: Cross-Validated (10 fold, repeated 10 times) ## Summary of sample sizes: 141, 142, 141, 142, 141, 142, ... ## Resampling results across tuning parameters: ## ## C Accuracy Kappa ## 0.25 0.7344118 0.4506090 ## 0.50 0.7576716 0.5056691 ## 1.00 0.7820245 0.5617124 ## 2.00 0.8146348 0.6270944 ## 4.00 0.8357745 0.6691484 ## 8.00 0.8508824 0.6985281 ## 16.00 0.8537108 0.7044561 ## 32.00 0.8537108 0.7044561 ## ## Tuning parameter 'sigma' was held constant at a value of 0.01181293 ## Accuracy was used to select the optimal model using the largest value. ## The final values used for the model were C = 16 and sigma = 0.01181293. ``` A plot of the data shows that the model doesn’t change when the cost value is above 16\. ``` ggplot(Laplacian) + scale_x_log10() ``` 13\.5 Illustrative Example 2: Something More Complicated \- `LogitBoost` ------------------------------------------------------------------------ \#\#\#The loop Element This function can be used to create custom loops for models to tune over. In most cases, the function can just return the existing tuning grid. For example, a `LogitBoost` model can be trained over the number of boosting iterations. In the [`caTools`](http://cran.r-project.org/web/packages/caTools/index.html) package, the `LogitBoost` function can be used to fit this model. For example: ``` mod <- LogitBoost(as.matrix(x), y, nIter = 51) ``` If we were to tune the model evaluating models where the number of iterations was 11, 21, 31, 41 and 51, the grid could be ``` lbGrid <- data.frame(nIter = seq(11, 51, by = 10)) ``` During resampling, `train` could loop over all five rows in `lbGrid` and fit five models. However, the `predict.LogitBoost` function has an argument called `nIter` that can produce, in this case, predictions from `mod` for all five models. Instead of `train` fitting five models, we could fit a single model with `nIter` \= class\=“hl num”\>51`and derive predictions for all five models using only`mod\`. The terminology used here is that `nIter` is a *sequential* tuning parameter (and the other parameters would be considered *fixed*). The `loop` argument for models is used to produce two objects: * `loop`: this is the actual loop that is used by `train`. * `submodels` is a *list* that has as many elements as there are rows in `loop`. The list has all the “extra” parameter settings that can be derived for each model. 
Going back to the `LogitBoost` example, we could have: ``` loop <- data.frame(.nIter = 51) loop ``` ``` ## .nIter ## 1 51 ``` ``` submodels <- list(data.frame(nIter = seq(11, 41, by = 10))) submodels ``` ``` ## [[1]] ## nIter ## 1 11 ## 2 21 ## 3 31 ## 4 41 ``` For this case, `train` first fits the `nIter = 51` model. When the model is predicted, that code has a `for` loop that iterates over the elements of `submodel[[1]]` to get the predictions for the other 4 models. In the end, predictions for all five models (for `nIter = seq(11, 51, by = 10)`) with a single model fit. There are other models built\-in to [`caret`](http://cran.r-project.org/web/packages/caret/index.html) that are used this way. There are a number of models that have multiple sequential tuning parameters. If the `loop` argument is left `NULL` the results of `tuneGrid` are used as the simple loop and is recommended for most situations. Note that the machinery that is used to “derive” the extra predictions is up to the user to create, typically in the `predict` and `prob` elements of the custom model object. For the `LogitBoost` model, some simple code to create these objects would be: ``` fullGrid <- data.frame(nIter = seq(11, 51, by = 10)) ## Get the largest value of nIter to fit the "full" model loop <- fullGrid[which.max(fullGrid$nIter),,drop = FALSE] loop ``` ``` ## nIter ## 5 51 ``` ``` submodels <- fullGrid[-which.max(fullGrid$nIter),,drop = FALSE] ## This needs to be encased in a list in case there are more ## than one tuning parameter submodels <- list(submodels) submodels ``` ``` ## [[1]] ## nIter ## 1 11 ## 2 21 ## 3 31 ## 4 41 ``` For the `LogitBoost` custom model object, we could use this code in the `predict` slot: ``` lbPred <- function(modelFit, newdata, preProc = NULL, submodels = NULL) { ## This model was fit with the maximum value of nIter out <- caTools::predict.LogitBoost(modelFit, newdata, type="class") ## In this case, 'submodels' is a data frame with the other values of ## nIter. We loop over these to get the other predictions. if(!is.null(submodels)) { ## Save _all_ the predictions in a list tmp <- out out <- vector(mode = "list", length = nrow(submodels) + 1) out[[1]] <- tmp for(j in seq(along = submodels$nIter)) { out[[j+1]] <- caTools::predict.LogitBoost( modelFit, newdata, nIter = submodels$nIter[j]) } } out } ``` A few more notes: * The code in the `fit` element does not have to change. * The `prob` slot works in the same way. The only difference is that the values saved in the outgoing lists are matrices or data frames of probabilities for each class. * After model training (i.e. predicting new samples), the value of `submodels` is set to `NULL` and the code produces a single set of predictions. * If the model had one sequential parameter and one fixed parameter, the `loop` data frame would have two columns (one for each parameter). If the model is tuned over more than one value of the fixed parameter, the `submodels` list would have more than one element. If `loop` had 10 rows, then `length(submodels)` would be `10` and `loop[i,]` would be linked to `submodels[[i]]`. * In this case, the prediction function was called by namespace too (i.e. `caTools::predict.LogitBoost`). This may not seem necessary but what functions are available can vary depending on what parallel processing technology is being used. For example, the nature of forking used by `doMC` and `doParallel` tends to have easier access to functions while PSOCK methods in `doParallel` do not. 
It may be easier to take the safe path of using the namespace operator wherever possible to avoid errors that are difficult to track down. Here is a slimmed down version of the logitBoost code already in the package: ``` lbFuncs <- list(library = "caTools", loop = function(grid) { loop <- grid[which.max(grid$nIter),,drop = FALSE] submodels <- grid[-which.max(grid$nIter),,drop = FALSE] submodels <- list(submodels) list(loop = loop, submodels = submodels) }, type = "Classification", parameters = data.frame(parameter = 'nIter', class = 'numeric', label = '# Boosting Iterations'), grid = function(x, y, len = NULL, search = "grid") { out <- if(search == "grid") data.frame(nIter = 1 + ((1:len)*10)) else data.frame(nIter = sample(1:500, size = len)) out }, fit = function(x, y, wts, param, lev, last, weights, classProbs, ...) { caTools::LogitBoost(as.matrix(x), y, nIter = param$nIter) }, predict = function(modelFit, newdata, preProc = NULL, submodels = NULL) { out <- caTools::predict.LogitBoost(modelFit, newdata, type="class") if(!is.null(submodels)) { tmp <- out out <- vector(mode = "list", length = nrow(submodels) + 1) out[[1]] <- tmp for(j in seq(along = submodels$nIter)) { out[[j+1]] <- caTools::predict.LogitBoost( modelFit, newdata, nIter = submodels$nIter[j] ) } } out }, prob = NULL, sort = function(x) x) ``` Should you care about this? Let’s tune the model over the same data set used for the SVM model above and see how long it takes: ``` set.seed(825) lb1 <- system.time(train(Class ~ ., data = training, method = lbFuncs, tuneLength = 3, trControl = fitControl)) lb1 ``` ``` ## user system elapsed ## 7.337 5.560 1.397 ``` ``` ## Now get rid of the submodel parts lbFuncs2 <- lbFuncs lbFuncs2$predict <- function(modelFit, newdata, preProc = NULL, submodels = NULL) caTools::predict.LogitBoost(modelFit, newdata, type = "class") lbFuncs2$loop <- NULL set.seed(825) lb2 <- system.time(train(Class ~ ., data = training, method = lbFuncs2, tuneLength = 3, trControl = fitControl)) lb2 ``` ``` ## user system elapsed ## 14.767 12.421 2.193 ``` On a data set with 157 instances and 60 predictors and a model that is tuned over only 3 parameter values, there is a 1\.57\-fold speed\-up. If the model were more computationally taxing or the data set were larger or the number of tune parameters that were evaluated was larger, the speed\-up would increase. Here is a plot of the speed\-up for a few more values of `tuneLength`: ``` bigGrid <- data.frame(nIter = seq(1, 151, by = 10)) results <- bigGrid results$SpeedUp <- NA for(i in 2:nrow(bigGrid)){ rm(lb1, lb2) set.seed(825) lb1 <- system.time(train(Class ~ ., data = training, method = lbFuncs, tuneGrid = bigGrid[1:i,,drop = FALSE], trControl = fitControl)) set.seed(825) lb2 <- system.time(train(Class ~ ., data = training, method = lbFuncs2, tuneGrid = bigGrid[1:i,,drop = FALSE], trControl = fitControl)) results$SpeedUp[i] <- lb2[3]/lb1[3] } ggplot(results, aes(x = nIter, y = SpeedUp)) + geom_point() + geom_smooth(method = "lm") + xlab("LogitBoost Iterations") + ylab("Speed-Up") ``` The speed\-ups show a significant decrease in training time using this method. **Note:** The previous examples were run using parallel processing. The remainder in this chapter are run sequentially and, for simplicity, the namespace operator is not used in the custom code modules below. 
13\.6 Illustrative Example 3: Nonstandard Formulas -------------------------------------------------- (Note: the previous third illustration (“SMOTE During Resampling”) is no longer needed due to the inclusion of subsampling via `train`.) One limitation of `train` is that it requires the use of basic model formulas. There are several functions that use special formulas or operators on predictors that won’t (and perhaps should not) work in the top level call to `train`. However, we can still fit these models. Here is an example using the `mboost` function in the **mboost** package from the help page. ``` library(mboost) data("bodyfat", package = "TH.data") mod <- mboost(DEXfat ~ btree(age) + bols(waistcirc) + bbs(hipcirc), data = bodyfat) mod ``` ``` ## ## Model-based Boosting ## ## Call: ## mboost(formula = DEXfat ~ btree(age) + bols(waistcirc) + bbs(hipcirc), data = bodyfat) ## ## ## Squared Error (Regression) ## ## Loss function: (y - f)^2 ## ## ## Number of boosting iterations: mstop = 100 ## Step size: 0.1 ## Offset: 30.78282 ## Number of baselearners: 3 ``` We can create a custom model that mimics this code so that we can obtain resampling estimates for this specific model: ``` modelInfo <- list(label = "Model-based Gradient Boosting", library = "mboost", type = "Regression", parameters = data.frame(parameter = "parameter", class = "character", label = "parameter"), grid = function(x, y, len = NULL, search = "grid") data.frame(parameter = "none"), loop = NULL, fit = function(x, y, wts, param, lev, last, classProbs, ...) { ## mboost requires a data frame with predictors and response dat <- if(is.data.frame(x)) x else as.data.frame(x) dat$DEXfat <- y mod <- mboost( DEXfat ~ btree(age) + bols(waistcirc) + bbs(hipcirc), data = dat ) }, predict = function(modelFit, newdata, submodels = NULL) { if(!is.data.frame(newdata)) newdata <- as.data.frame(newdata) ## By default a matrix is returned; we convert it to a vector predict(modelFit, newdata)[,1] }, prob = NULL, predictors = function(x, ...) { unique(as.vector(variable.names(x))) }, tags = c("Ensemble Model", "Boosting", "Implicit Feature Selection"), levels = NULL, sort = function(x) x) ## Just use the basic formula method so that these predictors ## are passed 'as-is' into the model fitting and prediction ## functions. set.seed(307) mboost_resamp <- train(DEXfat ~ age + waistcirc + hipcirc, data = bodyfat, method = modelInfo, trControl = trainControl(method = "repeatedcv", repeats = 5)) mboost_resamp ``` ``` ## Model-based Gradient Boosting ## ## 71 samples ## 3 predictor ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 5 times) ## Summary of sample sizes: 65, 64, 63, 63, 65, 63, ... ## Resampling results: ## ## RMSE Rsquared MAE ## 4.031102 0.9011156 3.172689 ``` 13\.7 Illustrative Example 4: PLS Feature Extraction Pre\-Processing -------------------------------------------------------------------- PCA is a common tool for feature extraction prior to modeling but is *unsupervised*. Partial Least Squares (PLS) is essentially a supervised version of PCA. For some data sets, there may be some benefit to using PLS to generate new features from the original data (the PLS scores) then use those as an input into a different predictive model. PLS requires parameter tuning. In the example below, we use PLS on a data set with highly correlated predictors then use the PLS scores in a random forest model. 
The “trick” here is to save the PLS loadings along with the random forest model fit so that the loadings can be used on future samples for prediction. Also, the PLS and random forest models are *jointly* tuned instead of an initial modeling process that finalizes the PLS model, then builds the random forest model separately. In this way we optimize both at once. Another important point is that the resampling results reflect the variability in the random forest *and* PLS models. If we did PLS up\-front then resampled the random forest model, we would under\-estimate the noise in the modeling process. The tecator spectroscopy data are used: ``` data(tecator) set.seed(930) colnames(absorp) <- paste("x", 1:ncol(absorp)) ## We will model the protein content data trainMeats <- createDataPartition(endpoints[,3], p = 3/4) absorpTrain <- absorp[trainMeats[[1]], ] proteinTrain <- endpoints[trainMeats[[1]], 3] absorpTest <- absorp[-trainMeats[[1]], ] proteinTest <- endpoints[-trainMeats[[1]], 3] ``` Here is the model code: ``` pls_rf <- list(label = "PLS-RF", library = c("pls", "randomForest"), type = "Regression", ## Tune over both parameters at the same time parameters = data.frame(parameter = c('ncomp', 'mtry'), class = c("numeric", 'numeric'), label = c('#Components', '#Randomly Selected Predictors')), grid = function(x, y, len = NULL, search = "grid") { if(search == "grid") { grid <- expand.grid(ncomp = seq(1, min(ncol(x) - 1, len), by = 1), mtry = 1:len) } else { grid <- expand.grid(ncomp = sample(1:ncol(x), size = len), mtry = sample(1:ncol(x), size = len)) } ## We can't have mtry > ncomp grid <- subset(grid, mtry <= ncomp) }, loop = NULL, fit = function(x, y, wts, param, lev, last, classProbs, ...) { ## First fit the pls model, generate the training set scores, ## then attach what is needed to the random forest object to ## be used later ## plsr only has a formula interface so create one data frame dat <- x dat$y <- y pre <- plsr(y~ ., data = dat, ncomp = param$ncomp) scores <- predict(pre, x, type = "scores") colnames(scores) <- paste("score", 1:param$ncomp, sep = "") mod <- randomForest(scores, y, mtry = param$mtry, ...) mod$projection <- pre$projection mod }, predict = function(modelFit, newdata, submodels = NULL) { ## Now apply the same scaling to the new samples scores <- as.matrix(newdata) %*% modelFit$projection colnames(scores) <- paste("score", 1:ncol(scores), sep = "") scores <- as.data.frame(scores) ## Predict the random forest model predict(modelFit, scores) }, prob = NULL, varImp = NULL, predictors = function(x, ...) rownames(x$projection), levels = function(x) x$obsLevels, sort = function(x) x[order(x[,1]),]) ``` We fit the models and look at the resampling results for the joint model: ``` meatCtrl <- trainControl(method = "repeatedcv", repeats = 5) ## These will take a while for these data set.seed(184) plsrf <- train(x = as.data.frame(absorpTrain), y = proteinTrain, method = pls_rf, preProc = c("center", "scale"), tuneLength = 10, ntree = 1000, trControl = meatCtrl) ggplot(plsrf, plotType = "level") ``` ``` ## How does random forest do on its own? set.seed(184) rfOnly <- train(absorpTrain, proteinTrain, method = "rf", tuneLength = 10, ntree = 1000, trControl = meatCtrl) getTrainPerf(rfOnly) ``` ``` ## TrainRMSE TrainRsquared TrainMAE method ## 1 2.167941 0.516604 1.714846 rf ``` ``` ## How does PLS do on its own? 
set.seed(184) plsOnly <- train(absorpTrain, proteinTrain, method = "pls", tuneLength = 20, preProc = c("center", "scale"), trControl = meatCtrl) getTrainPerf(plsOnly) ``` ``` ## TrainRMSE TrainRsquared TrainMAE method ## 1 0.6980342 0.9541472 0.5446974 pls ``` The test set results indicate that these data favor the linear model more than anything: ``` postResample(predict(plsrf, absorpTest), proteinTest) ``` ``` ## RMSE Rsquared MAE ## 1.0964463 0.8840342 0.8509050 ``` ``` postResample(predict(rfOnly, absorpTest), proteinTest) ``` ``` ## RMSE Rsquared MAE ## 2.2414327 0.4566869 1.8422873 ``` ``` postResample(predict(plsOnly, absorpTest), proteinTest) ``` ``` ## RMSE Rsquared MAE ## 0.5587882 0.9692432 0.4373753 ``` 13\.8 Illustrative Example 5: Optimizing probability thresholds for class imbalances ------------------------------------------------------------------------------------ This description was originally posted on [this blog.](http://appliedpredictivemodeling.com/blog/) One of the toughest problems in predictive modeling occurs when the classes have a severe imbalance. In [our book](http://appliedpredictivemodeling.com/), we spend [an entire chapter](http://rd.springer.com/chapter/10.1007/978-1-4614-6849-3_16) on this subject. One consequence of this is that the performance is generally very biased against the class with the smallest frequencies. For example, if the data have a majority of samples belonging to the first class and very few in the second class, most predictive models will maximize accuracy by predicting everything to be the first class. As a result there’s usually great sensitivity but poor specificity. As a demonstration, we will use a simulation system [described here](http://appliedpredictivemodeling.com/blog/2013/4/11/a-classification-simulation-system). By default it has about a 50\-50 class frequency but we can change this by altering the function argument called `intercept`: ``` library(caret) set.seed(442) trainingSet <- twoClassSim(n = 500, intercept = -16) testingSet <- twoClassSim(n = 500, intercept = -16) ## Class frequencies table(trainingSet$Class) ``` ``` ## ## Class1 Class2 ## 450 50 ``` There is almost a 9:1 imbalance in these data. Let’s use a standard random forest model with these data using the default value of `mtry`. We’ll also use repeated 10\-fold cross validation to get a sense of performance: ``` set.seed(949) mod0 <- train(Class ~ ., data = trainingSet, method = "rf", metric = "ROC", tuneGrid = data.frame(mtry = 3), ntree = 1000, trControl = trainControl(method = "repeatedcv", repeats = 5, classProbs = TRUE, summaryFunction = twoClassSummary)) getTrainPerf(mod0) ``` ``` ## TrainROC TrainSens TrainSpec method ## 1 0.9602222 0.9977778 0.324 rf ``` ``` ## Get the ROC curve roc0 <- roc(testingSet$Class, predict(mod0, testingSet, type = "prob")[,1], levels = rev(levels(testingSet$Class))) roc0 ``` ``` ## ## Call: ## roc.default(response = testingSet$Class, predictor = predict(mod0, testingSet, type = "prob")[, 1], levels = rev(levels(testingSet$Class))) ## ## Data: predict(mod0, testingSet, type = "prob")[, 1] in 34 controls (testingSet$Class Class2) < 466 cases (testingSet$Class Class1). ## Area under the curve: 0.9301 ``` ``` ## Now plot plot(roc0, print.thres = c(.5), type = "S", print.thres.pattern = "%.3f (Spec = %.2f, Sens = %.2f)", print.thres.cex = .8, legacy.axes = TRUE) ``` The area under the ROC curve is very high, indicating that the model has very good predictive power for these data. 
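To make the practical consequence of the imbalance concrete, here is a small sketch (reusing the `mod0` and `testingSet` objects above; not part of the original example) of the test set confusion matrix at the default 50% cutoff:

```
## Hard class predictions from predict() use the default 50% probability cutoff
confusionMatrix(data = predict(mod0, testingSet),
                reference = testingSet$Class)
```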
The plot shows the default probability cut off value of 50%. The sensitivity and specificity values associated with this point indicate that performance is not that good when an actual call needs to be made on a sample. One of the most common ways to deal with this is to determine an alternate probability cut off using the ROC curve. But to do this well, another set of data (not the test set) is needed to set the cut off and the test set is used to validate it. Since we don’t have a lot of data, this is difficult because we would be spending some of our data just to get a single cut off value. Alternatively, the model can be tuned, using resampling, to determine any model tuning parameters as well as an appropriate cut off for the probabilities. Suppose the model has one tuning parameter and we want to look at four candidate values for tuning. Suppose we also want to tune the probability cut off over 20 different thresholds. Now we have to look at 20×4\=80 different models (and that is for each resample). One other feature that has been opened up is the ability to use sequential parameters: these are tuning parameters that don’t require a completely new model fit to produce predictions. In this case, we can fit one random forest model, get its predicted class probabilities, and evaluate the candidate probability cutoffs using these same hold\-out samples. Here is what the model code looks like: ``` ## Get the model code for the original random forest method: thresh_code <- getModelInfo("rf", regex = FALSE)[[1]] thresh_code$type <- c("Classification") ## Add the threshold as another tuning parameter thresh_code$parameters <- data.frame(parameter = c("mtry", "threshold"), class = c("numeric", "numeric"), label = c("#Randomly Selected Predictors", "Probability Cutoff")) ## The default tuning grid code: thresh_code$grid <- function(x, y, len = NULL, search = "grid") { p <- ncol(x) if(search == "grid") { grid <- expand.grid(mtry = floor(sqrt(p)), threshold = seq(.01, .99, length = len)) } else { grid <- expand.grid(mtry = sample(1:p, size = len), threshold = runif(len, min = 0, max = 1)) } grid } ## Here we fit a single random forest model (with a fixed mtry) ## and loop over the threshold values to get predictions from the same ## randomForest model. thresh_code$loop = function(grid) { library(plyr) loop <- ddply(grid, c("mtry"), function(x) c(threshold = max(x$threshold))) submodels <- vector(mode = "list", length = nrow(loop)) for(i in seq(along = loop$threshold)) { index <- which(grid$mtry == loop$mtry[i]) cuts <- grid[index, "threshold"] submodels[[i]] <- data.frame(threshold = cuts[cuts != loop$threshold[i]]) } list(loop = loop, submodels = submodels) } ## Fit the model independent of the threshold parameter thresh_code$fit = function(x, y, wts, param, lev, last, classProbs, ...) { if(length(levels(y)) != 2) stop("This works only for 2-class problems") randomForest(x, y, mtry = param$mtry, ...) 
} ## Now get a probability prediction and use different thresholds to ## get the predicted class thresh_code$predict = function(modelFit, newdata, submodels = NULL) { class1Prob <- predict(modelFit, newdata, type = "prob")[, modelFit$obsLevels[1]] ## Raising the threshold for class #1 means a higher level of ## evidence is needed to call a sample class 1, so it should ## decrease sensitivity and increase specificity out <- ifelse(class1Prob >= modelFit$tuneValue$threshold, modelFit$obsLevels[1], modelFit$obsLevels[2]) if(!is.null(submodels)) { tmp2 <- out out <- vector(mode = "list", length = length(submodels$threshold) + 1) out[[1]] <- tmp2 for(i in seq(along = submodels$threshold)) { out[[i+1]] <- ifelse(class1Prob >= submodels$threshold[[i]], modelFit$obsLevels[1], modelFit$obsLevels[2]) } } out } ## The probabilities are always the same but we have to create ## multiple versions of the probs to evaluate the data across ## thresholds thresh_code$prob = function(modelFit, newdata, submodels = NULL) { out <- as.data.frame(predict(modelFit, newdata, type = "prob")) if(!is.null(submodels)) { probs <- out out <- vector(mode = "list", length = length(submodels$threshold)+1) out <- lapply(out, function(x) probs) } out } ``` Basically, we define a list of model components (such as the fitting code, the prediction code, etc.) and feed this into the `train` function instead of using a pre\-listed model string (such as `method = "rf"`). For this model and these data, there was an 8% increase in training time to evaluate 20 additional values of the probability cut off. How do we optimize this model? Normally we might look at the area under the ROC curve as a metric to choose our final values. In this case the ROC curve is independent of the probability threshold so we have to use something else. A common technique to evaluate a candidate threshold is to see how close it is to the perfect model where sensitivity and specificity are one. Our code will use the distance between the current model’s performance and the best possible performance and then have `train` minimize this distance when choosing its parameters. Here is the code that we use to calculate this: ``` fourStats <- function (data, lev = levels(data$obs), model = NULL) { ## This code will compute the area under the ROC curve and the ## sensitivity and specificity values using the current candidate ## value of the probability threshold. out <- c(twoClassSummary(data, lev = levels(data$obs), model = NULL)) ## The best possible model has sensitivity of 1 and specificity of 1. ## How far are we from that value? coords <- matrix(c(1, 1, out["Spec"], out["Sens"]), ncol = 2, byrow = TRUE) colnames(coords) <- c("Spec", "Sens") rownames(coords) <- c("Best", "Current") c(out, Dist = dist(coords)[1]) } set.seed(949) mod1 <- train(Class ~ ., data = trainingSet, method = thresh_code, ## Minimize the distance to the perfect model metric = "Dist", maximize = FALSE, tuneLength = 20, ntree = 1000, trControl = trainControl(method = "repeatedcv", repeats = 5, classProbs = TRUE, summaryFunction = fourStats)) mod1 ``` ``` ## Random Forest ## ## 500 samples ## 15 predictor ## 2 classes: 'Class1', 'Class2' ## ## No pre-processing ## Resampling: Cross-Validated (10 fold, repeated 5 times) ## Summary of sample sizes: 450, 450, 450, 450, 450, 450, ... 
## Resampling results across tuning parameters: ## ## threshold ROC Sens Spec Dist ## 0.01000000 0.9602222 1.0000000 0.000 1.0000000 ## 0.06157895 0.9602222 1.0000000 0.000 1.0000000 ## 0.11315789 0.9602222 1.0000000 0.000 1.0000000 ## 0.16473684 0.9602222 1.0000000 0.000 1.0000000 ## 0.21631579 0.9602222 1.0000000 0.000 1.0000000 ## 0.26789474 0.9602222 1.0000000 0.000 1.0000000 ## 0.31947368 0.9602222 1.0000000 0.020 0.9800000 ## 0.37105263 0.9602222 1.0000000 0.064 0.9360000 ## 0.42263158 0.9602222 0.9991111 0.132 0.8680329 ## 0.47421053 0.9602222 0.9991111 0.240 0.7600976 ## 0.52578947 0.9602222 0.9973333 0.420 0.5802431 ## 0.57736842 0.9602222 0.9880000 0.552 0.4494847 ## 0.62894737 0.9602222 0.9742222 0.612 0.3941985 ## 0.68052632 0.9602222 0.9644444 0.668 0.3436329 ## 0.73210526 0.9602222 0.9524444 0.700 0.3184533 ## 0.78368421 0.9602222 0.9346667 0.736 0.2915366 ## 0.83526316 0.9602222 0.8995556 0.828 0.2278799 ## 0.88684211 0.9602222 0.8337778 0.952 0.1927598 ## 0.93842105 0.9602222 0.6817778 0.996 0.3192700 ## 0.99000000 0.9602222 0.1844444 1.000 0.8155556 ## ## Tuning parameter 'mtry' was held constant at a value of 3 ## Dist was used to select the optimal model using the smallest value. ## The final values used for the model were mtry = 3 and threshold ## = 0.8868421. ``` Using `ggplot(mod1)` will show the performance profile. Instead, here is a plot of the sensitivity, specificity, and distance to the perfect model: ``` library(reshape2) metrics <- mod1$results[, c(2, 4:6)] metrics <- melt(metrics, id.vars = "threshold", variable.name = "Resampled", value.name = "Data") ggplot(metrics, aes(x = threshold, y = Data, color = Resampled)) + geom_line() + ylab("") + xlab("Probability Cutoff") + theme(legend.position = "top") ``` You can see that as we increase the probability cut off for the first class it takes more and more evidence for a sample to be predicted as the first class. As a result the sensitivity goes down when the threshold becomes very large. The upside is that we can increase specificity in the same way. The blue curve shows the distance to the perfect model. The value of 0\.89 was found to be optimal. Now we can use the test set ROC curve to validate the cut off we chose by resampling. Here the cut off closest to the perfect model is 0\.89\. We were able to find a good probability cut off value without setting aside another set of data for tuning the cut off. One great thing about this code is that it will automatically apply the optimized probability threshold when predicting new samples. 13\.9 Illustrative Example 6: Offsets in Generalized Linear Models ------------------------------------------------------------------ Like the `mboost` example [above](using-your-own-model-in-train.html#Illustration3), a custom method is required since a formula element is used to set the offset variable. Here is the coefficient output from the offset example in `?glm` (a Gaussian model for the `anorexia` data from the **MASS** package, with `Prewt` used as an offset): ``` ## (Intercept) Prewt TreatCont TreatFT ## 49.7711090 -0.5655388 -4.0970655 4.5630627 ``` We can write a small custom method to duplicate this model. Two details of note: * If we have factors in the data and do not want `train` to convert them to dummy variables, the formula method for `train` should be avoided. We can let `glm` do that inside the custom method. This would help `glm` understand that the dummy variable columns came from the same original factor. This will avoid errors in other functions used with `glm` (e.g. `anova`). 
* The slot for `x` should include any variables that are on the right\-hand side of the model formula, including the offset column. Here is the custom model: ``` offset_mod <- getModelInfo("glm", regex = FALSE)[[1]] offset_mod$fit <- function(x, y, wts, param, lev, last, classProbs, ...) { dat <- if(is.data.frame(x)) x else as.data.frame(x) dat$Postwt <- y glm(Postwt ~ Prewt + Treat + offset(Prewt), family = gaussian, data = dat) } mod <- train(x = anorexia[, 1:2], y = anorexia$Postwt, method = offset_mod) coef(mod$finalModel) ``` ``` ## (Intercept) Prewt TreatCont TreatFT ## 49.7711090 -0.5655388 -4.0970655 4.5630627 ```
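As a quick sanity check (a minimal sketch, assuming the `anorexia` data are loaded from the **MASS** package as in the `?glm` example), the direct `glm` fit with the offset term should give the same coefficients as `coef(mod$finalModel)` above:

```
## Direct fit of the same offset model; compare with coef(mod$finalModel)
data(anorexia, package = "MASS")
direct_fit <- glm(Postwt ~ Prewt + Treat + offset(Prewt),
                  family = gaussian, data = anorexia)
coef(direct_fit)
```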
14 Adaptive Resampling ====================== Models can benefit significantly from tuning but the optimal values are rarely known beforehand. `train` can be used to define a grid of possible points and resampling can be used to generate good estimates of performance for each tuning parameter combination. However, in the nominal resampling process, all the tuning parameter combinations are computed for all the resamples before a choice is made about which parameters are good and which are poor. [`caret`](http://cran.r-project.org/web/packages/caret/index.html) contains the ability to adaptively resample the tuning parameter grid in a way that concentrates on values that are in the neighborhood of the optimal settings. See [this paper](http://arxiv.org/abs/1405.6974) for the details. To illustrate, we will use the Sonar data from one of the [previous pages](https://topepo.github.io/caret/model-training-and-tuning.html#tune). ``` library(mlbench) data(Sonar) library(caret) set.seed(998) inTraining <- createDataPartition(Sonar$Class, p = .75, list = FALSE) training <- Sonar[ inTraining,] testing <- Sonar[-inTraining,] ``` We will tune a support vector machine model using the same tuning strategy as before but with [random search](https://topepo.github.io/caret/random-hyperparameter-search.html): ``` svmControl <- trainControl(method = "repeatedcv", number = 10, repeats = 10, classProbs = TRUE, summaryFunction = twoClassSummary, search = "random") set.seed(825) svmFit <- train(Class ~ ., data = training, method = "svmRadial", trControl = svmControl, preProc = c("center", "scale"), metric = "ROC", tuneLength = 15) ``` Using this method, the optimal tuning parameters were an RBF kernel parameter of 0\.0301 and a cost value of 9\.091958\. To use the adaptive procedure, the `trainControl` option needs some additional arguments: * `min` is the minimum number of resamples that will be used for each tuning parameter. The default value is 5 and increasing it will decrease the speed\-up generated by adaptive resampling but should also increase the likelihood of finding a good model. * `alpha` is a confidence level that is used to remove parameter settings. To date, this value has not shown much of an effect. * `method` is either `"gls"` for a linear model or `"BT"` for a Bradley\-Terry model. The latter may be more useful when you expect the model to do very well (e.g. an area under the ROC curve near 1\) or when there are a large number of tuning parameter settings. * `complete` is a logical value that specifies whether `train` should generate the full resampling set if it finds an optimal solution before the end of resampling. If you want to know the optimal parameter settings and don’t care much for the estimated performance value, a value of `FALSE` would be appropriate here. The new code is below. Recall that setting the random number seed just prior to the model fit will ensure the same resamples as well as the same random grid. ``` adaptControl <- trainControl(method = "adaptive_cv", number = 10, repeats = 10, adaptive = list(min = 5, alpha = 0.05, method = "gls", complete = TRUE), classProbs = TRUE, summaryFunction = twoClassSummary, search = "random") set.seed(825) svmAdapt <- train(Class ~ ., data = training, method = "svmRadial", trControl = adaptControl, preProc = c("center", "scale"), metric = "ROC", tuneLength = 15) ``` The search finalized the tuning parameters on the 14th iteration of resampling and was 1\.5\-fold faster than the original analysis. 
Here, the optimal tuning parameters were a RBF kernel parameter of 0\.0301 and a cost value of 9\.091958\. These are close to the previous settings and result in a difference in the area under the ROC curve of 0 and the adaptive approach used 1295 fewer models. Remember that this methodology is experimental, so please send any [questions or bug reports](https://github.com/topepo/caret/issues) to the package maintainer.
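One way to confirm that the adaptive search did not give up much in the performance estimates is to pool the resampling results from the two runs (a minimal sketch, assuming the `svmFit` and `svmAdapt` objects above are still in memory):

```
## Collect and summarize the resampled ROC values from both searches;
## the same seed was used, so the resamples are directly comparable
resamps <- resamples(list(full_search = svmFit, adaptive = svmAdapt))
summary(resamps)
```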
15 Variable Importance ====================== Variable importance evaluation functions can be separated into two groups: those that use the model information and those that do not. The advantage of using a model\-based approach is that it is more closely tied to the model performance and that it *may* be able to incorporate the correlation structure between the predictors into the importance calculation. Regardless of how the importance is calculated: * For most classification models, each predictor will have a separate variable importance for each class (the exceptions are classification trees, bagged trees and boosted trees). * All measures of importance are scaled to have a maximum value of 100, unless the `scale` argument of `varImp.train` is set to `FALSE`. 15\.1 Model Specific Metrics ---------------------------- The following methods for estimating the contribution of each variable to the model are available: * **Linear Models**: the absolute value of the *t*\-statistic for each model parameter is used. * **Random Forest**: from the R package: “For each tree, the prediction accuracy on the out\-of\-bag portion of the data is recorded. Then the same is done after permuting each predictor variable. The difference between the two accuracies are then averaged over all trees, and normalized by the standard error. For regression, the MSE is computed on the out\-of\-bag data for each tree, and then the same computed after permuting a variable. The differences are averaged and normalized by the standard error. If the standard error is equal to 0 for a variable, the division is not done.” * **Partial Least Squares**: the variable importance measure here is based on weighted sums of the absolute regression coefficients. The weights are a function of the reduction of the sums of squares across the number of PLS components and are computed separately for each outcome. Therefore, the contribution of the coefficients is weighted proportionally to the reduction in the sums of squares. * **Recursive Partitioning**: The reduction in the loss function (e.g. mean squared error) attributed to each variable at each split is tabulated and the sum is returned. Also, since there may be candidate variables that are important but are not used in a split, the top competing variables are also tabulated at each split. This can be turned off using the `maxcompete` argument in `rpart.control`. This method does not currently provide class\-specific measures of importance when the response is a factor. * **Bagged Trees**: The same methodology as a single tree is applied to all bootstrapped trees and the total importance is returned. * **Boosted Trees**: This method uses the same approach as a single tree, but sums the importances over each boosting iteration (see the [`gbm`](http://cran.r-project.org/web/packages/gbm/index.html) package vignette). * **Multivariate Adaptive Regression Splines**: MARS models include a backwards elimination feature selection routine that looks at reductions in the generalized cross\-validation (GCV) estimate of error. The `varImp` function tracks the changes in model statistics, such as the GCV, for each predictor and accumulates the reduction in the statistic when each predictor’s feature is added to the model. This total reduction is used as the variable importance measure. If a predictor was never used in any MARS basis function, it has an importance value of zero. There are three statistics that can be used to estimate variable importance in MARS models. 
Using `varImp(object, value = "gcv")` tracks the reduction in the generalized cross\-validation statistic as terms are added. However, there are some cases when terms are retained in the model that result in an increase in GCV. Negative variable importance values for MARS are set to zero. Terms with non\-zero importance that were not included in the final, pruned model are also listed as zero. Alternatively, using `varImp(object, value = "rss")` monitors the change in the residual sums of squares (RSS) as terms are added, which will never be negative. Also, the option `varImp(object, value = "nsubsets")` returns the number of times that each variable is involved in a subset (in the final, pruned model). Prior to June 2008, `varImp` used an internal function to estimate importance for MARS models. Currently, it is a wrapper around the `evimp` function in the [`earth`](http://cran.r-project.org/web/packages/earth/index.html) package. * **Nearest shrunken centroids**: The difference between the class centroids and the overall centroid is used to measure the variable influence (see `pamr.predict`). The larger the difference between the class centroid and the overall center of the data, the larger the separation between the classes. The training set predictions must be supplied when an object of class `pamrtrained` is given to `varImp`. * **Cubist**: The Cubist output contains variable usage statistics. It gives the percentage of times where each variable was used in a condition and/or a linear model. Note that this output will probably be inconsistent with the rules shown in the output from `summary.cubist`. At each split of the tree, Cubist saves a linear model (after feature selection) that is allowed to have terms for each variable used in the current split or any split above it. [Quinlan (1992\)](http://sci2s.ugr.es/keel/pdf/algorithm/congreso/1992-Quinlan-AI.pdf) discusses a smoothing algorithm where each model prediction is a linear combination of the parent and child model along the tree. As such, the final prediction is a function of all the linear models from the initial node to the terminal node. The percentages shown in the Cubist output reflects all the models involved in prediction (as opposed to the terminal models shown in the output). The variable importance used here is a linear combination of the usage in the rule conditions and the model. 15\.2 Model Independent Metrics ------------------------------- If there is no model\-specific way to estimate importance (or the argument `useModel = FALSE` is used in `varImp`) the importance of each predictor is evaluated individually using a “filter” approach. For classification, ROC curve analysis is conducted on each predictor. For two class problems, a series of cutoffs is applied to the predictor data to predict the class. The sensitivity and specificity are computed for each cutoff and the ROC curve is computed. The trapezoidal rule is used to compute the area under the ROC curve. This area is used as the measure of variable importance. For multi\-class outcomes, the problem is decomposed into all pair\-wise problems and the area under the curve is calculated for each class pair (i.e. class 1 vs. class 2, class 2 vs. class 3 etc.). For a specific class, the maximum area under the curve across the relevant pair\-wise AUC’s is used as the variable importance measure. For regression, the relationship between each predictor and the outcome is evaluated. An argument, `nonpara`, is used to pick the model fitting technique. 
When `nonpara = FALSE`, a linear model is fit and the absolute value of the *t*\-value for the slope of the predictor is used. Otherwise, a loess smoother is fit between the outcome and the predictor. The R2 statistic is calculated for this model against the intercept only null model. This number is returned as a relative measure of variable importance. 15\.3 An Example ---------------- On the model training web page, several models were fit to the example data. The boosted tree model has a built\-in variable importance score but neither the support vector machine nor the regularized discriminant analysis model does. ``` gbmImp <- varImp(gbmFit3, scale = FALSE) gbmImp ``` ``` ## gbm variable importance ## ## only 20 most important variables shown (out of 60) ## ## Overall ## V11 21.308 ## V12 11.896 ## V36 9.810 ## V52 9.793 ## V51 9.324 ## V46 5.536 ## V13 5.005 ## V9 4.396 ## V31 4.356 ## V37 4.233 ## V48 4.109 ## V3 3.814 ## V23 3.554 ## V5 3.544 ## V1 3.491 ## V43 3.347 ## V45 3.110 ## V17 3.064 ## V27 2.941 ## V54 2.819 ``` The function automatically scales the importance scores to be between 0 and 100\. Using `scale = FALSE` avoids this normalization step. To get the area under the ROC curve for each predictor, the `filterVarImp` function can be used. The area under the ROC curve is computed for each class. ``` roc_imp <- filterVarImp(x = training[, -ncol(training)], y = training$Class) head(roc_imp) ``` ``` ## M R ## V1 0.6695205 0.6695205 ## V2 0.6470157 0.6470157 ## V3 0.6443249 0.6443249 ## V4 0.6635682 0.6635682 ## V5 0.6601435 0.6601435 ## V6 0.6510926 0.6510926 ``` Alternatively, for models where no built\-in importance score is implemented (or exists), the `varImp` function can still be used to get scores. For SVM classification models, the default behavior is to compute the area under the ROC curve. ``` roc_imp2 <- varImp(svmFit, scale = FALSE) roc_imp2 ``` ``` ## ROC curve variable importance ## ## only 20 most important variables shown (out of 60) ## ## Importance ## V11 0.7758 ## V12 0.7586 ## V9 0.7320 ## V13 0.7291 ## V10 0.7187 ## V52 0.7074 ## V46 0.7034 ## V49 0.7022 ## V51 0.6892 ## V45 0.6813 ## V47 0.6806 ## V48 0.6704 ## V1 0.6695 ## V4 0.6636 ## V5 0.6601 ## V6 0.6511 ## V2 0.6470 ## V36 0.6460 ## V3 0.6443 ## V44 0.6417 ``` For importance scores generated from `varImp.train`, a plot method can be used to visualize the results. In the plot below, the `top` option is used to make the image more readable. ``` plot(gbmImp, top = 20) ```
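The calls above only illustrate the classification filter. For regression outcomes, `filterVarImp` uses the linear model or loess approach described earlier via its `nonpara` argument. A small sketch using the built\-in `mtcars` data (not part of the example above) shows both options:

```
## Model-free importance scores for a regression outcome (mpg)
data(mtcars)
lm_imp    <- filterVarImp(x = mtcars[, -1], y = mtcars$mpg, nonpara = FALSE)
loess_imp <- filterVarImp(x = mtcars[, -1], y = mtcars$mpg, nonpara = TRUE)
head(lm_imp)
head(loess_imp)
```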
16 Miscellaneous Model Functions ================================ Contents * [Yet Another *k*\-Nearest Neighbor Function](miscellaneous-model-functions.html#knn) * [Partial Least Squares Discriminant Analysis](miscellaneous-model-functions.html#plsda) * [Bagged MARS and FDA](miscellaneous-model-functions.html#bagMARS) * [General Purpose Bagging](miscellaneous-model-functions.html#bag) * [Model Averaged Neural Networks](miscellaneous-model-functions.html#avnnet) * [Neural Networks with a Principal Component Step](miscellaneous-model-functions.html#pcannet) * [Independent Component Regression](miscellaneous-model-functions.html#ica) 16\.1 Yet Another *k*\-Nearest Neighbor Function ------------------------------------------------ `knn3` is a function for *k*\-nearest neighbor classification. This particular implementation is a modification of the `knn` C code and returns the vote information for all of the classes (`knn` only returns the probability for the winning class). There is a formula interface via ``` knn3(formula, data) ## or by passing the training data directly ## x is a matrix or data frame, y is a factor vector knn3(x, y) ``` There are also `print` and `predict` methods. For the Sonar data in the [`mlbench`](http://cran.r-project.org/web/packages/mlbench/index.html) package, we can fit an 11\-nearest neighbor model: ``` library(caret) library(mlbench) data(Sonar) set.seed(808) inTrain <- createDataPartition(Sonar$Class, p = 2/3, list = FALSE) ## Save the predictors and class in different objects sonarTrain <- Sonar[ inTrain, -ncol(Sonar)] sonarTest <- Sonar[-inTrain, -ncol(Sonar)] trainClass <- Sonar[ inTrain, "Class"] testClass <- Sonar[-inTrain, "Class"] centerScale <- preProcess(sonarTrain) centerScale ``` ``` ## Created from 139 samples and 60 variables ## ## Pre-processing: ## - centered (60) ## - ignored (0) ## - scaled (60) ``` ``` training <- predict(centerScale, sonarTrain) testing <- predict(centerScale, sonarTest) knnFit <- knn3(training, trainClass, k = 11) knnFit ``` ``` ## 11-nearest neighbor model ## Training set outcome distribution: ## ## M R ## 74 65 ``` ``` predict(knnFit, head(testing), type = "prob") ``` ``` ## M R ## [1,] 0.45454545 0.5454545 ## [2,] 0.81818182 0.1818182 ## [3,] 0.63636364 0.3636364 ## [4,] 0.09090909 0.9090909 ## [5,] 0.54545455 0.4545455 ## [6,] 0.45454545 0.5454545 ``` Similarly, [`caret`](http://cran.r-project.org/web/packages/caret/index.html) contains a *k*\-nearest neighbor regression function, `knnreg`. It returns the average outcome for the neighbors. 16\.2 Partial Least Squares Discriminant Analysis ------------------------------------------------- The `plsda` function is a wrapper for the `plsr` function in the [`pls`](http://cran.r-project.org/web/packages/pls/index.html) package that does not require a formula interface and can take factor outcomes as arguments. The classes are broken down into dummy variables (one for each class). These 0/1 dummy variables are modeled by partial least squares. From this model, there are two approaches to computing the class predictions and probabilities: * the softmax technique can be used on a per\-sample basis to normalize the scores so that they are more “probability like” (i.e. they sum to one and are between zero and one). For a vector of model predictions *X* (one element per class), the softmax class probabilities are computed as \\[ p\_i \= \exp(X\_i) / \sum\_j \exp(X\_j). \\] The predicted class is simply the class with the largest model prediction, or equivalently, the largest class probability. This is the default behavior for `plsda`. 
* Bayes rule can be applied to the model predictions to form posterior probabilities. Here, the model predictions for the training set are used along with the training set outcomes to create conditional distributions for each class. When new samples are predicted, the raw model predictions are run through these conditional distributions to produce a posterior probability for each class (along with the prior). Bayes rule can be used by specifying `probModel = "Bayes"`. An additional parameter, `prior`, can be used to set prior probabilities for the classes. The advantage to using Bayes rule is that the full training set is used to directly compute the class probabilities (unlike the softmax function which only uses the current sample’s scores). This creates more realistic probability estimates but the disadvantage is that a separate Bayesian model must be created for each value of `ncomp`, which is more time consuming. For the sonar data set, we can fit two PLS models using each technique and predict the class probabilities for the test set. ``` plsFit <- plsda(training, trainClass, ncomp = 20) plsFit ``` ``` ## Partial least squares classification, fitted with the kernel algorithm. ## The softmax function was used to compute class probabilities. ``` ``` plsBayesFit <- plsda(training, trainClass, ncomp = 20, probMethod = "Bayes") plsBayesFit ``` ``` ## Partial least squares classification, fitted with the kernel algorithm. ## Bayes rule was used to compute class probabilities. ``` ``` predict(plsFit, head(testing), type = "prob") ``` ``` ## , , 20 comps ## ## M R ## 2 0.12860843 0.8713916 ## 5 0.49074450 0.5092555 ## 8 0.59582388 0.4041761 ## 11 0.35693679 0.6430632 ## 13 0.36360834 0.6363917 ## 14 0.06626214 0.9337379 ``` ``` predict(plsBayesFit, head(testing), type = "prob") ``` ``` ## , , ncomp20 ## ## M R ## 2 0.02774255 0.9722574 ## 5 0.47710154 0.5228985 ## 8 0.89692329 0.1030767 ## 11 0.06002366 0.9399763 ## 13 0.07292981 0.9270702 ## 14 0.60530446 0.3946955 ``` Similar to `plsda`, [`caret`](http://cran.r-project.org/web/packages/caret/index.html) also contains a function `splsda` that allows for classification using sparse PLS. A dummy matrix is created for each class and used with the `spls` function in the [`spls`](http://cran.r-project.org/web/packages/spls/index.html) package. The same approach to estimating class probabilities is used for `plsda` and `splsda`. 16\.3 Bagged MARS and FDA ------------------------- Multivariate adaptive regression splines (MARS) models, like classification/regression trees, are unstable predictors (Breiman, 1996\). This means that small perturbations in the training data might lead to significantly different models. Bagged trees and random forests are effective ways of improving tree models by exploiting these instabilities. [`caret`](http://cran.r-project.org/web/packages/caret/index.html) contains a function, `bagEarth`, that fits MARS models via the `earth` function. There are formula and non\-formula interfaces. Also, flexible discriminant analysis is a generalization of linear discriminant analysis that can use non\-linear features as inputs. One way of doing this is the use MARS\-type features to classify samples. The function `bagFDA` fits FDA models of a set of bootstrap samples and aggregates the predictions to reduce noise. This function is deprecated in favor of the `bag` function. 16\.4 Bagging ------------- The `bag` function offers a general platform for bagging classification and regression models. 
Like `rfe` and `sbf`, it is open and models are specified by declaring functions for the model fitting and prediction code (and several built\-in sets of functions exist in the package). The function `bagControl` has options to specify the functions (more details below). The function also has a few non\-standard features: * The argument `var` can enable random sampling of the predictors at each bagging iteration. This is to de\-correlate the bagged models in the same spirit of random forests (although here the sampling is done once for the whole model). The default is to use all the predictors for each model. * The `bagControl` function has a logical argument called `downSample` that is useful for classification models with severe class imbalance. The bootstrapped data set is reduced so that the sample sizes for the classes with larger frequencies are the same as the sample size for the minority class. * If a parallel backend for the **foreach** package has been loaded and registered, the bagged models can be trained in parallel. The function’s control function requires the following arguments: ### 16\.4\.1 The `fit` Function Inputs: * `x`: a data frame of the training set predictor data. * `y`: the training set outcomes. * `...` arguments passed from `train` to this function The output is the object corresponding to the trained model and any other objects required for prediction. A simple example for a linear discriminant analysis model from the **MASS** package is: ``` function(x, y, ...) { library(MASS) lda(x, y, ...) } ``` ### 16\.4\.2 The `pred` Function This should be a function that produces predictors for new samples. Inputs: * `object`: the object generated by the `fit` module. * `x`: a matrix or data frame of predictor data. The output is either a number vector (for regression), a factor (or character) vector for classification or a matrix/data frame of class probabilities. For classification, it is probably better to average class probabilities instead of using the votes of the class predictions. Using the `lda` example again: ``` ## predict.lda returns the class and the class probabilities ## We will average the probabilities, so these are saved function(object, x) predict(object, x)$posterior ``` ``` ## function(object, x) predict(object, x)$posterior ``` ### 16\.4\.3 The `aggregate` Function This should be a function that takes the predictions from the constituent models and converts them to a single prediction per sample. Inputs: * `x`: a list of objects returned by the `pred` module. * `type`: an optional string that describes the type of output (e.g. “class”, “prob” etc.). The output is either a number vector (for regression), a factor (or character) vector for classification or a matrix/data frame of class probabilities. For the linear discriminant model above, we saved the matrix of class probabilities. 
To average them and generate a class prediction, we could use: ``` function(x, type = "class") { ## The class probabilities come in as a list of matrices ## For each class, we can pool them then average over them ## Pre-allocate space for the results pooled <- x[[1]] * NA n <- nrow(pooled) classes <- colnames(pooled) ## For each class probability, take the median across ## all the bagged model predictions for(i in 1:ncol(pooled)) { tmp <- lapply(x, function(y, col) y[,col], col = i) tmp <- do.call("rbind", tmp) pooled[,i] <- apply(tmp, 2, median) } ## Re-normalize to make sure they add to 1 pooled <- apply(pooled, 1, function(x) x/sum(x)) if(n != nrow(pooled)) pooled <- t(pooled) if(type == "class") { out <- factor(classes[apply(pooled, 1, which.max)], levels = classes) } else out <- as.data.frame(pooled) out } ``` For example, to bag a conditional inference tree (from the **party** package): ``` library(caret) set.seed(998) inTraining <- createDataPartition(Sonar$Class, p = .75, list = FALSE) training <- Sonar[ inTraining,] testing <- Sonar[-inTraining,] set.seed(825) baggedCT <- bag(x = training[, names(training) != "Class"], y = training$Class, B = 50, bagControl = bagControl(fit = ctreeBag$fit, predict = ctreeBag$pred, aggregate = ctreeBag$aggregate)) summary(baggedCT) ``` ``` ## ## Call: ## bag.default(x = training[, names(training) != "Class"], y ## = training$Class, B = 50, bagControl = bagControl(fit = ## ctreeBag$fit, predict = ctreeBag$pred, aggregate = ctreeBag$aggregate)) ## ## Out of bag statistics (B = 50): ## ## Accuracy Kappa ## 0.0% 0.4746 -0.04335 ## 2.5% 0.5806 0.17971 ## 25.0% 0.6681 0.32402 ## 50.0% 0.7094 0.41815 ## 75.0% 0.7606 0.51092 ## 97.5% 0.8060 0.59901 ## 100.0% 0.8077 0.61078 ``` 16\.5 Model Averaged Neural Networks ------------------------------------ The `avNNet` fits multiple neural network models to the same data set and predicts using the average of the predictions coming from each constituent model. The models can be different either due to different random number seeds to initialize the network or by fitting the models on bootstrap samples of the original training set (i.e. bagging the neural network). For classification models, the class probabilities are averaged to produce the final class prediction (as opposed to voting from the individual class predictions. As an example, the model can be fit via `train`: ``` set.seed(825) avNnetFit <- train(x = training, y = trainClass, method = "avNNet", repeats = 15, trace = FALSE) ``` 16\.6 Neural Networks with a Principal Component Step ----------------------------------------------------- Neural networks can be affected by severe amounts of multicollinearity in the predictors. The function `pcaNNet` is a wrapper around the `preProcess` and `nnet` functions that will run principal component analysis on the predictors before using them as inputs into a neural network. The function will keep enough components that will capture some pre\-defined threshold on the cumulative proportion of variance (see the `thresh` argument). For new samples, the same transformation is applied to the new predictor values (based on the loadings from the training set). The function is available for both regression and classification. This function is deprecated in favor of the `train` function using `method = "nnet"` and `preProc = "pca"`. 
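A minimal sketch of that replacement, reusing the `training` and `trainClass` objects from the *k*\-nearest neighbor example earlier in this chapter (tuning values and resampling options are left at their defaults here):

```
## Run PCA as a pre-processing step inside train(), then fit the neural network
set.seed(825)
pcaNN <- train(x = training, y = trainClass,
               method = "nnet",
               preProc = c("center", "scale", "pca"),
               trace = FALSE)
```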
16\.7 Independent Component Regression -------------------------------------- The `icr` function can be used to fit a model analogous to principal component regression (PCR), but using independent component analysis (ICA). The predictor data are centered and projected to the ICA components. These components are then regressed against the outcome. The user needs to specify the number of components to keep. The model uses the `preProcess` function to compute the latent variables using the [fastICA](http://cran.r-project.org/web/packages/fastICA/index.html) package. Like PCR, there is no guarantee that there will be a correlation between the new latent variables and the outcome. 
Machine Learning
topepo.github.io
https://topepo.github.io/caret/measuring-performance.html
17 Measuring Performance ======================== * [Measures for Regression](measuring-performance.html#reg) * [Measures for Predicted Classes](measuring-performance.html#class) * [Measures for Class Probabilities](measuring-performance.html#probs) * [Lift Curves](measuring-performance.html#lift) * [Calibration Curves](measuring-performance.html#calib) 17\.1 Measures for Regression ----------------------------- The function `postResample` can be used to estimate the root mean squared error (RMSE), simple R2, and the mean absolute error (MAE) for numeric outcomes. For example: ``` library(mlbench) data(BostonHousing) set.seed(280) bh_index <- createDataPartition(BostonHousing$medv, p = .75, list = FALSE) bh_tr <- BostonHousing[ bh_index, ] bh_te <- BostonHousing[-bh_index, ] set.seed(7279) lm_fit <- train(medv ~ . + rm:lstat, data = bh_tr, method = "lm") bh_pred <- predict(lm_fit, bh_te) lm_fit ``` ``` ## Linear Regression ## ## 381 samples ## 13 predictor ## ## No pre-processing ## Resampling: Bootstrapped (25 reps) ## Summary of sample sizes: 381, 381, 381, 381, 381, 381, ... ## Resampling results: ## ## RMSE Rsquared MAE ## 4.374098 0.7724562 2.963927 ## ## Tuning parameter 'intercept' was held constant at a value of TRUE ``` ``` postResample(pred = bh_pred, obs = bh_te$medv) ``` ``` ## RMSE Rsquared MAE ## 4.0927043 0.8234427 2.8163731 ``` A note about how R2 is calculated by `caret`: it takes the straightforward approach of computing the correlation between the observed and predicted values (i.e. R) and squaring the value. When the model is poor, this can lead to differences between this estimator and the more widely known estimate derived from linear regression models. Most notably, the correlation approach will not generate negative values of R2 (which are theoretically invalid). A comparison of these and other estimators can be found in [Kvalseth 1985](http://amstat.tandfonline.com/doi/abs/10.1080/00031305.1985.10479448). 17\.2 Measures for Predicted Classes ------------------------------------ Before proceeding, let’s make up some test set data: ``` set.seed(144) true_class <- factor(sample(paste0("Class", 1:2), size = 1000, prob = c(.2, .8), replace = TRUE)) true_class <- sort(true_class) class1_probs <- rbeta(sum(true_class == "Class1"), 4, 1) class2_probs <- rbeta(sum(true_class == "Class2"), 1, 2.5) test_set <- data.frame(obs = true_class, Class1 = c(class1_probs, class2_probs)) test_set$Class2 <- 1 - test_set$Class1 test_set$pred <- factor(ifelse(test_set$Class1 >= .5, "Class1", "Class2")) ``` We would expect that this model will do well on these data: ``` ggplot(test_set, aes(x = Class1)) + geom_histogram(binwidth = .05) + facet_wrap(~obs) + xlab("Probability of Class #1") ``` Generating the predicted classes based on the typical 50% cutoff for the probabilities, we can compute the *confusion matrix*, which shows a cross\-tabulation of the observed and predicted classes. 
The `confusionMatrix` function can be used to generate these results: ``` confusionMatrix(data = test_set$pred, reference = test_set$obs) ``` ``` ## Confusion Matrix and Statistics ## ## Reference ## Prediction Class1 Class2 ## Class1 183 141 ## Class2 13 663 ## ## Accuracy : 0.846 ## 95% CI : (0.8221, 0.8678) ## No Information Rate : 0.804 ## P-Value [Acc > NIR] : 0.0003424 ## ## Kappa : 0.6081 ## ## Mcnemar's Test P-Value : < 2.2e-16 ## ## Sensitivity : 0.9337 ## Specificity : 0.8246 ## Pos Pred Value : 0.5648 ## Neg Pred Value : 0.9808 ## Prevalence : 0.1960 ## Detection Rate : 0.1830 ## Detection Prevalence : 0.3240 ## Balanced Accuracy : 0.8792 ## ## 'Positive' Class : Class1 ## ``` For two classes, this function assumes that the class corresponding to an event is the *first* class level (but this can be changed using the `positive` argument). Note that there are a number of statistics shown here. The “no\-information rate” is the largest proportion of the observed classes (there were more class 2 data than class 1 in this test set). A hypothesis test is also computed to evaluate whether the overall accuracy rate is greater than the rate of the largest class. Also, the prevalence of the “positive event” is computed from the data (unless passed in as an argument), the detection rate (the rate of true events also predicted to be events) and the detection prevalence (the prevalence of predicted events). If the prevalence of the event is different than those seen in the test set, the `prevalence` option can be used to adjust this. Suppose a 2x2 table where the columns contain the observed (reference) classes and the rows contain the predicted classes:

| Predicted | Reference: Event | Reference: No Event |
|-----------|------------------|---------------------|
| Event     | A                | B                   |
| No Event  | C                | D                   |

With this notation, Sensitivity = A/(A + C), Specificity = D/(B + D) and Prevalence = (A + C)/(A + B + C + D); the predictive values fold the prevalence into the calculation, for example PPV = (Sensitivity \* Prevalence)/((Sensitivity \* Prevalence) + ((1 - Specificity) \* (1 - Prevalence))). When there are three or more classes, `confusionMatrix` will show the confusion matrix and a set of “one\-versus\-all” results. For example, in a three class problem, the sensitivity of the first class is calculated against all the samples in the second and third classes (and so on). The `confusionMatrix` function frames the errors in terms of sensitivity and specificity. In the case of information retrieval, the precision and recall might be more appropriate. In this case, the option `mode` can be used to get those statistics: ``` confusionMatrix(data = test_set$pred, reference = test_set$obs, mode = "prec_recall") ``` ``` ## Confusion Matrix and Statistics ## ## Reference ## Prediction Class1 Class2 ## Class1 183 141 ## Class2 13 663 ## ## Accuracy : 0.846 ## 95% CI : (0.8221, 0.8678) ## No Information Rate : 0.804 ## P-Value [Acc > NIR] : 0.0003424 ## ## Kappa : 0.6081 ## ## Mcnemar's Test P-Value : < 2.2e-16 ## ## Precision : 0.5648 ## Recall : 0.9337 ## F1 : 0.7038 ## Prevalence : 0.1960 ## Detection Rate : 0.1830 ## Detection Prevalence : 0.3240 ## Balanced Accuracy : 0.8792 ## ## 'Positive' Class : Class1 ## ``` Again, the `positive` argument can be used to control which factor level is associated with a “found” or “important” document or sample. There are individual functions called `sensitivity`, `specificity`, `posPredValue`, `negPredValue`, `precision`, `recall`, and `F_meas` (a short example is sketched below). A resampled estimate of the training set performance can also be obtained using `confusionMatrix.train`. For each resampling iteration, a confusion matrix is created from the hold\-out samples and these values can be aggregated to diagnose issues with the model fit. These values are the percentages of hold\-out samples that landed in each cell of the confusion matrix during resampling. There are several methods for normalizing these values. See `?confusionMatrix.train` for details. 
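To make the individual metric functions concrete, here is a short sketch using the `test_set` object from above; treating `"Class1"` as the positive/relevant level simply mirrors the confusion matrix output and is otherwise an illustrative choice.

```
## Individual metrics computed directly from the predicted and observed classes
sensitivity(data = test_set$pred, reference = test_set$obs, positive = "Class1")
specificity(data = test_set$pred, reference = test_set$obs, negative = "Class2")
posPredValue(data = test_set$pred, reference = test_set$obs, positive = "Class1")
precision(data = test_set$pred, reference = test_set$obs, relevant = "Class1")
recall(data = test_set$pred, reference = test_set$obs, relevant = "Class1")
```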
The default performance function used by `train` is `postResample`, which generates the accuracy and Kappa statistics: ``` postResample(pred = test_set$pred, obs = test_set$obs) ``` ``` ## Accuracy Kappa ## 0.8460000 0.6081345 ``` As shown below, another function called `twoClassSummary` can be used to get the sensitivity and specificity using the default probability cutoff. Another function, `multiClassSummary`, can do similar calculations when there are three or more classes but both require class probabilities for each class. 17\.3 Measures for Class Probabilities -------------------------------------- For data with two classes, there are specialized functions for measuring model performance. First, the `twoClassSummary` function computes the area under the ROC curve and the specificity and sensitivity under the 50% cutoff. Note that: * this function uses the first class level to define the “event” of interest. To change this, use the `lev` option to the function * there must be columns in the data for each of the class probabilities (named the same as the outcome’s class levels) ``` twoClassSummary(test_set, lev = levels(test_set$obs)) ``` ``` ## ROC Sens Spec ## 0.9560044 0.9336735 0.8246269 ``` A similar function can be used to get the analogous precision\-recall values and the area under the precision\-recall curve: ``` prSummary(test_set, lev = levels(test_set$obs)) ``` ``` ## AUC Precision Recall F ## 0.8582695 0.5648148 0.9336735 0.7038462 ``` This function requires that the `MLmetrics` package is installed. For multi\-class problems, there are additional functions that can be used to calculate performance. One, `mnLogLoss`, computes the negative of the multinomial log\-likelihood (smaller is better) based on the class probabilities. This can be used to optimize tuning parameters but can lead to results that are inconsistent with other measures (e.g. accuracy or the area under the ROC curve), especially when the other measures are near their best possible values. The function has similar arguments to the other functions described above. Here is the two\-class data from above: ``` mnLogLoss(test_set, lev = levels(test_set$obs)) ``` ``` ## logLoss ## 0.370626 ``` Additionally, the function `multiClassSummary` computes a number of relevant metrics: * the overall accuracy and Kappa statistics using the predicted classes * the negative of the multinomial log loss (if class probabilities are available) * averages of the “one versus all” statistics such as sensitivity, specificity, the area under the ROC curve, etc. 17\.4 Lift Curves ----------------- The `lift` function can be used to evaluate probability thresholds that can capture a certain percentage of hits. The function requires a set of sample probability predictions (not from the training set) and the true class labels. 
For example, we can simulate two\-class samples using the `twoClassSim` function and fit a set of models to the training set: ``` set.seed(2) lift_training <- twoClassSim(1000) lift_testing <- twoClassSim(1000) ctrl <- trainControl(method = "cv", classProbs = TRUE, summaryFunction = twoClassSummary) set.seed(1045) fda_lift <- train(Class ~ ., data = lift_training, method = "fda", metric = "ROC", tuneLength = 20, trControl = ctrl) set.seed(1045) lda_lift <- train(Class ~ ., data = lift_training, method = "lda", metric = "ROC", trControl = ctrl) library(C50) set.seed(1045) c5_lift <- train(Class ~ ., data = lift_training, method = "C5.0", metric = "ROC", tuneLength = 10, trControl = ctrl, control = C5.0Control(earlyStopping = FALSE)) ## Generate the test set results lift_results <- data.frame(Class = lift_testing$Class) lift_results$FDA <- predict(fda_lift, lift_testing, type = "prob")[,"Class1"] lift_results$LDA <- predict(lda_lift, lift_testing, type = "prob")[,"Class1"] lift_results$C5.0 <- predict(c5_lift, lift_testing, type = "prob")[,"Class1"] head(lift_results) ``` ``` ## Class FDA LDA C5.0 ## 1 Class1 0.99187063 0.8838205 0.8445830 ## 2 Class1 0.99115613 0.7572450 0.8882418 ## 3 Class1 0.80567440 0.8883830 0.5732098 ## 4 Class2 0.05245632 0.0140480 0.1690251 ## 5 Class1 0.76175025 0.9320695 0.4824400 ## 6 Class2 0.13782751 0.0524154 0.3310495 ``` The `lift` function does the calculations and the corresponding `plot` function is used to plot the lift curve (although some call this the *gain curve*). The `values` argument creates reference lines: ``` trellis.par.set(caretTheme()) lift_obj <- lift(Class ~ FDA + LDA + C5.0, data = lift_results) plot(lift_obj, values = 60, auto.key = list(columns = 3, lines = TRUE, points = FALSE)) ``` There is also a `ggplot` method for `lift` objects: ``` ggplot(lift_obj, values = 60) ``` From this we can see that, to find 60 percent of the hits, a little more than 30 percent of the data can be sampled (when ordered by the probability predictions). The LDA model does somewhat worse than the other two models. 17\.5 Calibration Curves ------------------------ Calibration curves can be used to characterize how consistent the predicted class probabilities are with the observed event rates. Other functions in the `gbm` package, the `rms` package (and others) can also produce calibration curves. The format for the function is very similar to the lift function: ``` trellis.par.set(caretTheme()) cal_obj <- calibration(Class ~ FDA + LDA + C5.0, data = lift_results, cuts = 13) plot(cal_obj, type = "l", auto.key = list(columns = 3, lines = TRUE, points = FALSE)) ``` There is also a `ggplot` method that shows the confidence intervals for the proportions inside of the subsets: ``` ggplot(cal_obj) ``` 
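As a closing sketch tying these summary functions back to model tuning, `prSummary` can be handed to `trainControl` so that `train` optimizes the area under the precision\-recall curve. The reuse of `lift_training` and the FDA model below are assumptions carried over from the lift example, and the `MLmetrics` package must be installed.

```
## Hedged sketch: tune on the precision-recall AUC reported by prSummary
pr_ctrl <- trainControl(method = "cv",
                        classProbs = TRUE,
                        summaryFunction = prSummary)
set.seed(825)
fda_pr <- train(Class ~ ., data = lift_training,
                method = "fda",
                metric = "AUC",
                tuneLength = 10,
                trControl = pr_ctrl)
```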
Machine Learning
topepo.github.io
https://topepo.github.io/caret/feature-selection-using-univariate-filters.html
19 Feature Selection using Univariate Filters ============================================= Contents * [Univariate Filters](feature-selection-using-univariate-filters.html#filter) * [Basic Syntax](feature-selection-using-univariate-filters.html#syntax) * [The Example](feature-selection-using-univariate-filters.html#fexample) 19\.1 Univariate Filters ------------------------ Another approach to feature selection is to pre\-screen the predictors using simple univariate statistical methods and then only use those that pass some criterion in the subsequent model steps. Similar to recursive selection, cross\-validation of the subsequent models will be biased as the remaining predictors have already been evaluated on the data set. Proper performance estimates via resampling should include the feature selection step. As an example, it has been suggested for classification models that predictors can be filtered by conducting some sort of *k*\-sample test (where *k* is the number of classes) to see if the mean of the predictor is different between the classes. Wilcoxon tests, *t*\-tests and ANOVA models are sometimes used. Predictors that have statistically significant differences between the classes are then used for modeling. The caret function `sbf` (for selection by filter) can be used to cross\-validate such feature selection schemes. Similar to `rfe`, functions can be passed into `sbf` for the computational components: univariate filtering, model fitting, prediction and performance summaries (details are given below). The function is applied to the entire training set and also to different resampled versions of the data set. From this, generalizable estimates of performance can be computed that properly take into account the feature selection step. Also, the results of the predictor filters can be tracked over resamples to understand the uncertainty in the filtering. 19\.2 Basic Syntax ------------------ Similar to the `rfe` function, the syntax for `sbf` is: ``` sbf(predictors, outcome, sbfControl = sbfControl(), ...) ## or sbf(formula, data, sbfControl = sbfControl(), ...) ``` In this case, the details are specified using the `sbfControl` function. Here, the argument `functions` dictates what the different components should do. This argument should have elements called `score`, `filter`, `fit`, `pred` and `summary`. ### 19\.2\.1 The `score` Function This function takes as inputs the predictors and the outcome in objects called `x` and `y`, respectively. By default, each predictor in `x` is passed to the `score` function individually. In this case, the function should return a single score. Alternatively, all the predictors can be exposed to the function using the `multivariate` argument to `sbfControl`. In this case, the output should be a named vector of scores where the names correspond to the column names of `x`. There are two built\-in functions called `anovaScores` and `gamScores`. `anovaScores` treats the outcome as the independent variable and the predictor as the outcome. In this way, the null hypothesis is that the mean predictor values are equal across the different classes. For regression, `gamScores` fits a smoothing spline in the predictor to the outcome using a generalized additive model and tests to see if there is any functional relationship between the two. In each function the p\-value is used as the score. ### 19\.2\.2 The `filter` Function This function takes as inputs the scores coming out of the `score` function (in an argument called `score`). 
The function also has the training set data as inputs (arguments are called `x` and `y`). The output should be a named logical vector where the names correspond to the column names of `x`. Columns with values of `TRUE` will be used in the subsequent model (a small custom `score`/`filter` example is sketched at the end of this chapter). ### 19\.2\.3 The `fit` Function The component is very similar to the `rfe`\-specific function described above. For `sbf`, there are no `first` or `last` arguments. The function should have arguments `x`, `y` and `...`. The data within `x` have been filtered using the `filter` function described above. The output of the `fit` function should be a fitted model. With some data sets, no predictors will survive the filter. In these cases, a model with predictors cannot be computed, but the lack of viable predictors should not be ignored in the final results. To account for this issue, [`caret`](http://cran.r-project.org/web/packages/caret/index.html) contains a model function called `nullModel` that fits a simple model that is independent of any of the predictors. For problems where the outcome is numeric, the function predicts every sample using the simple mean of the training set outcomes. For classification, the model predicts all samples using the most prevalent class in the training data. This function can be used in the `fit` component function to “error\-trap” cases where no predictors are selected. For example, there are several built\-in functions for some models. The object `rfSBF` is a set of functions that may be useful for fitting random forest models with filtering. The `fit` function here uses `nullModel` to check for cases with no predictors: ``` rfSBF$fit ``` ``` ## function (x, y, ...) ## { ## if (ncol(x) > 0) { ## loadNamespace("randomForest") ## randomForest::randomForest(x, y, ...) ## } ## else nullModel(y = y) ## } ## <bytecode: 0x7fa6cc2db540> ## <environment: namespace:caret> ``` ### 19\.2\.4 The `summary` and `pred` Functions The `summary` function is used to calculate model performance on held\-out samples. The `pred` function is used to predict new samples using the current predictor set. The arguments and outputs for these two functions are identical to the `summary` and `pred` functions discussed in previous sections. 19\.3 The Example ----------------- Returning to the example from (Friedman, 1991\), we can fit another random forest model with the predictors pre\-filtered using the generalized additive model approach described previously. ``` filterCtrl <- sbfControl(functions = rfSBF, method = "repeatedcv", repeats = 5) set.seed(10) rfWithFilter <- sbf(x, y, sbfControl = filterCtrl) rfWithFilter ``` ``` ## ## Selection By Filter ## ## Outer resampling method: Cross-Validated (10 fold, repeated 5 times) ## ## Resampling performance: ## ## RMSE Rsquared MAE RMSESD RsquaredSD MAESD ## 3.407 0.5589 2.86 0.5309 0.1782 0.5361 ## ## Using the training set, 6 variables were selected: ## real2, real4, real5, bogus2, bogus17... ## ## During resampling, the top 5 selected variables (out of a possible 13): ## real2 (100%), real4 (100%), real5 (100%), bogus44 (76%), bogus2 (44%) ## ## On average, 5.5 variables were selected (min = 4, max = 8) ``` In this case, the training set indicated that 6 predictors should be used in the random forest model, but the resampling results indicate that there is some variation in this number. Some of the informative predictors are used, but a few others are erroneously retained. Similar to `rfe`, there are methods for `predictors`, `densityplot`, `histogram` and `varImp`. 
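To make the `score` and `filter` components above more concrete, below is a hedged sketch of a custom filter layered on top of the built\-in `rfSBF` functions. The two\-sample *t*\-test score and the 0.05 cutoff are illustrative choices for a two\-class outcome, not values used elsewhere in this chapter.

```
## Hedged sketch: score each predictor with a t-test p-value (two-class y)
## and keep only predictors whose p-values are at or below 0.05
pScore <- function(x, y) {
  t.test(x ~ y)$p.value
}
pFilter <- function(score, x, y) {
  score <= 0.05
}
customSBF <- rfSBF            # start from the built-in random forest functions
customSBF$score <- pScore
customSBF$filter <- pFilter
customCtrl <- sbfControl(functions = customSBF,
                         method = "repeatedcv",
                         repeats = 5)
```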
Machine Learning
topepo.github.io
https://topepo.github.io/caret/recursive-feature-elimination.html
20 Recursive Feature Elimination ================================ Contents * [Feature Selection Using Search Algorithms](#search) * [Resampling and External Validation](model-training-and-tuning.html#resamp) * [Recursive Feature Elimination via `caret`](recursive-feature-elimination.html#rfe) * [An Example](recursive-feature-elimination.html#rfeexample) * [Helper Functions](recursive-feature-elimination.html#rfehelpers) * [The Example](recursive-feature-elimination.html#rfeexample2) * [Using a Recipe](recursive-feature-elimination.html#rferecipes) 20\.1 Backwards Selection ------------------------- First, the algorithm fits the model to all predictors. Each predictor is ranked using its importance to the model. Let *S* be a sequence of ordered numbers which are candidate values for the number of predictors to retain (*S1* \> *S2*, …). At each iteration of feature selection, the *Si* top ranked predictors are retained, the model is refit and performance is assessed. The value of *Si* with the best performance is determined and the top *Si* predictors are used to fit the final model. Algorithm 1 has a more complete definition. The algorithm has an optional step (line 1\.9\) where the predictor rankings are recomputed using the model built on the reduced feature set. [Svetnik *et al* (2004\)](http://rd.springer.com/chapter/10.1007%2F978-3-540-25966-4_33) showed that, for random forest models, there was a decrease in performance when the rankings were re\-computed at every step. However, in other cases when the initial rankings are not good (e.g. linear models with highly collinear predictors), re\-calculation can slightly improve performance. One potential issue is over\-fitting to the predictor set, such that the wrapper procedure could focus on nuances of the training data that are not found in future samples (i.e. over\-fitting to predictors and samples). For example, suppose a very large number of uninformative predictors were collected and one such predictor randomly correlated with the outcome. The RFE algorithm would give a good rank to this variable and the prediction error (on the same data set) would be lowered. It would take a different test/validation set to find out that this predictor was uninformative. This was referred to as “selection bias” by [Ambroise and McLachlan (2002\)](http://www.pnas.org/content/99/10/6562.short). In the current RFE algorithm, the training data is being used for at least three purposes: predictor selection, model fitting and performance evaluation. Unless the number of samples is large, especially in relation to the number of variables, one static training set may not be able to fulfill these needs. 20\.2 Resampling and External Validation ---------------------------------------- Since feature selection is part of the model building process, resampling methods (e.g. cross\-validation, the bootstrap) should factor in the variability caused by feature selection when calculating performance. For example, the RFE procedure in Algorithm 1 can estimate the model performance on line 1\.7, which occurs during the selection process. [Ambroise and McLachlan (2002\)](http://www.pnas.org/content/99/10/6562.short) and [Svetnik *et al* (2004\)](http://rd.springer.com/chapter/10.1007%2F978-3-540-25966-4_33) showed that improper use of resampling to measure performance will result in models that perform poorly on new samples. To get performance estimates that incorporate the variation due to feature selection, it is suggested that the steps in Algorithm 1 be encapsulated inside an outer layer of resampling (e.g. 
20\.2 Resampling and External Validation ---------------------------------------- Since feature selection is part of the model building process, resampling methods (e.g. cross\-validation, the bootstrap) should factor in the variability caused by feature selection when calculating performance. For example, the RFE procedure in Algorithm 1 can estimate the model performance on line 1\.7, which occurs during the selection process. [Ambroise and McLachlan (2002\)](http://www.pnas.org/content/99/10/6562.short) and [Svetnik *et al* (2004\)](http://rd.springer.com/chapter/10.1007%2F978-3-540-25966-4_33) showed that improper use of resampling to measure performance will result in models that perform poorly on new samples. To get performance estimates that incorporate the variation due to feature selection, it is suggested that the steps in Algorithm 1 be encapsulated inside an outer layer of resampling (e.g. 10\-fold cross\-validation). Algorithm 2 shows a version of the algorithm that uses resampling. While this will provide better estimates of performance, it is more computationally burdensome. For users with access to machines with multiple processors, the first `For` loop in Algorithm 2 (line 2\.1\) can be easily parallelized. Another complication to using resampling is that multiple lists of the “best” predictors are generated at each iteration. At first this may seem like a disadvantage, but it does provide a more probabilistic assessment of predictor importance than a ranking based on a single fixed data set. At the end of the algorithm, a consensus ranking can be used to determine the best predictors to retain. 20\.3 Recursive Feature Elimination via [`caret`](http://cran.r-project.org/web/packages/caret/index.html) ---------------------------------------------------------------------------------------------------------- In [`caret`](http://cran.r-project.org/web/packages/caret/index.html), Algorithm 1 is implemented by the function `rfeIter`. The resampling\-based Algorithm 2 is in the `rfe` function. Given the potential selection bias issues, this document focuses on `rfe`. There are several arguments: * `x`, a matrix or data frame of predictor variables * `y`, a vector (numeric or factor) of outcomes * `sizes`, an integer vector for the specific subset sizes that should be tested (which need not include `ncol(x)`) * `rfeControl`, a list of options that can be used to specify the model and the methods for prediction, ranking etc. For a specific model, a set of functions must be specified in `rfeControl$functions`. The sections below have descriptions of these sub\-functions. There are a number of pre\-defined sets of functions for several models, including: linear regression (in the object `lmFuncs`), random forests (`rfFuncs`), naive Bayes (`nbFuncs`), bagged trees (`treebagFuncs`) and functions that can be used with [`caret`](http://cran.r-project.org/web/packages/caret/index.html)’s `train` function (`caretFuncs`). The latter is useful if the model has tuning parameters that must be determined at each iteration. 20\.4 An Example ---------------- ``` library(caret) library(mlbench) library(Hmisc) library(randomForest) ``` To test the algorithm, the “Friedman 1” benchmark (Friedman, 1991\) was used. There are five informative variables generated by the equation \\\[ y \= 10 \\sin(\\pi x\_1 x\_2\) \+ 20 (x\_3 \- 0\.5\)^2 \+ 10 x\_4 \+ 5 x\_5 \+ N(0, \\sigma^2\) \\] In the simulation used here: ``` n <- 100 p <- 40 sigma <- 1 set.seed(1) sim <- mlbench.friedman1(n, sd = sigma) colnames(sim$x) <- c(paste("real", 1:5, sep = ""), paste("bogus", 1:5, sep = "")) bogus <- matrix(rnorm(n * p), nrow = n) colnames(bogus) <- paste("bogus", 5+(1:ncol(bogus)), sep = "") x <- cbind(sim$x, bogus) y <- sim$y ``` Of the 50 predictors, there are 45 pure noise variables: 5 are uniform on \\[0, 1\\] and 40 are random univariate standard normals. The predictors are centered and scaled: ``` normalization <- preProcess(x) x <- predict(normalization, x) x <- as.data.frame(x) subsets <- c(1:5, 10, 15, 20, 25) ``` The simulation will fit models with subset sizes of 25, 20, 15, 10, 5, 4, 3, 2, 1\. As previously mentioned, to fit linear models, the `lmFuncs` set of functions can be used. To do this, a control object is created with the `rfeControl` function. We also specify that repeated 10\-fold cross\-validation should be used in line 2\.1 of Algorithm 2\. The number of folds can be changed via the `number` argument to `rfeControl` (defaults to 10\).
The `verbose` option prevents copious amounts of output from being produced. ``` set.seed(10) ctrl <- rfeControl(functions = lmFuncs, method = "repeatedcv", repeats = 5, verbose = FALSE) lmProfile <- rfe(x, y, sizes = subsets, rfeControl = ctrl) lmProfile ``` ``` ## ## Recursive feature selection ## ## Outer resampling method: Cross-Validated (10 fold, repeated 5 times) ## ## Resampling performance over subset size: ## ## Variables RMSE Rsquared MAE RMSESD RsquaredSD MAESD Selected ## 1 3.950 0.3790 3.381 0.6379 0.2149 0.5867 ## 2 3.552 0.4985 3.000 0.5820 0.2007 0.5807 ## 3 3.069 0.6107 2.593 0.6022 0.1582 0.5588 ## 4 2.889 0.6658 2.319 0.8208 0.1969 0.5852 * ## 5 2.949 0.6566 2.349 0.8012 0.1856 0.5599 ## 10 3.252 0.5965 2.628 0.8256 0.1781 0.6016 ## 15 3.405 0.5712 2.709 0.8862 0.1985 0.6603 ## 20 3.514 0.5562 2.799 0.9162 0.2048 0.7334 ## 25 3.700 0.5313 2.987 0.9095 0.1972 0.7500 ## 50 4.067 0.4756 3.268 0.8819 0.1908 0.7315 ## ## The top 4 variables (out of 4): ## real4, real5, real2, real1 ``` The output shows that the best subset size was estimated to be 4 predictors. This set includes informative variables but did not include all of them. The `predictors` function can be used to get a text string of variable names that were picked in the final model. The `lmProfile` is a list of class `"rfe"` that contains an object `fit` that is the final linear model with the remaining terms. The model can be used to get predictions for future or test samples. ``` predictors(lmProfile) ``` ``` ## [1] "real4" "real5" "real2" "real1" ``` ``` lmProfile$fit ``` ``` ## ## Call: ## lm(formula = y ~ ., data = tmp) ## ## Coefficients: ## (Intercept) real4 real5 real2 real1 ## 14.613 2.857 1.965 1.625 1.359 ``` ``` head(lmProfile$resample) ``` ``` ## Variables RMSE Rsquared MAE Resample ## 4 4 1.923763 0.9142474 1.640438 Fold01.Rep1 ## 14 4 2.212266 0.8403133 1.845878 Fold02.Rep1 ## 24 4 4.074172 0.5052766 3.095980 Fold03.Rep1 ## 34 4 3.938895 0.3250410 2.992700 Fold04.Rep1 ## 44 4 3.311426 0.6652186 2.195083 Fold05.Rep1 ## 54 4 2.286320 0.6974626 1.840118 Fold06.Rep1 ``` There are also several plot methods to visualize the results. `plot(lmProfile)` produces the performance profile across different subset sizes, as shown in the figure below. ``` trellis.par.set(caretTheme()) plot(lmProfile, type = c("g", "o")) ``` Also, the resampling results are stored in the sub\-object `lmProfile$resample` and can be used with several lattice functions. Univariate lattice functions (`densityplot`, `histogram`) can be used to plot the resampling distribution while bivariate functions (`xyplot`, `stripplot`) can be used to plot the distributions for different subset sizes. In the latter case, the option `returnResamp = "all"` in `rfeControl` can be used to save all the resampling results. Example images are shown below for the random forest model.
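As a quick illustration with the linear\-model results above, the stored resamples can also be plotted directly with plain lattice calls. This is only a sketch: `lmProfile` kept just the resamples for the selected subset size, so the plot shows a single distribution (a different `returnResamp` setting would retain all of them).

```
## Sketch: resampling distribution of RMSE for the selected subset size
library(lattice)
densityplot(~ RMSE, data = lmProfile$resample, xlab = "RMSE (CV estimates)")
```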
20\.5 Helper Functions ---------------------- To use feature elimination for an arbitrary model, a set of functions must be passed to `rfe` for each of the steps in Algorithm 2\. This section defines those functions and uses the existing random forest functions as an illustrative example. [`caret`](http://cran.r-project.org/web/packages/caret/index.html) contains a list called `rfFuncs`, but this document will use a simpler version that is better for illustrating the ideas. The set of simplified functions used here is called `rfRFE`. ``` rfRFE <- list(summary = defaultSummary, fit = function(x, y, first, last, ...){ library(randomForest) randomForest(x, y, importance = first, ...) }, pred = function(object, x) predict(object, x), rank = function(object, x, y) { vimp <- varImp(object) vimp <- vimp[order(vimp$Overall,decreasing = TRUE),,drop = FALSE] vimp$var <- rownames(vimp) vimp }, selectSize = pickSizeBest, selectVar = pickVars) ``` ### 20\.5\.1 The `summary` Function The `summary` function takes the observed and predicted values and computes one or more performance metrics (see line 2\.14\). The input is a data frame with columns `obs` and `pred`. The output should be a named vector of numeric variables. Note that the `metric` argument of the `rfe` function should reference one of the names of the output of `summary`. The example function is: ``` rfRFE$summary ``` ``` ## function (data, lev = NULL, model = NULL) ## { ## if (is.character(data$obs)) ## data$obs <- factor(data$obs, levels = lev) ## postResample(data[, "pred"], data[, "obs"]) ## } ## <bytecode: 0x7fa726eefe08> ## <environment: namespace:caret> ``` Two functions in [`caret`](http://cran.r-project.org/web/packages/caret/index.html) that can be used as the summary function are `defaultSummary` and `twoClassSummary` (for classification problems with two classes). ### 20\.5\.2 The `fit` Function This function builds the model based on the current data set (lines 2\.3, 2\.9 and 2\.17\). The arguments for the function must be: * `x`: the current training set of predictor data with the appropriate subset of variables * `y`: the current outcome data (either a numeric or factor vector) * `first`: a single logical value for whether the current predictor set has all possible variables (e.g. line 2\.3\) * `last`: similar to `first`, but `TRUE` when the last model is fit with the final subset size and predictors. (line 2\.17\) * `...`: optional arguments to pass to the fit function in the call to `rfe` The function should return a model object that can be used to generate predictions. For random forest, the fit function is simple: ``` rfRFE$fit ``` ``` ## function(x, y, first, last, ...){ ## library(randomForest) ## randomForest(x, y, importance = first, ...) ## } ``` For feature selection without re\-ranking at each iteration, the random forest variable importances only need to be computed on the first iteration, when all of the predictors are in the model. This can be accomplished using `importance = first`. ### 20\.5\.3 The `pred` Function This function returns a vector of predictions (numeric or factors) from the current model (lines 2\.4 and 2\.10\). The input arguments must be: * `object`: the model generated by the `fit` function * `x`: the current set of predictors for the held\-back samples For random forests, the function is a simple wrapper for the predict function: ``` rfRFE$pred ``` ``` ## function(object, x) predict(object, x) ``` For classification, it is probably a good idea to ensure that the resulting factor of predictions has the same levels as the input data. ### 20\.5\.4 The `rank` Function This function is used to return the predictors in the order of the most important to the least important (lines 2\.5 and 2\.11\). Inputs are: * `object`: the model generated by the `fit` function * `x`: the current set of predictors for the training samples * `y`: the current training outcomes The function should return a data frame with a column called `var` that has the current variable names.
The first row should be the most important predictor, and so on. Other columns can be included in the output and will be returned in the final `rfe` object. For random forests, the function below uses [`caret`](http://cran.r-project.org/web/packages/caret/index.html)’s `varImp` function to extract the random forest importances and orders them. For classification, `randomForest` will produce a column of importances for each class. In this case, the default ranking function orders the predictors by the average importance across the classes. ``` rfRFE$rank ``` ``` ## function(object, x, y) { ## vimp <- varImp(object) ## vimp <- vimp[order(vimp$Overall,decreasing = TRUE),,drop = FALSE] ## vimp$var <- rownames(vimp) ## vimp ## } ``` ### 20\.5\.5 The `selectSize` Function This function determines the optimal number of predictors based on the resampling output (line 2\.15\). Inputs for the function are: * `x`: a matrix with columns for the performance metrics and the number of variables, called `Variables` * `metric`: a character string of the performance measure to optimize (e.g. RMSE, Accuracy) * `maximize`: a single logical for whether the metric should be maximized This function should return an integer corresponding to the optimal subset size. [`caret`](http://cran.r-project.org/web/packages/caret/index.html) comes with two example functions for this purpose: `pickSizeBest` and `pickSizeTolerance`. The former simply selects the subset size that has the best value. The latter takes into account the whole profile and tries to pick a subset size that is small without sacrificing too much performance. For example, suppose we have computed the RMSE over a series of subset sizes: ``` example <- data.frame(RMSE = c(3.215, 2.819, 2.414, 2.144, 2.014, 1.997, 2.025, 1.987, 1.971, 2.055, 1.935, 1.999, 2.047, 2.002, 1.895, 2.018), Variables = 1:16) ``` These are depicted in the figure below. The solid circle identifies the subset size with the absolute smallest RMSE. However, there are many smaller subsets that produce approximately the same performance but with fewer predictors. In this case, we might be able to accept a slightly larger error for fewer predictors. `pickSizeTolerance` determines the absolute best value and then the percent difference of the other points relative to this value. In the case of RMSE, this would be \\\[ tolerance \= 100 \\times \\frac{RMSE \- RMSE\_{opt}}{RMSE\_{opt}} \\] where *RMSE\_{opt}* is the absolute best error rate. These “tolerance” values are plotted in the bottom panel. The solid triangle is the smallest subset size that is within 10% of the optimal value. This approach can produce good results for many of the tree\-based models, such as random forest, where there is a plateau of good performance for larger subset sizes. For trees, this is usually because unimportant variables are infrequently used in splits and do not significantly affect performance.
``` ## Find the row with the absolute smallest RMSE smallest <- pickSizeBest(example, metric = "RMSE", maximize = FALSE) smallest ``` ``` ## [1] 15 ``` ``` ## Now one that is within 10% of the smallest within10Pct <- pickSizeTolerance(example, metric = "RMSE", tol = 10, maximize = FALSE) within10Pct ``` ``` ## [1] 5 ``` ``` minRMSE <- min(example$RMSE) example$Tolerance <- (example$RMSE - minRMSE)/minRMSE * 100 ## Plot the profile and the subsets selected using the ## two different criteria par(mfrow = c(2, 1), mar = c(3, 4, 1, 2)) plot(example$Variables[-c(smallest, within10Pct)], example$RMSE[-c(smallest, within10Pct)], ylim = extendrange(example$RMSE), ylab = "RMSE", xlab = "Variables") points(example$Variables[smallest], example$RMSE[smallest], pch = 16, cex= 1.3) points(example$Variables[within10Pct], example$RMSE[within10Pct], pch = 17, cex= 1.3) with(example, plot(Variables, Tolerance)) abline(h = 10, lty = 2, col = "darkgrey") ``` ### 20\.5\.6 The `selectVar` Function After the optimal subset size is determined, this function will be used to calculate the best rankings for each variable across all the resampling iterations (line 2\.16\). Inputs for the function are: * `y`: a list of variable importances for each resampling iteration and each subset size (generated by the user\-defined `rank` function). In the example, for each of the cross\-validation groups the output of the rank function is saved for each of the 10 subset sizes (including the original subset). If the rankings are not recomputed at each iteration, the values will be the same within each cross\-validation iteration. * `size`: the integer returned by the `selectSize` function This function should return a character vector of predictor names (of length `size`) in the order of most important to least important. For random forests, only the first importance calculation (line 2\.5\) is used since these are the rankings on the full set of predictors. These importances are averaged and the top predictors are returned. ``` rfRFE$selectVar ``` ``` ## function (y, size) ## { ## finalImp <- ddply(y[, c("Overall", "var")], .(var), function(x) mean(x$Overall, ## na.rm = TRUE)) ## names(finalImp)[2] <- "Overall" ## finalImp <- finalImp[order(finalImp$Overall, decreasing = TRUE), ## ] ## as.character(finalImp$var[1:size]) ## } ## <bytecode: 0x7fa6f06cdc18> ## <environment: namespace:caret> ``` Note that if the predictor rankings are recomputed at each iteration (line 2\.11\) the user will need to write their own selection function to use the other ranks. 20\.6 The Example ----------------- For random forest, we fit the same series of model sizes as the linear model. The option to save all the resampling results across subset sizes was changed for this model, and these results are used to show the lattice plot function capabilities in the figures below.
``` ctrl$functions <- rfRFE ctrl$returnResamp <- "all" set.seed(10) rfProfile <- rfe(x, y, sizes = subsets, rfeControl = ctrl) rfProfile ``` ``` ## ## Recursive feature selection ## ## Outer resampling method: Cross-Validated (10 fold, repeated 5 times) ## ## Resampling performance over subset size: ## ## Variables RMSE Rsquared MAE RMSESD RsquaredSD MAESD Selected ## 1 4.667 0.2159 3.907 0.8779 0.20591 0.7889 ## 2 3.801 0.4082 3.225 0.5841 0.21832 0.5858 ## 3 3.157 0.6005 2.650 0.5302 0.14847 0.5156 ## 4 2.696 0.7646 2.277 0.4044 0.08625 0.3962 * ## 5 2.859 0.7553 2.385 0.4577 0.10529 0.4382 ## 10 3.061 0.7184 2.570 0.4378 0.13898 0.4106 ## 15 3.170 0.7035 2.671 0.4423 0.15140 0.4110 ## 20 3.327 0.6826 2.812 0.4469 0.16074 0.4117 ## 25 3.356 0.6729 2.843 0.4634 0.16947 0.4324 ## 50 3.525 0.6437 3.011 0.4597 0.17207 0.4196 ## ## The top 4 variables (out of 4): ## real4, real5, real2, real1 ``` The resampling profile can be visualized along with plots of the individual resampling results: ``` trellis.par.set(caretTheme()) plot1 <- plot(rfProfile, type = c("g", "o")) plot2 <- plot(rfProfile, type = c("g", "o"), metric = "Rsquared") print(plot1, split=c(1,1,1,2), more=TRUE) print(plot2, split=c(1,2,1,2)) ``` ``` plot1 <- xyplot(rfProfile, type = c("g", "p", "smooth"), ylab = "RMSE CV Estimates") plot2 <- densityplot(rfProfile, subset = Variables < 5, adjust = 1.25, as.table = TRUE, xlab = "RMSE CV Estimates", pch = "|") print(plot1, split=c(1,1,1,2), more=TRUE) print(plot2, split=c(1,2,1,2)) ``` 20\.7 Using a Recipe -------------------- A recipe can be used to specify the model terms and any preprocessing that may be needed. Instead of using ``` rfe(x = predictors, y = outcome) ``` an existing recipe can be used along with a data frame containing the predictors and outcome: ``` rfe(recipe, data) ``` The recipe is prepped within each resample in the same manner that `train` executes the `preProc` option. However, since a recipe can do a variety of different operations, there are some potentially complicating factors. The main pitfall is that the recipe can involve the creation and deletion of predictors. There are a number of steps that can reduce the number of predictors, such as the ones for pooling factors into an “other” category, PCA signal extraction, as well as filters for near\-zero variance predictors and highly correlated predictors. For this reason, it may be difficult to know how many predictors are available for the full model. Also, this number will likely vary between iterations of resampling. To illustrate, let’s use the blood\-brain barrier data where there is a high degree of correlation between the predictors. 
A simple recipe could be ``` library(recipes) library(tidyverse) data(BloodBrain) # combine into a single data frame bbb <- bbbDescr bbb$y <- logBBB bbb_rec <- recipe(y ~ ., data = bbb) %>% step_center(all_predictors()) %>% step_scale(all_predictors()) %>% step_nzv(all_predictors()) %>% step_pca(all_predictors(), threshold = .95) ``` Originally, there are 134 predictors and, for the entire data set, the processed version has: ``` prep(bbb_rec, training = bbb, retain = TRUE) %>% juice(all_predictors()) %>% ncol() ``` ``` ## [1] 28 ``` When calling `rfe`, let’s start the maximum subset size at 28: ``` bbb_ctrl <- rfeControl( method = "repeatedcv", repeats = 5, functions = lmFuncs, returnResamp = "all" ) set.seed(36) lm_rfe <- rfe( bbb_rec, data = bbb, sizes = 2:28, rfeControl = bbb_ctrl ) ggplot(lm_rfe) + theme_bw() ``` What was the distribution of the maximum number of terms: ``` term_dist <- lm_rfe$resample %>% group_by(Resample) %>% dplyr::summarize(max_terms = max(Variables)) table(term_dist$max_terms) ``` ``` ## ## 27 28 29 ## 7 40 3 ``` So… 28ish. Suppose that we used `sizes = 2:ncol(bbbDescr)` when calling `rfe`. A warning is issued that: ``` Warning message: For the training set, the recipe generated fewer predictors than the 130 expected in `sizes` and the number of subsets will be truncated to be <= 28 ```
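As with the non\-recipe interface, the usual helper functions apply to the returned object. A small sketch follows, with the untested assumption that `predict` on a recipe\-based `rfe` object preps new data automatically; `head(bbb)` simply stands in for new samples.

```
## Terms retained in the final model; with this recipe these should be
## principal component names, since step_pca replaced the raw predictors
predictors(lm_rfe)

## Predictions for (pretend) new samples, assuming the prepped recipe
## is applied to the new data before the final linear model is used
predict(lm_rfe, head(bbb))
```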
Machine Learning
topepo.github.io
https://topepo.github.io/caret/feature-selection-using-genetic-algorithms.html
21 Feature Selection using Genetic Algorithms ============================================= Contents * [Genetic Algorithms](feature-selection-using-genetic-algorithms.html#ga) * [Internal and External Performance Estimates](feature-selection-using-genetic-algorithms.html#performance) * [Basic Syntax](feature-selection-using-univariate-filters.html#syntax) * [Example](feature-selection-using-genetic-algorithms.html#gaexample) * [Customizing the Search](model-training-and-tuning.html#custom) * [The Example Revisited](feature-selection-using-genetic-algorithms.html#example2) * [Using Recipes](feature-selection-using-genetic-algorithms.html#garecipes) 21\.1 Genetic Algorithms ------------------------ Genetic algorithms (GAs) mimic Darwinian forces of natural selection to find optimal values of some function ([Mitchell, 1998](http://mitpress.mit.edu/books/introduction-genetic-algorithms)). An initial set of candidate solutions is created and their corresponding *fitness* values are calculated (where larger values are better). This set of solutions is referred to as a population and each solution as an *individual*. The individuals with the best fitness values are combined randomly to produce offspring which make up the next population. To do so, individuals are selected and undergo cross\-over (mimicking genetic reproduction) and are also subject to random mutations. This process is repeated again and again and many generations are produced (i.e. iterations of the search procedure) that should create better and better solutions. For feature selection, the individuals are subsets of predictors that are encoded as binary; a feature is either included or not in the subset. The fitness values are some measure of model performance, such as the RMSE or classification accuracy. One issue with using GAs for feature selection is that the optimization process can be very aggressive and there is potential for the GA to overfit to the predictors (much like the previous discussion for RFE). 21\.2 Internal and External Performance Estimates ------------------------------------------------- The genetic algorithm code in [`caret`](http://cran.r-project.org/package=caret) conducts the search of the feature space repeatedly within resampling iterations. First, the training data are split by whatever resampling method was specified in the control function. For example, if 10\-fold cross\-validation is selected, the entire genetic algorithm is conducted 10 separate times. For the first fold, nine tenths of the data are used in the search while the remaining tenth is used to estimate the external performance since these data points were not used in the search. During the genetic algorithm, a measure of fitness is needed to guide the search. This is the internal measure of performance. During the search, the data that are available are the instances selected by the top\-level resampling (e.g. the nine tenths mentioned above). A common approach is to conduct another resampling procedure. Another option is to use a holdout set of samples to determine the internal estimate of performance (see the holdout argument of the control function). While this is faster, it is more likely to cause overfitting of the features and should only be used when a large amount of training data are available. Yet another idea is to use a penalized metric (such as the AIC statistic) but this may not exist for some metrics (e.g. the area under the ROC curve).
The internal estimates of performance will eventually overfit the subsets to the data. However, since the external estimate is not used by the search, it is able to make better assessments of overfitting. After resampling, the `gafs` function determines the optimal number of generations for the GA. Finally, the entire data set is used in the last execution of the genetic algorithm search and the final model is built on the predictor subset that is associated with the optimal number of generations determined by resampling (although the update function can be used to manually set the number of generations). 21\.3 Basic Syntax ------------------ The most basic usage of the function is: ``` obj <- gafs(x = predictors, y = outcome, iters = 100) ``` where * `x`: a data frame or matrix of predictor values * `y`: a factor or numeric vector of outcomes * `iters`: the number of generations for the GA This isn’t very specific. All of the action is in the control function. That can be used to specify the model to be fit, how predictions are made and summarized as well as the genetic operations. Suppose that we want to fit a linear regression model. To do this, we can use `train` as an interface and pass arguments to that function through `gafs`: ``` ctrl <- gafsControl(functions = caretGA) obj <- gafs(x = predictors, y = outcome, iters = 100, gafsControl = ctrl, ## Now pass options to `train` method = "lm") ``` Other options, such as `preProcess`, can be passed in as well. Some important options to `gafsControl` are: * `method`, `number`, `repeats`, `index`, `indexOut`, etc: options similar to those for [`train`](http://topepo.github.io/caret/model-training-and-tuning.html#control) to control resampling. * `metric`: this is similar to [`train`](http://topepo.github.io/caret/model-training-and-tuning.html#control)’s option but, in this case, the value should be a named vector with values for the internal and external metrics. If none are specified, the first value returned by the summary functions (see details below) is used and a warning is issued. A similar two\-element vector for the option `maximize` is also required. See the [last example here](feature-selection-using-genetic-algorithms.html#example2) for an illustration. * `holdout`: this is a number between `[0, 1)` that can be used to hold out samples for computing the internal fitness value. Note that this is independent of the external resampling step. Suppose 10\-fold CV is being used. Within a resampling iteration, `holdout` can be used to sample an additional proportion of the 90% resampled data to use for estimating fitness. This may not be a good idea unless you have a very large training set and want to avoid an internal resampling procedure to estimate fitness. * `allowParallel` and `genParallel`: these are logicals to control where parallel processing should be used (if at all). The former will parallelize the external resampling while the latter parallelizes the fitness calculations within a generation. `allowParallel` will almost always be more advantageous. There are a few built\-in sets of functions to use with `gafs`: `caretGA`, `rfGA`, and `treebagGA`. The first is a simple interface to `train`. When using this, as shown above, arguments can be passed to `train` using the `...` structure and the resampling estimates of performance can be used as the internal fitness value; a fuller sketch of this is shown below.
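For instance, a sketch with placeholder settings (the resampling choices, model, and data names here are illustrative, not taken from the example above) could pass tuning options straight through to `train` and spell out the two\-element `metric`/`maximize` vectors:

```
## Sketch: caretGA computes fitness with train(), so any train() arguments
## (model type, tuning, inner resampling) can be passed through gafs()
ctrl <- gafsControl(functions = caretGA,
                    method = "cv", number = 10,
                    metric   = c(internal = "RMSE", external = "RMSE"),
                    maximize = c(internal = FALSE,  external = FALSE))

obj <- gafs(x = predictors, y = outcome,
            iters = 20,
            gafsControl = ctrl,
            ## arguments below are passed to train():
            method = "glmnet",
            tuneLength = 5,
            trControl = trainControl(method = "cv", number = 5))
```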
The functions provided by `rfGA` and `treebagGA` avoid using `train` and their internal estimates of fitness come from using the out\-of\-bag estimates generated from the model.

The GA implementation in [`caret`](http://cran.r-project.org/web/packages/caret/index.html) uses the underlying code from the [`GA`](http://cran.r-project.org/package=GA) package ([Scrucca, 2013](http://www.jstatsoft.org/v53/i04/)).

21\.4 Genetic Algorithm Example
-------------------------------

Using the example from the [previous page](recursive-feature-elimination.html#example) where there are five real predictors and 45 noise predictors:

```
library(mlbench)
n <- 100
p <- 40
sigma <- 1
set.seed(1)
sim <- mlbench.friedman1(n, sd = sigma)
colnames(sim$x) <- c(paste("real", 1:5, sep = ""),
                     paste("bogus", 1:5, sep = ""))
bogus <- matrix(rnorm(n * p), nrow = n)
colnames(bogus) <- paste("bogus", 5 + (1:ncol(bogus)), sep = "")
x <- cbind(sim$x, bogus)
y <- sim$y

normalization <- preProcess(x)
x <- predict(normalization, x)
x <- as.data.frame(x)
```

We’ll fit a random forest model, use the out\-of\-bag RMSE estimate as the internal performance metric, and use the same repeated 10\-fold cross\-validation process used with the earlier searches for the external resampling. We’ll use the built\-in `rfGA` object for this purpose. The default GA operators will be used and the algorithm will run for 200 generations.

```
ga_ctrl <- gafsControl(functions = rfGA,
                       method = "repeatedcv",
                       repeats = 5)

## Use the same random number seed as the RFE process
## so that the same CV folds are used for the external
## resampling.
set.seed(10)
rf_ga <- gafs(x = x, y = y,
              iters = 200,
              gafsControl = ga_ctrl)
rf_ga
```

```
## 
## Genetic Algorithm Feature Selection
## 
## 100 samples
## 50 predictors
## 
## Maximum generations: 200 
## Population per generation: 50 
## Crossover probability: 0.8 
## Mutation probability: 0.1 
## Elitism: 0 
## 
## Internal performance values: RMSE, Rsquared
## Subset selection driven to minimize internal RMSE 
## 
## External performance values: RMSE, Rsquared, MAE
## Best iteration chose by minimizing external RMSE 
## External resampling method: Cross-Validated (10 fold, repeated 5 times) 
## 
## During resampling:
##   * the top 5 selected variables (out of a possible 50):
##     real1 (100%), real2 (100%), real4 (100%), real5 (100%), real3 (92%)
##   * on average, 9.3 variables were selected (min = 6, max = 15)
## 
## In the final search using the entire training set:
##   * 12 features selected at iteration 195 including:
##     real1, real2, real3, real4, real5 ... 
##   * external performance at this iteration is
## 
##     RMSE Rsquared      MAE 
##   2.8056   0.7607   2.3640
```

With 5 repeats of 10\-fold cross\-validation, the GA was executed 50 times. The average external performance is calculated across resamples and these results are used to determine the optimal number of iterations for the final GA, to avoid over\-fitting. Across the resamples, an average of 9\.3 predictors were selected at the end of each of the 50 runs.

The `plot` function is used to monitor the average of the internal out\-of\-bag RMSE estimates as well as the average of the external performance estimates calculated from the 50 out\-of\-sample predictions. By default, this function uses the [`ggplot2`](http://cran.r-project.org/package=ggplot2) package. A black and white theme can be “added” to the output object:

```
plot(rf_ga) + theme_bw()
```

Based on these results, the generation associated with the best external RMSE estimate was 195, with a corresponding RMSE estimate of 2\.81\.
Using the entire training set, the final GA is conducted and, at generation 195, 12 predictors were selected: real1, real2, real3, real4, real5, bogus3, bogus5, bogus7, bogus8, bogus14, bogus17, bogus29\. The random forest model with these predictors is trained using the entire training set, and this is the model that is used when `predict.gafs` is executed.

**Note:** the correlation between the internal and external fitness values is somewhat atypical for most real\-world problems. This is a function of the nature of the simulation (a small number of uncorrelated informative predictors) and the fact that the OOB error estimate from random forest is a product of hundreds of trees. Your mileage may vary.

21\.5 Customizing the Search
----------------------------

### 21\.5\.1 The `fit` Function

This function builds the model based on a proposed current subset. The arguments for the function must be:

* `x`: the current training set of predictor data with the appropriate subset of variables
* `y`: the current outcome data (either a numeric or factor vector)
* `lev`: a character vector with the class levels (or `NULL` for regression problems)
* `last`: a logical that is `TRUE` when the final GA search is conducted on the entire data set
* `...`: optional arguments to pass to the fit function in the call to `gafs`

The function should return a model object that can be used to generate predictions. For random forest, the fit function is simple:

```
rfGA$fit
```

```
## function (x, y, lev = NULL, last = FALSE, ...) 
## {
##     loadNamespace("randomForest")
##     randomForest::randomForest(x, y, ...)
## }
## <bytecode: 0x7fa7162692d8>
## <environment: namespace:caret>
```

### 21\.5\.2 The `pred` Function

This function returns a vector of predictions (numeric or factors) from the current model. The input arguments must be:

* `object`: the model generated by the `fit` function
* `x`: the current predictor set for the held\-back samples

For random forests, the function is a simple wrapper for the predict function:

```
rfGA$pred
```

```
## function (object, x) 
## {
##     tmp <- predict(object, x)
##     if (is.factor(object$y)) {
##         out <- cbind(data.frame(pred = tmp), as.data.frame(predict(object, 
##             x, type = "prob")))
##     }
##     else out <- tmp
##     out
## }
## <bytecode: 0x7fa716269770>
## <environment: namespace:caret>
```

For classification, it is probably a good idea to ensure that the resulting factor vector of predictions has the same levels as the input data.

### 21\.5\.3 The `fitness_intern` Function

The `fitness_intern` function takes the fitted model and computes one or more performance metrics. The inputs to this function are:

* `object`: the model generated by the `fit` function
* `x`: the current predictor set. If the option `gafsControl$holdout` is zero, these values will be from the current resample (i.e. the same data used to fit the model). Otherwise, the predictor values are from the hold\-out set created by `gafsControl$holdout`.
* `y`: outcome values. See the note for the `x` argument to understand which data are presented to the function.
* `maximize`: a logical from `gafsControl` that indicates whether the metric should be maximized or minimized
* `p`: the total number of possible predictors

The output should be a **named** numeric vector of performance values. In many cases, some resampled measure of performance is used. In the example above using random forest, the OOB error was used.
In other cases, the resampled performance from `train` can be used and, if `gafsControl$holdout` is not zero, a static hold\-out set can be used. This depends on the data and problem at hand. The example function for random forest is:

```
rfGA$fitness_intern
```

```
## function (object, x, y, maximize, p) 
## rfStats(object)
## <bytecode: 0x7fa71626a570>
## <environment: namespace:caret>
```

### 21\.5\.4 The `fitness_extern` Function

The `fitness_extern` function takes the observed and predicted values from the external resampling process and computes one or more performance metrics. The input arguments are:

* `data`: a data frame of predictions generated by the `fit` function. For regression, the predicted values are in a column called `pred`. For classification, `pred` is a factor vector. Class probabilities are usually attached as columns whose names are the class levels (see the random forest example for the `fit` function above)
* `lev`: a character vector with the class levels (or `NULL` for regression problems)

The output should be a **named** numeric vector of performance values. The example function for random forest is:

```
rfGA$fitness_extern
```

```
## function (data, lev = NULL, model = NULL) 
## {
##     if (is.character(data$obs)) 
##         data$obs <- factor(data$obs, levels = lev)
##     postResample(data[, "pred"], data[, "obs"])
## }
## <bytecode: 0x7fa71626a7a0>
## <environment: namespace:caret>
```

Two functions in [`caret`](http://cran.r-project.org/web/packages/caret/index.html) that can be used as the summary function are `defaultSummary` and `twoClassSummary` (for classification problems with two classes).

### 21\.5\.5 The `initial` Function

This function creates an initial generation. Inputs are:

* `vars`: the number of possible predictors
* `popSize`: the population size for each generation
* `...`: not currently used

The output should be a binary 0/1 matrix where there are `vars` columns corresponding to the predictors and `popSize` rows for the individuals in the population. The default function populates the rows randomly with subset sizes varying between 10% and 90% of the number of possible predictors. For example:

```
set.seed(128)
starting <- rfGA$initial(vars = 12, popSize = 8)
starting
```

```
##      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12]
## [1,]    0    1    0    0    1    0    0    0    0     0     0     0
## [2,]    0    0    0    0    0    0    1    0    0     0     1     0
## [3,]    0    0    1    1    1    0    0    0    1     0     1     0
## [4,]    0    1    0    0    1    0    0    1    0     1     0     1
## [5,]    0    1    1    1    0    0    1    0    0     1     0     0
## [6,]    1    1    1    1    1    1    0    1    1     1     1     1
## [7,]    1    1    1    1    1    0    0    1    0     1     1     1
## [8,]    1    0    1    1    0    1    1    1    0     1     1     1
```

```
apply(starting, 1, mean)
```

```
## [1] 0.1666667 0.1666667 0.4166667 0.4166667 0.4166667 0.9166667 0.7500000
## [8] 0.7500000
```

`gafs` has an argument called `suggestions` that is similar to the one in the `ga` function, where the initial population can be seeded with specific subsets.

### 21\.5\.6 The `selection` Function

This function conducts the genetic selection. Inputs are:

* `population`: the indicators for the current population
* `fitness`: the corresponding fitness values for the population. Note that if the internal performance value is to be minimized, these are the negatives of the actual values
* `r`, `q`: tuning parameters for specific selection functions. See `gafs_lrSelection` and `gafs_nlrSelection`
* `...`: not currently used

The output should be a list with named elements.
* `population`: the indicators for the selected individuals
* `fitness`: the fitness values for the selected individuals

The default function is a version of the [`GA`](http://cran.r-project.org/package=GA) package’s `ga_lrSelection` function.

### 21\.5\.7 The `crossover` Function

This function conducts the genetic crossover. Inputs are:

* `population`: the indicators for the current population
* `fitness`: the corresponding fitness values for the population. Note that if the internal performance value is to be minimized, these are the negatives of the actual values
* `parents`: a matrix with two rows containing indicators for the parent individuals
* `...`: not currently used

The default function is a version of the [`GA`](http://cran.r-project.org/package=GA) package’s `ga_spCrossover` function. Another function that is a version of that package’s uniform cross\-over function is also available. The output should be a list with named elements.

* `children`: from `?ga_spCrossover`: “a matrix of dimension 2 times the number of decision variables containing the generated offsprings”
* `fitness`: “a vector of length 2 containing the fitness values for the offsprings. A value `NA` is returned if an offspring is different (which is usually the case) from the two parents.”

### 21\.5\.8 The `mutation` Function

This function conducts the genetic mutation. Inputs are:

* `population`: the indicators for the current population
* `parents`: a vector of indices for where the mutation should occur
* `...`: not currently used

The default function is a version of the [`GA`](http://cran.r-project.org/package=GA) package’s `gabin_raMutation` function. The output should be the mutated population.

### 21\.5\.9 The `selectIter` Function

This function determines the optimal number of generations based on the resampling output. Inputs for the function are:

* `x`: a matrix with columns for the performance metrics averaged over resamples
* `metric`: a character string of the performance measure to optimize (e.g. RMSE, Accuracy)
* `maximize`: a single logical for whether the metric should be maximized

This function should return an integer corresponding to the optimal number of generations.

21\.6 The Example Revisited
---------------------------

The previous GA included some of the non\-informative predictors. We can cheat a little and try to bias the search toward the right solution: to encourage the algorithm to choose fewer predictors, we can penalize the RMSE estimate. Normally, a metric like the Akaike information criterion (AIC) statistic would be used. However, with a random forest model, there is no real notion of model degrees of freedom. As an alternative, we can use [desirability functions](http://scholar.google.com/scholar?q=%22desirability+functions) to penalize the RMSE. To do this, two functions are created that translate the number of predictors and the RMSE values to a measure of “desirability”. For the number of predictors, the most desirable outcome would be a single predictor and the worst situation would be if the model required all 50 predictors. For the RMSE, the best case would be zero and many poor models have values around four; to give the RMSE value more weight in the overall desirability calculation, we use a scale parameter value of 2\.

To use the overall desirability to drive the feature selection, the `fitness_intern` function requires replacement.
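As a standalone illustration of how two desirability functions are combined (the specific RMSE, size, and cutoff values below are made up for demonstration and are not part of the original example), the `desirability` package can be used directly:

```
library(desirability)

## Illustrative values only: desirability for RMSE (0 is best, ~4 is poor)
## and for subset size (1 predictor is best, 50 predictors is worst).
d_RMSE <- dMin(0, 4)
d_Size <- dMin(1, 50)
overall <- dOverall(d_RMSE, d_Size)

## Overall desirability of a hypothetical model with an RMSE of 2.5
## that uses 10 predictors:
predict(overall, data.frame(RMSE = 2.5, size = 10))
```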
We make a copy of `rfGA` and add code using the [`desirability`](http://cran.r-project.org/package=desirability) package so that the function returns the estimated RMSE and the overall desirability. The `gafsControl` function also needs changes. The `metric` argument needs to reflect that the overall desirability score should be maximized internally but the RMSE estimate should be minimized externally.

```
library(desirability)

rfGA2 <- rfGA
rfGA2$fitness_intern <- function (object, x, y, maximize, p) {
  RMSE <- rfStats(object)[1]
  d_RMSE <- dMin(0, 4)
  d_Size <- dMin(1, p, 2)
  overall <- dOverall(d_RMSE, d_Size)
  D <- predict(overall, data.frame(RMSE, ncol(x)))
  c(D = D, RMSE = as.vector(RMSE))
}

ga_ctrl_d <- gafsControl(functions = rfGA2,
                         method = "repeatedcv",
                         repeats = 5,
                         metric = c(internal = "D", external = "RMSE"),
                         maximize = c(internal = TRUE, external = FALSE))

set.seed(10)
rf_ga_d <- gafs(x = x, y = y,
                iters = 150,
                gafsControl = ga_ctrl_d)
rf_ga_d
```

```
## 
## Genetic Algorithm Feature Selection
## 
## 100 samples
## 50 predictors
## 
## Maximum generations: 150 
## Population per generation: 50 
## Crossover probability: 0.8 
## Mutation probability: 0.1 
## Elitism: 0 
## 
## Internal performance values: D, RMSE
## Subset selection driven to maximize internal D 
## 
## External performance values: RMSE, Rsquared, MAE
## Best iteration chose by minimizing external RMSE 
## External resampling method: Cross-Validated (10 fold, repeated 5 times) 
## 
## During resampling:
##   * the top 5 selected variables (out of a possible 50):
##     real1 (100%), real2 (100%), real4 (100%), real5 (100%), real3 (40%)
##   * on average, 5.2 variables were selected (min = 4, max = 6)
## 
## In the final search using the entire training set:
##   * 6 features selected at iteration 146 including:
##     real1, real2, real3, real4, real5 ... 
##   * external performance at this iteration is
## 
##     RMSE Rsquared      MAE 
##   2.7046   0.7665   2.2730
```

Here are the RMSE values for this search:

```
plot(rf_ga_d) + theme_bw()
```

The final GA selected 6 predictors: real1, real2, real3, real4, real5, bogus43\. During resampling, the average number of predictors selected was 5\.2, indicating that the penalty on the number of predictors was effective.

21\.7 Using Recipes
-------------------

Like the other feature selection routines, `gafs` can take a data recipe as an input. This is advantageous when your data needs preprocessing before the model, such as:

* creation of dummy variables from factors
* specification of interactions
* missing data imputation
* more complex feature engineering methods

Like `train`, the recipe’s preprocessing steps are calculated within each resample. This makes sure that the resampling statistics capture the variation and effect that the preprocessing has on the model.

As an example, the Ames housing data are used. These data contain a number of categorical predictors that require conversion to indicators, as well as other variables that require processing. To load (and split) the data:

```
library(AmesHousing)
library(rsample)

# Create the data and remove one column that is more of
# an outcome.
ames <- make_ames() %>% select(-Overall_Qual)
ncol(ames)
```

```
## [1] 80
```

```
# How many factor variables?
sum(vapply(ames, is.factor, logical(1)))
```

```
## [1] 45
```

```
# We'll use `rsample` to make the initial split to be consistent with other
# analyses of these data.
# Set the seed first to make sure that you get the
# same random numbers
set.seed(4595)

data_split <- initial_split(ames, strata = "Sale_Price", prop = 3/4)

ames_train <- training(data_split) %>% as.data.frame()
ames_test  <- testing(data_split)  %>% as.data.frame()
```

Here is a recipe that does different types of preprocessing on the predictor set:

```
library(recipes)

ames_rec <- recipe(Sale_Price ~ ., data = ames_train) %>%
  step_log(Sale_Price, base = 10) %>%
  step_other(Neighborhood, threshold = 0.05) %>%
  step_dummy(all_nominal(), -Bldg_Type) %>%
  step_interact(~ starts_with("Central_Air"):Year_Built) %>%
  step_zv(all_predictors()) %>%
  step_bs(Longitude, Latitude, options = list(df = 5))
```

If this were executed on the training set, it would produce 280 predictor columns out of the original 79\. Let’s tune some linear models with `gafs` and, for the sake of computational time, only use 10 generations of the algorithm:

```
lm_ga_ctrl <- gafsControl(functions = caretGA,
                          method = "cv",
                          number = 10)

set.seed(23555)
lm_ga_search <- gafs(
  ames_rec,
  data = ames_train,
  iters = 10,
  gafsControl = lm_ga_ctrl,
  # now options to `train` for caretGA
  method = "lm",
  trControl = trainControl(method = "cv", allowParallel = FALSE)
)
lm_ga_search
```

```
## 
## Genetic Algorithm Feature Selection
## 
## 2199 samples
## 273 predictors
## 
## Maximum generations: 10 
## Population per generation: 50 
## Crossover probability: 0.8 
## Mutation probability: 0.1 
## Elitism: 0 
## 
## Internal performance values: RMSE, Rsquared, MAE
## Subset selection driven to minimize internal RMSE 
## 
## External performance values: RMSE, Rsquared, MAE
## Best iteration chose by minimizing external RMSE 
## External resampling method: Cross-Validated (10 fold) 
## 
## During resampling:
##   * the top 5 selected variables (out of a possible 273):
##     Bldg_Type (100%), Bsmt_Exposure_No (100%), First_Flr_SF (100%), MS_Zoning_Residential_High_Density (100%), Neighborhood_Gilbert (100%)
##   * on average, 171.7 variables were selected (min = 150, max = 198)
## 
## In the final search using the entire training set:
##   * 155 features selected at iteration 9 including:
##     Lot_Frontage, Year_Built, Year_Remod_Add, BsmtFin_SF_2, Gr_Liv_Area ... 
##   * external performance at this iteration is
## 
##     RMSE Rsquared      MAE 
##  0.06923  0.84659  0.04260
```
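Once the search finishes, the selected predictors and predictions on new samples can be pulled from the returned object. The snippet below is a sketch: it assumes that the fitted object stores the selected predictor names in `optVariables` and that its `predict` method applies the recipe’s preprocessing to the raw test data, so both points are worth verifying for your version of caret.

```
## Assumed to hold the predictor names chosen by the final search.
head(lm_ga_search$optVariables)

## Predictions on the held-out test set. The recipe log10-transformed the
## outcome, so the comparison is made on that scale. This assumes `predict()`
## bakes the stored recipe onto the raw test data.
test_pred <- predict(lm_ga_search, ames_test)
RMSE(test_pred, log10(ames_test$Sale_Price))
```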
22 Feature Selection using Simulated Annealing
==============================================

Contents

* [Simulated Annealing](feature-selection-using-simulated-annealing.html#sa)
* [Internal and External Performance Estimates](feature-selection-using-genetic-algorithms.html#performance)
* [Basic Syntax](feature-selection-using-univariate-filters.html#syntax)
* [Example](feature-selection-using-simulated-annealing.html#saexample)
* [Customizing the Search](model-training-and-tuning.html#custom)
* [Using Recipes](feature-selection-using-simulated-annealing.html#sarecipes)

22\.1 Simulated Annealing
-------------------------

Simulated annealing (SA) is a global search method that makes small random changes (i.e. perturbations) to an initial candidate solution. If the performance value for the perturbed solution is better than the previous solution, the new solution is accepted. If not, an acceptance probability is determined based on the difference between the two performance values and the current iteration of the search. From this, a sub\-optimal solution can be accepted on the off\-chance that it may eventually produce a better solution in subsequent iterations. See [Kirkpatrick (1984)](http://scholar.google.com/scholar?hl=en&q=%22Optimization+by+simulated+annealing) or [Rutenbar (1989)](http://scholar.google.com/scholar?q=%22Simulated+annealing+algorithms%3A+An+overview) for better descriptions.

In the context of feature selection, a solution is a binary vector that describes the current subset. The subset is perturbed by randomly changing a small number of members in the subset.

22\.2 Internal and External Performance Estimates
-------------------------------------------------

Much of the discussion on this subject in the [genetic algorithm page](feature-selection-using-genetic-algorithms.html#performance) is relevant here, although SA search is less aggressive than GA search. In any case, the implementation here conducts the SA search inside the resampling loops and uses an external performance estimate to choose how many iterations of the search are appropriate.

22\.3 Basic Syntax
------------------

The syntax of this function is very similar to the syntax shown previously for genetic algorithm searches. The most basic usage of the function is:

```
obj <- safs(x = predictors,
            y = outcome,
            iters = 100)
```

where

* `x`: a data frame or matrix of predictor values
* `y`: a factor or numeric vector of outcomes
* `iters`: the number of iterations for the SA

This isn’t very specific. All of the action is in the control function. That can be used to specify the model to be fit, how predictions are made and summarized, as well as the search operations.

Suppose that we want to fit a linear regression model. To do this, we can use `train` as an interface and pass arguments to that function through `safs`:

```
ctrl <- safsControl(functions = caretSA)

obj <- safs(x = predictors,
            y = outcome,
            iters = 100,
            safsControl = ctrl,
            ## Now pass options to `train`
            method = "lm")
```

Other options, such as `preProcess`, can be passed in as well.

Some important options to `safsControl` are:

* `method`, `number`, `repeats`, `index`, `indexOut`, etc: options similar to those for [`train`](http://topepo.github.io/caret/model-training-and-tuning.html#control) to control resampling.
* `metric`: this is similar to [`train`](http://topepo.github.io/caret/model-training-and-tuning.html#control)’s option but, in this case, the value should be a named vector with values for the internal and external metrics.
If none are specified, the first value returned by the summary functions (see details below) is used and a warning is issued. A similar two\-element vector for the option `maximize` is also required. See the [last example here](feature-selection-using-genetic-algorithms.html#example2) for an illustration; a small sketch of setting both options also appears after the example below. * `holdout`: this is a number between `[0, 1)` that can be used to hold out samples for computing the internal fitness value. Note that this is independent of the external resampling step. Suppose 10\-fold CV is being used. Within a resampling iteration, `holdout` can be used to sample an additional proportion of the 90% resampled data to use for estimating fitness. This may not be a good idea unless you have a very large training set and want to avoid an internal resampling procedure to estimate fitness. * `improve`: an integer (or infinity) defining how many iterations should pass without an improvement in fitness before the current subset is reset to the last known improvement. * `allowParallel`: should the external resampling loop be run in parallel? There are a few built\-in sets of functions to use with `safs`: `caretSA`, `rfSA`, and `treebagSA`. The first is a simple interface to `train`. When using this, as shown above, arguments can be passed to `train` using the `...` structure and the resampling estimates of performance can be used as the internal fitness value. The functions provided by `rfSA` and `treebagSA` avoid using `train` and their internal estimates of fitness come from using the out\-of\-bag estimates generated from the model. 22\.4 Simulated Annealing Example --------------------------------- This example uses the data from the [previous page](recursive-feature-elimination.html#example), where there are five real predictors and 40 noise predictors. We’ll fit a random forest model, use the out\-of\-bag RMSE estimate as the internal performance metric, and use the same repeated 10\-fold cross\-validation process used with the search. To do this, we’ll use the built\-in `rfSA` object. The default SA operators will be used with 250 iterations of the algorithm. ``` sa_ctrl <- safsControl(functions = rfSA, method = "repeatedcv", repeats = 5, improve = 50) set.seed(10) rf_sa <- safs(x = x, y = y, iters = 250, safsControl = sa_ctrl) rf_sa ``` ``` ## ## Simulated Annealing Feature Selection ## ## 100 samples ## 50 predictors ## ## Maximum search iterations: 250 ## Restart after 50 iterations without improvement (2.1 restarts on average) ## ## Internal performance values: RMSE, Rsquared ## Subset selection driven to minimize internal RMSE ## ## External performance values: RMSE, Rsquared, MAE ## Best iteration chose by minimizing external RMSE ## External resampling method: Cross-Validated (10 fold, repeated 5 times) ## ## During resampling: ## * the top 5 selected variables (out of a possible 50): ## real1 (100%), real2 (100%), real4 (100%), real5 (98%), bogus17 (88%) ## * on average, 20.7 variables were selected (min = 12, max = 30) ## ## In the final search using the entire training set: ## * 21 features selected at iteration 212 including: ## real1, real2, real5, bogus1, bogus3 ... ## * external performance at this iteration is ## ## RMSE Rsquared MAE ## 3.3147 0.6625 2.8369 ``` As with the GA, we can plot the internal and external performance over iterations. ``` plot(rf_sa) + theme_bw() ``` The performance here isn’t as good as the previous GA or RFE solutions.
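As noted above, the two\-element `metric` and `maximize` vectors were left unset in `sa_ctrl`, so the defaults were used. A minimal sketch of setting them explicitly is below; the specific values are an assumption for this regression problem rather than a recommendation, and they match what the defaults chose here.

```
## Sketch only: the same control object, with the internal and external
## metrics and their optimization directions named explicitly.
sa_ctrl_explicit <- safsControl(functions = rfSA,
                                method = "repeatedcv",
                                repeats = 5,
                                improve = 50,
                                metric = c(internal = "RMSE", external = "RMSE"),
                                maximize = c(internal = FALSE, external = FALSE))
```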
Based on these results, the iteration associated with the best external RMSE estimate was 212 with a corresponding RMSE estimate of 3\.31\. Using the entire training set, the final SA is conducted and, at iteration 212, there were 21 predictors selected: real1, real2, real5, bogus1, bogus3, bogus9, bogus10, bogus13, bogus14, bogus15, bogus19, bogus20, bogus23, bogus24, bogus25, bogus26, bogus28, bogus31, bogus33, bogus38, bogus44\. The random forest model with these predictors is refit using the entire training set, and this is the model that is used when `predict.safs` is executed. 22\.5 Customizing the Search ---------------------------- ### 22\.5\.1 The `fit` Function This function builds the model based on a proposed current subset. The arguments for the function must be: * `x`: the current training set of predictor data with the appropriate subset of variables * `y`: the current outcome data (either a numeric or factor vector) * `lev`: a character vector with the class levels (or `NULL` for regression problems) * `last`: a logical that is `TRUE` when the final SA search is conducted on the entire data set * `...`: optional arguments to pass to the fit function in the call to `safs` The function should return a model object that can be used to generate predictions. For random forest, the fit function is simple: ``` rfSA$fit ``` ``` ## function (x, y, lev = NULL, last = FALSE, ...) ## { ## loadNamespace("randomForest") ## randomForest::randomForest(x, y, ...) ## } ## <bytecode: 0x7fa6bdc6c4c0> ## <environment: namespace:caret> ``` ### 22\.5\.2 The `pred` Function This function returns a vector of predictions (numeric or factors) from the current model. The input arguments must be * `object`: the model generated by the `fit` function * `x`: the current set of predictors for the held\-back samples For random forests, the function is a simple wrapper for the predict function: ``` rfSA$pred ``` ``` ## function (object, x) ## { ## tmp <- predict(object, x) ## if (is.factor(object$y)) { ## out <- cbind(data.frame(pred = tmp), as.data.frame(predict(object, ## x, type = "prob"))) ## } ## else out <- tmp ## out ## } ## <bytecode: 0x7fa6bdc6c958> ## <environment: namespace:caret> ``` For classification, it is probably a good idea to ensure that the resulting factor vector of predictions has the same levels as the input data. ### 22\.5\.3 The `fitness_intern` Function The `fitness_intern` function takes the fitted model and computes one or more performance metrics. The inputs to this function are: * `object`: the model generated by the `fit` function * `x`: the current set of predictors. If the option `safsControl$holdout` is zero, these values will be from the current resample (i.e. the same data used to fit the model). Otherwise, the predictor values are from the hold\-out set created by `safsControl$holdout`. * `y`: outcome values. See the note for the `x` argument to understand which data are presented to the function. * `maximize`: a logical from `safsControl` that indicates whether the metric should be maximized or minimized * `p`: the total number of possible predictors The output should be a **named** numeric vector of performance values. In many cases, some resampled measure of performance is used. In the example above using random forest, the OOB error was used. In other cases, the resampled performance from `train` can be used and, if `safsControl$holdout` is not zero, a static hold\-out set can be used. This depends on the data and problem at hand.
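As a sketch of that last option (an assumption about how one might write it, not a function shipped with the package), a custom `fitness_intern` for regression could score the model on whatever `x` and `y` it receives, which will be the hold\-out set when `safsControl$holdout` is greater than zero, and return the named vector produced by `postResample()`:

```
## Hypothetical custom fitness_intern: predict on the data supplied
## (the hold-out set when safsControl$holdout > 0) and return a
## named vector of performance values, as required.
holdoutFitness <- function(object, x, y, maximize, p) {
  preds <- predict(object, x)
  caret::postResample(pred = preds, obs = y)
}
```

For comparison, the built\-in out\-of\-bag version used by `rfSA` is shown next.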
The example function for random forest is: ``` rfSA$fitness_intern ``` ``` ## function (object, x, y, maximize, p) ## rfStats(object) ## <bytecode: 0x7fa6bdc69848> ## <environment: namespace:caret> ``` ### 22\.5\.4 The `fitness_extern` Function The `fitness_extern` function takes the observed and predicted values from the external resampling process and computes one or more performance metrics. The input arguments are: * `data`: a data frame of the predictions generated by the `fit` function. For regression, the predicted values are in a column called `pred`. For classification, `pred` is a factor vector. Class probabilities are usually attached as columns whose names are the class levels (see the random forest example for the `pred` function above) * `lev`: a character vector with the class levels (or `NULL` for regression problems) The output should be a **named** numeric vector of performance values. The example function for random forest is: ``` rfSA$fitness_extern ``` ``` ## function (data, lev = NULL, model = NULL) ## { ## if (is.character(data$obs)) ## data$obs <- factor(data$obs, levels = lev) ## postResample(data[, "pred"], data[, "obs"]) ## } ## <bytecode: 0x7fa6bdc69a78> ## <environment: namespace:caret> ``` Two functions in [`caret`](http://cran.r-project.org/web/packages/caret/index.html) that can be used as the summary function are `defaultSummary` and `twoClassSummary` (for classification problems with two classes). ### 22\.5\.5 The `initial` Function This function creates an initial subset. Inputs are: * `vars`: the number of possible predictors * `prob`: the probability that a feature is in the subset * `...`: not currently used The output should be a vector of integers indicating which predictors are in the initial subset. Alternatively, instead of a function, a vector of integers can be used in this slot. ### 22\.5\.6 The `perturb` Function This function perturbs the subset. Inputs are: * `x`: the integers defining the current subset * `vars`: the number of possible predictors * `number`: the number of predictors to randomly change * `...`: not currently used The output should be a vector of integers indicating which predictors are in the new subset. ### 22\.5\.7 The `prob` Function This function computes the acceptance probability. Inputs are: * `old`: the fitness value for the current subset * `new`: the fitness value for the new subset * `iteration`: the current iteration number or, if the `improve` argument of `safsControl` is used, the number of iterations since the last restart * `...`: not currently used The output should be a numeric value between zero and one. One of the biggest difficulties in using simulated annealing is the specification of the acceptance probability calculation. There are many references on different methods for doing this but the general consensus is that 1\) the probability should decrease as the difference between the current and new solution increases and 2\) the probability should decrease over iterations. One issue is that the difference in fitness values can be scale\-dependent. In this package, the default probability calculation uses the percent difference, i.e. `(current - new)/current`, to normalize the difference.
The basic form of the probability simply takes this normalized difference, multiplies it by the iteration number and exponentiates the product: ``` prob = exp[(current - new)/current*iteration] ``` To demonstrate this, the plot below shows the probability profile for different fitness values of the current subset and different (absolute) differences. For the example data that were simulated, the RMSE values ranged from values greater than 4 down to just under 3\. In the plot below, the red curve in the right\-hand panel shows how the probability changes over time when comparing a current value of 4 with a new value of 4\.5 (smaller values being better). While this difference would likely be accepted in the first few iterations, it is unlikely to be accepted after 30 or 40\. Also, larger differences are uniformly disfavored relative to smaller differences. ``` grid <- expand.grid(old = c(4, 3.5), new = c(4.5, 4, 3.5) + 1, iter = 1:40) grid <- subset(grid, old < new) grid$prob <- apply(grid, 1, function(x) safs_prob(new = x["new"], old = x["old"], iteration = x["iter"])) grid$Difference <- factor(grid$new - grid$old) grid$Group <- factor(paste("Current Value", grid$old)) ggplot(grid, aes(x = iter, y = prob, color = Difference)) + geom_line() + facet_wrap(~Group) + theme_bw() + ylab("Probability") + xlab("Iteration") ``` While this is the default, any user\-written function can be used to assign probabilities. 22\.6 Using Recipes ------------------- Similar to the previous section on genetic algorithms, recipes can be used with `safs`. Using the same data as before: ``` library(AmesHousing) library(rsample) # Create the data and remove one column that is more of # an outcome. ames <- make_ames() %>% select(-Overall_Qual) ncol(ames) ``` ``` ## [1] 80 ``` ``` # How many factor variables? sum(vapply(ames, is.factor, logical(1))) ``` ``` ## [1] 45 ``` ``` # We'll use `rsample` to make the initial split to be consistent with other # analyses of these data.
Set the seed first to make sure that you get the # same random numbers set.seed(4595) data_split <- initial_split(ames, strata = "Sale_Price", prop = 3/4) ames_train <- training(data_split) %>% as.data.frame() ames_test <- testing(data_split) %>% as.data.frame() library(recipes) ames_rec <- recipe(Sale_Price ~ ., data = ames_train) %>% step_log(Sale_Price, base = 10) %>% step_other(Neighborhood, threshold = 0.05) %>% step_dummy(all_nominal(), -Bldg_Type) %>% step_interact(~ starts_with("Central_Air"):Year_Built) %>% step_zv(all_predictors())%>% step_bs(Longitude, Latitude, options = list(df = 5)) ``` Let’s again use linear models with the function: ``` lm_sa_ctrl <- safsControl(functions = caretSA, method = "cv", number = 10) set.seed(23555) lm_sa_search <- safs( ames_rec, data = ames_train, iters = 10, # we probably need thousands of iterations safsControl = lm_sa_ctrl, # now options to `train` for caretSA method = "lm", trControl = trainControl(method = "cv", allowParallel = FALSE) ) lm_sa_search ``` ``` ## ## Simulated Annealing Feature Selection ## ## 2199 samples ## 273 predictors ## ## Maximum search iterations: 10 ## ## Internal performance values: RMSE, Rsquared, MAE ## Subset selection driven to minimize internal RMSE ## ## External performance values: RMSE, Rsquared, MAE ## Best iteration chose by minimizing external RMSE ## External resampling method: Cross-Validated (10 fold) ## ## During resampling: ## * the top 5 selected variables (out of a possible 273): ## Roof_Style_Gambrel (70%), Latitude_bs_2 (60%), Bsmt_Full_Bath (50%), BsmtFin_Type_1_Rec (50%), Condition_1_Norm (50%) ## * on average, 59.1 variables were selected (min = 56, max = 63) ## ## In the final search using the entire training set: ## * 56 features selected at iteration 4 including: ## Year_Remod_Add, Half_Bath, TotRms_AbvGrd, Garage_Area, Enclosed_Porch ... ## * external performance at this iteration is ## ## RMSE Rsquared MAE ## 0.09518 0.69840 0.07033 ```
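To close the loop on `predict.safs` mentioned earlier, predictions can be generated directly from the finished search object. A minimal sketch, assuming `predict()` applies the stored recipe to the raw test data, and remembering that `Sale_Price` was modelled on the log10 scale because of `step_log()`:

```
## Sketch only: predictions from the model fit at the best iteration.
## These are on the log10(Sale_Price) scale because of step_log() above.
test_pred <- predict(lm_sa_search, newdata = head(ames_test))
10^test_pred # back-transform to the original dollar scale
```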
Machine Learning
nacnudus.github.io
https://nacnudus.github.io/spreadsheet-munging-strategies/small-multiples-with-all-headers-present-for-each-multiple.html
4\.1 Small multiples with all headers present for each multiple --------------------------------------------------------------- The code to import one of these multiples will be simple. ``` cells %>% behead("up-left", subject) %>% behead("up", header) %>% select(-col, -local_format_id) %>% spatter(header) %>% select(-row) ``` The first table is in rows 1 to 4, columns 1 to 3, so we start by writing the code to import only that table. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "small-multiples") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric, local_format_id) table1 <- dplyr::filter(all_cells, row %in% 1:4, col %in% 1:3) table1 %>% behead("up-left", subject) %>% behead("up", header) %>% select(-col, -local_format_id) %>% spatter(header) %>% select(-row) ``` ``` ## # A tibble: 2 x 4 ## subject Grade Name Score ## <chr> <chr> <chr> <dbl> ## 1 Classics F Matilda 1 ## 2 Classics D Olivia 2 ``` We wrap that code in a function, to be applied to each separate table. ``` unpivot <- function(cells) { cells %>% behead("up-left", subject) %>% behead("up", header) %>% select(-col, -local_format_id) %>% spatter(header) %>% select(-row) } ``` Now we partition the spreadsheet into the separate tables. This is done by identifying a corner cell in each table. ``` formats <- xlsx_formats(path) italic <- which(formats$local$font$italic) corners <- all_cells %>% dplyr::filter(local_format_id %in% italic) %>% select(row, col) partitions <- partition(all_cells, corners) partitions ``` ``` ## # A tibble: 4 x 3 ## corner_row corner_col cells ## <dbl> <dbl> <list> ## 1 1 1 <tibble [10 × 6]> ## 2 1 5 <tibble [10 × 6]> ## 3 6 1 <tibble [10 × 6]> ## 4 6 5 <tibble [10 × 6]> ``` Finally, map the unpivoting function over the partitions, and combine the results. ``` partitions %>% mutate(cells = map(cells, unpivot)) %>% unnest() %>% select(-corner_row, -corner_col) ``` ``` ## Warning: `cols` is now required. ## Please use `cols = c(cells)` ``` ``` ## # A tibble: 8 x 4 ## subject Grade Name Score ## <chr> <chr> <chr> <dbl> ## 1 Classics F Matilda 1 ## 2 Classics D Olivia 2 ## 3 History D Matilda 3 ## 4 History C Olivia 4 ## 5 Music B Matilda 5 ## 6 Music B Olivia 6 ## 7 Drama A Matilda 7 ## 8 Drama A Olivia 8 ```
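The `cols` warning above comes from newer versions of tidyr and does not affect the result; following the warning's own suggestion silences it. A minimal tweak of the final step, naming the list-column explicitly:

```
partitions %>%
  mutate(cells = map(cells, unpivot)) %>%
  unnest(cols = c(cells)) %>% # name the list-column, as the warning suggests
  select(-corner_row, -corner_col)
```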
Getting Cleaning and Wrangling Data
nacnudus.github.io
https://nacnudus.github.io/spreadsheet-munging-strategies/same-table-in-several-worksheetsfiles-using-the-sheetfile-name.html
4\.2 Same table in several worksheets/files (using the sheet/file name) ----------------------------------------------------------------------- Because `xlsx_cells()` imports cells from multiple sheets into the same data frame, tables on separate sheets can be imported by mapping over the different sheets. Just name each sheet in the `xlsx_cells()` call, or don’t name any to import them all. As far as `tidyxl` is concerned, the particular sheet (aka ‘tab’) that a cell is on is another coordinate like `row` and `col`, so the full location of a cell is its `row`, its `col`, and its `sheet`. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = c("humanities", "performance")) %>% dplyr::filter(!is_blank) %>% select(sheet, row, col, data_type, character, numeric) all_cells ``` ``` ## # A tibble: 16 x 6 ## sheet row col data_type character numeric ## <chr> <int> <int> <chr> <chr> <dbl> ## 1 humanities 1 2 character Matilda NA ## 2 humanities 1 3 character Nicholas NA ## 3 humanities 2 1 character Classics NA ## 4 humanities 2 2 numeric <NA> 1 ## 5 humanities 2 3 numeric <NA> 3 ## 6 humanities 3 1 character History NA ## 7 humanities 3 2 numeric <NA> 3 ## 8 humanities 3 3 numeric <NA> 5 ## 9 performance 1 2 character Matilda NA ## 10 performance 1 3 character Nicholas NA ## 11 performance 2 1 character Music NA ## 12 performance 2 2 numeric <NA> 5 ## 13 performance 2 3 numeric <NA> 9 ## 14 performance 3 1 character Drama NA ## 15 performance 3 2 numeric <NA> 7 ## 16 performance 3 3 numeric <NA> 12 ``` To prepare the sheets to be mapped over, use `tidyr::nest()`. The `data` column contains the cells of each sheet. ``` all_cells %>% nest(-sheet) ``` ``` ## Warning: All elements of `...` must be named. ## Did you want `data = c(row, col, data_type, character, numeric)`? ``` ``` ## # A tibble: 2 x 2 ## sheet data ## <chr> <list> ## 1 humanities <tibble [8 × 5]> ## 2 performance <tibble [8 × 5]> ``` The function to unpivot each table in this case will be a couple of `behead()` statements. Further clean\-up can be saved until the end. ``` unpivot <- function(cells) { cells %>% behead("up", name) %>% behead("left", subject) } ``` After mapping the unpivot function over each sheet of cells, use `tidyr::unnest()` to show every row of data again. ``` all_cells %>% nest(-sheet) %>% mutate(data = map(data, unpivot)) %>% unnest() ``` ``` ## Warning: All elements of `...` must be named. ## Did you want `data = c(row, col, data_type, character, numeric)`? ``` ``` ## Warning: `cols` is now required. ## Please use `cols = c(data)` ``` ``` ## # A tibble: 8 x 8 ## sheet row col data_type character numeric name subject ## <chr> <int> <int> <chr> <chr> <dbl> <chr> <chr> ## 1 humanities 2 2 numeric <NA> 1 Matilda Classics ## 2 humanities 2 3 numeric <NA> 3 Nicholas Classics ## 3 humanities 3 2 numeric <NA> 3 Matilda History ## 4 humanities 3 3 numeric <NA> 5 Nicholas History ## 5 performance 2 2 numeric <NA> 5 Matilda Music ## 6 performance 2 3 numeric <NA> 9 Nicholas Music ## 7 performance 3 2 numeric <NA> 7 Matilda Drama ## 8 performance 3 3 numeric <NA> 12 Nicholas Drama ``` Finally, do the clean\-up operations that were saved until now. ``` all_cells %>% nest(-sheet) %>% mutate(data = map(data, unpivot)) %>% unnest() %>% transmute(field = sheet, name, subject, score = numeric) ``` ``` ## Warning: All elements of `...` must be named. ## Did you want `data = c(row, col, data_type, character, numeric)`? ``` ``` ## Warning: `cols` is now required.
## Please use `cols = c(data)` ``` ``` ## # A tibble: 8 x 4 ## field name subject score ## <chr> <chr> <chr> <dbl> ## 1 humanities Matilda Classics 1 ## 2 humanities Nicholas Classics 3 ## 3 humanities Matilda History 3 ## 4 humanities Nicholas History 5 ## 5 performance Matilda Music 5 ## 6 performance Nicholas Music 9 ## 7 performance Matilda Drama 7 ## 8 performance Nicholas Drama 12 ```
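As in the previous section, the warnings are about newer tidyr syntax rather than the result. A sketch of the same final pipeline with the columns named explicitly, as the warnings suggest:

```
all_cells %>%
  nest(data = c(row, col, data_type, character, numeric)) %>% # nest everything except `sheet`
  mutate(data = map(data, unpivot)) %>%
  unnest(cols = c(data)) %>%
  transmute(field = sheet, name, subject, score = numeric)
```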
Getting Cleaning and Wrangling Data
nacnudus.github.io
https://nacnudus.github.io/spreadsheet-munging-strategies/same-table-in-several-worksheetsfiles-but-in-different-positions.html
4\.3 Same table in several worksheets/files but in different positions ---------------------------------------------------------------------- This is almost the same as the section “Same table in several worksheets/files (using the sheet/file name)”. The only difference is that the function you write to unpivot the table must also *find* the table in the first place, and be robust to differences in the placement and context of the table on each sheet. In this example, both tables begin in the same column, but there is an extra row of notes above one of the tables. There are a few ways to tackle this problem. Here, we filter for the `Subject` cell, which is either `A3` or `A4`, and then extend the selection to include the whole table. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = c("female", "male")) %>% dplyr::filter(!is_blank) %>% select(sheet, row, col, data_type, character, numeric) all_cells ``` ``` ## # A tibble: 21 x 6 ## sheet row col data_type character numeric ## <chr> <int> <int> <chr> <chr> <dbl> ## 1 female 1 1 character Table of scores NA ## 2 female 3 1 character Subject NA ## 3 female 3 2 character Matilda NA ## 4 female 3 3 character Olivia NA ## 5 female 4 1 character Classics NA ## 6 female 4 2 numeric <NA> 1 ## 7 female 4 3 numeric <NA> 2 ## 8 female 5 1 character History NA ## 9 female 5 2 numeric <NA> 3 ## 10 female 5 3 numeric <NA> 4 ## # … with 11 more rows ``` ``` unpivot <- function(cells) { cells %>% dplyr::filter(character == "Subject") %>% pull(row) %>% {dplyr::filter(cells, row >= .)} %>% behead("up", name) %>% behead("left", subject) } all_cells %>% nest(-sheet) %>% mutate(data = map(data, unpivot)) %>% unnest() %>% select(sex = sheet, name, subject, score = numeric) ``` ``` ## Warning: All elements of `...` must be named. ## Did you want `data = c(row, col, data_type, character, numeric)`? ``` ``` ## Warning: `cols` is now required. ## Please use `cols = c(data)` ``` ``` ## # A tibble: 8 x 4 ## sex name subject score ## <chr> <chr> <chr> <dbl> ## 1 female Matilda Classics 1 ## 2 female Olivia Classics 2 ## 3 female Matilda History 3 ## 4 female Olivia History 4 ## 5 male Nicholas Classics 3 ## 6 male Paul Classics 0 ## 7 male Nicholas History 5 ## 8 male Paul History 1 ```
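The `{dplyr::filter(cells, row >= .)}` step in `unpivot()` can read as cryptic. A sketch of the same function without the magrittr dot trick (same behaviour, just spelled out):

```
unpivot <- function(cells) {
  # Find the row of the "Subject" header, wherever it sits on this sheet
  header_row <- cells %>%
    dplyr::filter(character == "Subject") %>%
    pull(row)
  # Keep only the table itself (dropping any notes above it), then unpivot
  cells %>%
    dplyr::filter(row >= header_row) %>%
    behead("up", name) %>%
    behead("left", subject)
}
```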
Getting Cleaning and Wrangling Data
nacnudus.github.io
https://nacnudus.github.io/spreadsheet-munging-strategies/implied-multiples.html
4\.4 Implied multiples ---------------------- Implied multiples look like a single table, but many of the headers appear more than once. There is a dominant set of headers that are on the same ‘level’ (e.g. in the same row) as the other headers. See a real\-life [case study](vaccinations.html#vaccinations). In the example, the header “Grade” is repeated, but it really belongs in each case to the header “Classics”, “History”, “Music” or “Drama”. Those subject headers serve two purposes: as the title of each small multiple, and as the unstated “Score” header of their columns. The difficulty is in associating a grade with its corresponding score. 1\. Filter for the “Classics”, “History”, “Music” and “Drama” headers, and assign them to a variable to be `enhead()`ed later. You could think of this as faking a set of headers that doesn’t exist, but is implied. 2\. Meanwhile, `behead()` the original “Classics”, “History” (etc.) cells and then overwrite them with “Score”. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") all_cells <- xlsx_cells(path, sheets = "implied-multiples") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric) ``` Filter for the “Classics”, “History”, “Music” and “Drama” headers, and assign them to a variable to be `enhead()`ed later. ``` subjects <- all_cells %>% dplyr::filter(col >= 2, row == 2, character != "Grade") %>% select(row, col, subject = character) subjects ``` ``` ## # A tibble: 4 x 3 ## row col subject ## <int> <int> <chr> ## 1 2 2 Classics ## 2 2 4 History ## 3 2 6 Music ## 4 2 8 Drama ``` Meanwhile, `behead()` the original “Classics”, “History” (etc.) cells and then overwrite them with “Score”. ``` all_cells %>% behead("up-left", "field") %>% behead("up", "header") %>% behead("left", "name") %>% enhead(subjects, "up-left") %>% # Reattach the filtered subject headers mutate(header = if_else(header == "Grade", header, "Score")) %>% select(-col) %>% spatter(header) %>% select(-row) ``` ``` ## # A tibble: 8 x 5 ## field name subject Grade Score ## <chr> <chr> <chr> <chr> <dbl> ## 1 Humanities Matilda Classics F 1 ## 2 Humanities Matilda History D 3 ## 3 Performance Matilda Drama A 7 ## 4 Performance Matilda Music B 5 ## 5 Humanities Olivia Classics D 2 ## 6 Humanities Olivia History C 4 ## 7 Performance Olivia Drama A 8 ## 8 Performance Olivia Music B 6 ```
Getting Cleaning and Wrangling Data
nacnudus.github.io
https://nacnudus.github.io/spreadsheet-munging-strategies/an-example-formatting-lookup.html
5\.1 An example formatting lookup --------------------------------- This example shows how to look up whether a cell is bold. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") cells <- xlsx_cells(path, sheet = "formatting") %>% select(row, col, character, style_format, local_format_id) cells ``` ``` ## # A tibble: 14 x 5 ## row col character style_format local_format_id ## <int> <int> <chr> <chr> <int> ## 1 1 1 bold Normal 6 ## 2 2 1 italic Normal 8 ## 3 3 1 underline Normal 51 ## 4 4 1 strikethrough Normal 52 ## 5 5 1 red text Normal 12 ## 6 6 1 font size 14 Normal 53 ## 7 7 1 font arial Normal 54 ## 8 8 1 yellow fill Normal 11 ## 9 9 1 black border Normal 43 ## 10 10 1 thick border Normal 55 ## 11 11 1 dashed border Normal 56 ## 12 12 1 row height 30 Normal 1 ## 13 13 2 column width 16.76 Normal 1 ## 14 14 1 Bad' style Explanatory Text 57 ``` ``` formats <- xlsx_formats(path) bold <- formats$local$font$bold # The list of lists of lists of vectors bold ``` ``` ## [1] FALSE TRUE TRUE FALSE FALSE TRUE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE ## [16] FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE ## [31] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE ## [46] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE ``` ``` mutate(cells, bold = bold[local_format_id]) ``` ``` ## # A tibble: 14 x 6 ## row col character style_format local_format_id bold ## <int> <int> <chr> <chr> <int> <lgl> ## 1 1 1 bold Normal 6 TRUE ## 2 2 1 italic Normal 8 FALSE ## 3 3 1 underline Normal 51 FALSE ## 4 4 1 strikethrough Normal 52 FALSE ## 5 5 1 red text Normal 12 FALSE ## 6 6 1 font size 14 Normal 53 FALSE ## 7 7 1 font arial Normal 54 FALSE ## 8 8 1 yellow fill Normal 11 FALSE ## 9 9 1 black border Normal 43 FALSE ## 10 10 1 thick border Normal 55 FALSE ## 11 11 1 dashed border Normal 56 FALSE ## 12 12 1 row height 30 Normal 1 FALSE ## 13 13 2 column width 16.76 Normal 1 FALSE ## 14 14 1 Bad' style Explanatory Text 57 FALSE ``` A quick way to see what formatting definitions exist is to use `str()`. (Scroll past this for now – you don’t need to memorise it). ``` formats <- xlsx_formats(path) str(formats) ``` ``` ## List of 2 ## $ local:List of 6 ## ..$ numFmt : chr [1:59] "General" "General" "General" "General" ... ## ..$ font :List of 10 ## .. ..$ bold : logi [1:59] FALSE TRUE TRUE FALSE FALSE TRUE ... ## .. ..$ italic : logi [1:59] FALSE FALSE FALSE FALSE FALSE FALSE ... ## .. ..$ underline: chr [1:59] NA NA NA NA ... ## .. ..$ strike : logi [1:59] FALSE FALSE FALSE FALSE FALSE FALSE ... ## .. ..$ vertAlign: chr [1:59] NA NA NA NA ... ## .. ..$ size : num [1:59] 11 11 11 11 11 11 11 11 11 11 ... ## .. ..$ color :List of 4 ## .. .. ..$ rgb : chr [1:59] "FF000000" "FF000000" "FF000000" "FF000000" ... ## .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. ..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. ..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. ..$ name : chr [1:59] "Calibri" "Calibri" "Calibri" "Calibri" ... ## .. ..$ family : int [1:59] 2 2 2 2 2 2 2 2 2 2 ... ## .. ..$ scheme : chr [1:59] NA NA NA NA ... ## ..$ fill :List of 2 ## .. ..$ patternFill :List of 3 ## .. .. ..$ fgColor :List of 4 ## .. .. .. ..$ rgb : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. ..
..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. ..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. ..$ bgColor :List of 4 ## .. .. .. ..$ rgb : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. ..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. ..$ patternType: chr [1:59] NA NA NA NA ... ## .. ..$ gradientFill:List of 8 ## .. .. ..$ type : chr [1:59] NA NA NA NA ... ## .. .. ..$ degree: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. ..$ left : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. ..$ right : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. ..$ top : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. ..$ bottom: num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. ..$ stop1 :List of 2 ## .. .. .. ..$ position: num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. ..$ color :List of 4 ## .. .. .. .. ..$ rgb : chr [1:59] NA NA NA NA ... ## .. .. .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. .. .. ..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. .. ..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. ..$ stop2 :List of 2 ## .. .. .. ..$ position: num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. ..$ color :List of 4 ## .. .. .. .. ..$ rgb : chr [1:59] NA NA NA NA ... ## .. .. .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. .. .. ..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. .. ..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## ..$ border :List of 12 ## .. ..$ diagonalDown: logi [1:59] FALSE FALSE FALSE FALSE FALSE FALSE ... ## .. ..$ diagonalUp : logi [1:59] FALSE FALSE FALSE FALSE FALSE FALSE ... ## .. ..$ outline : logi [1:59] FALSE FALSE FALSE FALSE FALSE FALSE ... ## .. ..$ left :List of 2 ## .. .. ..$ style: chr [1:59] NA "thin" NA NA ... ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : chr [1:59] NA "FF000000" NA NA ... ## .. .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. ..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. ..$ right :List of 2 ## .. .. ..$ style: chr [1:59] NA "thin" NA NA ... ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : chr [1:59] NA "FF000000" NA NA ... ## .. .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. ..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. ..$ start :List of 2 ## .. .. ..$ style: chr [1:59] NA NA NA NA ... ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. ..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. ..$ end :List of 2 ## .. .. ..$ style: chr [1:59] NA NA NA NA ... ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. ..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. ..$ top :List of 2 ## .. .. ..$ style: chr [1:59] NA "thin" NA NA ... ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : chr [1:59] NA "FF000000" NA NA ... ## .. .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. 
..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. ..$ bottom :List of 2 ## .. .. ..$ style: chr [1:59] NA NA "thin" NA ... ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : chr [1:59] NA NA "FF000000" NA ... ## .. .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. ..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. ..$ diagonal :List of 2 ## .. .. ..$ style: chr [1:59] NA NA NA NA ... ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. ..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. ..$ vertical :List of 2 ## .. .. ..$ style: chr [1:59] NA NA NA NA ... ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. ..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. ..$ horizontal :List of 2 ## .. .. ..$ style: chr [1:59] NA NA NA NA ... ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ theme : chr [1:59] NA NA NA NA ... ## .. .. .. ..$ indexed: int [1:59] NA NA NA NA NA NA NA NA NA NA ... ## .. .. .. ..$ tint : num [1:59] NA NA NA NA NA NA NA NA NA NA ... ## ..$ alignment :List of 8 ## .. ..$ horizontal : chr [1:59] "general" "center" "general" "general" ... ## .. ..$ vertical : chr [1:59] "bottom" "bottom" "bottom" "bottom" ... ## .. ..$ wrapText : logi [1:59] FALSE FALSE FALSE FALSE FALSE FALSE ... ## .. ..$ readingOrder : chr [1:59] "context" "context" "context" "context" ... ## .. ..$ indent : int [1:59] 0 0 0 0 0 0 0 0 0 0 ... ## .. ..$ justifyLastLine: logi [1:59] FALSE FALSE FALSE FALSE FALSE FALSE ... ## .. ..$ shrinkToFit : logi [1:59] FALSE FALSE FALSE FALSE FALSE FALSE ... ## .. ..$ textRotation : int [1:59] 0 0 0 0 0 0 0 0 0 0 ... ## ..$ protection:List of 2 ## .. ..$ locked: logi [1:59] TRUE TRUE TRUE TRUE TRUE TRUE ... ## .. ..$ hidden: logi [1:59] FALSE FALSE FALSE FALSE FALSE FALSE ... ## $ style:List of 6 ## ..$ numFmt : Named chr [1:2] "General" "General" ## .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## ..$ font :List of 10 ## .. ..$ bold : Named logi [1:2] FALSE FALSE ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ italic : Named logi [1:2] FALSE FALSE ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ underline: Named chr [1:2] NA NA ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ strike : Named logi [1:2] FALSE FALSE ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ vertAlign: Named chr [1:2] NA NA ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ size : Named num [1:2] 11 11 ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ color :List of 4 ## .. .. ..$ rgb : Named chr [1:2] "FF000000" "FF9C0006" ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. 
..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ name : Named chr [1:2] "Calibri" "Calibri" ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ family : Named int [1:2] 2 2 ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ scheme : Named chr [1:2] NA NA ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## ..$ fill :List of 2 ## .. ..$ patternFill :List of 3 ## .. .. ..$ fgColor :List of 4 ## .. .. .. ..$ rgb : Named chr [1:2] NA "FFFFC7CE" ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ bgColor :List of 4 ## .. .. .. ..$ rgb : Named chr [1:2] NA "FFCCCCFF" ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ patternType: Named chr [1:2] NA "solid" ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ gradientFill:List of 8 ## .. .. ..$ type : Named chr [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ degree: Named int [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ left : Named num [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ right : Named num [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ top : Named num [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ bottom: Named num [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ stop1 :List of 2 ## .. .. .. ..$ position: Named num [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ color :List of 4 ## .. .. .. .. ..$ rgb : Named chr [1:2] NA NA ## .. .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ stop2 :List of 2 ## .. .. .. ..$ position: Named num [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ color :List of 4 ## .. .. .. .. ..$ rgb : Named chr [1:2] NA NA ## .. .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. .. .. 
..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## ..$ border :List of 12 ## .. ..$ diagonalDown: Named logi [1:2] FALSE FALSE ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ diagonalUp : Named logi [1:2] FALSE FALSE ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ outline : Named logi [1:2] FALSE FALSE ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ left :List of 2 ## .. .. ..$ style: Named chr [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ right :List of 2 ## .. .. ..$ style: Named chr [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ start :List of 2 ## .. .. ..$ style: Named chr [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ end :List of 2 ## .. .. ..$ style: Named chr [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ top :List of 2 ## .. .. ..$ style: Named chr [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. .. 
..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ bottom :List of 2 ## .. .. ..$ style: Named chr [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ diagonal :List of 2 ## .. .. ..$ style: Named chr [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ vertical :List of 2 ## .. .. ..$ style: Named chr [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ horizontal :List of 2 ## .. .. ..$ style: Named chr [1:2] NA NA ## .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. ..$ color:List of 4 ## .. .. .. ..$ rgb : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ theme : Named chr [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ indexed: Named int [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. .. .. ..$ tint : Named num [1:2] NA NA ## .. .. .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## ..$ alignment :List of 8 ## .. ..$ horizontal : Named chr [1:2] "general" "general" ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ vertical : Named chr [1:2] "bottom" "bottom" ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ wrapText : Named logi [1:2] FALSE FALSE ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ readingOrder : Named chr [1:2] "context" "context" ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ indent : Named int [1:2] 0 0 ## .. .. 
..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ justifyLastLine: Named logi [1:2] FALSE FALSE ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ shrinkToFit : Named logi [1:2] FALSE FALSE ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ textRotation : Named int [1:2] 0 0 ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## ..$ protection:List of 2 ## .. ..$ locked: Named logi [1:2] TRUE TRUE ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ## .. ..$ hidden: Named logi [1:2] FALSE FALSE ## .. .. ..- attr(*, "names")= chr [1:2] "Normal" "Explanatory Text" ``` Why is this so complicated? For one thing, there are too many types of formatting available to include in the data frame given by `xlsx_cells()`. Consider borders: each cell can have a border on each of its four sides, as well as through the middle of the cell horizontally, vertically, diagonally up and diagonally down. Each border can have its own colour and linetype. Colour can be expressed as an RGB value, a theme number with or without a tint, or an index number. To express that in a data frame would take (4 sides \+ 4 through the middle) \* (4 ways to express colour \+ 1 linetype) \= 40 columns. Just for borders. Instead, Excel dynamically defines combinations of formatting, as they occur, and gives ID numbers to those combinations. Each cell has a formatting ID, which is used to look up its particular combination of formats. Note that this means two cells that are both bold can have different formatting IDs, e.g. if one is also italic. There is also a hierarchy of formatting. The first formatting to be applied is the ‘style’. Every cell has a style, which by default is the ‘normal’ style. You can reformat all cells of the ‘normal’ style at once by updating the ‘normal’ style. Style formats are available under `xlsx_formats()$style` When you modify the format of a particular cell, then that modification is local to that cell. The cell’s local formatting is available under `xlsx_formats()$local`. Both `$style` and `$local` have the same structure, so it’s easy to switch from checking a cell’s style\-level formatting to its local formatting. Here’s an example of looking up both the local bold formatting and the style\-level bold formatting of a cell. 
```
path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr")

cells <- xlsx_cells(path, sheet = "formatting") %>%
  select(row, col, character, style_format, local_format_id) %>%
  dplyr::filter(row == 1, col == 1)
cells
```

```
## # A tibble: 1 x 5
## row col character style_format local_format_id
## <int> <int> <chr> <chr> <int>
## 1 1 1 bold Normal 6
```

```
formats <- xlsx_formats(path)

local_bold <- formats$local$font$bold
local_bold
```

```
## [1] FALSE TRUE TRUE FALSE FALSE TRUE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE
## [16] FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [31] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [46] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE
```

```
style_bold <- formats$style$font$bold
style_bold
```

```
## Normal Explanatory Text
## FALSE FALSE
```

```
mutate(cells,
       style_bold = style_bold[style_format],
       local_bold = local_bold[local_format_id])
```

```
## # A tibble: 1 x 7
## row col character style_format local_format_id style_bold local_bold
## <int> <int> <chr> <chr> <int> <lgl> <lgl>
## 1 1 1 bold Normal 6 FALSE TRUE
```

Most of the time you will use the local formatting. You only need to check the style formatting when styles have been used in the spreadsheet (rare) and you want to ignore any local modifications of that style for particular cells.

Conditional formatting is an obvious omission. It isn't supported by tidyxl because it doesn't encode any new information; it responds to cell values, which you already have. If you think you need it, feel free to open an [issue](https://github.com/nacnudus/tidyxl/issues).
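If you find yourself doing this lookup repeatedly, it can be wrapped in a small helper. This is just a sketch, not part of tidyxl or unpivotr; `add_bold()` is a made-up name, and it simply indexes the two format vectors by each cell's `style_format` and `local_format_id`, exactly as in the example above.

```
library(dplyr)
library(tidyxl)

# Hypothetical helper: annotate every cell with its style-level and
# local bold flags, using the same indexing trick as above.
add_bold <- function(cells, formats) {
  cells %>%
    mutate(style_bold = formats$style$font$bold[style_format],
           local_bold = formats$local$font$bold[local_format_id])
}

path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr")
xlsx_cells(path, sheet = "formatting") %>%
  select(row, col, character, style_format, local_format_id) %>%
  add_bold(xlsx_formats(path))
```

The same pattern generalises to any other property under `$font`, `$fill` or `$border`.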
5\.2 Common formats ------------------- This example shows how to look up the most common formats. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") cells <- xlsx_cells(path, sheet = "formatting") %>% select(row, col, character, style_format, local_format_id, height, width) formats <- xlsx_formats(path) bold <- formats$local$font$bold italic <- formats$local$font$italic underline <- formats$local$font$underline strikethrough <- formats$local$font$strike font_colour <- formats$local$font$color$rgb fill_colour <- formats$local$fill$patternFill$fgColor$rgb font_size <- formats$local$font$size font_name <- formats$local$font$name border_colour <- formats$local$border$right$color$rgb border_linetype <- formats$local$border$right$style mutate(cells, bold = bold[local_format_id], italic = italic[local_format_id], underline = underline[local_format_id], strikethrough = strikethrough[local_format_id], font_colour = font_colour[local_format_id], font_size = font_size[local_format_id], font_name = font_name[local_format_id], fill_colour = fill_colour[local_format_id], border_colour = border_colour[local_format_id], border_linetype = border_linetype[local_format_id]) ``` ``` ## # A tibble: 14 x 17 ## row col character style_format local_format_id height width bold italic underline ## <int> <int> <chr> <chr> <int> <dbl> <dbl> <lgl> <lgl> <chr> ## 1 1 1 bold Normal 6 15 8.71 TRUE FALSE <NA> ## 2 2 1 italic Normal 8 15 8.71 FALSE TRUE <NA> ## 3 3 1 underline Normal 51 15 8.71 FALSE FALSE single ## 4 4 1 striketh… Normal 52 15 8.71 FALSE FALSE <NA> ## 5 5 1 red text Normal 12 15 8.71 FALSE FALSE <NA> ## 6 6 1 font siz… Normal 53 18.8 8.71 FALSE FALSE <NA> ## 7 7 1 font ari… Normal 54 15 8.71 FALSE FALSE <NA> ## 8 8 1 yellow f… Normal 11 15 8.71 FALSE FALSE <NA> ## 9 9 1 black bo… Normal 43 15 8.71 FALSE FALSE <NA> ## 10 10 1 thick bo… Normal 55 15 8.71 FALSE FALSE <NA> ## 11 11 1 dashed b… Normal 56 15 8.71 FALSE FALSE <NA> ## 12 12 1 row heig… Normal 1 30 8.71 FALSE FALSE <NA> ## 13 13 2 column w… Normal 1 15 17.4 FALSE FALSE <NA> ## 14 14 1 Bad' sty… Explanatory… 57 15 8.71 FALSE FALSE <NA> ## # … with 7 more variables: strikethrough <lgl>, font_colour <chr>, font_size <dbl>, ## # font_name <chr>, fill_colour <chr>, border_colour <chr>, border_linetype <chr> ```
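An alternative to the repeated indexing above is to gather the properties you care about into a small lookup table keyed by `local_format_id` and join it on. This is a sketch of that design, not the book's code; the name `format_lookup` is only for illustration, and the result is equivalent to the `mutate()` approach.

```
library(dplyr)
library(tibble)
library(tidyxl)

path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr")
formats <- xlsx_formats(path)

# One row per local format ID; every vector under formats$local has the
# same length, so they line up by position.
format_lookup <- tibble(
  local_format_id = seq_along(formats$local$font$bold),
  bold            = formats$local$font$bold,
  italic          = formats$local$font$italic,
  font_size       = formats$local$font$size,
  fill_colour     = formats$local$fill$patternFill$fgColor$rgb
)

xlsx_cells(path, sheet = "formatting") %>%
  select(row, col, character, local_format_id) %>%
  left_join(format_lookup, by = "local_format_id")
```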
5\.3 In\-cell formatting ------------------------ The previous section was about formatting applied at the level of cells. What about when multiple formats are applied within a single cell? A single word in a string might be a different colour, to stand out. Unlike cell\-level formatting, in\-cell formatting is very limited, so it can be provided as a data frame with the following columns. * bold * italic * underline * strike * vertAlign * size * color\_rgb * color\_theme * color\_indexed * color\_tint * font * family * scheme There is one of these data frames for each cell, and they are kept in a list\-column called `character_formatted`. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") xlsx_cells(path, sheet = "in-cell formatting") %>% select(address, character_formatted) ``` ``` ## # A tibble: 9 x 2 ## address character_formatted ## <chr> <list> ## 1 A1 <tibble [9 × 14]> ## 2 A3 <tibble [1 × 14]> ## 3 B3 <tibble [1 × 14]> ## 4 A4 <tibble [1 × 14]> ## 5 B4 <NULL> ## 6 A5 <tibble [2 × 14]> ## 7 B5 <NULL> ## 8 A6 <tibble [1 × 14]> ## 9 B6 <NULL> ``` The way to access these data frames is via `tidyr::unnest()`. In this example, a single cell has a long string of words, where each word is formatted differently. ``` xlsx_cells(path, sheet = "in-cell formatting") %>% dplyr::filter(address == "A1") %>% select(address, character_formatted) %>% unnest() ``` ``` ## Warning: `cols` is now required. ## Please use `cols = c(character_formatted)` ``` ``` ## # A tibble: 9 x 15 ## address character bold italic underline strike vertAlign size color_rgb color_theme ## <chr> <chr> <lgl> <lgl> <chr> <lgl> <chr> <dbl> <chr> <int> ## 1 A1 in-cell: FALSE FALSE <NA> FALSE <NA> 0 FF000000 NA ## 2 A1 bold, TRUE FALSE <NA> FALSE <NA> 0 FF000000 NA ## 3 A1 italic, FALSE TRUE <NA> FALSE <NA> 0 FF000000 NA ## 4 A1 underlin… FALSE FALSE single FALSE <NA> 0 FF000000 NA ## 5 A1 striketh… FALSE FALSE <NA> TRUE <NA> 0 FF000000 NA ## 6 A1 superscr… FALSE FALSE <NA> FALSE superscr… 0 FF000000 NA ## 7 A1 red, FALSE FALSE <NA> FALSE <NA> 0 FFFF0000 NA ## 8 A1 arial, FALSE FALSE <NA> FALSE <NA> 0 <NA> NA ## 9 A1 size 14 FALSE FALSE <NA> FALSE <NA> 0 <NA> NA ## # … with 5 more variables: color_indexed <int>, color_tint <dbl>, font <chr>, family <int>, ## # scheme <chr> ``` It’s hard to think of a plausible example, so what follows is an implausible one that nevertheless occurred in real life.
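One small aside before that example: the warning emitted by `unnest()` above comes from newer versions of tidyr, which require the columns to unnest to be named explicitly. Assuming tidyr 1.0 or later, the call can be written as:

```
path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr")

xlsx_cells(path, sheet = "in-cell formatting") %>%
  dplyr::filter(address == "A1") %>%
  select(address, character_formatted) %>%
  tidyr::unnest(cols = c(character_formatted))
```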
5\.4 Multiple pieces of information in a single cell, with meaningful formatting -------------------------------------------------------------------------------- The above table of products and their production readiness combines three pieces of information in a single cell. Believe it or not, this is based on a real\-life example. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") xlsx_cells(path, sheet = "in-cell formatting") %>% dplyr::filter(address != "A1") %>% rectify() ``` ``` ## # A tibble: 4 x 3 ## `row/col` `1(A)` `2(B)` ## <int> <chr> <chr> ## 1 3 ID Count ## 2 4 A1-TEST 1 ## 3 5 A2-PRODUCTION 2 ## 4 6 A3-PRODUCTION 3 ``` In the `ID` column, the first section `"A1"`, `"A2"`, `"A3"` is the product ID. The second section `"TEST"`, `"PRODUCTION"` is the production readiness, and the formatting of `"TEST"` and `"PRODUCTION"` shows whether or not manufacturing failed. In the file, one of those strings is formatted red with a strikethrough, indicating failure. One way to extract the formatting is by unnesting, as above, but in this case we can get away with mapping over the nested data frames and pulling out a single value. ``` strikethrough <- xlsx_cells(path, sheet = "in-cell formatting") %>% dplyr::filter(address != "A1", col == 1) %>% mutate(strikethrough = map_lgl(character_formatted, ~ any(.x$strike))) %>% select(row, col, character, strikethrough) ``` This can then be joined onto the rest of the table, in the same way as the section “Already a tidy table but with meaningful formatting of single cells”. ``` cells <- xlsx_cells(path, sheet = "in-cell formatting") %>% dplyr::filter(address != "A1") %>% select(row, col, data_type, character, numeric) strikethrough <- xlsx_cells(path, sheet = "in-cell formatting") %>% dplyr::filter(address != "A1", col == 1) %>% mutate(strikethrough = map_lgl(character_formatted, ~ any(.x$strike))) %>% select(row, strikethrough) left_join(cells, strikethrough, by = "row") %>% behead("up", header) %>% select(-col) %>% spatter(header) %>% select(ID, strikethrough, Count) ``` ``` ## # A tibble: 3 x 3 ## ID strikethrough Count ## <chr> <lgl> <dbl> ## 1 A1-TEST NA 1 ## 2 A2-PRODUCTION TRUE 2 ## 3 A3-PRODUCTION NA 3 ```
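To go one step further, the product ID and the production readiness could be split out of the `ID` string into their own columns. This is a sketch rather than part of the original example; it assumes every ID has the form `<product>-<readiness>`, and the intermediate name `tidy` is introduced here only to hold the result of the pipeline above.

```
library(dplyr)
library(tidyr)
library(unpivotr)

tidy <- left_join(cells, strikethrough, by = "row") %>%
  behead("up", header) %>%
  select(-col) %>%
  spatter(header) %>%
  select(ID, strikethrough, Count)

tidy %>%
  separate(ID, into = c("product", "readiness"), sep = "-") %>%
  # Treat NA strikethrough as "did not fail"
  mutate(failed = coalesce(strikethrough, FALSE)) %>%
  select(product, readiness, failed, Count)
```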
5\.5 Superscript symbols ------------------------ This is pernicious. What was Paula’s score, in the table below? ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") read_excel(path, sheet = "superscript symbols") ``` ``` ## # A tibble: 2 x 2 ## Name Score ## <chr> <chr> ## 1 Paula 91 ## 2 Matilda 10 ``` The answer is, it’s not Paula, it’s Paul (superscript ‘a’), who scored 9 (superscript ‘1’). This sort of thing is difficult to spot. There’s a clue in the ‘Score’ column, which has been coerced to character so that the author could enter the superscript ‘1’ (Excel doesn’t allow superscripts in numeric cells), But it would be easy to interpret that as an accident of translation, and simply coerce back to numeric with `as.integer()`. With tidyxl, you can count the rows of each element of the `character_formatted` column to identify cells that have in\-cell formatting. ``` xlsx_cells(path, sheet = "superscript symbols") %>% dplyr::filter(data_type == "character") %>% dplyr::filter(map_int(character_formatted, nrow) != 1) %>% select(row, col, character) ``` ``` ## # A tibble: 2 x 3 ## row col character ## <int> <int> <chr> ## 1 2 1 Paula ## 2 2 2 91 ``` The values and symbols can then be separated by assuming the value is the first string, and the symbol is the second. ``` xlsx_cells(path, sheet = "superscript symbols") %>% mutate(character = map_chr(character_formatted, ~ ifelse(is.null(.x), character, .x$character[1])), symbol = map_chr(character_formatted, ~ ifelse(is.null(.x), NA, .x$character[2])), numeric = if_else(row > 1 & col == 2 & data_type == "character", as.numeric(character), numeric), character = if_else(is.na(numeric), character, NA_character_)) %>% select(row, col, numeric, character, symbol) ``` ``` ## Warning in if_else(row > 1 & col == 2 & data_type == "character", as.numeric(character), : NAs ## introduced by coercion ``` ``` ## # A tibble: 6 x 5 ## row col numeric character symbol ## <int> <int> <dbl> <chr> <chr> ## 1 1 1 NA Name <NA> ## 2 1 2 NA Score <NA> ## 3 2 1 NA Paul a ## 4 2 2 9 <NA> 1 ## 5 3 1 NA Matilda <NA> ## 6 3 2 10 <NA> <NA> ```
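Since this kind of thing is so easy to miss, it can be worth screening a whole workbook for cells that carry more than one run of in-cell formatting before trusting values read with readxl. The following is a sketch of such a check; `find_multiformat_cells()` is a made-up name, not a tidyxl function.

```
library(dplyr)
library(purrr)
library(tidyxl)

# Hypothetical helper: list every character cell, in any sheet, whose text
# has more than one run of in-cell formatting (e.g. a superscript suffix).
find_multiformat_cells <- function(path) {
  xlsx_cells(path) %>%
    dplyr::filter(
      data_type == "character",
      map_int(character_formatted, ~ if (is.null(.x)) 1L else nrow(.x)) > 1L
    ) %>%
    select(sheet, address, character)
}

find_multiformat_cells(
  system.file("extdata", "worked-examples.xlsx", package = "unpivotr")
)
```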
8\.1 Non\-text headers e.g. dates --------------------------------- At the time of writing, readxl doesn’t convert Excel dates to R dates when they are in the header row. Using tidyxl and unpivotr, you can choose to make a cell of any data type into a tidy ‘header’, and you can reformat it as text before `spatter()` turns it into the header of a data frame. Another way to format headers as part of the `behead()` will be shown later. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") xlsx_cells(path, sheet = "non-text headers") %>% behead("left", name) %>% behead("up", `academic-year`) %>% mutate(`academic-year` = strftime(`academic-year`, "%Y")) %>% select(row, data_type, `academic-year`, name, numeric) %>% spatter(`academic-year`) %>% select(-row) ``` ``` ## # A tibble: 2 x 3 ## name `2017` `2018` ## <chr> <dbl> <dbl> ## 1 Matilda 4 2 ## 2 Nicholas 3 1 ``` When a single set of headers is of mixed data types, e.g. some character and some date, `behead()` chooses the correct ones using the `data_type` column, before converting them all to text via `format()`. ``` xlsx_cells(path, sheet = "non-text headers") %>% select(row, col, data_type, character, numeric, date) %>% behead("up", header) ``` ``` ## # A tibble: 6 x 7 ## row col data_type character numeric date header ## <int> <int> <chr> <chr> <dbl> <dttm> <chr> ## 1 2 1 character Matilda NA NA Name ## 2 2 2 numeric <NA> 2 NA 2018-01-01 ## 3 2 3 numeric <NA> 4 NA 2017-01-01 ## 4 3 1 character Nicholas NA NA Name ## 5 3 2 numeric <NA> 1 NA 2018-01-01 ## 6 3 3 numeric <NA> 3 NA 2017-01-01 ``` To format a header when a single set of headers are of mixed data types, you can specify a function for each data type in the call to `behead()`. ``` xlsx_cells(path, sheet = "non-text headers") %>% select(row, col, data_type, character, numeric, date) %>% behead("up", header, formatters = list(date = ~ strftime(.x, "%Y"), character = toupper)) ``` ``` ## # A tibble: 6 x 7 ## row col data_type character numeric date header ## <int> <int> <chr> <chr> <dbl> <dttm> <chr> ## 1 2 1 character Matilda NA NA NAME ## 2 2 2 numeric <NA> 2 NA 2018 ## 3 2 3 numeric <NA> 4 NA 2017 ## 4 3 1 character Nicholas NA NA NAME ## 5 3 2 numeric <NA> 1 NA 2018 ## 6 3 3 numeric <NA> 3 NA 2017 ```
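Putting the two ideas together, the `formatters` argument can replace the separate `mutate()` step in the first pipeline, so the whole job is done in one pass. This is a sketch; under that assumption it should reproduce the first table in this section.

```
xlsx_cells(path, sheet = "non-text headers") %>%
  behead("left", name) %>%
  behead("up", `academic-year`,
         formatters = list(date = ~ strftime(.x, "%Y"))) %>%
  select(row, data_type, `academic-year`, name, numeric) %>%
  spatter(`academic-year`) %>%
  select(-row)
```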
8\.2 Data embedded in comments ------------------------------ Comment strings are availabe in the `comment` column, just like `character`. Comments can have formatting, but tidyxl doesn’t yet import the formatting. If you need this, please open an [issue](https://github.com/nacnudus/tidyxl/issues). It would probably be imported into a `comment_formatted` column, similarly to `character_formatted`. ``` path <- system.file("extdata", "worked-examples.xlsx", package = "unpivotr") xlsx_cells(path, sheet = "comments") %>% select(row, col, data_type, character, numeric, comment) %>% behead("up", "header") ``` ``` ## # A tibble: 4 x 7 ## row col data_type character numeric comment header ## <int> <int> <chr> <chr> <dbl> <chr> <chr> ## 1 2 1 character Paul NA Absent Term 1 Name ## 2 2 2 numeric <NA> 9 Predicted Score ## 3 3 1 character Matilda NA <NA> Name ## 4 3 2 numeric <NA> 10 <NA> Score ``` Comments apply to single cells, so follow the same procedure as “Already a tidy table but with meaningful formatting of single cells”. ``` cells <- xlsx_cells(path, sheet = "comments") %>% select(row, col, data_type, character, numeric, comment) cells ``` ``` ## # A tibble: 6 x 6 ## row col data_type character numeric comment ## <int> <int> <chr> <chr> <dbl> <chr> ## 1 1 1 character Name NA <NA> ## 2 1 2 character Score NA <NA> ## 3 2 1 character Paul NA Absent Term 1 ## 4 2 2 numeric <NA> 9 Predicted ## 5 3 1 character Matilda NA <NA> ## 6 3 2 numeric <NA> 10 <NA> ``` ``` values <- cells %>% select(-comment) %>% behead("up", header) %>% select(-col) %>% spatter(header) values ``` ``` ## # A tibble: 2 x 3 ## row Name Score ## <int> <chr> <dbl> ## 1 2 Paul 9 ## 2 3 Matilda 10 ``` ``` comments <- cells %>% behead("up", header) %>% mutate(header = paste0(header, "_comment")) %>% select(row, header, comment) %>% spread(header, comment) comments ``` ``` ## # A tibble: 2 x 3 ## row Name_comment Score_comment ## <int> <chr> <chr> ## 1 2 Absent Term 1 Predicted ## 2 3 <NA> <NA> ``` ``` left_join(values, comments, by = "row") %>% select(-row) ``` ``` ## # A tibble: 2 x 4 ## Name Score Name_comment Score_comment ## <chr> <dbl> <chr> <chr> ## 1 Paul 9 Absent Term 1 Predicted ## 2 Matilda 10 <NA> <NA> ```
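`spread()` still works, but it has since been superseded in tidyr by `pivot_wider()`. For reference, the `comments` step above could equally be written as follows (a sketch with the same output):

```
library(tidyr)

comments <- cells %>%
  behead("up", header) %>%
  mutate(header = paste0(header, "_comment")) %>%
  select(row, header, comment) %>%
  pivot_wider(names_from = header, values_from = comment)
comments
```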
9 Case studies
==============

This is a collection of spreadsheets found in the wild. Some are as easy to mung as the examples; others are harder because their structure is less consistent. Seeing and reading the code will help you gauge how much work is still involved in munging a spreadsheet. Attempting them for yourself and checking the model answer will help you to hone your instincts.

The spreadsheet files are provided in the `smungs` package on GitHub. Install as follows.

```
# install.packages("devtools") # If you don't already have it
devtools::install_github("nacnudus/smungs")
```

#### 9\.0\.0\.1 Other case studies elsewhere

* [YouTube videos](https://www.youtube.com/channel/UCrw0ScBCFSbk_lgkjyg4ucw)
* [Worked example code](https://github.com/nacnudus/ukfarm)
* [Blog post on `readr::melt_csv()`](https://nacnudus.github.io/duncangarmonsway/posts/2018-12-29-meltcsv/)
9\.1 Australian Marriage Survey
-------------------------------

These are the results of a survey in 2017 by the Australian Bureau of Statistics that asked, “Should the law be changed to allow same\-sex couples to marry?” There are two tables with structures that are similar but different.

[Download the file](https://github.com/nacnudus/smungs/blob/master/inst/extdata/ozmarriage.xlsx?raw=true). [Original source](http://www.abs.gov.au/ausstats/abs@.nsf/mf/1800.0).

### 9\.1\.1 The full code listing

```
cells <- xlsx_cells(smungs::ozmarriage)
formats <- xlsx_formats(smungs::ozmarriage)

table_1 <- cells %>%
  dplyr::filter(sheet == "Table 1", row >= 5L, !is_blank) %>%
  mutate(character = str_trim(character)) %>%
  behead("up-left", "population") %>%
  behead("up-left", "response") %>%
  behead("up", "unit") %>%
  behead("left", "state") %>%
  arrange(row, col) %>%
  select(row, data_type, numeric, state, population, response, unit) %>%
  spatter(unit) %>%
  select(-row)

state <- cells %>%
  dplyr::filter(sheet == "Table 2", row >= 5L, col == 1L, !is_blank,
                formats$local$font$bold[local_format_id]) %>%
  select(row, col, state = character)

table_2 <- cells %>%
  dplyr::filter(sheet == "Table 2", row >= 5L, !is_blank) %>%
  mutate(character = str_trim(character)) %>%
  behead("up-left", "population") %>%
  behead("up-left", "response") %>%
  behead("up", "unit") %>%
  behead("left", "territory") %>%
  enhead(state, "up-left") %>%
  arrange(row, col) %>%
  select(row, data_type, numeric, state, territory, population, response, unit) %>%
  spatter(unit) %>%
  select(-row)

all_tables <- bind_rows("Table 1" = table_1, "Table 2" = table_2, .id = "sheet")
all_tables
```

```
## # A tibble: 1,176 x 7
## sheet state population response `%` no. territory
## <chr> <chr> <chr> <chr> <dbl> <dbl> <chr>
## 1 Table 1 New South Wales Eligible Participants Non-responding 20.5 1065445 <NA>
## 2 Table 1 New South Wales Eligible Participants Response clear 79.2 4111200 <NA>
## 3 Table 1 New South Wales Eligible Participants Response not clear(a) 0.2 11036 <NA>
## 4 Table 1 New South Wales Eligible Participants Total 100 5187681 <NA>
## 5 Table 1 New South Wales Response clear No 42.2 1736838 <NA>
## 6 Table 1 New South Wales Response clear Total 100 4111200 <NA>
## 7 Table 1 New South Wales Response clear Yes 57.8 2374362 <NA>
## 8 Table 1 Victoria Eligible Participants Non-responding 18.3 743634 <NA>
## 9 Table 1 Victoria Eligible Participants Response clear 81.4 3306727 <NA>
## 10 Table 1 Victoria Eligible Participants Response not clear(a) 0.3 11028 <NA>
## # … with 1,166 more rows
```

### 9\.1\.2 Step by step

#### 9\.1\.2\.1 Table 1

The first rows, up to the column\-headers, must be filtered out. The trailing rows below the table will be treated as row\-headers, but because there is no data to join them to, they will be dropped automatically. That is handy, because otherwise we would have to know where the bottom of the table is, which is likely to change with later editions of the same data. Apart from filtering the first rows, the rest of this example is ‘textbook’.
``` cells <- xlsx_cells(smungs::ozmarriage) table_1 <- cells %>% dplyr::filter(sheet == "Table 1", row >= 5L, !is_blank) %>% mutate(character = str_trim(character)) %>% behead("up-left", "population") %>% behead("up-left", "response") %>% behead("up", "unit") %>% behead("left", "state") %>% arrange(row, col) %>% select(row, data_type, numeric, state, population, response, unit) %>% spatter(unit) %>% select(-row) table_1 ``` ``` ## # A tibble: 63 x 5 ## state population response `%` no. ## <chr> <chr> <chr> <dbl> <dbl> ## 1 New South Wales Eligible Participants Non-responding 20.5 1065445 ## 2 New South Wales Eligible Participants Response clear 79.2 4111200 ## 3 New South Wales Eligible Participants Response not clear(a) 0.2 11036 ## 4 New South Wales Eligible Participants Total 100 5187681 ## 5 New South Wales Response clear No 42.2 1736838 ## 6 New South Wales Response clear Total 100 4111200 ## 7 New South Wales Response clear Yes 57.8 2374362 ## 8 Victoria Eligible Participants Non-responding 18.3 743634 ## 9 Victoria Eligible Participants Response clear 81.4 3306727 ## 10 Victoria Eligible Participants Response not clear(a) 0.3 11028 ## # … with 53 more rows ``` #### 9\.1\.2\.2 Table 2 This is like Table 1, broken down by division rather than by state. The snag is that the states are named in the same column as their divisions. Because the state names are formatted in bold, we can isolate them from the division names. With them out of the way, unpivot the rest of the table as normal, and then use `enhead()` at the end to join the state names back on. Since tables 1 and 2 are so similar structurally, they might as well be joined into one. ``` cells <- xlsx_cells(smungs::ozmarriage) formats <- xlsx_formats(smungs::ozmarriage) state <- cells %>% dplyr::filter(sheet == "Table 2", row >= 5L, col == 1L, !is_blank, formats$local$font$bold[local_format_id]) %>% select(row, col, state = character) table_2 <- cells %>% dplyr::filter(sheet == "Table 2", row >= 5L, !is_blank) %>% mutate(character = str_trim(character)) %>% behead("up-left", "population") %>% behead("up-left", "response") %>% behead("up", "unit") %>% behead("left", "territory") %>% enhead(state, "up-left") %>% arrange(row, col) %>% select(row, data_type, numeric, state, territory, population, response, unit) %>% spatter(unit) %>% select(-row) all_tables <- bind_rows("Table 1" = table_1, "Table 2" = table_2, .id = "sheet") %>% select(sheet, state, territory, population, response, `%`, no.) all_tables ``` ``` ## # A tibble: 1,176 x 7 ## sheet state territory population response `%` no. 
## <chr> <chr> <chr> <chr> <chr> <dbl> <dbl>
## 1 Table 1 New South Wales <NA> Eligible Participants Non-responding 20.5 1065445
## 2 Table 1 New South Wales <NA> Eligible Participants Response clear 79.2 4111200
## 3 Table 1 New South Wales <NA> Eligible Participants Response not clear(a) 0.2 11036
## 4 Table 1 New South Wales <NA> Eligible Participants Total 100 5187681
## 5 Table 1 New South Wales <NA> Response clear No 42.2 1736838
## 6 Table 1 New South Wales <NA> Response clear Total 100 4111200
## 7 Table 1 New South Wales <NA> Response clear Yes 57.8 2374362
## 8 Table 1 Victoria <NA> Eligible Participants Non-responding 18.3 743634
## 9 Table 1 Victoria <NA> Eligible Participants Response clear 81.4 3306727
## 10 Table 1 Victoria <NA> Eligible Participants Response not clear(a) 0.3 11028
## # … with 1,166 more rows
```
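A quick sanity check on the `enhead()` step (not part of the original listing) is to confirm that no Table 2 record was left without a state, and to see how the records are distributed across states:

```
library(dplyr)

# Any rows that failed to pick up a bold state header?
dplyr::filter(all_tables, sheet == "Table 2", is.na(state))

# Records per state in Table 2
all_tables %>%
  dplyr::filter(sheet == "Table 2") %>%
  count(state)
```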
9\.2 Vaccinations ----------------- [ This is a real\-life example of [implied multiples](implied-multiples.html#implied-multiples). Implied multiples look like a single table, but many of the headers appear more than once. There is a dominant set of headers that are on the same ‘level’ (e.g. in the same row) as the other headers. In this case, there is a small multiple for each year of data. The year headers are highlighted in yellow in the screenshot. [Download the file](https://github.com/nacnudus/smungs/blob/master/inst/extdata/vaccinations.xlsx?raw=true). [Original source](https://www.cdc.gov/vaccines/imz-managers/coverage/schoolvaxview/data-reports/vacc-coverage.html). The way to unpivot this is to realise that the year cells represent two different things: the year (obviously) and a statistic (percentage vaccinated). It would have been easier to unpivot if the years had been put into a separate row of headers, so we will pretend that that was in fact the case. 1. Filter for the year cells and store in a variable to `enhead()` later. 2. `behead()` everything else as usual, and then overwite the year headers with `percentage_vaccinated`. 3. `enhead()` the year cells. ``` cells <- xlsx_cells(smungs::vaccinations, "SVV Coverage Trend Data") years <- cells %>% dplyr::filter(row == 3, col >= 1, str_detect(character, "20[0-9]{2}-[0-9]{2}")) %>% select(row, col, year = character) years ``` ``` ## # A tibble: 42 x 3 ## row col year ## <int> <int> <chr> ## 1 3 2 2009-10 ## 2 3 8 2010-11 ## 3 3 14 2011-12 ## 4 3 20 2012-13 ## 5 3 26 2013-14 ## 6 3 32 2014-15 ## 7 3 38 2015-16 ## 8 3 44 2009-10 ## 9 3 50 2010-11 ## 10 3 56 2011-12 ## # … with 32 more rows ``` ``` cells %>% select(row, col, data_type, character) %>% behead("up-left", "series") %>% behead("up-left", "population") %>% behead("left", "state") %>% behead("up", "header") %>% mutate(header = if_else(str_detect(header, "20[0-9]{2}-[0-9]{2}"), "percent_vaccinated", header), header = str_replace_all(str_to_lower(header), " ", "_")) %>% enhead(years, "up-left") %>% select(row, series, population, state, year, header, character) %>% spatter(header, character) %>% select(series, population, state, year, percent_vaccinated, percent_surveyed, everything()) ``` ``` ## # A tibble: 2,226 x 11 ## series population state year percent_vaccina… percent_surveyed row footnotes survey_type ## <chr> <chr> <chr> <chr> <chr> <chr> <int> <chr> <chr> ## 1 Schoo… All kinde… Alab… 2009… 94.0 100.0 4 ≥. *. ** Census ## 2 Schoo… All kinde… Alab… 2010… NA NA 4 NA NA ## 3 Schoo… All kinde… Alab… 2011… 93.6 100.0 4 ≥. * .** Census ## 4 Schoo… All kinde… Alab… 2012… 92.8 100.0 4 ≥. * . ‡… Census ## 5 Schoo… All kinde… Alab… 2013… 92.0 100.0 4 ≥. *. ** Census ## 6 Schoo… All kinde… Alab… 2014… 93.5 100.0 4 ≥. * Census ## 7 Schoo… All kinde… Alab… 2015… 93.1 100.0 4 ≥.* Census ## 8 Schoo… All kinde… Alab… 2009… NReq 100.0 4 *. ** Census ## 9 Schoo… All kinde… Alab… 2010… NA NA 4 NA NA ## 10 Schoo… All kinde… Alab… 2011… NReq 100.0 4 * .** Census ## # … with 2,216 more rows, and 2 more variables: target <chr>, ## # total_kindergarten_population <chr> ```
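Because of codes like `"NReq"`, the `percent_vaccinated` and `percent_surveyed` columns come through as character. A possible follow-up step, sketched here under the assumption that the result of the pipeline above has been assigned to `vaccinations`, is to keep the code in its own column and parse the rest as numbers:

```
library(dplyr)

# `vaccinations` is assumed to hold the tibble built above.
vaccinations %>%
  mutate(
    vaccination_code   = if_else(is.na(suppressWarnings(as.numeric(percent_vaccinated))),
                                 percent_vaccinated, NA_character_),
    percent_vaccinated = suppressWarnings(as.numeric(percent_vaccinated)),
    percent_surveyed   = suppressWarnings(as.numeric(percent_surveyed))
  )
```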
9\.3 US Crime ------------- [ These are two tables of numbers of crimes in the USA, by state and category of crime. Confusingly, they’re numbered Table 2 and Table 3\. Table 1 exists but isn’t included in this case study because it is so straightforward. ### 9\.3\.1 Table 2 [Download the file](https://github.com/nacnudus/smungs/blob/master/inst/extdata/us-crime-2.xlsx?raw=true). [Original source](https://ucr.fbi.gov/crime-in-the-u.s/2016/crime-in-the-u.s.-2016/tables/table-2). #### 9\.3\.1\.1 Simple version This is straightforward to import as long as you don’t care to organise the hierarchies of crimes and areas. For example, Conneticut is within the division New England, which itself is within the region Northeast, but if you don’t need to express those relationships in the data then you can ignore the bold formatting. The only slight snag is that the header cells in row 5 are blank. There is a header for the units “Rate per 100,000”, but no header for the units “Count” – the cells in those positions are empty. It would be a problem if the cells didn’t exist at all, because `behead("up", "unit")` wouldn’t be able to associate data cells with missing header cells. Fortunately they do exist (because they have formatting), they are just empty or `NA`. To make sure they aren’t ignored, use `drop_na = FALSE` in `behead()`, and then later fill the blanks in the `units` column with `"Count"`. ``` cells <- xlsx_cells(smungs::us_crime_2) %>% mutate(character = map_chr(character_formatted, ~ ifelse(is.null(.x), character, .x$character[1])), character = str_replace_all(character, "\n", " ")) cells %>% dplyr::filter(row >= 4L) %>% select(row, col, data_type, character, numeric) %>% behead("up-left", "crime") %>% behead("up", "unit", drop_na = FALSE) %>% behead("left-up", "area") %>% behead("left", "year") %>% behead("left", "population") %>% dplyr::filter(year != "Percent change") %>% mutate(unit = if_else(unit == "", "Count", unit)) %>% select(row, data_type, numeric, unit, area, year, population, crime) %>% spatter(unit) %>% select(-row) ``` ``` ## # A tibble: 1,320 x 6 ## area year population crime Count `Rate per 100,00… ## <chr> <chr> <chr> <chr> <dbl> <dbl> ## 1 United States T… 2015 320896618 Aggravated assault 7.64e5 238. ## 2 United States T… 2015 320896618 Burglary 1.59e6 495. ## 3 United States T… 2015 320896618 Larceny-theft 5.72e6 1784. ## 4 United States T… 2015 320896618 Motor vehicle theft 7.13e5 222. ## 5 United States T… 2015 320896618 Murder and nonnegligent mans… 1.59e4 4.9 ## 6 United States T… 2015 320896618 Property crime 8.02e6 2500. ## 7 United States T… 2015 320896618 Rape (legacy definition) 9.13e4 28.4 ## 8 United States T… 2015 320896618 Rape (revised definition) 1.26e5 39.3 ## 9 United States T… 2015 320896618 Robbery 3.28e5 102. ## 10 United States T… 2015 320896618 Violent crime 1.23e6 385. ## # … with 1,310 more rows ``` #### 9\.3\.1\.2 Complex version If you do mind about grouping states within divisions within regions, and crimes within categories, then you have more work to do using `enhead()` rather than `behead()`. 1. Select the header cells at each level of the hierarchy and store them in their own variables. For example, filter for the bold cells in row 4, which are the categories of crimes, and store them in the `categories` variable. 2. Select the data cells, and use `enhead()` to join them to the headers. 
In fact the headers `unit`, `year`, `population` can be handled by `behead()`, because they aren’t hierarchichal, so only the variables `category`, `crime`, `region`, `division` and `state` are handled by `enhead()`. ``` cells <- xlsx_cells(smungs::us_crime_2) %>% mutate(character = map_chr(character_formatted, ~ ifelse(is.null(.x), character, .x$character[1])), character = str_replace_all(character, "\n", " ")) formats <- xlsx_formats(smungs::us_crime_2) categories <- cells %>% dplyr::filter(row == 4L, data_type == "character", formats$local$font$bold[local_format_id]) %>% select(row, col, category = character) categories ``` ``` ## # A tibble: 2 x 3 ## row col category ## <int> <int> <chr> ## 1 4 4 Violent crime ## 2 4 16 Property crime ``` ``` crimes <- cells %>% dplyr::filter(row == 4L, data_type == "character") %>% mutate(character = if_else(character %in% categories$category, "Total", character)) %>% select(row, col, crime = character) crimes ``` ``` ## # A tibble: 13 x 3 ## row col crime ## <int> <int> <chr> ## 1 4 1 Area ## 2 4 2 Year ## 3 4 3 Population ## 4 4 4 Total ## 5 4 6 Murder and nonnegligent manslaughter ## 6 4 8 Rape (revised definition) ## 7 4 10 Rape (legacy definition) ## 8 4 12 Robbery ## 9 4 14 Aggravated assault ## 10 4 16 Total ## 11 4 18 Burglary ## 12 4 20 Larceny-theft ## 13 4 22 Motor vehicle theft ``` ``` regions <- cells %>% dplyr::filter(row >= 6L, col == 1L, data_type == "character", formats$local$font$bold[local_format_id]) %>% select(row, col, region = character) regions ``` ``` ## # A tibble: 5 x 3 ## row col region ## <int> <int> <chr> ## 1 6 1 United States Total ## 2 9 1 Northeast ## 3 45 1 Midwest ## 4 90 1 South ## 5 153 1 West ``` ``` divisions <- cells %>% dplyr::filter(row >= 6L, col == 1L, data_type == "character", !formats$local$font$bold[local_format_id], !str_detect(character, "^ {5}")) %>% select(row, col, division = character) divisions ``` ``` ## # A tibble: 21 x 3 ## row col division ## <int> <int> <chr> ## 1 12 1 New England ## 2 33 1 Middle Atlantic ## 3 48 1 East North Central ## 4 66 1 West North Central ## 5 93 1 South Atlantic ## 6 123 1 East South Central ## 7 138 1 West South Central ## 8 156 1 Mountain ## 9 183 1 Pacific ## 10 201 1 Puerto Rico ## # … with 11 more rows ``` ``` states <- cells %>% dplyr::filter(row >= 6L, col == 1L, data_type == "character") %>% mutate(character = if_else(str_detect(character, "^ {5}"), str_trim(character), "Total")) %>% select(row, col, state = character) states ``` ``` ## # A tibble: 77 x 3 ## row col state ## <int> <int> <chr> ## 1 6 1 Total ## 2 9 1 Total ## 3 12 1 Total ## 4 15 1 Connecticut ## 5 18 1 Maine ## 6 21 1 Massachusetts ## 7 24 1 New Hampshire ## 8 27 1 Rhode Island ## 9 30 1 Vermont ## 10 33 1 Total ## # … with 67 more rows ``` ``` cells %>% dplyr::filter(row >= 5L, col >= 2L) %>% select(row, col, data_type, character, numeric) %>% behead("up", "unit") %>% behead("left", "year") %>% behead("left", "population") %>% enhead(categories, "up-left") %>% enhead(crimes, "up-left") %>% enhead(regions, "left-up") %>% enhead(divisions, "left-up", drop = FALSE) %>% enhead(states, "left-up", drop = FALSE) %>% dplyr::filter(year != "Percent change") %>% select(value = numeric, category, crime, region, division, state, year, population) ``` ``` ## # A tibble: 2,640 x 8 ## value category crime region division state year population ## <dbl> <chr> <chr> <chr> <chr> <chr> <chr> <chr> ## 1 42121 Violent cri… Total Northea… New Engla… Total 2015 14710229 ## 2 286. 
Violent cri… Total Northea… New Engla… Total 2015 14710229 ## 3 41598 Violent cri… Total Northea… New Engla… Total 2016 14735525 ## 4 282. Violent cri… Total Northea… New Engla… Total 2016 14735525 ## 5 326 Violent cri… Murder and nonnegligent … Northea… New Engla… Total 2015 14710229 ## 6 2.2 Violent cri… Murder and nonnegligent … Northea… New Engla… Total 2015 14710229 ## 7 292 Violent cri… Murder and nonnegligent … Northea… New Engla… Total 2016 14735525 ## 8 2 Violent cri… Murder and nonnegligent … Northea… New Engla… Total 2016 14735525 ## 9 4602 Violent cri… Rape (revised definition) Northea… New Engla… Total 2015 14710229 ## 10 31.3 Violent cri… Rape (revised definition) Northea… New Engla… Total 2015 14710229 ## # … with 2,630 more rows ``` ### 9\.3\.2 Table 3 [Download the file](https://github.com/nacnudus/smungs/blob/master/inst/extdata/us-crime-3.xlsx?raw=true). [Original source](https://ucr.fbi.gov/crime-in-the-u.s/2016/crime-in-the-u.s.-2016/tables/table-3). This table is confusing to humans, let alone computers. The `Population` column seems to belong to a different table altogether, so that’s how we’ll treat it. 1. Import the `Population` column and the state/area headers to the left. 2. Import the crime\-related column headers, and the state/area headers to the left. 3. Join the two datasets. The `statistic` header ends up having blank values due to the cells being blank, so these are manually filled in. The hierarchy of crime (e.g. ‘robbery’ is within ‘violent crime’) is ignored. That would be handled in the same way as for [Table 2](us-crime.html#us-crime-2). ``` cells <- xlsx_cells(smungs::us_crime_3) %>% mutate(character = map_chr(character_formatted, ~ ifelse(is.null(.x), character, .x$character[1])), character = str_replace_all(character, "\n", " ")) population <- cells %>% dplyr::filter(row >= 5L, col <= 4L) %>% behead("left-up", "state") %>% behead("left-up", "area") %>% behead("left", "statistic", drop_na = FALSE) %>% mutate(statistic = case_when(is.na(statistic) ~ "Population", statistic == "" ~ "Population", TRUE ~ str_trim(statistic))) %>% dplyr::filter(data_type == "numeric", !str_detect(area, regex("total", ignore_case = TRUE)), statistic != "Estimated total") %>% select(data_type, numeric, state, area, statistic) %>% spatter(statistic) crime <- cells %>% dplyr::filter(row >= 4, col != 5L) %>% behead("left-up", "state") %>% behead("left-up", "area") %>% behead("left", "statistic", formatters = list(character = str_trim)) %>% behead("up", "crime") %>% dplyr::filter(data_type == "numeric", !str_detect(area, regex("total", ignore_case = TRUE)), !is.na(statistic), statistic != "") %>% mutate(statistic = case_when(statistic == "Area actually reporting" ~ "Actual", statistic == "Estimated total" ~ "Estimated")) %>% select(data_type, numeric, state, area, statistic, crime) %>% spatter(statistic) left_join(population, crime) ``` ``` ## Joining, by = c("state", "area") ``` ``` ## # A tibble: 1,480 x 7 ## state area `Area actually repo… Population crime Actual Estimated ## <chr> <chr> <dbl> <dbl> <chr> <dbl> <dbl> ## 1 ALABA… Cities outside … 0.966 520422 Aggravated assa… 2.84e+3 2914 ## 2 ALABA… Cities outside … 0.966 520422 Burglary 4.17e+3 4275 ## 3 ALABA… Cities outside … 0.966 520422 Larceny- theft 1.43e+4 14641 ## 4 ALABA… Cities outside … 0.966 520422 Motor vehicle … 1.34e+3 1375 ## 5 ALABA… Cities outside … 0.966 520422 Murder and nonn… 4.10e+1 42 ## 6 ALABA… Cities outside … 0.966 520422 Population 9.66e-1 1 ## 7 ALABA… Cities outside … 0.966 520422 Property 
crime 1.98e+4 20291 ## 8 ALABA… Cities outside … 0.966 520422 Rape (legacy def… 1.87e+2 193 ## 9 ALABA… Cities outside … 0.966 520422 Rape (revised de… 2.63e+2 269 ## 10 ALABA… Cities outside … 0.966 520422 Robbery 4.10e+2 421 ## # … with 1,470 more rows ```
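As a rough sanity check on the unpivoted Table 2 — a sketch, not part of the original, assuming the “simple version” result above has been assigned to `crime2` — the reported rate per 100,000 should be approximately recoverable from the count and the population:

```r
# Sketch only: `crime2` is an assumed name for the simple-version output above.
crime2 %>%
  mutate(population = as.numeric(population),
         implied_rate = Count / population * 1e5) %>%
  select(area, year, crime, Count, implied_rate)
```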
Getting Cleaning and Wrangling Data
nacnudus.github.io
https://nacnudus.github.io/spreadsheet-munging-strategies/toronto-transit-commission.html
9\.4 Toronto Transit Commission ------------------------------- This table shows the number of trips recorded on the Toronto Transit Commission per year, by type of ticket, person, vehicle, and weekday/weekend/holiday. Sharla Gelfand’s annotated screenshot explains the structure, and see her excellent [blog post](https://sharlagelfand.netlify.com/posts/tidy-ttc/) for how she wrangled it with standard tidyverse tools. I show here an alternative method with tidyxl and unpivotr. [Download the file](https://github.com/nacnudus/smungs/blob/master/inst/extdata/toronto_transit.xlsx?raw=true). [Original source](https://portal0.cf.opendata.inter.sandbox-toronto.ca/dataset/ttc-ridership-analysis). ### 9\.4\.1 The full code listing ``` cells <- xlsx_cells(smungs::toronto_transit) %>% dplyr::filter(!is_blank, row >= 6) fare <- cells %>% dplyr::filter(col == 2, !str_detect(character, "^ "), !str_detect(character, "TOTAL")) %>% select(row, col, fare = character) cells %>% behead("up", "year", formatters = list(character = str_trim)) %>% behead("left-up", "context") %>% behead("left", "media", formatters = list(character = str_trim)) %>% enhead(fare, "left-up") %>% dplyr::filter(!str_detect(media, "TOTAL")) %>% separate(year, c("year", "note"), sep = " ", fill = "right") %>% select(year, context, fare, media, count = numeric) ``` ``` ## # A tibble: 1,188 x 5 ## year context fare media count ## <chr> <chr> <chr> <chr> <dbl> ## 1 2017 WHO ADULT TOKENS 76106 ## 2 2016 WHO ADULT TOKENS 102073 ## 3 2015 WHO ADULT TOKENS 110945 ## 4 2014 WHO ADULT TOKENS 111157 ## 5 2013 WHO ADULT TOKENS 112360 ## 6 2012 WHO ADULT TOKENS 117962 ## 7 2011 WHO ADULT TOKENS 124748 ## 8 2010 WHO ADULT TOKENS 120366 ## 9 2009 WHO ADULT TOKENS 114686 ## 10 2008 WHO ADULT TOKENS 94210 ## # … with 1,178 more rows ``` ### 9\.4\.2 Step by step Although the annotations point out that there are really three separate tables (`WHO`, `WHERE` and `WHEN`), they can be imported as one. Column 2 has two levels of headers in it: the fare in bold (“ADULT”, “BUS”, etc.), and the media used to pay for it indented by a few spaces (“TOKENS”, “WEEKLY PASS”, etc.). Because `behead()` can’t distinguish between different levels of headers in the same column, we need to put the bold fare headers into a separate variable on their own, and `enhead()` them back onto the rest of the table later. Unfortunately the fare headers in the “WHEN” context aren’t bold, so rather than filter for bold headers, instead we filter for headers that aren’t indented by spaces. We also filter out any “TOTAL” headers. 
``` cells <- xlsx_cells(smungs::toronto_transit) %>% dplyr::filter(!is_blank, row >= 6) fare <- cells %>% dplyr::filter(col == 2, !str_detect(character, "^ "), # Filter out indented headers !str_detect(character, "TOTAL")) %>% # Filter out totals select(row, col, fare = character) fare ``` ``` ## # A tibble: 7 x 3 ## row col fare ## <int> <int> <chr> ## 1 7 2 ADULT ## 2 21 2 SENIOR/STUDENT ## 3 31 2 CHILDREN ## 4 43 2 BUS ## 5 46 2 RAIL ## 6 53 2 WEEKDAY ## 7 54 2 WEEKEND/HOLIDAY ``` ``` ttc <- cells %>% behead("up", "year") %>% behead("left-up", "context") %>% behead("left", "media") %>% enhead(fare, "left-up") %>% dplyr::filter(!str_detect(media, "TOTAL")) %>% select(year, context, fare, media, count = numeric) ttc ``` ``` ## # A tibble: 1,188 x 5 ## year context fare media count ## <chr> <chr> <chr> <chr> <dbl> ## 1 "2017" WHO ADULT " TOKENS" 76106 ## 2 "2016" WHO ADULT " TOKENS" 102073 ## 3 " 2015 *" WHO ADULT " TOKENS" 110945 ## 4 "2014" WHO ADULT " TOKENS" 111157 ## 5 "2013" WHO ADULT " TOKENS" 112360 ## 6 "2012" WHO ADULT " TOKENS" 117962 ## 7 "2011" WHO ADULT " TOKENS" 124748 ## 8 "2010" WHO ADULT " TOKENS" 120366 ## 9 "2009" WHO ADULT " TOKENS" 114686 ## 10 "2008" WHO ADULT " TOKENS" 94210 ## # … with 1,178 more rows ``` There’s a bit more cosmetic cleaning to do. The indentation can be trimmed from the `media` and the `year` headers, and the asterisk removed from the year `2015 *`. ``` ttc %>% mutate(year = str_trim(year), media = str_trim(media)) %>% separate(year, c("year", "note"), sep = " ", fill = "right") %>% select(-note) ``` ``` ## # A tibble: 1,188 x 5 ## year context fare media count ## <chr> <chr> <chr> <chr> <dbl> ## 1 2017 WHO ADULT TOKENS 76106 ## 2 2016 WHO ADULT TOKENS 102073 ## 3 2015 WHO ADULT TOKENS 110945 ## 4 2014 WHO ADULT TOKENS 111157 ## 5 2013 WHO ADULT TOKENS 112360 ## 6 2012 WHO ADULT TOKENS 117962 ## 7 2011 WHO ADULT TOKENS 124748 ## 8 2010 WHO ADULT TOKENS 120366 ## 9 2009 WHO ADULT TOKENS 114686 ## 10 2008 WHO ADULT TOKENS 94210 ## # … with 1,178 more rows ```
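A quick check on the tidied result — a sketch, not from the original, assuming the cleaned tibble above has been assigned to `ttc_tidy` — is to recompute annual totals within a single context and compare them against the spreadsheet’s own `TOTAL` rows:

```r
# Sketch only: `ttc_tidy` is an assumed name for the cleaned result above.
ttc_tidy %>%
  dplyr::filter(context == "WHO") %>%
  group_by(year) %>%
  summarise(total_trips = sum(count), .groups = "drop")
```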
Getting Cleaning and Wrangling Data
nacnudus.github.io
https://nacnudus.github.io/spreadsheet-munging-strategies/ground-water.html
9\.5 Ground water ----------------- [ If the cells containing `U` didn’t exist, then this spreadsheet would be a textbook example of unpivoting a pivot table. There are two rows of column headers, as well as two columns of row headers, so you would use `behead()` for each header. [Download the file](https://github.com/nacnudus/smungs/blob/master/inst/extdata/groundwater.xlsx?raw=true). Synthesised from the [original tweet](https://twitter.com/beckfrydenborg/status/974787652573646849). ``` x <- xlsx_cells(smungs::groundwater) %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric) %>% behead("up-left", "sample-type") %>% behead("up-left", "site") %>% behead("left", "parameter") %>% behead("left", "unit") x ``` ``` ## # A tibble: 17 x 9 ## row col data_type character numeric `sample-type` site parameter unit ## <int> <int> <chr> <chr> <dbl> <chr> <chr> <chr> <chr> ## 1 4 3 numeric <NA> 3.2 ground water A Nitrogen, Kjeldahl mn/L ## 2 6 3 numeric <NA> 0.025 ground water A Nitrate Nitrite as N mg/L ## 3 6 4 character U NA ground water A Nitrate Nitrite as N mg/L ## 4 8 3 numeric <NA> 0.04 ground water A Phosphorus as P mg/L ## 5 4 5 numeric <NA> 1.2 ground water B Nitrogen, Kjeldahl mn/L ## 6 6 5 numeric <NA> 0.025 ground water B Nitrate Nitrite as N mg/L ## 7 6 6 character U NA ground water B Nitrate Nitrite as N mg/L ## 8 8 5 numeric <NA> 0.17 ground water B Phosphorus as P mg/L ## 9 4 7 numeric <NA> 0.5 ground water C Nitrogen, Kjeldahl mn/L ## 10 6 7 numeric <NA> 0.025 ground water C Nitrate Nitrite as N mg/L ## 11 6 8 character U NA ground water C Nitrate Nitrite as N mg/L ## 12 8 7 numeric <NA> 0.062 ground water C Phosphorus as P mg/L ## 13 4 9 numeric <NA> 0.4 ground water D Nitrogen, Kjeldahl mn/L ## 14 6 9 numeric <NA> 0.025 ground water D Nitrate Nitrite as N mg/L ## 15 6 10 character U NA ground water D Nitrate Nitrite as N mg/L ## 16 8 9 numeric <NA> 0.04 ground water D Phosphorus as P mg/L ## 17 8 10 character J3 NA ground water D Phosphorus as P mg/L ``` So what to do about the `U` cells? We don’t know what they mean, but perhaps they are some kind of flag, to inform the interpretation of the numbers. If that’s the case, then they should appear in the same row of the final data frame as the numbers. Something like `tidyr::spread()` would work, except that instead of spreading the values in just one column, we need to spread the values in both the `character` and `numeric` columns, depending on the value in the `data_type` column. This is what `spatter()` is for. ``` x %>% select(-col) %>% spatter(data_type) %>% select(-row) ``` ``` ## # A tibble: 12 x 6 ## `sample-type` site parameter unit character numeric ## <chr> <chr> <chr> <chr> <chr> <dbl> ## 1 ground water A Nitrogen, Kjeldahl mn/L <NA> 3.2 ## 2 ground water B Nitrogen, Kjeldahl mn/L <NA> 1.2 ## 3 ground water C Nitrogen, Kjeldahl mn/L <NA> 0.5 ## 4 ground water D Nitrogen, Kjeldahl mn/L <NA> 0.4 ## 5 ground water A Nitrate Nitrite as N mg/L U 0.025 ## 6 ground water B Nitrate Nitrite as N mg/L U 0.025 ## 7 ground water C Nitrate Nitrite as N mg/L U 0.025 ## 8 ground water D Nitrate Nitrite as N mg/L U 0.025 ## 9 ground water A Phosphorus as P mg/L <NA> 0.04 ## 10 ground water B Phosphorus as P mg/L <NA> 0.17 ## 11 ground water C Phosphorus as P mg/L <NA> 0.062 ## 12 ground water D Phosphorus as P mg/L J3 0.04 ``` Compare that with the results of `spread()`, which can only spread one value column at a time. 
``` x %>% select(-col) %>% spread(data_type, character) ``` ``` ## # A tibble: 17 x 7 ## row `sample-type` site parameter unit character numeric ## <int> <chr> <chr> <chr> <chr> <chr> <chr> ## 1 4 ground water D Nitrogen, Kjeldahl mn/L <NA> <NA> ## 2 4 ground water C Nitrogen, Kjeldahl mn/L <NA> <NA> ## 3 4 ground water B Nitrogen, Kjeldahl mn/L <NA> <NA> ## 4 4 ground water A Nitrogen, Kjeldahl mn/L <NA> <NA> ## 5 6 ground water A Nitrate Nitrite as N mg/L <NA> <NA> ## 6 6 ground water B Nitrate Nitrite as N mg/L <NA> <NA> ## 7 6 ground water C Nitrate Nitrite as N mg/L <NA> <NA> ## 8 6 ground water D Nitrate Nitrite as N mg/L <NA> <NA> ## 9 6 ground water A Nitrate Nitrite as N mg/L U <NA> ## 10 6 ground water B Nitrate Nitrite as N mg/L U <NA> ## 11 6 ground water C Nitrate Nitrite as N mg/L U <NA> ## 12 6 ground water D Nitrate Nitrite as N mg/L U <NA> ## 13 8 ground water A Phosphorus as P mg/L <NA> <NA> ## 14 8 ground water D Phosphorus as P mg/L <NA> <NA> ## 15 8 ground water C Phosphorus as P mg/L <NA> <NA> ## 16 8 ground water B Phosphorus as P mg/L <NA> <NA> ## 17 8 ground water D Phosphorus as P mg/L J3 <NA> ``` ``` x %>% select(-col) %>% spread(data_type, numeric) ``` ``` ## # A tibble: 17 x 7 ## row `sample-type` site parameter unit character numeric ## <int> <chr> <chr> <chr> <chr> <dbl> <dbl> ## 1 4 ground water A Nitrogen, Kjeldahl mn/L NA 3.2 ## 2 4 ground water B Nitrogen, Kjeldahl mn/L NA 1.2 ## 3 4 ground water C Nitrogen, Kjeldahl mn/L NA 0.5 ## 4 4 ground water D Nitrogen, Kjeldahl mn/L NA 0.4 ## 5 6 ground water A Nitrate Nitrite as N mg/L NA NA ## 6 6 ground water B Nitrate Nitrite as N mg/L NA NA ## 7 6 ground water C Nitrate Nitrite as N mg/L NA NA ## 8 6 ground water D Nitrate Nitrite as N mg/L NA NA ## 9 6 ground water A Nitrate Nitrite as N mg/L NA 0.025 ## 10 6 ground water B Nitrate Nitrite as N mg/L NA 0.025 ## 11 6 ground water C Nitrate Nitrite as N mg/L NA 0.025 ## 12 6 ground water D Nitrate Nitrite as N mg/L NA 0.025 ## 13 8 ground water D Phosphorus as P mg/L NA NA ## 14 8 ground water A Phosphorus as P mg/L NA 0.04 ## 15 8 ground water B Phosphorus as P mg/L NA 0.17 ## 16 8 ground water C Phosphorus as P mg/L NA 0.062 ## 17 8 ground water D Phosphorus as P mg/L NA 0.04 ```
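Because the flags now live in their own column, it is also easy to pull out just the flagged measurements. A sketch (not in the original), continuing from the `x` object above and renaming the spattered columns to something more descriptive:

```r
# Sketch only: rename the spattered columns, then list only the flagged rows.
x %>%
  select(-col) %>%
  spatter(data_type) %>%
  select(-row) %>%
  rename(flag = character, value = numeric) %>%
  dplyr::filter(!is.na(flag))
```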
Getting Cleaning and Wrangling Data
nacnudus.github.io
https://nacnudus.github.io/spreadsheet-munging-strategies/cashflows.html
9\.6 Cashflows -------------- [ Davis Vaughan kindly [blogged](https://blog.davisvaughan.com/post/tidying-excel-cash-flow-spreadsheets-in-r/) about using unpivotr to tidy spreadsheets of cashflows. Here is an example using unpivotr’s new, more powerful syntax. [Download the file](https://github.com/nacnudus/smungs/blob/master/inst/extdata/cashflows.xlsx?raw=true). [Original source](https://github.com/DavisVaughan/tidying-excel-cashflows-blog-companion). The techniques are 1. Filter out `TOTAL` rows 2. Create an ordered factor of the months, which follow the fiscal year April to March. This is done using the fact that the months appear in column\-order as well as year\-order, so we can sort on `col`. ``` cashflows <- xlsx_cells(smungs::cashflows) %>% dplyr::filter(!is_blank, row >= 4L) %>% select(row, col, data_type, character, numeric) %>% behead("up", "month") %>% behead("left-up", "main_header") %>% behead("left", "sub_header") %>% dplyr::filter(month != "TOTALS", !str_detect(sub_header, "otal")) %>% arrange(col) %>% mutate(month = factor(month, levels = unique(month), ordered = TRUE), sub_header = str_trim(sub_header)) %>% select(main_header, sub_header, month, value = numeric) cashflows ``` ``` ## # A tibble: 336 x 4 ## main_header sub_header month value ## <chr> <chr> <ord> <dbl> ## 1 Cash Inflows (Income): Cash Collections April 2227 ## 2 Cash Inflows (Income): Credit Collections April -4712 ## 3 Cash Inflows (Income): Investment Income April -2412 ## 4 Cash Inflows (Income): Other: April 490 ## 5 Cash Outflows (Expenses): Advertising April -324 ## 6 Cash Outflows (Expenses): Bank Service Charges April 3221 ## 7 Cash Outflows (Expenses): Insurance April 960 ## 8 Cash Outflows (Expenses): Interest April 936 ## 9 Cash Outflows (Expenses): Inventory Purchases April 2522 ## 10 Cash Outflows (Expenses): Maintenance & Repairs April 3883 ## # … with 326 more rows ``` To prove that the data is correct, we can reproduce the total row at the bottom (‘Ending Cash Balance’). ``` cashflows %>% group_by(main_header, month) %>% summarise(value = sum(value)) %>% arrange(month, main_header) %>% dplyr::filter(str_detect(main_header, "ows")) %>% mutate(value = if_else(str_detect(main_header, "Income"), value, -value)) %>% group_by(month) %>% summarise(value = sum(value)) %>% mutate(value = cumsum(value)) ``` ``` ## `summarise()` regrouping by 'main_header' (override with `.groups` argument) ``` ``` ## `summarise()` ungrouping (override with `.groups` argument) ``` ``` ## # A tibble: 12 x 2 ## month value ## * <ord> <dbl> ## 1 April -39895 ## 2 May -43080 ## 3 June -39830 ## 4 July -14108 ## 5 Aug -25194 ## 6 Sept -42963 ## 7 Oct -39635 ## 8 Nov -29761 ## 9 Dec -49453 ## 10 Jan -30359 ## 11 Feb -33747 ## 12 Mar -27016 ```
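The same `cashflows` tibble also supports simpler summaries, for example monthly totals under each main header. A sketch, not part of the original post:

```r
# Sketch only: total value recorded per month under each main header.
cashflows %>%
  group_by(month, main_header) %>%
  summarise(value = sum(value), .groups = "drop")
```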
Getting Cleaning and Wrangling Data
nacnudus.github.io
https://nacnudus.github.io/spreadsheet-munging-strategies/school-performance.html
9\.7 School performance ----------------------- A certain United States state education department provides its schools with spreadsheets of statistics. I bet the children in that state get a great education, because there’s at least one R enthusiast on the staff whose curiosity has never left them. ### 9\.7\.1 Sheet 1 The first sheet is an example of [mixed headers in column 1 being distinguished by bold formatting](pivot-simple.html#mixed-levels-of-headers-in-the-same-rowcolumn-distinguished-by-formatting). Filter for the bold cells in column 1 and assign them to a variable. Then `behead()` the other headers, and finally `enhead()` the bold headers back on. [Download the file](https://github.com/nacnudus/smungs/blob/master/inst/extdata/school.xlsx?raw=true), modified from an original source provided to the author. ``` cells <- xlsx_cells(smungs::school, "Sheet1") %>% dplyr::filter(!is_blank) formats <- xlsx_formats(smungs::school) bold_headers <- cells %>% dplyr::filter(col == 1L, formats$local$font$bold[local_format_id]) %>% select(row, col, bold_header = character) cells %>% behead("up-left", "metric") %>% behead("left", "plain-header") %>% enhead(bold_headers, "left-up") %>% select(row, data_type, numeric, metric, `plain-header`) %>% spatter(metric) %>% select(-row) ``` ``` ## # A tibble: 21 x 10 ## `plain-header` `% Advanced` `% Needs Improv… `% Proficient` `% Proficient o… ## <chr> <dbl> <dbl> <dbl> <dbl> ## 1 All Students 0.515 0.0297 0.446 0.960 ## 2 Economically … 0.333 0.0667 0.567 0.9 ## 3 Non-Economica… 0.592 0.0141 0.394 0.986 ## 4 Students w/ D… NA NA NA NA ## 5 Non-Disabled 0.565 0.0217 0.413 0.978 ## 6 ELL NA NA NA NA ## 7 Non-ELL 0.525 0.0202 0.444 0.970 ## 8 African Amer.… NA NA NA NA ## 9 Asian NA NA NA NA ## 10 Hispanic/Lati… NA NA NA NA ## # … with 11 more rows, and 5 more variables: `% Warning/ Failing` <dbl>, CPI <dbl>, `Median ## # SGP` <dbl>, `N Included` <dbl>, `N Included in SGP` <dbl> ``` ### 9\.7\.2 Sheet 2 The second sheet is variation on [two clear rows of text column headers, left aligned](pivot-complex.html#two-clear-rows-of-text-column-headers-left-aligned-1). Here, there are three rows of colum headers. The first row is left\-aligned, and the second and third rows are directly above the data cells. But the second row is blank above columns D and E. That doesn’t actually matter; in the output, `header_2` will be `NA` for data from those columns. [Download the file](https://github.com/nacnudus/smungs/blob/master/inst/extdata/school.xlsx?raw=true), modified from an original source provided to the author. 
``` xlsx_cells(smungs::school, "Sheet2") %>% select(row, col, address, data_type, character, numeric, is_blank) %>% mutate(character = str_trim(character)) %>% behead("up-left", "header_1") %>% behead("up", "header_2") %>% behead("up", "header_3") %>% behead("left", "classroom") %>% dplyr::filter(!is_blank, !is.na(header_3)) %>% arrange(col, row) ``` ``` ## # A tibble: 32 x 11 ## row col address data_type character numeric is_blank header_1 header_2 header_3 ## <int> <int> <chr> <chr> <chr> <dbl> <lgl> <chr> <chr> <chr> ## 1 5 4 D5 character 10 NA FALSE MCAS Su… <NA> MCAS Gr… ## 2 6 4 D6 character 10 NA FALSE MCAS Su… <NA> MCAS Gr… ## 3 7 4 D7 character 10 NA FALSE MCAS Su… <NA> MCAS Gr… ## 4 8 4 D8 character 10 NA FALSE MCAS Su… <NA> MCAS Gr… ## 5 5 5 E5 character 4 NA FALSE MCAS Su… <NA> # Tested ## 6 6 5 E6 character 8 NA FALSE MCAS Su… <NA> # Tested ## 7 7 5 E7 character 5 NA FALSE MCAS Su… <NA> # Tested ## 8 8 5 E8 character 10 NA FALSE MCAS Su… <NA> # Tested ## 9 5 6 F5 numeric <NA> 0.342 FALSE Possibl… Total P… % ## 10 6 6 F6 numeric <NA> 0.319 FALSE Possibl… Total P… % ## # … with 22 more rows, and 1 more variable: classroom <chr> ``` ### 9\.7\.3 Sheet 3 The third sheet is variation on [two clear rows of text column headers, left aligned](pivot-complex.html#two-clear-rows-of-text-column-headers-left-aligned-1), with a nasty catch. The creator of the spreadsheet didn’t merge cells to make space for more words. They didn’t even ‘centre across selection’ (which is sometimes safer than merging cells). Instead, they wrote each word on a separate line, meaning it is ambiguous whether a cell part of another header, or a header in its own right. [Download the file](https://github.com/nacnudus/smungs/blob/master/inst/extdata/school.xlsx?raw=true), modified from an original source provided to the author. Compare columns C and D. Column C has a single header, “Avg Years w/ Class Data”, written across four cells. Column D has two levels of headers, “Years in MA” first, then “% 3\+” nested within it (and written across two cells). There’s no way for a machine to tell which cells are whole headers, and which are parts of headers. We can deal with this by first treating every cell as a header in its own right, and then concatenating the headers of rows 2 to 5\. Using the `"up-left"` direction, headers like “Years in MA” in cell D4 will be carried to the right, which is good. Unfortunately so will headers like “\# Students” in cell B2, which we’ll just have to put up with. 
``` cells <- xlsx_cells(smungs::school, "Sheet3") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric) x <- cells %>% behead("left", "place") %>% behead("up-left", "category") %>% behead("up-left", "metric-cell-1") %>% # Treat every cell in every row as a header behead("up-left", "metric-cell-2") %>% behead("up-left", "metric-cell-3") %>% behead("up-left", "metric-cell-4") %>% behead("up-left", "metric-cell-5") glimpse(x) ``` ``` ## Rows: 36 ## Columns: 12 ## $ row <int> 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8… ## $ col <int> 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, … ## $ data_type <chr> "numeric", "numeric", "numeric", "numeric", "numeric", "numeric", "… ## $ character <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,… ## $ numeric <dbl> 1.000000000, 1.000000000, 1.000000000, 0.842777337, 0.896170213, 0.… ## $ place <chr> "State (All Students)", "Region", "School", "State (All Students)",… ## $ category <chr> "STUDENTS", "STUDENTS", "STUDENTS", "EDUCATOR EXPERIENCE", "EDUCATO… ## $ `metric-cell-1` <chr> "# Students", "# Students", "# Students", "# Students", "# Students… ## $ `metric-cell-2` <chr> "Avg", "Avg", "Avg", "Avg", "Avg", "Avg", "Avg", "Avg", "Avg", "Avg… ## $ `metric-cell-3` <chr> "Years w/", "Years w/", "Years w/", "Years in MA", "Years in MA", "… ## $ `metric-cell-4` <chr> "Class", "Class", "Class", "%", "%", "%", "%", "%", "%", "%", "%", … ## $ `metric-cell-5` <chr> "Data", "Data", "Data", "3+", "3+", "3+", "1-2", "1-2", "1-2", "0-1… ``` Above you can see that every cell in every header row has been treated as a header in its own right, e.g. `"Avg"` is a level\-2 header, and `"Years w/"` is a level\-3 header. The next step is to paste them together into a single header. ``` x <- x %>% # Replace NA with "" otherwise unite() will spell it as "NA". # This is a common irritation. # https://stackoverflow.com/questions/13673894/suppress-nas-in-paste mutate_at(vars(starts_with("metric-cell-")), replace_na, "") %>% unite("metric", starts_with("metric-cell-"), sep = " ") %>% mutate(metric = str_trim(metric)) glimpse(x) ``` ``` ## Rows: 36 ## Columns: 8 ## $ row <int> 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7… ## $ col <int> 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, 10, 10… ## $ data_type <chr> "numeric", "numeric", "numeric", "numeric", "numeric", "numeric", "numeri… ## $ character <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, N… ## $ numeric <dbl> 1.000000000, 1.000000000, 1.000000000, 0.842777337, 0.896170213, 0.846153… ## $ place <chr> "State (All Students)", "Region", "School", "State (All Students)", "Regi… ## $ category <chr> "STUDENTS", "STUDENTS", "STUDENTS", "EDUCATOR EXPERIENCE", "EDUCATOR EXPE… ## $ metric <chr> "# Students Avg Years w/ Class Data", "# Students Avg Years w/ Class Data… ``` Now the headers are manageable. They aren’t perfect – the `"# Students"` header has leaked into `"# Students Avg Years w/ Class Data"`, but that can be cleaned up manually later. At least `"# Students Avg Years w/ Class Data"` is within the `"STUDENTS"` category, which is the hard part. Spreading this data is the final step to make it easy to work with. 
``` x %>% select(place, category, metric, numeric) %>% spread(place, numeric) %>% print(n = Inf) ``` ``` ## # A tibble: 12 x 5 ## category metric Region School `State (All Student… ## <chr> <chr> <dbl> <dbl> <dbl> ## 1 EDUCATOR EXPERIENCE # Students Avg Years in MA %… 0.0439 0 0.0535 ## 2 EDUCATOR EXPERIENCE # Students Avg Years in MA %… 0.0599 0.154 0.104 ## 3 EDUCATOR EXPERIENCE # Students Avg Years in MA %… 0.896 0.846 0.843 ## 4 EDUCATOR EXPERIENCE # Students PTS % Non- PTS 0.248 0.0684 0.247 ## 5 EDUCATOR EXPERIENCE # Students PTS Years in MA %… 0.752 0.932 0.753 ## 6 EDUCATOR QUALIFICATI… # Students % % In Field 0.944 1 0.903 ## 7 EDUCATOR QUALIFICATI… # Students % % Non-SEI Endor… NA NA NA ## 8 EDUCATOR QUALIFICATI… # Students % % SEI Endorsed NA NA NA ## 9 EDUCATOR QUALIFICATI… # Students % Long Term Subs 0.00182 0 0.0112 ## 10 EDUCATOR QUALIFICATI… # Students % Out of Field Fi… 0.0556 0 0.0965 ## 11 STUDENTS # Students 625 116 738499 ## 12 STUDENTS # Students Avg Years w/ Clas… 1 1 1 ``` ### 9\.7\.1 Sheet 1 The first sheet is an example of [mixed headers in column 1 being distinguished by bold formatting](pivot-simple.html#mixed-levels-of-headers-in-the-same-rowcolumn-distinguished-by-formatting). Filter for the bold cells in column 1 and assign them to a variable. Then `behead()` the other headers, and finally `enhead()` the bold headers back on. [Download the file](https://github.com/nacnudus/smungs/blob/master/inst/extdata/school.xlsx?raw=true), modified from an original source provided to the author. ``` cells <- xlsx_cells(smungs::school, "Sheet1") %>% dplyr::filter(!is_blank) formats <- xlsx_formats(smungs::school) bold_headers <- cells %>% dplyr::filter(col == 1L, formats$local$font$bold[local_format_id]) %>% select(row, col, bold_header = character) cells %>% behead("up-left", "metric") %>% behead("left", "plain-header") %>% enhead(bold_headers, "left-up") %>% select(row, data_type, numeric, metric, `plain-header`) %>% spatter(metric) %>% select(-row) ``` ``` ## # A tibble: 21 x 10 ## `plain-header` `% Advanced` `% Needs Improv… `% Proficient` `% Proficient o… ## <chr> <dbl> <dbl> <dbl> <dbl> ## 1 All Students 0.515 0.0297 0.446 0.960 ## 2 Economically … 0.333 0.0667 0.567 0.9 ## 3 Non-Economica… 0.592 0.0141 0.394 0.986 ## 4 Students w/ D… NA NA NA NA ## 5 Non-Disabled 0.565 0.0217 0.413 0.978 ## 6 ELL NA NA NA NA ## 7 Non-ELL 0.525 0.0202 0.444 0.970 ## 8 African Amer.… NA NA NA NA ## 9 Asian NA NA NA NA ## 10 Hispanic/Lati… NA NA NA NA ## # … with 11 more rows, and 5 more variables: `% Warning/ Failing` <dbl>, CPI <dbl>, `Median ## # SGP` <dbl>, `N Included` <dbl>, `N Included in SGP` <dbl> ``` ### 9\.7\.2 Sheet 2 The second sheet is variation on [two clear rows of text column headers, left aligned](pivot-complex.html#two-clear-rows-of-text-column-headers-left-aligned-1). Here, there are three rows of colum headers. The first row is left\-aligned, and the second and third rows are directly above the data cells. But the second row is blank above columns D and E. That doesn’t actually matter; in the output, `header_2` will be `NA` for data from those columns. [Download the file](https://github.com/nacnudus/smungs/blob/master/inst/extdata/school.xlsx?raw=true), modified from an original source provided to the author. 
``` xlsx_cells(smungs::school, "Sheet2") %>% select(row, col, address, data_type, character, numeric, is_blank) %>% mutate(character = str_trim(character)) %>% behead("up-left", "header_1") %>% behead("up", "header_2") %>% behead("up", "header_3") %>% behead("left", "classroom") %>% dplyr::filter(!is_blank, !is.na(header_3)) %>% arrange(col, row) ``` ``` ## # A tibble: 32 x 11 ## row col address data_type character numeric is_blank header_1 header_2 header_3 ## <int> <int> <chr> <chr> <chr> <dbl> <lgl> <chr> <chr> <chr> ## 1 5 4 D5 character 10 NA FALSE MCAS Su… <NA> MCAS Gr… ## 2 6 4 D6 character 10 NA FALSE MCAS Su… <NA> MCAS Gr… ## 3 7 4 D7 character 10 NA FALSE MCAS Su… <NA> MCAS Gr… ## 4 8 4 D8 character 10 NA FALSE MCAS Su… <NA> MCAS Gr… ## 5 5 5 E5 character 4 NA FALSE MCAS Su… <NA> # Tested ## 6 6 5 E6 character 8 NA FALSE MCAS Su… <NA> # Tested ## 7 7 5 E7 character 5 NA FALSE MCAS Su… <NA> # Tested ## 8 8 5 E8 character 10 NA FALSE MCAS Su… <NA> # Tested ## 9 5 6 F5 numeric <NA> 0.342 FALSE Possibl… Total P… % ## 10 6 6 F6 numeric <NA> 0.319 FALSE Possibl… Total P… % ## # … with 22 more rows, and 1 more variable: classroom <chr> ``` ### 9\.7\.3 Sheet 3 The third sheet is variation on [two clear rows of text column headers, left aligned](pivot-complex.html#two-clear-rows-of-text-column-headers-left-aligned-1), with a nasty catch. The creator of the spreadsheet didn’t merge cells to make space for more words. They didn’t even ‘centre across selection’ (which is sometimes safer than merging cells). Instead, they wrote each word on a separate line, meaning it is ambiguous whether a cell part of another header, or a header in its own right. [Download the file](https://github.com/nacnudus/smungs/blob/master/inst/extdata/school.xlsx?raw=true), modified from an original source provided to the author. Compare columns C and D. Column C has a single header, “Avg Years w/ Class Data”, written across four cells. Column D has two levels of headers, “Years in MA” first, then “% 3\+” nested within it (and written across two cells). There’s no way for a machine to tell which cells are whole headers, and which are parts of headers. We can deal with this by first treating every cell as a header in its own right, and then concatenating the headers of rows 2 to 5\. Using the `"up-left"` direction, headers like “Years in MA” in cell D4 will be carried to the right, which is good. Unfortunately so will headers like “\# Students” in cell B2, which we’ll just have to put up with. 
``` cells <- xlsx_cells(smungs::school, "Sheet3") %>% dplyr::filter(!is_blank) %>% select(row, col, data_type, character, numeric) x <- cells %>% behead("left", "place") %>% behead("up-left", "category") %>% behead("up-left", "metric-cell-1") %>% # Treat every cell in every row as a header behead("up-left", "metric-cell-2") %>% behead("up-left", "metric-cell-3") %>% behead("up-left", "metric-cell-4") %>% behead("up-left", "metric-cell-5") glimpse(x) ``` ``` ## Rows: 36 ## Columns: 12 ## $ row <int> 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8… ## $ col <int> 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, … ## $ data_type <chr> "numeric", "numeric", "numeric", "numeric", "numeric", "numeric", "… ## $ character <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,… ## $ numeric <dbl> 1.000000000, 1.000000000, 1.000000000, 0.842777337, 0.896170213, 0.… ## $ place <chr> "State (All Students)", "Region", "School", "State (All Students)",… ## $ category <chr> "STUDENTS", "STUDENTS", "STUDENTS", "EDUCATOR EXPERIENCE", "EDUCATO… ## $ `metric-cell-1` <chr> "# Students", "# Students", "# Students", "# Students", "# Students… ## $ `metric-cell-2` <chr> "Avg", "Avg", "Avg", "Avg", "Avg", "Avg", "Avg", "Avg", "Avg", "Avg… ## $ `metric-cell-3` <chr> "Years w/", "Years w/", "Years w/", "Years in MA", "Years in MA", "… ## $ `metric-cell-4` <chr> "Class", "Class", "Class", "%", "%", "%", "%", "%", "%", "%", "%", … ## $ `metric-cell-5` <chr> "Data", "Data", "Data", "3+", "3+", "3+", "1-2", "1-2", "1-2", "0-1… ``` Above you can see that every cell in every header row has been treated as a header in its own right, e.g. `"Avg"` is a level\-2 header, and `"Years w/"` is a level\-3 header. The next step is to paste them together into a single header. ``` x <- x %>% # Replace NA with "" otherwise unite() will spell it as "NA". # This is a common irritation. # https://stackoverflow.com/questions/13673894/suppress-nas-in-paste mutate_at(vars(starts_with("metric-cell-")), replace_na, "") %>% unite("metric", starts_with("metric-cell-"), sep = " ") %>% mutate(metric = str_trim(metric)) glimpse(x) ``` ``` ## Rows: 36 ## Columns: 8 ## $ row <int> 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7, 8, 9, 7… ## $ col <int> 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, 10, 10… ## $ data_type <chr> "numeric", "numeric", "numeric", "numeric", "numeric", "numeric", "numeri… ## $ character <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, N… ## $ numeric <dbl> 1.000000000, 1.000000000, 1.000000000, 0.842777337, 0.896170213, 0.846153… ## $ place <chr> "State (All Students)", "Region", "School", "State (All Students)", "Regi… ## $ category <chr> "STUDENTS", "STUDENTS", "STUDENTS", "EDUCATOR EXPERIENCE", "EDUCATOR EXPE… ## $ metric <chr> "# Students Avg Years w/ Class Data", "# Students Avg Years w/ Class Data… ``` Now the headers are manageable. They aren’t perfect – the `"# Students"` header has leaked into `"# Students Avg Years w/ Class Data"`, but that can be cleaned up manually later. At least `"# Students Avg Years w/ Class Data"` is within the `"STUDENTS"` category, which is the hard part. Spreading this data is the final step to make it easy to work with. 
``` x %>% select(place, category, metric, numeric) %>% spread(place, numeric) %>% print(n = Inf) ``` ``` ## # A tibble: 12 x 5 ## category metric Region School `State (All Student… ## <chr> <chr> <dbl> <dbl> <dbl> ## 1 EDUCATOR EXPERIENCE # Students Avg Years in MA %… 0.0439 0 0.0535 ## 2 EDUCATOR EXPERIENCE # Students Avg Years in MA %… 0.0599 0.154 0.104 ## 3 EDUCATOR EXPERIENCE # Students Avg Years in MA %… 0.896 0.846 0.843 ## 4 EDUCATOR EXPERIENCE # Students PTS % Non- PTS 0.248 0.0684 0.247 ## 5 EDUCATOR EXPERIENCE # Students PTS Years in MA %… 0.752 0.932 0.753 ## 6 EDUCATOR QUALIFICATI… # Students % % In Field 0.944 1 0.903 ## 7 EDUCATOR QUALIFICATI… # Students % % Non-SEI Endor… NA NA NA ## 8 EDUCATOR QUALIFICATI… # Students % % SEI Endorsed NA NA NA ## 9 EDUCATOR QUALIFICATI… # Students % Long Term Subs 0.00182 0 0.0112 ## 10 EDUCATOR QUALIFICATI… # Students % Out of Field Fi… 0.0556 0 0.0965 ## 11 STUDENTS # Students 625 116 738499 ## 12 STUDENTS # Students Avg Years w/ Clas… 1 1 1 ```
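As noted above, the `"# Students"` prefix that leaked into the concatenated headers can be cleaned up manually. Below is a minimal sketch of one possible clean-up with stringr, assuming the `x` object built in the previous step; the bare `"# Students"` metric has no trailing space, so the pattern leaves it untouched. This is an illustrative follow-up, not part of the original walkthrough.

```
library(dplyr)
library(tidyr)
library(stringr)

# Strip the leaked "# Students " prefix from the pasted-together headers,
# then spread as before. str_squish() tidies any leftover whitespace.
x %>%
  mutate(metric = str_squish(str_remove(metric, "^# Students "))) %>%
  select(place, category, metric, numeric) %>%
  spread(place, numeric)
```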
Getting Cleaning and Wrangling Data
steviep42.github.io
https://steviep42.github.io/webscraping/book/index.html
Chapter 1 Motivations ===================== 1\.1 Lots of Data For The Taking ? ---------------------------------- The web hosts lots of interesting data that you can “scrape”. Some of it is stashed in databases, behind APIs, or in free-form text. Lots of people want to grab information off of Twitter or from user forums to see what people are thinking. There is a lot of valuable information out there for the taking, although some web sites have “caught on” and either block programmatic access or set up “pay walls” that require you to subscribe to an API for access. The New York Times does this. But there are lots of opportunities to get data. | tables | Fetch tables like from Wikipedia | | --- | --- | | forms | You can submit forms and fetch the results | | css | You can access parts of a web site using style or css selectors | | Tweets | Process tweets including emojis | | Web Sites | User forums have lots of content | | Instagram | Yes you can “scrape” photos also | 1\.2 Web Scraping Can Be Ugly ----------------------------- Depending on what web sites you want to scrape, the process can be involved and quite tedious. Many websites are very much aware that people are scraping them, so they offer Application Programming Interfaces (APIs) to make requests for information easier for the user and easier for the server administrators to control access. Most times the user must apply for a “key” to gain access. For premium sites, the key costs money. Some sites like Google and Wunderground (a popular weather site) allow some number of free accesses before they start charging you. Even so, the results are typically returned in XML or JSON, which then requires you to parse the result to get the information you want. In the best situation there is an R package that will wrap up the parsing and return lists or data frames. Here is a summary: * First. Always try to find an R package that will access a site (e.g. New York Times, Wunderground, PubMed). These packages (e.g. omdbapi, easyPubMed, RBitCoin, rtimes) provide a programmatic search interface and return data frames with little to no effort on your part. * If no package exists then hopefully there is an API that allows you to query the website and get results back in JSON or XML. I prefer JSON because it’s “easier” and the packages for parsing JSON return lists, which are native data structures to R. So you can easily turn results into data frames. You will usually use the *rvest* package in conjunction with the XML and RJSONIO packages. * If the Web site doesn’t have an API then you will need to scrape text. This isn’t hard but it is tedious. You will need to use *rvest* to parse HTML elements. If you want to parse multiple pages then you will need to use *rvest* to move to the other pages and possibly fill out forms. If there is a lot of Javascript then you might need to use RSelenium to programmatically manage the web page. 1\.3 Understanding The Language of The Web ------------------------------------------ The Web has its own languages: HTML, CSS, and Javascript. ``` <h1>, <h2>, ..., <h6> Heading 1 and so on <p> Paragraph elements <ul> Unordered List <ol> Ordered List <li> List Element <div> Division / Section <table> Tables <form> Web forms ``` So being productive at scraping requires some familiarity with HTML, XML, and CSS. Here we look at a very basic HTML file. 
Refer to [http://bradleyboehmke.github.io/2015/12/scraping\-html\-text.html](http://bradleyboehmke.github.io/2015/12/scraping-html-text.html) for a basic introductory session on HTML and webscraping with R. ``` <!DOCTYPE html> <html> <body> <h1>My First Heading</h1> <p>My first paragraph.</p> </body> </html> ```   And you could apply some styling to this courtesy of the CSS language, which allows you to inject styles into plain HTML:   ### 1\.3\.1 Useful tools There are a number of tools that allow us to inspect web pages and see “what is under the hood.” Warning \- I just discovered that one of my favorite browser plugins (Firebug) to find the xpaths and/or css paths of page elements is no longer supported under Firefox or Chrome. I’ve found a couple of replacements but they don’t work as well. I’ll research it more. The way that **Selector Gadget** and **xPath** work is that you install them into your browser and then activate them whenever you need to identify the **selector** associated with a part of a web page.   | Selector Gadget | <http://selectorgadget.com/> | | --- | --- | | Firebug | <https://getfirebug.com/> (now integrated into a version of Firefox) | | xPath | [https://addons.mozilla.org/en\-US/firefox/addon/xpath\_finder/](https://addons.mozilla.org/en-US/firefox/addon/xpath_finder/) | | Google Chrome | Right click to inspect a page element | | Google Chrome | View Developer \- Developer Tools | | Oxygen Editor | Can obtain via the Emory Software Express Site | 1\.4 Useful Packages -------------------- You will use the following three primary packages to help you get data from various web pages: *rvest*, *XML*, and *RJSONIO*. Note that you won’t always use them simultaneously but you might use them in pairs or individually depending on the task at hand. 1\.5 Quick **rvest** tutorial ----------------------------- Now let’s do a quick *rvest* tutorial. There are several steps involved in using **rvest** which are conceptually quite straightforward: 1. Identify a URL to be examined for content 2. Use Selector Gadget, xPath, or Google Inspect to identify the “selector”. This will be a paragraph, table, hyperlinks, or images 3. Load rvest 4. Use **read\_html** to “read” the URL 5. Pass the result to **html\_nodes** to get the selectors identified in step number 2 6. Get the text or table content ``` library(rvest) url <- "https://en.wikipedia.org/wiki/World_population" (paragraphs <- read_html(url) %>% html_nodes("p")) ``` ``` ## {xml_nodeset (51)} ## [1] <p class="mw-empty-elt">\n\n</p> ## [2] <p>In <a href="/wiki/Demography" title="Demography">demographics</a>, the <b>world population</b> is the total number of <a h ... ## [3] <p>The world population has experienced <a href="/wiki/Population_growth" title="Population growth">continuous growth</a> fol ... ## [4] <p><a href="/wiki/Birth_rate" title="Birth rate">Birth rates</a> were highest in the late 1980s at about 139 million,<sup id= ... ## [5] <p>Six of the Earth's seven <a href="/wiki/Continent" title="Continent">continents</a> are permanently inhabited on a large s ... ## [6] <p>Estimates of world population by their nature are an aspect of <a href="/wiki/Modernity" title="Modernity">modernity</a>, ... ## [7] <p>It is difficult for estimates to be better than rough approximations, as even modern population estimates are fraught with ... ## [8] <p>Estimates of the population of the world at the time agriculture emerged in around 10,000 BC have ranged between 1 million ... 
## [9] <p>The <a href="/wiki/Plague_of_Justinian" title="Plague of Justinian">Plague of Justinian</a>, which first emerged during th ... ## [10] <p>Starting in AD 2, the <a href="/wiki/Han_Dynasty" class="mw-redirect" title="Han Dynasty">Han Dynasty</a> of <a href="/wik ... ## [11] <p>The <a href="/wiki/Pre-Columbian_era" title="Pre-Columbian era">pre-Columbian</a> population of the Americas is uncertain; ... ## [12] <p>During the European <a href="/wiki/British_Agricultural_Revolution" title="British Agricultural Revolution">Agricultural</ ... ## [13] <p>Population growth in the West became more rapid after the introduction of <a href="/wiki/Vaccination" title="Vaccination"> ... ## [14] <p>The first half of the 20th century in <a href="/wiki/Russian_Empire" title="Russian Empire">Imperial Russia</a> and the <a ... ## [15] <p>Many countries in the <a href="/wiki/Developing_world" class="mw-redirect" title="Developing world">developing world</a> h ... ## [16] <p>It is estimated that the world population reached one billion for the first time in 1804. It was another 123 years before ... ## [17] <p>According to current projections, the global population will reach eight billion by 2024, and is likely to reach around ni ... ## [18] <p>There is no estimation for the exact day or month the world's population surpassed one or two billion. The points at which ... ## [19] <p>As of 2012, the global <a href="/wiki/Human_sex_ratio" title="Human sex ratio">sex ratio</a> is approximately 1.01 males t ... ## [20] <p>According to the <a href="/wiki/World_Health_Organization" title="World Health Organization">World Health Organization</a> ... ## ... ``` Then we might want to actually parse out those paragraphs into text: ``` url <- "https://en.wikipedia.org/wiki/World_population" paragraphs <- read_html(url) %>% html_nodes("p") %>% html_text() paragraphs[1:10] ``` ``` ## [1] "\n\n" ## [2] "In demographics, the world population is the total number of humans currently living, and was estimated to have exceeded 7.9 billion people as of November 2021[update].[2] It took over 2 million years of human prehistory and history for the world's population to reach 1 billion[3] and only 200 years more to grow to 7 billion.[4]" ## [3] "The world population has experienced continuous growth following the Great Famine of 1315–1317 and the end of the Black Death in 1350, when it was near 370 million.[5]\nThe highest global population growth rates, with increases of over 1.8% per year, occurred between 1955 and 1975 – peaking at 2.1% between 1965 and 1970.[6] The growth rate declined to 1.2% between 2010 and 2015 and is projected to decline further in the course of the 21st century.[6] The global population is still increasing, but there is significant uncertainty about its long-term trajectory due to changing rates of fertility and mortality.[7] The UN Department of Economics and Social Affairs projects between 9–10 billion people by 2050, and gives an 80% confidence interval of 10–12 billion by the end of the 21st century.[8] Other demographers predict that world population will begin to decline in the second half of the 21st century.[9]" ## [4] "Birth rates were highest in the late 1980s at about 139 million,[11] and as of 2011 were expected to remain essentially constant at a level of 135 million,[12] while the mortality rate numbered 56 million per year and were expected to increase to 80 million per year by 2040.[13]\nThe median age of human beings as of 2020 is 31 years.[14]" ## [5] "Six of the Earth's seven 
continents are permanently inhabited on a large scale. Asia is the most populous continent, with its 4.64 billion inhabitants accounting for 60% of the world population. The world's two most populated countries, China and India, together constitute about 36% of the world's population. Africa is the second most populated continent, with around 1.34 billion people, or 17% of the world's population. Europe's 747 million people make up 10% of the world's population as of 2020, while the Latin American and Caribbean regions are home to around 653 million (8%). Northern America, primarily consisting of the United States and Canada, has a population of around 368 million (5%), and Oceania, the least populated region, has about 42 million inhabitants (0.5%).[16]Antarctica only has a very small, fluctuating population of about 1200 people based mainly in polar science stations.[17]" ## [6] "Estimates of world population by their nature are an aspect of modernity, possible only since the Age of Discovery. Early estimates for the population of the world[18] date to the 17th century: William Petty in 1682 estimated world population at 320 million (modern estimates ranging close to twice this number); by the late 18th century, estimates ranged close to one billion (consistent with modern estimates).[19] More refined estimates, broken down by continents, were published in the first half of the 19th century, at 600 million to 1 billion in the early 1800s and at 800 million to 1 billion in the 1840s.[20]" ## [7] "It is difficult for estimates to be better than rough approximations, as even modern population estimates are fraught with uncertainties on the order of 3% to 5%.[21]" ## [8] "Estimates of the population of the world at the time agriculture emerged in around 10,000 BC have ranged between 1 million and 15 million.[22][23] Even earlier, genetic evidence suggests humans may have gone through a population bottleneck of between 1,000 and 10,000 people about 70,000 BC, according to the Toba catastrophe theory. 
By contrast, it is estimated that around 50–60 million people lived in the combined eastern and western Roman Empire in the 4th century AD.[24]" ## [9] "The Plague of Justinian, which first emerged during the reign of the Roman emperor Justinian, caused Europe's population to drop by around 50% between the 6th and 8th centuries AD.[25] The population of Europe was more than 70 million in 1340.[26] The Black Death pandemic of the 14th century may have reduced the world's population from an estimated 450 million in 1340 to between 350 and 375 million in 1400;[27] it took 200 years for population figures to recover.[28] The population of China decreased from 123 million in 1200 to 65 million in 1393,[29] presumably from a combination of Mongol invasions, famine, and plague.[30]" ## [10] "Starting in AD 2, the Han Dynasty of ancient China kept consistent family registers in order to properly assess the poll taxes and labor service duties of each household.[31] In that year, the population of Western Han was recorded as 57,671,400 individuals in 12,366,470 households, decreasing to 47,566,772 individuals in 9,348,227 households by AD 146, towards the End of the Han Dynasty.[31] At the founding of the Ming Dynasty in 1368, China's population was reported to be close to 60 million; toward the end of the dynasty in 1644, it may have approached 150 million.[32] England's population reached an estimated 5.6 million in 1650, up from an estimated 2.6 million in 1500.[33] New crops that were brought to Asia and Europe from the Americas by Portuguese and Spanish colonists in the 16th century are believed to have contributed to population growth.[34][35][36] Since their introduction to Africa by Portuguese traders in the 16th century,[37]maize and cassava have similarly replaced traditional African crops as the most important staple food crops grown on the continent.[38]" ``` Get some other types of HTML objects. Let’s get all the hyperlinks to other pages. ``` read_html(url) %>% html_nodes("a") ``` ``` ## {xml_nodeset (1647)} ## [1] <a id="top"></a> ## [2] <a href="/wiki/Wikipedia:Protection_policy#semi" title="This article is semi-protected."><img alt="Page semi-protected" src=" ... ## [3] <a class="mw-jump-link" href="#mw-head">Jump to navigation</a> ## [4] <a class="mw-jump-link" href="#searchInput">Jump to search</a> ## [5] <a href="/wiki/Demographics_of_the_world" title="Demographics of the world">Demographics of the world</a> ## [6] <a href="/wiki/File:World_Population_Prospects_2019.png" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/comm ... 
## [7] <a href="/wiki/File:World_Population_Prospects_2019.png" class="internal" title="Enlarge"></a> ## [8] <a href="#cite_note-1">[1]</a> ## [9] <a href="/wiki/Demography" title="Demography">demographics</a> ## [10] <a href="/wiki/Human" title="Human">humans</a> ## [11] <a class="external text" href="https://en.wikipedia.org/w/index.php?title=World_population&amp;action=edit">[update]</a> ## [12] <a href="#cite_note-2">[2]</a> ## [13] <a href="/wiki/Prehistory" title="Prehistory">human prehistory</a> ## [14] <a href="/wiki/Human_history" title="Human history">history</a> ## [15] <a href="/wiki/Billion" title="Billion">billion</a> ## [16] <a href="#cite_note-3">[3]</a> ## [17] <a href="#cite_note-4">[4]</a> ## [18] <a href="/wiki/Population_growth" title="Population growth">continuous growth</a> ## [19] <a href="/wiki/Great_Famine_of_1315%E2%80%931317" title="Great Famine of 1315–1317">Great Famine of 1315–1317</a> ## [20] <a href="/wiki/Black_Death" title="Black Death">Black Death</a> ## ... ``` What about tables ? ``` url <- "https://en.wikipedia.org/wiki/World_population" tables <- read_html(url) %>% html_nodes("table") tables ``` ``` ## {xml_nodeset (26)} ## [1] <table class="infobox" style="float: right; font-size:90%"><tbody>\n<tr><th colspan="5" style="text-align:center; background: ... ## [2] <table class="wikitable sortable">\n<caption>Population by region (2020 estimates)\n</caption>\n<tbody>\n<tr>\n<th>Region\n</ ... ## [3] <table class="wikitable" style="text-align:center; float:right; clear:right; margin-left:8px; margin-right:0;"><tbody>\n<tr>< ... ## [4] <table width="100%"><tbody><tr>\n<td valign="top"> <style data-mw-deduplicate="TemplateStyles:r981673959">.mw-parser-output . ... ## [5] <table class="wikitable sortable plainrowheaders" style="text-align:right"><tbody>\n<tr>\n<th data-sort-type="number">Rank</t ... ## [6] <table class="wikitable sortable" style="text-align:right">\n<caption>10 most densely populated countries <small>(with popula ... ## [7] <table class="wikitable sortable" style="text-align:right">\n<caption>Countries ranking highly in both total population <smal ... ## [8] <table class="wikitable sortable">\n<caption>Global annual population growth<sup id="cite_ref-114" class="reference"><a href= ... ## [9] <table class="wikitable sortable" style="font-size:97%; text-align:right;">\n<caption>World historical and predicted populati ... ## [10] <table class="wikitable sortable" style="font-size:97%; text-align:right;">\n<caption>World historical and predicted populati ... ## [11] <table class="wikitable" style="text-align:right;"><tbody>\n<tr>\n<th>Year\n</th>\n<th style="width:70px;">World\n</th>\n<th ... ## [12] <table class="box-More_citations_needed_section plainlinks metadata ambox ambox-content ambox-Refimprove" role="presentation" ... ## [13] <table class="wikitable" style="text-align:center; margin-top:0.5em; margin-right:1em; float:left; font-size:96%;">\n<caption ... ## [14] <table class="wikitable" style="text-align:right; margin-top:2.6em; font-size:96%;">\n<caption>UN 2019 estimates and medium v ... ## [15] <table class="wikitable" style="text-align:center">\n<caption>Starting at 500 million\n</caption>\n<tbody>\n<tr>\n<th>Populat ... ## [16] <table class="wikitable" style="text-align:center">\n<caption>Starting at 375 million\n</caption>\n<tbody>\n<tr>\n<th>Populat ... ## [17] <table role="presentation" class="mbox-small plainlinks sistersitebox" style="background-color:#f9f9f9;border:1px solid #aaa; ... 
## [18] <table class="nowraplinks mw-collapsible autocollapse navbox-inner" style="border-spacing:0;background:transparent;color:inhe ... ## [19] <table class="nowraplinks mw-collapsible mw-collapsed navbox-inner" style="border-spacing:0;background:transparent;color:inhe ... ## [20] <table class="nowraplinks hlist mw-collapsible autocollapse navbox-inner" style="border-spacing:0;background:transparent;colo ... ## ... ``` 1\.6 Example: Parsing A Table From Wikipedia -------------------------------------------- Look at the [Wikipedia Page](https://en.wikipedia.org/wiki/World_population) for world population: <https://en.wikipedia.org/wiki/World_population> * We can get any table we want using rvest * We might have to experiment to figure out which one * Get the one that lists the ten most populous countries * I think this might be the 4th or 5th table on the page * How do we get this ? First we will load packages that will help us throughout this session. In this case we’ll need to figure out what number table it is we want. We could fetch all the tables and then experiment to find the precise one. ``` library(rvest) library(tidyr) library(dplyr) library(ggplot2) # Use read_html to fetch the webpage url <- "https://en.wikipedia.org/wiki/World_population" ten_most_df <- read_html(url) ten_most_populous <- ten_most_df %>% html_nodes("table") %>% `[[`(5) %>% html_table() # Let's get just the first three columns ten_most_populous <- ten_most_populous[,2:4] # Get some content - Change the column names names(ten_most_populous) <- c("Country_Territory","Population","Date") # Do reformatting on the columns to be actual numerics where appropriate ten_most_populous %>% mutate(Population=gsub(",","",Population)) %>% mutate(Population=round(as.numeric(Population)/1e+06)) %>% ggplot(aes(x=Country_Territory,y=Population)) + geom_point() + labs(y = "Population / 1,000,000") + coord_flip() + ggtitle("Top 10 Most Populous Countries") ``` In the above example we leveraged the fact that we were looking specifically for a table element and it became a project to locate the correct table number. This isn’t always the case with more complicated websites in that the element we are trying to grab or scrape is contained within a nested structure that doesn’t correspond neatly to a paragraph, link, heading, or table. This can be the case if the page is heavily styled with CSS or Javascript. We might have to work harder. But it’s okay to try to use simple elements and then try to refine the search some more. ``` # Could have use the xPath plugin to help url <- "https://en.wikipedia.org/wiki/World_population" ten_most_df <- read_html(url) ten_most_populous <- ten_most_df %>% html_nodes(xpath="/html/body/div[3]/div[3]/div[5]/div[1]/table[4]") %>% html_table() ``` 1\.7 Scraping Patient Dialysis Stories -------------------------------------- Here is an example relating to the experiences of dialysis patients with a specific dialysis provider. It might be more useful to find a support forum that is managed by dialysis patients to get more general opinions but this example is helpful in showing you what is involved. Check out this website: ``` https://www.americanrenal.com/dialysis-centers/patient-stories ``` ### 1\.7\.1 Getting More Detail In looking at this page you will see that there are a number of patient stories. Actually, there is a summary line followed by a “Read More” link that provides more detail on the patient experience. Our goal is to get the full content as opposed to only the summary. 
How would we do this ? ### 1\.7\.2 Writing Some Code Let’s use our newfound knowledge of **rvest** to help us get these detailed stories. Maybe we want to do some sentiment analysis on this. If you hover over the **Read More** link on the website, it will provide a specific link for each patient. For example, ``` https://www.americanrenal.com/dialysis-centers/patient-stories/john-baguchinsky ``` What we want to do is first get a list of all these links from the main page, after which we can loop over each of the patient-specific links and capture that information into a vector. Each element of the vector will be the content of a specific patient’s story. ``` library(rvest) burl <- "https://www.americanrenal.com/dialysis-centers/patient-stories" # Set up an empty vector to which we will add the content of each story workVector <- vector() # Grab the links from the site that relate to patient stories links <- read_html(burl) %>% html_nodes("a") %>% html_attr("href") %>% grep("stories",.,value=TRUE) links ``` ``` ## [1] "http://www.americanrenal.com/dialysis-centers/patient-stories" ## [2] "http://www.americanrenal.com/dialysis-centers/patient-stories/randal-beatty" ## [3] "http://www.americanrenal.com/dialysis-centers/patient-stories/patricia-garcia" ## [4] "http://www.americanrenal.com/dialysis-centers/patient-stories/john-baguchinsky" ## [5] "http://www.americanrenal.com/dialysis-centers/patient-stories/sheryll-wyman" ## [6] "http://www.americanrenal.com/dialysis-centers/patient-stories/carol-sykes" ## [7] "http://www.americanrenal.com/dialysis-centers/patient-stories/sharon-cauthen" ## [8] "http://www.americanrenal.com/dialysis-centers/patient-stories/remond-ellis" ## [9] "http://www.americanrenal.com/dialysis-centers/patient-stories" ## [10] "http://www.americanrenal.com/dialysis-centers/patient-stories" ``` Some of these links do not correspond directly to a specific patient name, so we need to filter those out. ``` # Get only the ones that seem to have actual names associated with them storiesLinks <- links[-grep("stories$",links)] storiesLinks ``` ``` ## [1] "http://www.americanrenal.com/dialysis-centers/patient-stories/randal-beatty" ## [2] "http://www.americanrenal.com/dialysis-centers/patient-stories/patricia-garcia" ## [3] "http://www.americanrenal.com/dialysis-centers/patient-stories/john-baguchinsky" ## [4] "http://www.americanrenal.com/dialysis-centers/patient-stories/sheryll-wyman" ## [5] "http://www.americanrenal.com/dialysis-centers/patient-stories/carol-sykes" ## [6] "http://www.americanrenal.com/dialysis-centers/patient-stories/sharon-cauthen" ## [7] "http://www.americanrenal.com/dialysis-centers/patient-stories/remond-ellis" ``` Next we will visit each of these pages and scrape the text information. We’ll step through this in class so you can see this in action, but here is the code. We will get each story and place each paragraph of the story into a vector element. After that we will eliminate blank lines and some junk lines that begin with a newline character. Then we will collapse all of the vector text into a single paragraph and store it into a list element. Let’s step through it for the first link. ``` # This corresponds to the first link # "http://www.americanrenal.com/dialysis-centers/patient-stories/randal-beatty" tmpResult <- read_html(storiesLinks[1]) %>% html_nodes("p") %>% html_text() tmpResult ``` ``` ## [1] "Mr. Randal Beatty, University Kidney Center Hikes Lane" ## [2] "In April 2010, Randal Beatty was diagnosed with end stage renal disease (ESRD). 
The diagnosis came as a surprise, and, Mr. Beatty admits, he kept praying for a miracle." ## [3] "“I heard all those stories about how people feel during dialysis and I didn’t want to deal with it,” said Mr. Beatty. “I didn’t want to lose my freedom and I certainly didn’t want to feel sick all the time.”" ## [4] "So for three years, he waited, hoping to find his miracle. But by August 2013, he knew he waited too long." ## [5] "“Before my first dialysis treatment, I could feel myself pulling away from everyone, especially my family. I didn’t want to go anywhere or do anything and I was sick all the time. I realized at that point that how I was feeling was exactly what I wanted to avoid.”" ## [6] "He began his in-center dialysis treatments with American Renal Associates (ARA) at University Kidney Center in Louisville, Kentucky in August 2013 and transferred to another local ARA facility – University Kidney Center Hikes Lane – in May 2014 since it was closer to his home." ## [7] "“In a short time, dialysis completely changed me,” he said. His health improved, along with his confidence, encouraging him to start driving himself to and from treatments. Not only did this give him a renewed feeling of independence, but a strong sense of accomplishment, as well." ## [8] "Now 67 years old, Mr. Beatty says he can do everything he did before, including keeping up with his two granddaughters, playing basketball, among other hobbies, and going on family vacations. In fact, with the help of ARA’s Travel Department, Mr. Beatty can travel stress free. Though he admits traveling while on dialysis can be intimidating, he explained, “All I had to do was show up. ARA’s Travel Team took care of everything.”" ## [9] "Receiving a diagnosis of ESRD can be challenging, but Mr. Beatty’s advice is to take a step back and see dialysis as the miracle it is." ## [10] "“It took me three years to realize that dialysis was the miracle I was waiting for. I was an extremely sick individual and just a few months on dialysis completely changed me. I don’t know why I waited as long as I did. I could have been enjoying life for the last few years rather than staying home sick. And I honestly haven’t been sick since I started my dialysis treatments!”" ## [11] "Read more patient stories" ## [12] "American Renal Associates operates 240 dialysis clinics in 27 states and Washington D.C., serving more than 17,300 patients with end-stage renal disease in partnership with approximately 400 local nephrologists." ## [13] "If you have questions, you can call 1-877-99-RENAL (1-877-997-3625) or [email protected]" ## [14] "" ## [15] "\n ©2022 American Renal® Associates. All Rights Reserved.500 Cummings Center, Suite\n 6550, Beverly, MA, 01915\n " ``` Okay, that has some junk in it like blank lines and lines that begin with new line characters. ``` # Get rid of elements that are a blank line tmpResult <- tmpResult[tmpResult!=""] # Get rid of elements that begin with a newline character "\n" newlines_begin <- sum(grepl("^\n",tmpResult)) if (newlines_begin > 0) { tmpResult <- tmpResult[-grep("^\n",tmpResult)] } tmpResult ``` ``` ## [1] "Mr. Randal Beatty, University Kidney Center Hikes Lane" ## [2] "In April 2010, Randal Beatty was diagnosed with end stage renal disease (ESRD). The diagnosis came as a surprise, and, Mr. Beatty admits, he kept praying for a miracle." ## [3] "“I heard all those stories about how people feel during dialysis and I didn’t want to deal with it,” said Mr. Beatty. 
“I didn’t want to lose my freedom and I certainly didn’t want to feel sick all the time.”" ## [4] "So for three years, he waited, hoping to find his miracle. But by August 2013, he knew he waited too long." ## [5] "“Before my first dialysis treatment, I could feel myself pulling away from everyone, especially my family. I didn’t want to go anywhere or do anything and I was sick all the time. I realized at that point that how I was feeling was exactly what I wanted to avoid.”" ## [6] "He began his in-center dialysis treatments with American Renal Associates (ARA) at University Kidney Center in Louisville, Kentucky in August 2013 and transferred to another local ARA facility – University Kidney Center Hikes Lane – in May 2014 since it was closer to his home." ## [7] "“In a short time, dialysis completely changed me,” he said. His health improved, along with his confidence, encouraging him to start driving himself to and from treatments. Not only did this give him a renewed feeling of independence, but a strong sense of accomplishment, as well." ## [8] "Now 67 years old, Mr. Beatty says he can do everything he did before, including keeping up with his two granddaughters, playing basketball, among other hobbies, and going on family vacations. In fact, with the help of ARA’s Travel Department, Mr. Beatty can travel stress free. Though he admits traveling while on dialysis can be intimidating, he explained, “All I had to do was show up. ARA’s Travel Team took care of everything.”" ## [9] "Receiving a diagnosis of ESRD can be challenging, but Mr. Beatty’s advice is to take a step back and see dialysis as the miracle it is." ## [10] "“It took me three years to realize that dialysis was the miracle I was waiting for. I was an extremely sick individual and just a few months on dialysis completely changed me. I don’t know why I waited as long as I did. I could have been enjoying life for the last few years rather than staying home sick. And I honestly haven’t been sick since I started my dialysis treatments!”" ## [11] "Read more patient stories" ## [12] "American Renal Associates operates 240 dialysis clinics in 27 states and Washington D.C., serving more than 17,300 patients with end-stage renal disease in partnership with approximately 400 local nephrologists." ## [13] "If you have questions, you can call 1-877-99-RENAL (1-877-997-3625) or [email protected]" ``` Next, let’s create a more compact version of the data. We’ll cram it all into a single element. ``` (tmpResult <- paste(tmpResult,collapse="")) ``` ``` ## [1] "Mr. Randal Beatty, University Kidney Center Hikes LaneIn April 2010, Randal Beatty was diagnosed with end stage renal disease (ESRD). The diagnosis came as a surprise, and, Mr. Beatty admits, he kept praying for a miracle.“I heard all those stories about how people feel during dialysis and I didn’t want to deal with it,” said Mr. Beatty. “I didn’t want to lose my freedom and I certainly didn’t want to feel sick all the time.”So for three years, he waited, hoping to find his miracle. But by August 2013, he knew he waited too long.“Before my first dialysis treatment, I could feel myself pulling away from everyone, especially my family. I didn’t want to go anywhere or do anything and I was sick all the time. 
I realized at that point that how I was feeling was exactly what I wanted to avoid.”He began his in-center dialysis treatments with American Renal Associates (ARA) at University Kidney Center in Louisville, Kentucky in August 2013 and transferred to another local ARA facility – University Kidney Center Hikes Lane – in May 2014 since it was closer to his home.“In a short time, dialysis completely changed me,” he said. His health improved, along with his confidence, encouraging him to start driving himself to and from treatments. Not only did this give him a renewed feeling of independence, but a strong sense of accomplishment, as well.Now 67 years old, Mr. Beatty says he can do everything he did before, including keeping up with his two granddaughters, playing basketball, among other hobbies, and going on family vacations. In fact, with the help of ARA’s Travel Department, Mr. Beatty can travel stress free. Though he admits traveling while on dialysis can be intimidating, he explained, “All I had to do was show up. ARA’s Travel Team took care of everything.”Receiving a diagnosis of ESRD can be challenging, but Mr. Beatty’s advice is to take a step back and see dialysis as the miracle it is.“It took me three years to realize that dialysis was the miracle I was waiting for. I was an extremely sick individual and just a few months on dialysis completely changed me. I don’t know why I waited as long as I did. I could have been enjoying life for the last few years rather than staying home sick. And I honestly haven’t been sick since I started my dialysis treatments!”Read more patient storiesAmerican Renal Associates operates 240 dialysis clinics in 27 states and Washington D.C., serving more than 17,300 patients with end-stage renal disease in partnership with approximately 400 local nephrologists.If you have questions, you can call 1-877-99-RENAL (1-877-997-3625) or [email protected]" ``` So we could put this logic into a loop and process each of the links programmatically. ``` # Now go to these pages and scrape the text necessary to # build a corpus tmpResult <- vector() textList <- list() for (ii in 1:length(storiesLinks)) { tmpResult <- read_html(storiesLinks[ii]) %>% html_nodes("p") %>% html_text() # Get rid of elements that are a blank line tmpResult <- tmpResult[tmpResult!=""] # Get rid of elements that begin with a newline character "\n" newlines_begin <- sum(grepl("^\n",tmpResult)) if (newlines_begin > 0) { tmpResult <- tmpResult[-grep("^\n",tmpResult)] } # Let's collpase all the elements into a single element and then store # it into a list element so we can maintain each patient story separately # This is not necessary but until we figure out what we want to do with # the data then this gives us some options tmpResult <- paste(tmpResult,collapse="") textList[[ii]] <- tmpResult } ``` If we did our job correctly then each element of the **textList** will have text in it corresponding to each patient ``` textList[[1]] ``` ``` ## [1] "Mr. Randal Beatty, University Kidney Center Hikes LaneIn April 2010, Randal Beatty was diagnosed with end stage renal disease (ESRD). The diagnosis came as a surprise, and, Mr. Beatty admits, he kept praying for a miracle.“I heard all those stories about how people feel during dialysis and I didn’t want to deal with it,” said Mr. Beatty. “I didn’t want to lose my freedom and I certainly didn’t want to feel sick all the time.”So for three years, he waited, hoping to find his miracle. 
But by August 2013, he knew he waited too long.“Before my first dialysis treatment, I could feel myself pulling away from everyone, especially my family. I didn’t want to go anywhere or do anything and I was sick all the time. I realized at that point that how I was feeling was exactly what I wanted to avoid.”He began his in-center dialysis treatments with American Renal Associates (ARA) at University Kidney Center in Louisville, Kentucky in August 2013 and transferred to another local ARA facility – University Kidney Center Hikes Lane – in May 2014 since it was closer to his home.“In a short time, dialysis completely changed me,” he said. His health improved, along with his confidence, encouraging him to start driving himself to and from treatments. Not only did this give him a renewed feeling of independence, but a strong sense of accomplishment, as well.Now 67 years old, Mr. Beatty says he can do everything he did before, including keeping up with his two granddaughters, playing basketball, among other hobbies, and going on family vacations. In fact, with the help of ARA’s Travel Department, Mr. Beatty can travel stress free. Though he admits traveling while on dialysis can be intimidating, he explained, “All I had to do was show up. ARA’s Travel Team took care of everything.”Receiving a diagnosis of ESRD can be challenging, but Mr. Beatty’s advice is to take a step back and see dialysis as the miracle it is.“It took me three years to realize that dialysis was the miracle I was waiting for. I was an extremely sick individual and just a few months on dialysis completely changed me. I don’t know why I waited as long as I did. I could have been enjoying life for the last few years rather than staying home sick. And I honestly haven’t been sick since I started my dialysis treatments!”Read more patient storiesAmerican Renal Associates operates 240 dialysis clinics in 27 states and Washington D.C., serving more than 17,300 patients with end-stage renal disease in partnership with approximately 400 local nephrologists.If you have questions, you can call 1-877-99-RENAL (1-877-997-3625) or [email protected]" ``` 1\.8 Summary ------------ * Need some basic HTML and CSS knowledge to find correct elements * How to extract text from common elements * How to extract text from specific elements * Always have to do some text cleanup of data * It usually takes multiple times to get it right See [http://bradleyboehmke.github.io/2015/12/scraping\-html\-text.html](http://bradleyboehmke.github.io/2015/12/scraping-html-text.html)
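To tie the summary points together, here is a compact sketch of the pattern used throughout this chapter: read a page, select elements, extract the text, and clean it up. It reuses the Wikipedia page from earlier; the `"p"` selector and the clean-up rules are choices that would change for other sites.

```
library(rvest)

# The recurring pattern from this chapter, in one place:
url <- "https://en.wikipedia.org/wiki/World_population"

page_text <- read_html(url) %>%   # 1. read the page
  html_nodes("p") %>%             # 2. select elements with a CSS selector
  html_text()                     # 3. extract the text

page_text <- page_text[page_text != ""]            # 4. drop empty elements
page_text <- page_text[!grepl("^\n", page_text)]   #    and lines starting with a newline
one_string <- paste(page_text, collapse = " ")     # 5. collapse into a single string
```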
| tables | Fetch tables like from Wikipedia | | --- | --- | | forms | You can submit forms and fetch the results | | css | You can access parts of a web site using style or css selectors | | Tweets | Process tweets including emojis | | Web Sites | User forums have lots of content | | Instagram | Yes you can “scrape” photos also | 1\.2 Web Scraping Can Be Ugly ----------------------------- Depending on what web sites you want to scrape the process can be involved and quite tedious. Many websites are very much aware that people are scraping so they offer Application Programming Interfaces (APIs) to make requests for information easier for the user and easier for the server administrators to control access. Most times the user must apply for a “key” to gain access. For premium sites, the key costs money. Some sites like Google and Wunderground (a popular weather site) allow some number of free accesses before they start charging you. Even so the results are typically returned in XML or JSON which then requires you to parse the result to get the information you want. In the best situation there is an R package that will wrap in the parsing and will return lists or data frames. Here is a summary: * First. Always try to find an R package that will access a site (e.g. New York Times, Wunderground, PubMed). These packages (e.g. omdbapi, easyPubMed, RBitCoin, rtimes) provide a programmatic search interface and return data frames with little to no effort on your part. * If no package exists then hopefully there is an API that allows you to query the website and get results back in JSON or XML. I prefer JSON because it’s “easier” and the packages for parsing JSON return lists which are native data structures to R. So you can easily turn results into data frames. You will ususally use the *rvest* package in conjunction with XML, and the RSJONIO packages. * If the Web site doesn’t have an API then you will need to scrape text. This isn’t hard but it is tedious. You will need to use *rvest* to parse HMTL elements. If you want to parse mutliple pages then you will need to use *rvest* to move to the other pages and possibly fill out forms. If there is a lot of Javascript then you might need to use RSelenium to programmatically manage the web page. 1\.3 Understanding The Language of The Web ------------------------------------------ The Web has its own languages: HTML, CSS, Javascript ``` <h1>, <h2>, ..., <h6> Heading 1 and so on <p> Paragraph elements <ul> Unordered List <ol> Ordered List <li> List Element <div> Division / Section <table> Tables <form> Web forms ``` So to be productive at scraping requires you to have some familiarity with HMTL XML, and CSS. Here we look at a very basic HTML file. Refer to See [http://bradleyboehmke.github.io/2015/12/scraping\-html\-text.html](http://bradleyboehmke.github.io/2015/12/scraping-html-text.html) for a basic introductory session on HTML and webscraping with R ``` <!DOCTYPE html> <html> <body> <h1>My First Heading</h1> <p>My first paragraph.</p> </body> </html> ```   And you could apply some styling to this courtest of the CSS language which allows you to inject styles into plain HTML:   ### 1\.3\.1 Useful tools There are a number of tools that allow us to inspect web pages and see “what is under the hood.” Warning \- I just discovered that one of my favorite browser plugins (firebug) to find the xpaths and/or css paths of page elements is no longer supported under Firefox or Chrome. I’ve found a couple of replacements but they don’t work as well. 
I’ll research it more. The way that **Selector Gadget** and **xPath** work is that you install them into your browswer and then activate them whenever you need to identify the **selector** associated with a part of a web page.   | Selector Gadget | <http://selectorgadget.com/> | | --- | --- | | Firebug | <https://getfirebug.com/> (now integrated into a version of Firefox) | | xPath | [https://addons.mozilla.org/en\-US/firefox/addon/xpath\_finder/](https://addons.mozilla.org/en-US/firefox/addon/xpath_finder/) | | Google Chrome | Right click to inspect a page element | | Google Chrome | View Developer \- Developer Tools | | Oxygen Editor | Can obtain via the Emory Software Express Site | ### 1\.3\.1 Useful tools There are a number of tools that allow us to inspect web pages and see “what is under the hood.” Warning \- I just discovered that one of my favorite browser plugins (firebug) to find the xpaths and/or css paths of page elements is no longer supported under Firefox or Chrome. I’ve found a couple of replacements but they don’t work as well. I’ll research it more. The way that **Selector Gadget** and **xPath** work is that you install them into your browswer and then activate them whenever you need to identify the **selector** associated with a part of a web page.   | Selector Gadget | <http://selectorgadget.com/> | | --- | --- | | Firebug | <https://getfirebug.com/> (now integrated into a version of Firefox) | | xPath | [https://addons.mozilla.org/en\-US/firefox/addon/xpath\_finder/](https://addons.mozilla.org/en-US/firefox/addon/xpath_finder/) | | Google Chrome | Right click to inspect a page element | | Google Chrome | View Developer \- Developer Tools | | Oxygen Editor | Can obtain via the Emory Software Express Site | 1\.4 Useful Packages -------------------- You will use the following three primary packages to help you get data from various web pages: *rvest*, *XML*, and *RJSONIO*. Note that you won’t always use them simultaneously but you might use them in pairs or individually depending on the task at hand. 1\.5 Quick **rvest** tutorial ----------------------------- Now let’s do a quick *rvest* tutorial. There are several steps involved in using **rvest** which are conceptually quite straightforward: 1. Identify a URL to be examined for content 2. Use Selector Gadet, xPath, or Google Insepct to identify the “selector” This will be a paragraph, table, hyper links, images 3. Load rvest 4. Use **read\_html** to “read” the URL 5. Pass the result to **html\_nodes** to get the selectors identified in step number 2 6. Get the text or table content ``` library(rvest) url <- "https://en.wikipedia.org/wiki/World_population" (paragraphs <- read_html(url) %>% html_nodes("p")) ``` ``` ## {xml_nodeset (51)} ## [1] <p class="mw-empty-elt">\n\n</p> ## [2] <p>In <a href="/wiki/Demography" title="Demography">demographics</a>, the <b>world population</b> is the total number of <a h ... ## [3] <p>The world population has experienced <a href="/wiki/Population_growth" title="Population growth">continuous growth</a> fol ... ## [4] <p><a href="/wiki/Birth_rate" title="Birth rate">Birth rates</a> were highest in the late 1980s at about 139 million,<sup id= ... ## [5] <p>Six of the Earth's seven <a href="/wiki/Continent" title="Continent">continents</a> are permanently inhabited on a large s ... ## [6] <p>Estimates of world population by their nature are an aspect of <a href="/wiki/Modernity" title="Modernity">modernity</a>, ... 
## [7] <p>It is difficult for estimates to be better than rough approximations, as even modern population estimates are fraught with ... ## [8] <p>Estimates of the population of the world at the time agriculture emerged in around 10,000 BC have ranged between 1 million ... ## [9] <p>The <a href="/wiki/Plague_of_Justinian" title="Plague of Justinian">Plague of Justinian</a>, which first emerged during th ... ## [10] <p>Starting in AD 2, the <a href="/wiki/Han_Dynasty" class="mw-redirect" title="Han Dynasty">Han Dynasty</a> of <a href="/wik ... ## [11] <p>The <a href="/wiki/Pre-Columbian_era" title="Pre-Columbian era">pre-Columbian</a> population of the Americas is uncertain; ... ## [12] <p>During the European <a href="/wiki/British_Agricultural_Revolution" title="British Agricultural Revolution">Agricultural</ ... ## [13] <p>Population growth in the West became more rapid after the introduction of <a href="/wiki/Vaccination" title="Vaccination"> ... ## [14] <p>The first half of the 20th century in <a href="/wiki/Russian_Empire" title="Russian Empire">Imperial Russia</a> and the <a ... ## [15] <p>Many countries in the <a href="/wiki/Developing_world" class="mw-redirect" title="Developing world">developing world</a> h ... ## [16] <p>It is estimated that the world population reached one billion for the first time in 1804. It was another 123 years before ... ## [17] <p>According to current projections, the global population will reach eight billion by 2024, and is likely to reach around ni ... ## [18] <p>There is no estimation for the exact day or month the world's population surpassed one or two billion. The points at which ... ## [19] <p>As of 2012, the global <a href="/wiki/Human_sex_ratio" title="Human sex ratio">sex ratio</a> is approximately 1.01 males t ... ## [20] <p>According to the <a href="/wiki/World_Health_Organization" title="World Health Organization">World Health Organization</a> ... ## ... 
``` Then we might want to actually parse out those paragraphs into text: ``` url <- "https://en.wikipedia.org/wiki/World_population" paragraphs <- read_html(url) %>% html_nodes("p") %>% html_text() paragraphs[1:10] ``` ``` ## [1] "\n\n" ## [2] "In demographics, the world population is the total number of humans currently living, and was estimated to have exceeded 7.9 billion people as of November 2021[update].[2] It took over 2 million years of human prehistory and history for the world's population to reach 1 billion[3] and only 200 years more to grow to 7 billion.[4]" ## [3] "The world population has experienced continuous growth following the Great Famine of 1315–1317 and the end of the Black Death in 1350, when it was near 370 million.[5]\nThe highest global population growth rates, with increases of over 1.8% per year, occurred between 1955 and 1975 – peaking at 2.1% between 1965 and 1970.[6] The growth rate declined to 1.2% between 2010 and 2015 and is projected to decline further in the course of the 21st century.[6] The global population is still increasing, but there is significant uncertainty about its long-term trajectory due to changing rates of fertility and mortality.[7] The UN Department of Economics and Social Affairs projects between 9–10 billion people by 2050, and gives an 80% confidence interval of 10–12 billion by the end of the 21st century.[8] Other demographers predict that world population will begin to decline in the second half of the 21st century.[9]" ## [4] "Birth rates were highest in the late 1980s at about 139 million,[11] and as of 2011 were expected to remain essentially constant at a level of 135 million,[12] while the mortality rate numbered 56 million per year and were expected to increase to 80 million per year by 2040.[13]\nThe median age of human beings as of 2020 is 31 years.[14]" ## [5] "Six of the Earth's seven continents are permanently inhabited on a large scale. Asia is the most populous continent, with its 4.64 billion inhabitants accounting for 60% of the world population. The world's two most populated countries, China and India, together constitute about 36% of the world's population. Africa is the second most populated continent, with around 1.34 billion people, or 17% of the world's population. Europe's 747 million people make up 10% of the world's population as of 2020, while the Latin American and Caribbean regions are home to around 653 million (8%). Northern America, primarily consisting of the United States and Canada, has a population of around 368 million (5%), and Oceania, the least populated region, has about 42 million inhabitants (0.5%).[16]Antarctica only has a very small, fluctuating population of about 1200 people based mainly in polar science stations.[17]" ## [6] "Estimates of world population by their nature are an aspect of modernity, possible only since the Age of Discovery. 
Early estimates for the population of the world[18] date to the 17th century: William Petty in 1682 estimated world population at 320 million (modern estimates ranging close to twice this number); by the late 18th century, estimates ranged close to one billion (consistent with modern estimates).[19] More refined estimates, broken down by continents, were published in the first half of the 19th century, at 600 million to 1 billion in the early 1800s and at 800 million to 1 billion in the 1840s.[20]" ## [7] "It is difficult for estimates to be better than rough approximations, as even modern population estimates are fraught with uncertainties on the order of 3% to 5%.[21]" ## [8] "Estimates of the population of the world at the time agriculture emerged in around 10,000 BC have ranged between 1 million and 15 million.[22][23] Even earlier, genetic evidence suggests humans may have gone through a population bottleneck of between 1,000 and 10,000 people about 70,000 BC, according to the Toba catastrophe theory. By contrast, it is estimated that around 50–60 million people lived in the combined eastern and western Roman Empire in the 4th century AD.[24]" ## [9] "The Plague of Justinian, which first emerged during the reign of the Roman emperor Justinian, caused Europe's population to drop by around 50% between the 6th and 8th centuries AD.[25] The population of Europe was more than 70 million in 1340.[26] The Black Death pandemic of the 14th century may have reduced the world's population from an estimated 450 million in 1340 to between 350 and 375 million in 1400;[27] it took 200 years for population figures to recover.[28] The population of China decreased from 123 million in 1200 to 65 million in 1393,[29] presumably from a combination of Mongol invasions, famine, and plague.[30]" ## [10] "Starting in AD 2, the Han Dynasty of ancient China kept consistent family registers in order to properly assess the poll taxes and labor service duties of each household.[31] In that year, the population of Western Han was recorded as 57,671,400 individuals in 12,366,470 households, decreasing to 47,566,772 individuals in 9,348,227 households by AD 146, towards the End of the Han Dynasty.[31] At the founding of the Ming Dynasty in 1368, China's population was reported to be close to 60 million; toward the end of the dynasty in 1644, it may have approached 150 million.[32] England's population reached an estimated 5.6 million in 1650, up from an estimated 2.6 million in 1500.[33] New crops that were brought to Asia and Europe from the Americas by Portuguese and Spanish colonists in the 16th century are believed to have contributed to population growth.[34][35][36] Since their introduction to Africa by Portuguese traders in the 16th century,[37]maize and cassava have similarly replaced traditional African crops as the most important staple food crops grown on the continent.[38]" ``` Get some other types of HTML obejects. Let’s get all the hyperlinks to other pages ``` read_html(url) %>% html_nodes("a") ``` ``` ## {xml_nodeset (1647)} ## [1] <a id="top"></a> ## [2] <a href="/wiki/Wikipedia:Protection_policy#semi" title="This article is semi-protected."><img alt="Page semi-protected" src=" ... 
## [3] <a class="mw-jump-link" href="#mw-head">Jump to navigation</a> ## [4] <a class="mw-jump-link" href="#searchInput">Jump to search</a> ## [5] <a href="/wiki/Demographics_of_the_world" title="Demographics of the world">Demographics of the world</a> ## [6] <a href="/wiki/File:World_Population_Prospects_2019.png" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/comm ... ## [7] <a href="/wiki/File:World_Population_Prospects_2019.png" class="internal" title="Enlarge"></a> ## [8] <a href="#cite_note-1">[1]</a> ## [9] <a href="/wiki/Demography" title="Demography">demographics</a> ## [10] <a href="/wiki/Human" title="Human">humans</a> ## [11] <a class="external text" href="https://en.wikipedia.org/w/index.php?title=World_population&amp;action=edit">[update]</a> ## [12] <a href="#cite_note-2">[2]</a> ## [13] <a href="/wiki/Prehistory" title="Prehistory">human prehistory</a> ## [14] <a href="/wiki/Human_history" title="Human history">history</a> ## [15] <a href="/wiki/Billion" title="Billion">billion</a> ## [16] <a href="#cite_note-3">[3]</a> ## [17] <a href="#cite_note-4">[4]</a> ## [18] <a href="/wiki/Population_growth" title="Population growth">continuous growth</a> ## [19] <a href="/wiki/Great_Famine_of_1315%E2%80%931317" title="Great Famine of 1315–1317">Great Famine of 1315–1317</a> ## [20] <a href="/wiki/Black_Death" title="Black Death">Black Death</a> ## ... ``` What about tables ? ``` url <- "https://en.wikipedia.org/wiki/World_population" tables <- read_html(url) %>% html_nodes("table") tables ``` ``` ## {xml_nodeset (26)} ## [1] <table class="infobox" style="float: right; font-size:90%"><tbody>\n<tr><th colspan="5" style="text-align:center; background: ... ## [2] <table class="wikitable sortable">\n<caption>Population by region (2020 estimates)\n</caption>\n<tbody>\n<tr>\n<th>Region\n</ ... ## [3] <table class="wikitable" style="text-align:center; float:right; clear:right; margin-left:8px; margin-right:0;"><tbody>\n<tr>< ... ## [4] <table width="100%"><tbody><tr>\n<td valign="top"> <style data-mw-deduplicate="TemplateStyles:r981673959">.mw-parser-output . ... ## [5] <table class="wikitable sortable plainrowheaders" style="text-align:right"><tbody>\n<tr>\n<th data-sort-type="number">Rank</t ... ## [6] <table class="wikitable sortable" style="text-align:right">\n<caption>10 most densely populated countries <small>(with popula ... ## [7] <table class="wikitable sortable" style="text-align:right">\n<caption>Countries ranking highly in both total population <smal ... ## [8] <table class="wikitable sortable">\n<caption>Global annual population growth<sup id="cite_ref-114" class="reference"><a href= ... ## [9] <table class="wikitable sortable" style="font-size:97%; text-align:right;">\n<caption>World historical and predicted populati ... ## [10] <table class="wikitable sortable" style="font-size:97%; text-align:right;">\n<caption>World historical and predicted populati ... ## [11] <table class="wikitable" style="text-align:right;"><tbody>\n<tr>\n<th>Year\n</th>\n<th style="width:70px;">World\n</th>\n<th ... ## [12] <table class="box-More_citations_needed_section plainlinks metadata ambox ambox-content ambox-Refimprove" role="presentation" ... ## [13] <table class="wikitable" style="text-align:center; margin-top:0.5em; margin-right:1em; float:left; font-size:96%;">\n<caption ... ## [14] <table class="wikitable" style="text-align:right; margin-top:2.6em; font-size:96%;">\n<caption>UN 2019 estimates and medium v ... 
## [15] <table class="wikitable" style="text-align:center">\n<caption>Starting at 500 million\n</caption>\n<tbody>\n<tr>\n<th>Populat ...
## [16] <table class="wikitable" style="text-align:center">\n<caption>Starting at 375 million\n</caption>\n<tbody>\n<tr>\n<th>Populat ...
## [17] <table role="presentation" class="mbox-small plainlinks sistersitebox" style="background-color:#f9f9f9;border:1px solid #aaa; ...
## [18] <table class="nowraplinks mw-collapsible autocollapse navbox-inner" style="border-spacing:0;background:transparent;color:inhe ...
## [19] <table class="nowraplinks mw-collapsible mw-collapsed navbox-inner" style="border-spacing:0;background:transparent;color:inhe ...
## [20] <table class="nowraplinks hlist mw-collapsible autocollapse navbox-inner" style="border-spacing:0;background:transparent;colo ...
## ...
```

1\.6 Example: Parsing A Table From Wikipedia
--------------------------------------------

Look at the [Wikipedia Page](https://en.wikipedia.org/wiki/World_population) for world population: <https://en.wikipedia.org/wiki/World_population>

* We can get any table we want using rvest
* We might have to experiment to figure out which one
* Get the one that lists the ten most populous countries
* I think this might be the 4th or 5th table on the page
* How do we get this ?

First we load the packages that we will use throughout this session. We then need to figure out which table on the page is the one we want, so we fetch all of the tables and experiment until we find the precise one.

```
library(rvest)
library(tidyr)
library(dplyr)
library(ggplot2)

# Use read_html to fetch the webpage
url <- "https://en.wikipedia.org/wiki/World_population"
ten_most_df <- read_html(url)

ten_most_populous <- ten_most_df %>%
  html_nodes("table") %>%
  `[[`(5) %>%
  html_table()

# Let's get just the first three columns
ten_most_populous <- ten_most_populous[,2:4]

# Get some content - Change the column names
names(ten_most_populous) <- c("Country_Territory","Population","Date")

# Do reformatting on the columns to be actual numerics where appropriate
ten_most_populous %>%
  mutate(Population=gsub(",","",Population)) %>%
  mutate(Population=round(as.numeric(Population)/1e+06)) %>%
  ggplot(aes(x=Country_Territory,y=Population)) +
  geom_point() +
  labs(y = "Population / 1,000,000") +
  coord_flip() +
  ggtitle("Top 10 Most Populous Countries")
```

In the example above we leveraged the fact that we were looking specifically for a table element, so the main work was locating the correct table number. With more complicated websites this is not always possible: the element we are trying to grab or scrape may be buried in a nested structure that does not correspond neatly to a paragraph, link, heading, or table. This is often the case when the page is heavily styled with CSS or JavaScript, and we might have to work harder. Still, it is fine to start with simple elements and then refine the search.

```
# Could have used the xPath plugin to help
url <- "https://en.wikipedia.org/wiki/World_population"
ten_most_df <- read_html(url)

ten_most_populous <- ten_most_df %>%
  html_nodes(xpath="/html/body/div[3]/div[3]/div[5]/div[1]/table[4]") %>%
  html_table()
```

1\.7 Scraping Patient Dialysis Stories
--------------------------------------

Here is an example relating to the experiences of dialysis patients with a specific dialysis provider. It might be more useful to find a support forum that is managed by dialysis patients to get more general opinions, but this example is helpful in showing you what is involved.
Check out this website:

```
https://www.americanrenal.com/dialysis-centers/patient-stories
```

### 1\.7\.1 Getting More Detail

In looking at this page you will see that there are a number of patient stories. Actually, there is a summary line followed by a “Read More” link that provides more detail on the patient experience. Our goal is to get the full content as opposed to only the summary. How would we do this ?

### 1\.7\.2 Writing Some Code

Let’s use our newfound knowledge of **rvest** to help us get these detailed stories. Maybe we want to do some sentiment analysis on this. If you hover over the **Read More** link on the website it will provide a specific link for each patient. For example,

```
https://www.americanrenal.com/dialysis-centers/patient-stories/john-baguchinsky
```

What we want to do is first get a list of all these links from the main page, after which we can loop over each of the patient-specific links and capture that information into a vector. Each element of the vector will be the content of a specific patient’s story.

```
library(rvest)
burl <- "https://www.americanrenal.com/dialysis-centers/patient-stories"

# Set up an empty vector to which we will add the content of each story
workVector <- vector()

# Grab the links from the site that relate to patient stories
links <- read_html(burl) %>%
  html_nodes("a") %>%
  html_attr("href") %>%
  grep("stories",.,value=TRUE)

links
```

```
## [1] "http://www.americanrenal.com/dialysis-centers/patient-stories"
## [2] "http://www.americanrenal.com/dialysis-centers/patient-stories/randal-beatty"
## [3] "http://www.americanrenal.com/dialysis-centers/patient-stories/patricia-garcia"
## [4] "http://www.americanrenal.com/dialysis-centers/patient-stories/john-baguchinsky"
## [5] "http://www.americanrenal.com/dialysis-centers/patient-stories/sheryll-wyman"
## [6] "http://www.americanrenal.com/dialysis-centers/patient-stories/carol-sykes"
## [7] "http://www.americanrenal.com/dialysis-centers/patient-stories/sharon-cauthen"
## [8] "http://www.americanrenal.com/dialysis-centers/patient-stories/remond-ellis"
## [9] "http://www.americanrenal.com/dialysis-centers/patient-stories"
## [10] "http://www.americanrenal.com/dialysis-centers/patient-stories"
```

Some of these links do not correspond directly to a specific patient name so we need to filter those out.

```
# Get only the ones that seem to have actual names associated with them
storiesLinks <- links[-grep("stories$",links)]

storiesLinks
```

```
## [1] "http://www.americanrenal.com/dialysis-centers/patient-stories/randal-beatty"
## [2] "http://www.americanrenal.com/dialysis-centers/patient-stories/patricia-garcia"
## [3] "http://www.americanrenal.com/dialysis-centers/patient-stories/john-baguchinsky"
## [4] "http://www.americanrenal.com/dialysis-centers/patient-stories/sheryll-wyman"
## [5] "http://www.americanrenal.com/dialysis-centers/patient-stories/carol-sykes"
## [6] "http://www.americanrenal.com/dialysis-centers/patient-stories/sharon-cauthen"
## [7] "http://www.americanrenal.com/dialysis-centers/patient-stories/remond-ellis"
```

Next we will visit each of these pages and scrape the text information. We’ll step through this in class so you can see this in action, but here is the code. We will get each story and place each paragraph of the story into a vector element. After that we will eliminate blank lines and some junk lines that begin with a newline character. Then we will collapse all of the vector text into a single paragraph and store it into a list element.
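Before working through it line by line, here is a compact sketch of that overall plan. It assumes the `storiesLinks` vector built in the previous chunk; the `get_story()` helper name is just for illustration and, unlike the chapter code, it collapses the paragraphs with a space.

```
library(rvest)

# A sketch of the plan: one request per patient link, pull out the <p> nodes,
# and collapse the paragraphs into a single string per story.
get_story <- function(link) {
  read_html(link) %>%
    html_nodes("p") %>%
    html_text() %>%
    paste(collapse = " ")
}

# textList_quick <- lapply(storiesLinks, get_story)
```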
Let’s step through it for the first link. ``` # This corresponds to the first link # "http://www.americanrenal.com/dialysis-centers/patient-stories/randal-beatty" tmpResult <- read_html(storiesLinks[1]) %>% html_nodes("p") %>% html_text() tmpResult ``` ``` ## [1] "Mr. Randal Beatty, University Kidney Center Hikes Lane" ## [2] "In April 2010, Randal Beatty was diagnosed with end stage renal disease (ESRD). The diagnosis came as a surprise, and, Mr. Beatty admits, he kept praying for a miracle." ## [3] "“I heard all those stories about how people feel during dialysis and I didn’t want to deal with it,” said Mr. Beatty. “I didn’t want to lose my freedom and I certainly didn’t want to feel sick all the time.”" ## [4] "So for three years, he waited, hoping to find his miracle. But by August 2013, he knew he waited too long." ## [5] "“Before my first dialysis treatment, I could feel myself pulling away from everyone, especially my family. I didn’t want to go anywhere or do anything and I was sick all the time. I realized at that point that how I was feeling was exactly what I wanted to avoid.”" ## [6] "He began his in-center dialysis treatments with American Renal Associates (ARA) at University Kidney Center in Louisville, Kentucky in August 2013 and transferred to another local ARA facility – University Kidney Center Hikes Lane – in May 2014 since it was closer to his home." ## [7] "“In a short time, dialysis completely changed me,” he said. His health improved, along with his confidence, encouraging him to start driving himself to and from treatments. Not only did this give him a renewed feeling of independence, but a strong sense of accomplishment, as well." ## [8] "Now 67 years old, Mr. Beatty says he can do everything he did before, including keeping up with his two granddaughters, playing basketball, among other hobbies, and going on family vacations. In fact, with the help of ARA’s Travel Department, Mr. Beatty can travel stress free. Though he admits traveling while on dialysis can be intimidating, he explained, “All I had to do was show up. ARA’s Travel Team took care of everything.”" ## [9] "Receiving a diagnosis of ESRD can be challenging, but Mr. Beatty’s advice is to take a step back and see dialysis as the miracle it is." ## [10] "“It took me three years to realize that dialysis was the miracle I was waiting for. I was an extremely sick individual and just a few months on dialysis completely changed me. I don’t know why I waited as long as I did. I could have been enjoying life for the last few years rather than staying home sick. And I honestly haven’t been sick since I started my dialysis treatments!”" ## [11] "Read more patient stories" ## [12] "American Renal Associates operates 240 dialysis clinics in 27 states and Washington D.C., serving more than 17,300 patients with end-stage renal disease in partnership with approximately 400 local nephrologists." ## [13] "If you have questions, you can call 1-877-99-RENAL (1-877-997-3625) or [email protected]" ## [14] "" ## [15] "\n ©2022 American Renal® Associates. All Rights Reserved.500 Cummings Center, Suite\n 6550, Beverly, MA, 01915\n " ``` Okay, that has some junk in it like blank lines and lines that begin with new line characters. ``` # Get rid of elements that are a blank line tmpResult <- tmpResult[tmpResult!=""] # Get rid of elements that begin with a newline character "\n" newlines_begin <- sum(grepl("^\n",tmpResult)) if (newlines_begin > 0) { tmpResult <- tmpResult[-grep("^\n",tmpResult)] } tmpResult ``` ``` ## [1] "Mr. 
Randal Beatty, University Kidney Center Hikes Lane" ## [2] "In April 2010, Randal Beatty was diagnosed with end stage renal disease (ESRD). The diagnosis came as a surprise, and, Mr. Beatty admits, he kept praying for a miracle." ## [3] "“I heard all those stories about how people feel during dialysis and I didn’t want to deal with it,” said Mr. Beatty. “I didn’t want to lose my freedom and I certainly didn’t want to feel sick all the time.”" ## [4] "So for three years, he waited, hoping to find his miracle. But by August 2013, he knew he waited too long." ## [5] "“Before my first dialysis treatment, I could feel myself pulling away from everyone, especially my family. I didn’t want to go anywhere or do anything and I was sick all the time. I realized at that point that how I was feeling was exactly what I wanted to avoid.”" ## [6] "He began his in-center dialysis treatments with American Renal Associates (ARA) at University Kidney Center in Louisville, Kentucky in August 2013 and transferred to another local ARA facility – University Kidney Center Hikes Lane – in May 2014 since it was closer to his home." ## [7] "“In a short time, dialysis completely changed me,” he said. His health improved, along with his confidence, encouraging him to start driving himself to and from treatments. Not only did this give him a renewed feeling of independence, but a strong sense of accomplishment, as well." ## [8] "Now 67 years old, Mr. Beatty says he can do everything he did before, including keeping up with his two granddaughters, playing basketball, among other hobbies, and going on family vacations. In fact, with the help of ARA’s Travel Department, Mr. Beatty can travel stress free. Though he admits traveling while on dialysis can be intimidating, he explained, “All I had to do was show up. ARA’s Travel Team took care of everything.”" ## [9] "Receiving a diagnosis of ESRD can be challenging, but Mr. Beatty’s advice is to take a step back and see dialysis as the miracle it is." ## [10] "“It took me three years to realize that dialysis was the miracle I was waiting for. I was an extremely sick individual and just a few months on dialysis completely changed me. I don’t know why I waited as long as I did. I could have been enjoying life for the last few years rather than staying home sick. And I honestly haven’t been sick since I started my dialysis treatments!”" ## [11] "Read more patient stories" ## [12] "American Renal Associates operates 240 dialysis clinics in 27 states and Washington D.C., serving more than 17,300 patients with end-stage renal disease in partnership with approximately 400 local nephrologists." ## [13] "If you have questions, you can call 1-877-99-RENAL (1-877-997-3625) or [email protected]" ``` Next, let’s create a more compact version of the data. We’ll cram it all into a single element. ``` (tmpResult <- paste(tmpResult,collapse="")) ``` ``` ## [1] "Mr. Randal Beatty, University Kidney Center Hikes LaneIn April 2010, Randal Beatty was diagnosed with end stage renal disease (ESRD). The diagnosis came as a surprise, and, Mr. Beatty admits, he kept praying for a miracle.“I heard all those stories about how people feel during dialysis and I didn’t want to deal with it,” said Mr. Beatty. “I didn’t want to lose my freedom and I certainly didn’t want to feel sick all the time.”So for three years, he waited, hoping to find his miracle. But by August 2013, he knew he waited too long.“Before my first dialysis treatment, I could feel myself pulling away from everyone, especially my family. 
I didn’t want to go anywhere or do anything and I was sick all the time. I realized at that point that how I was feeling was exactly what I wanted to avoid.”He began his in-center dialysis treatments with American Renal Associates (ARA) at University Kidney Center in Louisville, Kentucky in August 2013 and transferred to another local ARA facility – University Kidney Center Hikes Lane – in May 2014 since it was closer to his home.“In a short time, dialysis completely changed me,” he said. His health improved, along with his confidence, encouraging him to start driving himself to and from treatments. Not only did this give him a renewed feeling of independence, but a strong sense of accomplishment, as well.Now 67 years old, Mr. Beatty says he can do everything he did before, including keeping up with his two granddaughters, playing basketball, among other hobbies, and going on family vacations. In fact, with the help of ARA’s Travel Department, Mr. Beatty can travel stress free. Though he admits traveling while on dialysis can be intimidating, he explained, “All I had to do was show up. ARA’s Travel Team took care of everything.”Receiving a diagnosis of ESRD can be challenging, but Mr. Beatty’s advice is to take a step back and see dialysis as the miracle it is.“It took me three years to realize that dialysis was the miracle I was waiting for. I was an extremely sick individual and just a few months on dialysis completely changed me. I don’t know why I waited as long as I did. I could have been enjoying life for the last few years rather than staying home sick. And I honestly haven’t been sick since I started my dialysis treatments!”Read more patient storiesAmerican Renal Associates operates 240 dialysis clinics in 27 states and Washington D.C., serving more than 17,300 patients with end-stage renal disease in partnership with approximately 400 local nephrologists.If you have questions, you can call 1-877-99-RENAL (1-877-997-3625) or [email protected]" ``` So we could put this logic into a loop and process each of the links programmatically. ``` # Now go to these pages and scrape the text necessary to # build a corpus tmpResult <- vector() textList <- list() for (ii in 1:length(storiesLinks)) { tmpResult <- read_html(storiesLinks[ii]) %>% html_nodes("p") %>% html_text() # Get rid of elements that are a blank line tmpResult <- tmpResult[tmpResult!=""] # Get rid of elements that begin with a newline character "\n" newlines_begin <- sum(grepl("^\n",tmpResult)) if (newlines_begin > 0) { tmpResult <- tmpResult[-grep("^\n",tmpResult)] } # Let's collpase all the elements into a single element and then store # it into a list element so we can maintain each patient story separately # This is not necessary but until we figure out what we want to do with # the data then this gives us some options tmpResult <- paste(tmpResult,collapse="") textList[[ii]] <- tmpResult } ``` If we did our job correctly then each element of the **textList** will have text in it corresponding to each patient ``` textList[[1]] ``` ``` ## [1] "Mr. Randal Beatty, University Kidney Center Hikes LaneIn April 2010, Randal Beatty was diagnosed with end stage renal disease (ESRD). The diagnosis came as a surprise, and, Mr. Beatty admits, he kept praying for a miracle.“I heard all those stories about how people feel during dialysis and I didn’t want to deal with it,” said Mr. Beatty. “I didn’t want to lose my freedom and I certainly didn’t want to feel sick all the time.”So for three years, he waited, hoping to find his miracle. 
But by August 2013, he knew he waited too long.“Before my first dialysis treatment, I could feel myself pulling away from everyone, especially my family. I didn’t want to go anywhere or do anything and I was sick all the time. I realized at that point that how I was feeling was exactly what I wanted to avoid.”He began his in-center dialysis treatments with American Renal Associates (ARA) at University Kidney Center in Louisville, Kentucky in August 2013 and transferred to another local ARA facility – University Kidney Center Hikes Lane – in May 2014 since it was closer to his home.“In a short time, dialysis completely changed me,” he said. His health improved, along with his confidence, encouraging him to start driving himself to and from treatments. Not only did this give him a renewed feeling of independence, but a strong sense of accomplishment, as well.Now 67 years old, Mr. Beatty says he can do everything he did before, including keeping up with his two granddaughters, playing basketball, among other hobbies, and going on family vacations. In fact, with the help of ARA’s Travel Department, Mr. Beatty can travel stress free. Though he admits traveling while on dialysis can be intimidating, he explained, “All I had to do was show up. ARA’s Travel Team took care of everything.”Receiving a diagnosis of ESRD can be challenging, but Mr. Beatty’s advice is to take a step back and see dialysis as the miracle it is.“It took me three years to realize that dialysis was the miracle I was waiting for. I was an extremely sick individual and just a few months on dialysis completely changed me. I don’t know why I waited as long as I did. I could have been enjoying life for the last few years rather than staying home sick. And I honestly haven’t been sick since I started my dialysis treatments!”Read more patient storiesAmerican Renal Associates operates 240 dialysis clinics in 27 states and Washington D.C., serving more than 17,300 patients with end-stage renal disease in partnership with approximately 400 local nephrologists.If you have questions, you can call 1-877-99-RENAL (1-877-997-3625) or [email protected]"
```

1\.8 Summary
------------

* Need some basic HTML and CSS knowledge to find correct elements
* How to extract text from common elements
* How to extract text from specific elements
* Always have to do some text cleanup of data (a small helper is sketched below)
* It usually takes multiple times to get it right

See [http://bradleyboehmke.github.io/2015/12/scraping\-html\-text.html](http://bradleyboehmke.github.io/2015/12/scraping-html-text.html)
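The cleanup steps used throughout this chapter (drop empty strings, drop elements that begin with a newline, collapse into one string) can be wrapped in a small helper. This is just a sketch based on the code above; note that it collapses with a space, whereas the chapter code used `collapse=""`, which runs sentences together.

```
# Sketch of the cleanup used in this chapter, wrapped as a helper function
clean_story <- function(x) {
  x <- x[x != ""]            # drop empty elements
  x <- x[!grepl("^\n", x)]   # drop elements that begin with a newline
  paste(x, collapse = " ")   # collapse into a single string
}

clean_story(c("First paragraph.", "", "\n   footer junk", "Second paragraph."))
## [1] "First paragraph. Second paragraph."
```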
Chapter 2 XML and JSON
======================

This is where things get a little dicey because some web pages will return XML or JSON in response to inquiries. While these formats seem complicated, they are actually doing you a really big favor, since they can usually be parsed easily using various packages. XML is a bit hard to get your head around, and JSON is the new kid on the block, which is easier to use. Since this isn’t a full\-on course lecture I’ll keep it short as to how and why you would want to use these, but any time you spend trying to better understand JSON (and XML) the better off you will be when parsing web pages. It’s not such a big deal if all you are going to be parsing is raw text, since the methods we use to do that avoid XML and JSON, although cleaning up raw text has its own problems.

Let’s revisit the Wikipedia example from the previous section.

```
library(rvest)

# Use read_html to fetch the webpage
url <- "https://en.wikipedia.org/wiki/World_population"
ten_most_df <- read_html(url)

ten_most_populous <- ten_most_df %>%
  html_nodes("table") %>%
  `[[`(5) %>%
  html_table()
```

Well, we pulled out all the tables and then, by experimentation, we isolated table 5 and got the content corresponding to that. But is there a more direct way to find the content ? There is. It requires us to install some helper plugins such as the xPath Finder for Firefox and Chrome. In reality there are a number of ways to find the XML Path or CSS Path for an element within a web page, but this is a good one to start with. Remember that we want to find the table corresponding to the “10 Most Populous Countries.” So we activate the *xPath* finder plugin and then highlight the element of interest. This takes some practice to get it right. Once you highlight the desired element you will see the corresponding XPATH. (The screenshot of that session is not reproduced here.) We can use the resulting path to access the table directly, without first having to pull out all of the tables and then hunt for the right one.

```
# Use read_html to fetch the webpage
url <- "https://en.wikipedia.org/wiki/World_population"
ten_most_populous <- read_html(url)

ten_most_df <- ten_most_populous %>%
  html_nodes(xpath='/html/body/div[3]/div[3]/div[5]/div[1]/table[4]') %>%
  html_table()

# We have to get the first element of the list.
ten_most_df <- ten_most_df[[1]]
ten_most_df
```

```
## # A tibble: 10 × 6
##     Rank Country       Population    `% of world` Date       `Source(official or UN)`
##    <int> <chr>         <chr>         <chr>        <chr>      <chr>
##  1     1 China         1,411,860,200 17.8%        8 Feb 2022 National population clock[90]
##  2     2 India         1,387,792,351 17.5%        8 Feb 2022 National population clock[91]
##  3     3 United States 333,191,720   4.20%        8 Feb 2022 National population clock[92]
##  4     4 Indonesia     269,603,400   3.40%        1 Jul 2020 National annual projection[93]
##  5     5 Pakistan      220,892,331   2.79%        1 Jul 2020 UN Projection[94]
##  6     6 Brazil        214,324,314   2.70%        8 Feb 2022 National population clock[95]
##  7     7 Nigeria       206,139,587   2.60%        1 Jul 2020 UN Projection[94]
##  8     8 Bangladesh    172,180,722   2.17%        8 Feb 2022 National population clock[96]
##  9     9 Russia        146,748,590   1.85%        1 Jan 2020 National annual estimate[97]
## 10    10 Mexico        127,792,286   1.61%        1 Jul 2020 National annual projection[98]
```

2\.1 Finding XPaths
-------------------

In addition to browser plugins there are standalone tools such as the Oxygen XML Editor, which is available through the Emory Software Express website. This is a comprehensive XML editor that will allow you to parse XML and develop paths to locate specific nodes within an XML document. If you find yourself working with websites with lots of XML then this will be useful. The Oxygen editor is free.

Let’s look at an XML file that has some basic content:

```
<?xml version="1.0" encoding="UTF-8"?>
<bookstore>
  <book category="COOKING">
    <title lang="en">Everyday Italian</title>
    <author>Giada De Laurentiis</author>
    <year>2005</year>
    <price>30.00</price>
  </book>
  <book category="CHILDREN">
    <title lang="en">Harry Potter</title>
    <author>J K. Rowling</author>
    <year>2005</year>
    <price>29.99</price>
  </book>
  <book category="WEB">
    <title lang="en">Learning XML</title>
    <author>Erik T. Ray</author>
    <year>2003</year>
    <price>39.95</price>
  </book>
</bookstore>
```

2\.2 Example: GeoCoding With Google
-----------------------------------

Let’s run through an example of using the GeoCoding API with Google. They used to provide free access to this service but no more. You have to sign up for an account and get an API key. If you are currently taking one of my classes I probably have arranged for cloud credits that you can use to do Google Geocoding for free. So one way to do this is to create a URL according to the specification given in the Google Geocoding documentation. We need 1\) the base Google URL for the Geocoding service, 2\) the format of the desired output (XML or JSON), 3\) an address for which we want to find the latitude and longitude, and 4\) the API key we create at the Google API service.

Here is a fully functional URL you can paste into your browser: [https://maps.googleapis.com/maps/api/geocode/xml?address\=1510\+Clifton\+Road,\+Atlanta,\+GA\&key\=AIzaSyDPwt1Ya79b7lhsZkh75BjCz\-GpMKC9ZYw](https://maps.googleapis.com/maps/api/geocode/xml?address=1510+Clifton+Road,+Atlanta,+GA&key=AIzaSyDPwt1Ya79b7lhsZkh75BjCz-GpMKC9ZYw)

If you paste this into Chrome you will get back an XML response (the screenshot is not reproduced here). So I could create an R function to take care of this kind of thing so I could maybe pass in arbitrary addresses to be geocoded. Let’s run through this example and then look at how I parsed the XML file that is returned by the Google GeoCoding API. We will stick with the *1510 Clifton Rd, Atlanta, GA* address which corresponds to the Rollins Research Building. First we will see an example of what Google returns in terms of XML.
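Since the screenshot of the raw response is not reproduced here, the following is a heavily abridged mock of the response shape as a stand-in. The element names are inferred from the XPath used in this section rather than copied from a live response, and the real document contains many more elements; the sketch just shows the same XPath pulling out the coordinates.

```
library(XML)

# Abridged mock of the geocoding XML; element names are inferred from the
# XPath used in this section, not taken from a live Google response.
mock_xml <- '<GeocodeResponse>
  <result>
    <geometry>
      <location>
        <lat>33.7966700</lat>
        <lng>-84.3231900</lng>
      </location>
    </geometry>
  </result>
</GeocodeResponse>'

doc    <- xmlParse(mock_xml, asText = TRUE)
latlon <- getNodeSet(doc, "/GeocodeResponse/result/geometry/location/descendant::*")
as.numeric(xmlSApply(latlon, xmlValue))
## [1]  33.79667 -84.32319
```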
We can use some tools like the Oxygen Editor (available free via Emory Software Express) to develop an appropriate XPATH expression to parse out the latitude and longitude information.

```
# https://maps.googleapis.com/maps/api/geocode/xml?address=1510+Clifton+Road,+Atlanta,+GA&key=AIzaSyDPwt1Ya79b7lhsZkh75BjCz-GpMKC9ZYw
# https://maps.googleapis.com/maps/api/geocode/json?address=1510+Clifton+Road,+Atlanta,+GA&key=AIzaSyDPwt1Ya79b7lhsZkh75BjCz-GpMKC9ZYw

myGeo <- function(address="1510 Clifton Rd Atlanta GA",form="xml") {
  library(XML)
  library(RCurl)

  geourl <- "https://maps.googleapis.com/maps/api/geocode/"

  # You will need to replace this with your OWN key !
  key <- "AIzaSyA3ereIVEjA0gPrxLupPLOKFGH_v98KpMA"

  address <- gsub(" ","+",address)
  add     <- paste0(geourl,form,sep="?address=")
  add     <- paste0(add,address,"&key=")
  geourl  <- paste0(add,key)

  locale <- getURL(geourl)
  plocal <- xmlParse(locale,useInternalNodes=TRUE)

  # Okay let's extract the lat and lon
  latlon <- getNodeSet(plocal,"/GeocodeResponse/result/geometry/location/descendant::*")
  lat <- as.numeric(xmlSApply(latlon,xmlValue))[1]
  lon <- as.numeric(xmlSApply(latlon,xmlValue))[2]
  return(c(lat=lat,lng=lon))
}

mylocs <- myGeo()
```

```
     lat       lng
33.79667 -84.32319
```

Now, we could have saved the response to a file on our local computer and opened it up with the Oxygen editor to figure out what the appropriate XPATH would be. This is basically what I did (the screenshot of the session is not reproduced here). I picked an XPATH expression of //location/descendant::\*

We could expand this considerably to process a number of addresses. This is a great example of how, once you get a single case working, you can generalize it into a function that will allow you to do the same thing for a much larger number of addresses.

```
namevec <- c("Atlanta GA", "Birmingham AL", "Seattle WA",
             "Sacramento CA", "Denver CO", "LosAngeles CA",
             "Rochester NY")

cityList <- lapply(namevec,myGeo)

# Or to get a data frame
cities <- data.frame(city=namevec,do.call(rbind,cityList),
                     stringsAsFactors = FALSE)
```

```
cities
```

```
##            city      lat        lng
## 1    Atlanta GA 33.74900  -84.38798
## 2 Birmingham AL 33.51859  -86.81036
## 3    Seattle WA 47.60621 -122.33207
## 4 Sacramento CA 38.58157 -121.49440
## 5     Denver CO 39.73924 -104.99025
## 6 LosAngeles CA 34.05223 -118.24368
## 7  Rochester NY 43.15658  -77.60885
```

```
# Let's create a Map
library(leaflet)
m <- leaflet(data=cities)
m <- addTiles(m)
m <- addMarkers(m,popup=cities$city)
```

```
## Assuming "lng" and "lat" are longitude and latitude, respectively
```

```
# Put up the Map - click on the markers
m
```

2\.3 Using JSON
---------------

JSON is fast becoming the primary interchange format over XML, although XML is still well supported. R has a number of packages to ease the parsing of JSON documents returned by web pages. Usually you get back a *list*, which is a native data type in R that can easily be manipulated into a data frame. Most web APIs provide an option for JSON or XML although some only provide JSON. There are rules and regulations about how JSON is formed and we will learn them by example, but you can look at the numerous tutorials on the web to locate definitive references. See <http://www.w3schools.com/json/>

Here is an XML file that describes some employees.
```
<employees>
  <employee>
    <firstName>John</firstName> <lastName>Doe</lastName>
  </employee>
  <employee>
    <firstName>Anna</firstName> <lastName>Smith</lastName>
  </employee>
  <employee>
    <firstName>Peter</firstName> <lastName>Jones</lastName>
  </employee>
</employees>
```

And here is the corresponding JSON file:

```
{
  "employees":[
    {"firstName":"John",  "lastName":"Doe"},
    {"firstName":"Anna",  "lastName":"Smith"},
    {"firstName":"Peter", "lastName":"Jones"}
  ]
}
```

* It is important to note that the actual information in the document (things like city name, county name, latitude, and longitude) is the same as it would be in the comparable XML document.
* JSON documents are at the heart of the NoSQL “database” called MongoDB.
* JSON can be found within many webpages since it is closely related to JavaScript, which is a language strongly related to web pages.
* JSON is very compact and lightweight, which has made it a natural follow-on to XML, so much so that it appears to be replacing XML. See <http://www.json.org/> for a full description of the specification.
* An object is an unordered set of name/value pairs. An object begins with { (left brace) and ends with } (right brace). Each name is followed by : (colon) and the name/value pairs are separated by , (comma).
* An array is an ordered collection of values. An array begins with \[ (left bracket) and ends with \] (right bracket). Values are separated by , (comma).
* A value can be a string in double quotes, or a number, or true or false or null, or an object or an array. These structures can be nested.
* A string is a sequence of zero or more Unicode characters, wrapped in double quotes, using backslash escapes. A character is represented as a single character string.

Do you remember the Google Geocoding example from before ? We can tell Google to send us back JSON instead of XML just by adjusting the URL accordingly:

```
url <- "https://maps.googleapis.com/maps/api/geocode/json?address=1510+Clifton+Rd+Atlanta+GA&key=AIzaSyD0zIyn2ijIqb7OKYTGnAnchXY7zt3VB9Y"
```

2\.4 Using the RJSONIO Package
------------------------------

To read/parse this in R we use a package called RJSONIO. There are other packages but this is the one we will be using. Download and install it. There is a function called fromJSON which will parse the JSON file and return a list to contain the data. So we parse lists instead of using XPath. Many people feel this to be easier than trying to construct XPath statements. You will have to decide for yourself.

```
library(RJSONIO)
url <- "https://maps.googleapis.com/maps/api/geocode/json?address=1510+Clifton+Road,+Atlanta,+GA&key=AIzaSyD0zIyn2ijIqb7OKYTGnAnchXY7zt3VB9Y"
geo <- fromJSON(url)
```

Since what we get back is a list we can directly access whatever we want. We just index into the list. No need for complicated XPATHS.

```
str(geo,3)
```

```
## List of 2
##  $ results:List of 1
##   ..$ :List of 6
##   .. ..$ address_components:List of 7
##   .. ..$ formatted_address : chr "1510 Clifton Rd, Atlanta, GA 30322, USA"
##   .. ..$ geometry          :List of 3
##   .. ..$ place_id          : chr "ChIJ5QjdF_oG9YgRWAJzCm19Vf8"
##   .. ..$ plus_code         : Named chr [1:2] "QMWG+MP Druid Hills, Georgia, United States" "865QQMWG+MP"
##   .. .. ..- attr(*, "names")= chr [1:2] "compound_code" "global_code"
##   .. ..$ types             : chr "street_address"
..$ types : chr "street_address" ## $ status : chr "OK" ``` ``` geo$results[[1]]$geometry$location ``` ``` ## lat lng ## 33.79667 -84.32319 ``` Let’s put this into a function that helps us get the information for a number of addresses ``` myGeo <- function(address="1510 Clifton Rd Atlanta GA",form="json") { library(RJSONIO) geourl <- "https://maps.googleapis.com/maps/api/geocode/" # You will need to replace this with your OWN key ! key <- "AIzaSyA3ereIVEjA0gPrxLiopLKJRPOLH_v89DpMA" address <- gsub(" ","+",address) add <- paste0(geourl,form,sep="?address=") add <- paste0(add,address,"&key=") geourl <- paste0(add,key) geo <- fromJSON(geourl) lat <- geo$results[[1]]$geometry$location[1] lng <- geo$results[[1]]$geometry$location[2] return(c(lat,lng)) } ``` Consider the following: ``` namevec <- c("Atlanta GA", "Birmingham AL", "Seattle WA", "Sacramento CA", "Denver CO", "LosAngeles CA", "Rochester NY") cityList <- lapply(namevec,myGeo) # Or to get a data frame cities <- data.frame(city=namevec,do.call(rbind,cityList), stringsAsFactors = FALSE) ``` Now we can check out the geocoding cities and then make a map ``` cities ``` ``` ## city lat lng ## 1 Atlanta GA 33.74900 -84.38798 ## 2 Birmingham AL 33.51859 -86.81036 ## 3 Seattle WA 47.60621 -122.33207 ## 4 Sacramento CA 38.58157 -121.49440 ## 5 Denver CO 39.73924 -104.99025 ## 6 LosAngeles CA 34.05223 -118.24368 ## 7 Rochester NY 43.15658 -77.60885 ``` ``` # Let's create a Map library(leaflet) m <- leaflet(data=cities) m <- addTiles(m) m <- addMarkers(m,popup=cities$city) ``` ``` ## Assuming "lng" and "lat" are longitude and latitude, respectively ``` ``` # Put up the Map - click on the markers m ``` 2\.1 Finding XPaths -------------------   In addition to Browser Plugins there are standalone tools such as the Oxygen XML Editor which is availabel through the Emory Software Express Website. This is a comprehensive XML editor that will allow you to parse XML and develop paths to locate specific nodes within an XML document. If you find yourself working with websites with lots of XML then this will be useful. The Oxygen editor is free. Let’s look at an XML file that has some basic content:   ``` <?xml version="1.0" encoding="UTF-8"?> <bookstore> <book category="COOKING"> <title lang="en">Everyday Italian</title> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> </book> <book category="CHILDREN"> <title lang="en">Harry Potter</title> <author>J K. Rowling</author> <year>2005</year> <price>29.99</price> </book> <book category="WEB"> <title lang="en">Learning XML</title> <author>Erik T. Ray</author> <year>2003</year> <price>39.95</price> </book> </bookstore> ```         2\.2 Example: GeoCoding With Google ----------------------------------- Let’s run through an example of using the GeoCoding API with Google. They used to provide free access to this service but no more. You have to sign up for an account and get an API key. If you are currently taking one of my classes I probably have arranged for cloud credits that you can use to do Google Geocoding for free. So one way to do this is to create a URL according to the specification given in the Google Geocoding documentation. We need 1\) the base Google URL for the Geocoding service, 2\) the format of the desired output (XML or JSON), 3\) and address for which we want to find the latitude and longitude, and 4\) the API key we create at the Google API service. 
Here is a fully functional URL you can paste into your browser: [https://maps.googleapis.com/maps/api/geocode/xml?address\=1510\+Clifton\+Road,\+Atlanta,\+GA\&key\=AIzaSyDPwt1Ya79b7lhsZkh75BjCz\-GpMKC9ZYw](https://maps.googleapis.com/maps/api/geocode/xml?address=1510+Clifton+Road,+Atlanta,+GA&key=AIzaSyDPwt1Ya79b7lhsZkh75BjCz-GpMKC9ZYw) If you paste this into Chome then you get something back like this: So I could create an R function to take care of this kind of thing so I could maybe pass in arbitrary addressess to be geocided. Let’s run through this example and then look at how I parsed the XML file that is returned by the Google GeoCoding API. We will stick with the *1510 Clifton Rd, Atlanta, GA* address which corresponds to the Rollins Research Building. First we will see an example of what Google returns in terms of XML. We can use some tools like Oxygen Editor (available free via Emory Software Express) to develop an appropriate XPATH expression to parse out the latitude and longitude information. ``` # https://maps.googleapis.com/maps/api/geocode/xml?address=1510+Clifton+Road,+Atlanta,+GA&key=AIzaSyDPwt1Ya79b7lhsZkh75BjCz-GpMKC9ZYw # https://maps.googleapis.com/maps/api/geocode/json?address=1510+Clifton+Road,+Atlanta,+GA&key=AIzaSyDPwt1Ya79b7lhsZkh75BjCz-GpMKC9ZYw myGeo <- function(address="1510 Clifton Rd Atlanta GA",form="xml") { library(XML) library(RCurl) geourl <- "https://maps.googleapis.com/maps/api/geocode/" # You will need to replace this with your OWN key ! key <- "AIzaSyA3ereIVEjA0gPrxLupPLOKFGH_v98KpMA" address <- gsub(" ","+",address) add <- paste0(geourl,form,sep="?address=") add <- paste0(add,address,"&key=") geourl <- paste0(add,key) locale <- getURL(geourl) plocal <- xmlParse(locale,useInternalNodes=TRUE) # Okay let's extract the lat and lon latlon <- getNodeSet(plocal,"/GeocodeResponse/result/geometry/location/descendant::*") lat <- as.numeric(xmlSApply(latlon,xmlValue))[1] lon <- as.numeric(xmlSApply(latlon,xmlValue))[2] return(c(lat=lat,lng=lon)) } mylocs <- myGeo() ``` ``` lat lng 33.79667 -84.32319 ``` Now. We could have saved the report to a file on our local computer and open it up with Oxygen editor and figure out what the approproate XPATH would be. This is basically what I did. Here is a screenshot of the session. I picked an XPATH expression of //location/descendant::\*   We could expand this considerable to process a number of addresses. This is a great example of how once you get a single example working then you can generalize this into a function that will allow you to do the same thing for a much larger numnber of addressess. 
``` namevec <- c("Atlanta GA", "Birmingham AL", "Seattle WA", "Sacramento CA", "Denver CO", "LosAngeles CA", "Rochester NY") cityList <- lapply(namevec,myGeo,eval=FALSE) # Or to get a data frame cities <- data.frame(city=namevec,do.call(rbind,cityList), stringsAsFactors = FALSE) ``` ``` cities ``` ``` ## city lat lng ## 1 Atlanta GA 33.74900 -84.38798 ## 2 Birmingham AL 33.51859 -86.81036 ## 3 Seattle WA 47.60621 -122.33207 ## 4 Sacramento CA 38.58157 -121.49440 ## 5 Denver CO 39.73924 -104.99025 ## 6 LosAngeles CA 34.05223 -118.24368 ## 7 Rochester NY 43.15658 -77.60885 ``` ``` # Let's create a Map library(leaflet) m <- leaflet(data=cities) m <- addTiles(m) m <- addMarkers(m,popup=cities$city) ``` ``` ## Assuming "lng" and "lat" are longitude and latitude, respectively ``` ``` # Put up the Map - click on the markers m ``` 2\.3 Using JSON --------------- JSON is fast becoming the primary interchange format over XML although XML is still well supported. R has a number of packages to ease the parsing of JSON/ documents returned by web pages. Ususally you get back a *list* which is a native data type in R that can easily be manipulated into a data frame. Most web APIs provide an option for JSON or XML although some only provide JSON. There are rules and regulations about how JSON is formed and we will learn them by example but you can look at the numerous tutotorials on the web to locate definitive references. See <http://www.w3schools.com/json/> Here is an XML file that describes some employees. ``` <employees> <employee> <firstName>John</firstName> <lastName>Doe</lastName> </employee> <employee> <firstName>Anna</firstName> <lastName>Smith</lastName> </employee> <employee> <firstName>Peter</firstName> <lastName>Jones</lastName> </employee> </employees> ``` And here is the corresposning JSON file: ``` { "employees":[ {"firstName":"John", "lastName":"Doe"}, {"firstName":"Anna", "lastName":"Smith"}, {"firstName":"Peter","lastName":"Jones"} ] } ``` * It is important to note that the actual information in the document, things like city name, county name, latitude, and longitude are the same as they would be in the comparable XML document. * JSON documents are at the heart of the NoSQL“database”called MongoDB * JSON can be found within many webpages since it is closely related to JavaScript which is a language strongly related to web pages. * JSON is very compact and lighweight which has made it a natural followon to XML so much so that it appears to be replacing XML See http:..www.json.org/ for a full description of the specification * An object is an unordered set of name/value pairs. An object begins with (left brace) and ends with (right brace). Each name is followed by : (colon) and the name/value pairs are separated by , (comma). * An array is an ordered collection of values. An array begins with \[ (left bracket) and ends with ] (right bracket). Values are separated by , (comma). * A value can be a string in double quotes, or a number, or true or false or null, or an object or an array. These structures can be nested. * A string is a sequence of zero or more Unicode characters, wrapped in double quotes, using backslash escapes. A character is represented as a single character string. Do you remember the Google Geocoding example from before ? 
We can tell Google to send us back JSON instead of XML just by adjusting the URL accordingly: ``` url <- "https://maps.googleapis.com/maps/api/geocode/json?address=1510+Clifton+Rd+Atlanta+GA&key=AIzaSyD0zIyn2ijIqb7OKYTGnAnchXY7zt3VB9Y" ``` 2\.4 Using the RJSONIO Package ------------------------------ To read/parse this in R we use a package called RJSONIO. There are other packages but this is the one we will be using. Download and install it. There is a function called fromJSON which will parse the JSON file and return a list containing the data. So we parse lists instead of using XPath. Many people feel this to be easier than trying to construct XPath statements. You will have to decide for yourself. ``` library(RJSONIO) url <- "https://maps.googleapis.com/maps/api/geocode/json?address=1510+Clifton+Road,+Atlanta,+GA&key=AIzaSyD0zIyn2ijIqb7OKYTGnAnchXY7zt3VB9Y" geo <- fromJSON(url) ``` Since what we get back is a list we can directly access whatever we want. We just index into the list. No need for complicated XPATH expressions. ``` str(geo,3) ``` ``` ## List of 2 ## $ results:List of 1 ## ..$ :List of 6 ## .. ..$ address_components:List of 7 ## .. ..$ formatted_address : chr "1510 Clifton Rd, Atlanta, GA 30322, USA" ## .. ..$ geometry :List of 3 ## .. ..$ place_id : chr "ChIJ5QjdF_oG9YgRWAJzCm19Vf8" ## .. ..$ plus_code : Named chr [1:2] "QMWG+MP Druid Hills, Georgia, United States" "865QQMWG+MP" ## .. .. ..- attr(*, "names")= chr [1:2] "compound_code" "global_code" ## .. ..$ types : chr "street_address" ## $ status : chr "OK" ``` ``` geo$results[[1]]$geometry$location ``` ``` ## lat lng ## 33.79667 -84.32319 ``` Let’s put this into a function that helps us get the information for a number of addresses. ``` myGeo <- function(address="1510 Clifton Rd Atlanta GA",form="json") { library(RJSONIO) geourl <- "https://maps.googleapis.com/maps/api/geocode/" # You will need to replace this with your OWN key ! key <- "AIzaSyA3ereIVEjA0gPrxLiopLKJRPOLH_v89DpMA" address <- gsub(" ","+",address) add <- paste0(geourl,form,sep="?address=") add <- paste0(add,address,"&key=") geourl <- paste0(add,key) geo <- fromJSON(geourl) lat <- geo$results[[1]]$geometry$location[1] lng <- geo$results[[1]]$geometry$location[2] return(c(lat,lng)) } ``` Consider the following: ``` namevec <- c("Atlanta GA", "Birmingham AL", "Seattle WA", "Sacramento CA", "Denver CO", "LosAngeles CA", "Rochester NY") cityList <- lapply(namevec,myGeo) # Or to get a data frame cities <- data.frame(city=namevec,do.call(rbind,cityList), stringsAsFactors = FALSE) ``` Now we can check out the geocoded cities and then make a map. ``` cities ``` ``` ## city lat lng ## 1 Atlanta GA 33.74900 -84.38798 ## 2 Birmingham AL 33.51859 -86.81036 ## 3 Seattle WA 47.60621 -122.33207 ## 4 Sacramento CA 38.58157 -121.49440 ## 5 Denver CO 39.73924 -104.99025 ## 6 LosAngeles CA 34.05223 -118.24368 ## 7 Rochester NY 43.15658 -77.60885 ``` ``` # Let's create a Map library(leaflet) m <- leaflet(data=cities) m <- addTiles(m) m <- addMarkers(m,popup=cities$city) ``` ``` ## Assuming "lng" and "lat" are longitude and latitude, respectively ``` ``` # Put up the Map - click on the markers m ```
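One refinement worth mentioning: the parsed list contains a status element (visible in the str() output above), so a variant of myGeo that checks it before indexing into results will fail more gracefully on an address Google cannot resolve. A rough sketch follows; the function name and the placeholder key are mine, not the author's.

```
library(RJSONIO)

# Hypothetical variant of myGeo that inspects the "status" field
# before digging into the results list
myGeoSafe <- function(address = "1510 Clifton Rd Atlanta GA", key = "YOUR_KEY") {
  address <- gsub(" ", "+", address)
  geourl  <- paste0("https://maps.googleapis.com/maps/api/geocode/json?address=",
                    address, "&key=", key)
  geo <- fromJSON(geourl)
  if (geo$status != "OK") {
    warning("Geocoding of '", address, "' returned status: ", geo$status)
    return(c(lat = NA, lng = NA))
  }
  loc <- geo$results[[1]]$geometry$location   # named vector: lat, lng
  c(lat = unname(loc["lat"]), lng = unname(loc["lng"]))
}
```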
Chapter 3 More Real Life Examples ================================= Okay. This is a tour of some sites that will serve as important examples of how to parse sites. Let’s check the price of bitcoins. You want to be rich, don’t you? 3\.1 BitCoin Prices ------------------- The challenge here is that it’s all one big table and it’s not clear how to address it. And the owners of the web site will usually change the format or start using Javascript or HTML5, which will mess things up in the future. One solid approach I frequently use is to simply pull out all the tables and, by experimentation, try to figure out which one has the information I want. This always requires some work. ``` library(rvest) url <- "https://coinmarketcap.com/all/views/all/" bc <- read_html(url) bc_table <- bc %>% html_nodes('table') %>% html_table() %>% .[[3]] # We get back a one element list that is a data frame str(bc_table,0) ``` ``` ## tibble [200 × 1,001] (S3: tbl_df/tbl/data.frame) ``` ``` bc_table <- bc_table[,c(2:3,5)] head(bc_table) ``` ``` ## # A tibble: 6 × 3 ## Name Symbol Price ## <chr> <chr> <chr> ## 1 BTCBitcoin BTC $44,103.58 ## 2 ETHEthereum ETH $3,109.35 ## 3 USDTTether USDT $1.00 ## 4 BNBBNB BNB $413.02 ## 5 USDCUSD Coin USDC $1.00 ## 6 XRPXRP XRP $0.8495 ``` Everything is a character at this point so we have to go in and do some surgery on the data frame to turn the Price into an actual numeric. ``` # The data is "dirty" and has characters in it that need cleaning bc_table <- bc_table %>% mutate(Price=gsub("\\$","",Price)) bc_table <- bc_table %>% mutate(Price=gsub(",","",Price)) bc_table <- bc_table %>% mutate(Price=round(as.numeric(Price),2)) # There are four rows wherein the Price is missing NA bc_table <- bc_table %>% filter(complete.cases(bc_table)) # Let's get the Crypto currencies with the Top 10 highest prices top_10 <- bc_table %>% arrange(desc(Price)) %>% head(10) top_10 ``` ``` ## # A tibble: 10 × 3 ## Name Symbol Price ## <chr> <chr> <dbl> ## 1 BTCBitcoin BTC 44104. ## 2 WBTCWrapped Bitcoin WBTC 43881. ## 3 ETHEthereum ETH 3109. ## 4 BNBBNB BNB 413. ## 5 LTCLitecoin LTC 134. ## 6 SOLSolana SOL 113. ## 7 AVAXAvalanche AVAX 87.4 ## 8 LUNATerra LUNA 57.2 ## 9 DOTPolkadot DOT 21.6 ## 10 MATICPolygon MATIC 1.93 ``` Let’s make a barplot of the top 10 crypto currencies. ``` # Next we want to make a barplot of the Top 10 ylim=c(0,max(top_10$Price)+10000) main="Top 10 Crypto Currencies in Terms of Price" bp <- barplot(top_10$Price,col="aquamarine", ylim=ylim,main=main) axis(1, at=bp, labels=top_10$Symbol, cex.axis = 0.7) grid() ``` So that didn’t work out so well since one of the crypto currencies dominates the others in terms of price. So let’s create a log transformed version of the plot. ``` # Let's take the log of the price ylim=c(0,max(log(top_10$Price))+5) main="Top 10 Crypto Currencies in Terms of log(Price)" bp <- barplot(log(top_10$Price),col="aquamarine", ylim=ylim,main=main) axis(1, at=bp, labels=top_10$Symbol, cex.axis = 0.7) grid() ``` 3\.2 IMDB --------- Look at this example from IMDb (Internet Movie Database). According to Wikipedia: IMDb (Internet Movie Database) is an online database of information related to films, television programs, home videos, video games, and streaming content online – including cast, production crew and personal biographies, plot summaries, trivia, fan and critical reviews, and ratings. We can search or refer to specific movies by URL if we wanted.
For example, consider the following link to the “Lego Movie”: <http://www.imdb.com/title/tt1490017/> We could scrape information from this site using the **rvest** package. Let’s say that we wanted to capture the rating information, which at the time of this scrape showed as 7\.7 out of 10\. We could use the xPath Tool to zone in on this information. ``` url <- "http://www.imdb.com/title/tt1490017/" lego_movie <- read_html(url) ``` So this reads the page from which we must isolate the rating value. That wasn’t so bad. Let’s see what using the xPath plugin gives us: Using XPath we get a longer xpath expression which should provide us with direct access to the value. ``` url <- "http://www.imdb.com/title/tt1490017/" lego_movie <- read_html(url) # Scrape the website for the movie rating rating <- lego_movie %>% html_nodes(xpath="/html/body/div[2]/main/div/section[1]/section/div[3]/section/section/div[1]/div[2]/div/div[1]/a/div/div/div[2]/div[1]/span[1]") %>% # html_nodes(".ratingValue span") %>% html_text() rating ``` ``` ## [1] "7.7" ``` Let’s access the summary section of the link. ``` xp = "/html/body/div[2]/main/div/section[1]/section/div[3]/section/section/div[3]/div[2]/div[1]/div[1]/p/span[3]" mov_summary <- lego_movie %>% html_nodes(xpath=xp) %>% html_text() mov_summary ``` ``` ## [1] "An ordinary LEGO construction worker, thought to be the prophesied as \"special\", is recruited to join a quest to stop an evil tyrant from gluing the LEGO universe into eternal stasis." ``` 3\.3 Faculty Salaries --------------------- In this example we have to parse the main table associated with the results page. ``` url <- "https://www.insidehighered.com/aaup-compensation-survey" df <- read_html(url) %>% html_table() %>% `[[`(1) intost <- c("Institution","Category","State") salary <- df %>% separate(InstitutionCategoryState,into=intost,sep="\n") salary ``` ``` ## # A tibble: 10 × 8 ## Institution Category State `Avg. SalaryFull … `Avg. ChangeContin… `CountFull Profe… `Avg. Total Compens… `Salary EquityFul… ## <chr> <chr> <chr> <chr> <chr> <int> <chr> <dbl> ## 1 Auburn Universi… Doctoral ALABA… $132,600 4.4% 407 $170,300 90.1 ## 2 Birmingham Sout… Baccalau… ALABA… $81,000 0.0% 39 $100,000 92.8 ## 3 Huntingdon Coll… Baccalau… ALABA… $76,700 0.0% 13 $89,500 110. ## 4 Jacksonville St… Master’s ALABA… $77,300 N/A 78 $104,100 94.8 ## 5 Samford Univers… Master ALABA… $105,200 3.2% 130 $133,300 86.8 ## 6 Troy University Masters ALABA… $84,500 1.6% 30 $88,500 106. ## 7 The University … Doctoral ALABA… $151,600 2.0% 300 $205,100 85.6 ## 8 University of A… Doctoral ALABA… $139,100 2.8% 191 $164,800 88.4 ## 9 University of A… Doctoral ALABA… $126,400 2.0% 64 $169,200 95.6 ## 10 University of M… Master ALABA… $80,700 2.1% 47 $106,100 94.4 ``` So the default is 10 listings per page but there are many more pages we could process to get more information. If we look at the bottom of the page we can get some clues as to what the URLs are. Here we’ll just process the first two pages since it will be quick and won’t burden the server. ``` # So now we could process multiple pages url <- 'https://www.insidehighered.com/aaup-compensation-survey?institution-name=&professor-category=1591&page=1' str1 <- "https://www.insidehighered.com/aaup-compensation-survey?" 
str2 <- "institution-name=&professor-category=1591&page=" intost <- c("Institution","Category","State") salary <- data.frame() # We'll get just the first two pages for (ii in 1:2) { nurl <- paste(str1,str2,ii,sep="") df <- read_html(nurl) tmp <- df %>% html_table() %>% `[[`(1) tmp <- tmp %>% separate(InstitutionCategoryState,into=intost,sep="\n") salary <- rbind(salary,tmp) } salary ``` Look at the URLs at the bottom of the main page to find beginning and ending page numbers. Visually this is easy. Programmatically we could do something like the following: ``` # https://www.insidehighered.com/aaup-compensation-survey?page=1 # https://www.insidehighered.com/aaup-compensation-survey?page=94 # What is the last page number ? We already know the answer - 94 lastnum <- df %>% html_nodes(xpath='//a') %>% html_attr("href") %>% '['(103) %>% strsplit(.,"page=") %>% '[['(1) %>% '['(2) %>% as.numeric(.) # So now we could get all pages of the survey str1 <- "https://www.insidehighered.com/aaup-compensation-survey?" str2 <- "institution-name=&professor-category=1591&page=" intost <- c("Institution","Category","State") salary <- data.frame() for (ii in 1:lastnum) { nurl <- paste(str1,str2,ii,sep="") df <- read_html(nurl) tmp <- df %>% html_table() %>% `[[`(1) tmp <- tmp %>% separate(InstitutionCategoryState,into=intost,sep="\n") salary <- rbind(salary,tmp) Sys.sleep(1) } names(salary) <- c("Institution","Category","State","AvgSalFP","AvgChgFP", "CntFP","AvgTotCompFP","SalEquityFP") salary <- salary %>% mutate(AvgSalFP=as.numeric(gsub("\\$|,","",salary$AvgSalFP))) %>% mutate(AvgTotCompFP=as.numeric(gsub("\\$|,","",salary$AvgTotCompFP))) salary %>% group_by(State,Category) %>% summarize(avg=mean(AvgSalFP)) %>% arrange(desc(avg)) ``` There are some problems: * Data is large and scattered across multiple pages * We could use above techniques to move from page to page * There is a form we could use to narrow criteria * But we have to programmatically submit the form * rvest (and other packages) let you do this 3\.4 Filling Out Forms From a Program ------------------------------------- Salary Let’s find salaries between $ 150,000 and the default max ($ 244,000\) * Find the element name associated with “Average Salary” * Establish a connection with the form (usually the url of the page) * Get a local copy of the form * Fill in the value for the “Average Salary” * Submit the lled in form * Get the results and parse them like above \` So finding the correct element is more challenging. I use Chrome to do this. Just highlight the area over the form and right click to “Insepct” the element. This opens up the developer tools. You have to dig down to find the corrext form and the element name. Here is a screen shot of my activity: Salary ``` url <- "https://www.insidehighered.com/aaup-compensation-survey" # Establish a session mysess <- html_session(url) # Get the form form_unfilled <- mysess %>% html_node("form") %>% html_form() form_filled <- form_unfilled %>% set_values("range-from"=150000) # Submit form results <- submit_form(mysess,form_filled) first_page <- results %>% html_nodes(xpath=expr) %>% html_table() first_page ``` 3\.5 PubMed ----------- Pubmed provides a rich source of information on published scientific literature. There are tutorials on how to leverage its capabilities but one thing to consider is that MESH terms are a good starting place since the search is index\-based. MeSH (Medical Subject Headings) is the NLM controlled vocabulary thesaurus used for indexing articles for PubMed. 
It’s faster and more accurate, so you can first use the MESH browser to generate the appropriate search terms and add that into the Search interface. The MESH browser can be found at <https://www.ncbi.nlm.nih.gov/mesh/> What we do here is get the links associated with each publication so we can then process each of them and pull out its abstract. ``` # "hemodialysis, home" [MeSH Terms] url<-"https://www.ncbi.nlm.nih.gov/pubmed/?term=%22hemodialysis%2C+home%22+%5BMeSH+Terms%5D" # # The results from the search will be of the form: # https://www.ncbi.nlm.nih.gov/pubmed/30380542 results <- read_html(url) %>% html_nodes("a") %>% html_attr("href") %>% grep("/[0-9]{6,6}",.,value=TRUE) %>% unique(.) results ``` ``` ## [1] "/27061610/" "/30041224/" "/25925822/" "/25925819/" "/28066912/" "/28535526/" "/27545636/" "/30041223/" "/26586045/" ## [10] "/27781373/" ``` So now we could loop through these links and get the abstracts for these results. It looks like there are approximately 20 results per page. As before we would have to dive into the underlying structure of the page to get the correct HTML pathnames or we could just look for Paragraph elements and pick out the links that way. ``` text.vec <- vector() for (ii in 1:length(results)) { string <- paste0("https://pubmed.ncbi.nlm.nih.gov",results[ii]) text.vec[ii] <- read_html(string) %>% html_nodes("p") %>% `[[`(7) %>% html_text() } # Eliminate newline characters final.vec <- gsub("\n","",text.vec) final.vec <- gsub("^\\s+","",final.vec) #final.vec <- text.vec[grep("^\n",text.vec,invert=TRUE)] final.vec ``` ``` ## [1] "Pediatric home hemodialysis is infrequently performed despite a growing need globally among patients with end-stage renal disease who do not have immediate access to a kidney transplant. In this review, we expand the scope of the Implementing Hemodialysis in the Home website and associated supplement published previously in Hemodialysis International and offer information tailored to the pediatric population. We describe the experience and outcomes of centers managing pediatric patients, and offer recommendations and practical tools to assist clinicians in providing quotidian dialysis for children, including infrastructural and staffing needs, equipment and prescriptions, and patient selection and training. " ## [2] "Home hemodialysis (HHD) has been available as a modality of renal replacement therapy since the 1960s. HHD allows intensive dialysis such as nocturnal hemodialysis or short daily hemodialysis. Previous studies have shown that patients receiving HHD have an increased survival and better quality of life compared with those receiving in-center conventional HD. However, HHD may increase the risk for specific complications such as vascular access complications, infection, loss of residual kidney function and patient and caregiver burden. In Japan, only 529 patients (0.2% of the total dialysis patients) were on maintenance HHD at the end of 2014. The most commonly perceived barriers to intensive HHD included lack of patient motivation, unwillingness to change from in-center modality, and fear of self-cannulation. However, these barriers can often be overcome by adequate predialysis education, motivational training of patient and caregiver, nurse-assisted cannulation, nurse-led home visits, a well-defined nursing/technical support system for patients, and provision of respite care. " ## [3] "This special supplement of Hemodialysis International focuses on home hemodialysis (HD). 
It has been compiled by a group of international experts in home HD who were brought together throughout 2013-2014 to construct a home HD \"manual.\" Drawing upon both the literature and their own extensive expertise, these experts have helped develop this supplement that now stands as an A-to-Z guide for any who may be unfamiliar or uncertain about how to establish and maintain a successful home HD program. " ## [4] "Prescribing a regimen that provides \"optimal dialysis\" to patients who wish to dialyze at home is of major importance, yet there is substantial variation in how home hemodialysis (HD) is prescribed. Geographic location, patient health status and clinical goals, and patient lifestyle and preferences all influence the selection of a prescription for a particular patient-there is no single prescription that provides optimal therapy for all patients, and careful weighing of potential benefit and burden is required for long-term success. This article describes how home HD prescribing patterns have changed over time and provides examples of commonly used home HD prescriptions. In addition, associated clinical outcomes and adequacy parameters as well as criteria for identifying which patients may benefit most from these diverse prescriptions are also presented. " ## [5] "Home hemodialysis (HD) was first introduced in the 1960s with a rapid increase in its use due to inability of dialysis units to accommodate patient demand. A sharp decline was subsequently seen with expanding outpatient dialysis facilities and changes in reimbursement policies. In the last decade, with emerging reports of benefits with home HD and more user-friendly equipment, there has been resurgence in home HD. However, home HD remains underutilized with considerable variations between and within countries. This paper will review the history of home HD, elaborate on its established benefits, identify some of the barriers in uptake of this modality and expand on potential strategies to overcome these barriers. " ## [6] "Home hemodialysis (HD) is undergoing a resurgence. A major driver of this is economics, however, providers are also encouraged by a combination of excellent patient outcomes and patient experiences as well as the development of newer technologies that offer ease of use. Home HD offers significant advantages in flexible scheduling and the practical implementation of extended hours dialysis. This paper explores the reasons why home HD is making a comeback and strives to offer approaches to improve the uptake of this dialysis modality. " ## [7] "The home extracorporeal hemodialysis, which aroused a great interest in the past, has not kept its promises due to the complexity and expectations for family involvement in treatment management. In the United States NxStage One portable system was proposed and designed for home use. In this work we describe, starting from the history of home hemodialysis, the method with NxStage system by comparing it with the conventional HD in 5 patients. The dialysis efficiency was similar between the two treatments, even if home hemodialysis showed a reduction in serum urea, creatinine and phosphorus. At the same time phosphate binders use decreased with an increase in serum calcium while hemoglobin increased reducing doses of erythropoietin. The method was successful in the training of the patients and their partners during hospital training and at home. 
Patients have shown great enthusiasm at the beginning and during the therapy, which is developed around the users personal needs, being able to decide at its own times during 24 hours according to personal needs, in addition to faster recovery after the dialysis. This method certainly improved the patients' wellness and increased their autonomy. " ## [8] "Most hemodialysis (HD) in Japan is based on the central dialysis fluid delivery system (CDDS). With CDDS, there is an improvement in work efficiency, reduction in cost, and a reduction in regional and institutional differences in dialysis conditions. This has resulted in an improvement in the survival rate throughout Japan. However, as the number of cases with various complications increases, it is necessary to select the optimal dialysis prescription (including hours and frequency) for each individual in order to further improve survival rates. To perform intensive HD, home HD is essential, and various prescriptions have been tried. However, several challenges remain before widespread implementation of home HD can occur. " ## [9] "Home hemodialysis (HD) is a modality of renal replacement therapy that can be safely and independently performed at home by end-stage renal disease (ESRD) patients. Home HD can be performed at the convenience of the patients on a daily basis, every other day and overnight (nocturnal). Despite the great and many perceived benefits of home HD, including the significant improvements in health outcomes and resource utilization, the adoption of home HD has been limited; lack or inadequate pre-dialysis education and training constitute a major barrier. The lack of self-confidence and/or self-efficacy to manage own therapy, lack of family and/or social support, fear of machine and cannulation of blood access and worries of possible catastrophic events represent other barriers for the implementation of home HD besides inadequate competence and/or expertise in caring for home HD patients among renal care providers (nephrologists, dialysis nurses, educators). A well-studied, planned and prepared and carefully implemented central country program supported by adequate budget can play a positive role in overcoming the challenges to home HD. Healthcare authorities, with the increasingly financial and logistic demands and the relatively higher mortality and morbidity rates of the conventional in-center HD, should tackle home HD as an attractive and cost-effective modality with more freedom, quality of life and improvement of clinical outcomes for the ESRD patients. " ## [10] "Home hemodialysis (HHD) is emerging as an important alternate renal replacement therapy. Although there are multiple clinical advantages with HHD, concerns surrounding increased risks of infection in this group of patients remain a major barrier to its implementation. In contrast to conventional hemodialysis, infection related complication represents the major morbidity in this mode of renal replacement therapy. Vascular access related infection is an important cause of infection in this population. Use of central vein catheters and buttonhole cannulation in HHD are important modifiable risk factors for HHD associated infection. Several preventive measures are suggested in the literature, which will require further prospective validation. " ``` Well that was tedious. And we processed only the first page of results. How do we “programmatically” hit the “Next” button at the bottom of the page? 
This is complicated by the fact that there appears to be some Javascript at work that we would have to somehow interact with to get the URL for the next page. Unlike with the faculty salary example, it isn’t obvious how to do this. If we hover over the “Next” button we don’t get an associated link.
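One practical way around this, previewed here and revisited in the next chapter, is to skip the HTML search pages entirely and use NCBI's E-utilities through the easyPubMed package, which retrieves records in batches rather than through a "Next" button. The sketch below is an illustration rather than tested code: it assumes that get_pubmed_ids() exposes the total hit count as Count and that fetch_pubmed_data() accepts retstart/retmax arguments, as described in the package documentation.

```
library(easyPubMed)

# Sketch: page through PubMed results in batches of 100 records
# instead of clicking "Next" in a browser
my_query <- '"hemodialysis, home" [MeSH Terms]'
ids      <- get_pubmed_ids(my_query)
n_found  <- as.numeric(ids$Count)        # total number of matching records

starts  <- seq(0, n_found - 1, by = 100) # offset for each batch
all_xml <- lapply(starts, function(s) {
  fetch_pubmed_data(ids, retstart = s, retmax = 100)
})
```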
Chapter 4 APIs ============== 4\.1 OMDB --------- Let’s look at the IMDB page which catalogues lots of information about movies. Just go to the web site and search, although here is an example link. [https://www.imdb.com/title/tt0076786/?ref\_\=fn\_al\_tt\_2](https://www.imdb.com/title/tt0076786/?ref_=fn_al_tt_2) In this case we would like to get the summary information for the movie. So we would use Selector Gadget or some other method to find the XPATH or CSS associated with this element. This is pretty easy and doesn’t present much of a problem, although for large scale mining of movie data we would run into trouble because IMDB doesn’t really like you to scrape their pages. They have an API that they would like for you to use. ``` url <- 'https://www.imdb.com/title/tt0076786/?ref_=fn_al_tt_2' summary <- read_html(url) %>% html_nodes(xpath="/html/body/div[2]/main/div/section[1]/section/div[3]/section/section/div[3]/div[2]/div[1]/div[1]/div[2]/span[3]") %>% html_text() summary ``` But here we go again. We have to parse the desired elements on this page, and then what if we wanted to follow other links or set up a general function to search IMDB for other movies of various genres, titles, directors, etc.? This is where the OMDB API (the Open Movie Database, <http://www.omdbapi.com>) comes in: it provides a simple key\-based web service for movie information. --- As an example of how this works, paste the URL below into any web browser. You must supply your own key for this to work. What you get back is a JSON formatted entry corresponding to the movie “The Godfather”. --- ``` url <- "http://www.omdbapi.com/?apikey=f7c004c&t=The+Godfather" ``` ``` library(RJSONIO) url <- "http://www.omdbapi.com/?apikey=f7c004c&t=The+Godfather" # Fetch the URL via fromJSON movie <- fromJSON("http://www.omdbapi.com/?apikey=f7c004c&t=The+Godfather") # We get back a list which is much easier to process than raw JSON or XML str(movie) ``` ``` ## List of 25 ## $ Title : chr "The Godfather" ## $ Year : chr "1972" ## $ Rated : chr "R" ## $ Released : chr "24 Mar 1972" ## $ Runtime : chr "175 min" ## $ Genre : chr "Crime, Drama" ## $ Director : chr "Francis Ford Coppola" ## $ Writer : chr "Mario Puzo, Francis Ford Coppola" ## $ Actors : chr "Marlon Brando, Al Pacino, James Caan" ## $ Plot : chr "The Godfather follows Vito Corleone, Don of the Corleone family, as he passes the mantle to his unwilling son, Michael." ## $ Language : chr "English, Italian, Latin" ## $ Country : chr "United States" ## $ Awards : chr "Won 3 Oscars. 31 wins & 30 nominations total" ## $ Poster : chr "https://m.media-amazon.com/images/M/MV5BM2MyNjYxNmUtYTAwNi00MTYxLWJmNWYtYzZlODY3ZTk3OTFlXkEyXkFqcGdeQXVyNzkwMjQ"| __truncated__ ## $ Ratings :List of 3 ## ..$ : Named chr [1:2] "Internet Movie Database" "9.2/10" ## .. ..- attr(*, "names")= chr [1:2] "Source" "Value" ## ..$ : Named chr [1:2] "Rotten Tomatoes" "97%" ## .. ..- attr(*, "names")= chr [1:2] "Source" "Value" ## ..$ : Named chr [1:2] "Metacritic" "100/100" ## .. ..- attr(*, "names")= chr [1:2] "Source" "Value" ## $ Metascore : chr "100" ## $ imdbRating: chr "9.2" ## $ imdbVotes : chr "1,742,506" ## $ imdbID : chr "tt0068646" ## $ Type : chr "movie" ## $ DVD : chr "11 May 2004" ## $ BoxOffice : chr "$134,966,411" ## $ Production: chr "N/A" ## $ Website : chr "N/A" ## $ Response : chr "True" ``` ``` movie$Plot ``` ``` ## [1] "The Godfather follows Vito Corleone, Don of the Corleone family, as he passes the mantle to his unwilling son, Michael." 
``` ``` sapply(movie$Ratings,unlist) ``` ``` ## [,1] [,2] [,3] ## Source "Internet Movie Database" "Rotten Tomatoes" "Metacritic" ## Value "9.2/10" "97%" "100/100" ``` Let’s get all the episodes for Season 1 of Game of Thrones. ``` url <- "http://www.omdbapi.com/?apikey=f7c004c&t=Game%20of%20Thrones&Season=1" movie <- fromJSON(url) str(movie,1) ``` ``` ## List of 5 ## $ Title : chr "Game of Thrones" ## $ Season : chr "1" ## $ totalSeasons: chr "8" ## $ Episodes :List of 10 ## $ Response : chr "True" ``` ``` episodes <- data.frame(do.call(rbind,movie$Episodes),stringsAsFactors = FALSE) episodes ``` ``` ## Title Released Episode imdbRating imdbID ## 1 Winter Is Coming 2011-04-17 1 9.1 tt1480055 ## 2 The Kingsroad 2011-04-24 2 8.8 tt1668746 ## 3 Lord Snow 2011-05-01 3 8.7 tt1829962 ## 4 Cripples, Bastards, and Broken Things 2011-05-08 4 8.8 tt1829963 ## 5 The Wolf and the Lion 2011-05-15 5 9.1 tt1829964 ## 6 A Golden Crown 2011-05-22 6 9.2 tt1837862 ## 7 You Win or You Die 2011-05-29 7 9.2 tt1837863 ## 8 The Pointy End 2011-06-05 8 9.0 tt1837864 ## 9 Baelor 2011-06-12 9 9.6 tt1851398 ## 10 Fire and Blood 2011-06-19 10 9.5 tt1851397 ``` 4\.2 The omdbapi package ------------------------ Wait a minute. Looks like someone created an R package that wraps all this for us. It is called omdbapi. ``` # Use devtools to install devtools::install_github("hrbrmstr/omdbapi") ``` ``` library(omdbapi) # The first time you use this you will be prompted to enter your # API key movie_df <- search_by_title("Star Wars", page = 2) (movie_df <- movie_df[,-5]) ``` ``` ## Title Year imdbID Type ## 1 Solo: A Star Wars Story 2018 tt3778644 movie ## 2 Star Wars: The Clone Wars 2008–2020 tt0458290 series ## 3 Star Wars: The Clone Wars 2008 tt1185834 movie ## 4 Star Wars: Rebels 2014–2018 tt2930604 series ## 5 Star Wars: Clone Wars 2003–2005 tt0361243 series ## 6 Star Wars: The Bad Batch 2021– tt12708542 series ## 7 The Star Wars Holiday Special 1978 tt0193524 movie ## 8 Star Wars: Visions 2021– tt13622982 series ## 9 Robot Chicken: Star Wars 2007 tt1020990 movie ## 10 Star Wars: Knights of the Old Republic 2003 tt0356070 game ``` ``` # Get lots of info on The GodFather (gf <- find_by_title("The GodFather")) ``` ``` ## # A tibble: 3 × 25 ## Title Year Rated Released Runtime Genre Director Writer Actors Plot Language Country Awards Poster Ratings Metascore imdbRating ## <chr> <chr> <chr> <date> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <list> <chr> <dbl> ## 1 The … 1972 R 1972-03-24 175 min Crim… Francis… Mario… Marlo… The … English… United… Won 3… https… <named… 100 9.2 ## 2 The … 1972 R 1972-03-24 175 min Crim… Francis… Mario… Marlo… The … English… United… Won 3… https… <named… 100 9.2 ## 3 The … 1972 R 1972-03-24 175 min Crim… Francis… Mario… Marlo… The … English… United… Won 3… https… <named… 100 9.2 ## # … with 8 more variables: imdbVotes <dbl>, imdbID <chr>, Type <chr>, DVD <date>, BoxOffice <chr>, Production <chr>, Website <chr>, ## # Response <chr> ``` ``` # Get the actors from the GodFather get_actors((gf)) ``` ``` ## [1] "Marlon Brando" "Al Pacino" "James Caan" ``` 4\.3 RSelenium -------------- Sometimes we interact with websites that use Javascript to load more text or comments in a user forum. Here is an example of that. Look at <https://www.dailystrength.org/group/dialysis> which is a website associated with people wanting to share information about dialysis. If you check the bottom of the page you will see a button. 
``` # https://www.dailystrength.org/group/dialysis library(RSelenium) library(rvest) library(tm) library(SentimentAnalysis) library(wordcloud) url <- "https://www.dailystrength.org/group/dialysis" # The website has a "show more" button that hides most of the patient posts # If we don't find a way to programmatically "click" this button then we can # only get a few of the posts and their responses. To do this we need to # use the RSelenium package which does a lot of behind the scenes work # See https://cran.r-project.org/web/packages/RSelenium/RSelenium.pdf # http://brazenly.blogspot.com/2016/05/r-advanced-web-scraping-dynamic.html # Open up a connection # rD <- rsDriver() # So, you might have to specify the version of chrome you are using # For some reason this seems now to be necessary (11/4/19) rD <- rsDriver(browser=c("chrome"),chromever="78.0.3904.70") remDr <- rD[["client"]] remDr$navigate(url) loadmorebutton <- remDr$findElement(using = 'css selector', "#load-more-discussions") # Do this a number of times to get more links loadmorebutton$clickElement() # Now get the page with more comments and questions page_source <- remDr$getPageSource() # So let's parse the contents comments <- read_html(page_source[[1]]) cumulative_comments <- vector() links <- comments %>% html_nodes(css=".newsfeed__description") %>% html_node("a") %>% html_attr("href") full_links <- paste0("https://www.dailystrength.org",links) if (length(grep("NA",full_links)) > 0) { full_links <- full_links[-grep("NA",full_links)] } ugly_xpath <- '//*[contains(concat( " ", @class, " " ), concat( " ", "comments__comment-text", " " ))] | //p' for (ii in 1:length(full_links)) { text <- read_html(full_links[ii]) %>% html_nodes(xpath=ugly_xpath) %>% html_text() length(text) <- length(text) - 1 text <- text[-1] text cumulative_comments <- c(cumulative_comments,text) } remDr$close() # stop the selenium server rD[["server"]]$stop() ``` 4\.4 EasyPubMed --------------- So there is an R package called *EasyPubMed* that helps ease access to PubMed data from within R. The idea behind this package is to be able to query NCBI Entrez and retrieve PubMed records in XML or TXT format. The PubMed records can be downloaded and saved as XML or text files if desired. According to the package authors, “Data integrity is enforced during data download, allowing to retrieve and save very large number of records effortlessly.” The bottom line is that you can do what you want after that. Let’s look at an example involving home hemodialysis. ``` library(easyPubMed) ``` Let’s do some searching. ``` my_query <- '"hemodialysis, home" [MeSH Terms]' my_entrez_id <- get_pubmed_ids(my_query) my_abstracts <- fetch_pubmed_data(my_entrez_id) my_abstracts <- custom_grep(my_abstracts,"AbstractText","char") my_abstracts[1:3] [1] "Assisted PD (assPD) is an option of home dialysis treatment for dependent end-stage renal patients and worldwide applied in different countries since more than 40 years. China and Germany shares similar trends in demographic development with a growing proportion of elderly referred to dialysis treatment. So far number of patients treated by assPD is low in both countries. We analyze experiences in the implementation process, barriers, and benefits of ass PD in the aging population to provide a model for sustainable home dialysis treatment with PD in both countries. Differences and similarities of different factors (industrial, patient and facility based) which affect utilization of assPD are discussed. 
AssPD should be promoted in China and Germany to realize the benefits of home dialysis for the aging population by providing a structured model of implementation and quality assurance." [2] "End-stage renal disease (ESRD) is the final stage of chronic kidney disease in which the kidney is not sufficient to meet the needs of daily life. It is necessary to understand the role of genes expression involved in ESRD patient responses to nocturnal hemodialysis (NHD) and to improve the immunity responsiveness. The aim of this study was to investigate novel immune-associated genes that may play important roles in patients with ESRD.The microarray expression profiles of peripheral blood in patients with ESRD before and after NHD were analyzed by network-based approaches, and then using Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes pathway analysis to explore the biological process and molecular functions of differentially expressed genes. Subsequently, a transcriptional regulatory network of the core genes and the connected transcriptional regulators was constructed. We found that NHD had a significant effect on neutrophil activation and immune response in patients with ESRD.In addition, Our findings suggest that MAPKAPK3, RHOA, ARRB2, FLOT1, MYH9, PRKCD, RHOG, PTPN6, MAPK3, CNPY3, PI3KCG, and PYGL genes maybe potential targets regulated by core transcriptional factors, including ARNT, C/EBPalpha, CEBPA, CREB1, PSG1, DAND5, SP1, GATA1, MYC, EGR2, and EGR3." [3] "Only a minority of patients with chronic kidney disease treated by hemodialysis are currently treated at home. Until relatively recently, the only type of hemodialysis machine available for these patients was a slightly smaller version of the standard machines used for in-center dialysis treatments. Areas covered: There are now an alternative generation of dialysis machines specifically designed for home hemodialysis. The home dialysis patient wants a smaller machine, which is intuitive to use, easy to trouble shoot, robust and reliable, quick to setup and put away, requiring minimal waste disposal. The machines designed for home dialysis have some similarities in terms of touch-screen patient interfaces, and using pre-prepared cartridges to speed up setting up the machine. On the other hand, they differ in terms of whether they use slower or standard dialysate flows, prepare batches of dialysis fluid, require separate water purification equipment, or whether this is integrated, or use pre-prepared sterile bags of dialysis fluid. Expert commentary: Dialysis machine complexity is one of the hurdles reducing the number of patients opting for home hemodialysis and the introduction of the newer generation of dialysis machines designed for ease of use will hopefully increase the number of patients opting for home hemodialysis." ```
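With the abstracts in hand, the text-mining packages already loaded in the RSelenium example above (tm and wordcloud) can be put to work on them. Here is a minimal sketch, assuming the my_abstracts vector from the code above; the cleanup steps and the 50-word limit are arbitrary choices for illustration.

```
library(tm)
library(wordcloud)

# Build a corpus from the abstracts and do some light cleanup
corp <- VCorpus(VectorSource(my_abstracts))
corp <- tm_map(corp, content_transformer(tolower))
corp <- tm_map(corp, removePunctuation)
corp <- tm_map(corp, removeNumbers)
corp <- tm_map(corp, removeWords, stopwords("english"))

# Term frequencies across all abstracts
tdm  <- TermDocumentMatrix(corp)
freq <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)

# The most common terms, then a quick word cloud
head(freq, 10)
wordcloud(names(freq), freq, max.words = 50)
```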
``` url <- 'https://www.imdb.com/title/tt0076786/?ref_=fn_al_tt_2' summary <- read_html(url) %>% html_nodes(xpath="/html/body/div[2]/main/div/section[1]/section/div[3]/section/section/div[3]/div[2]/div[1]/div[1]/div[2]/span[3]") %>% html_text() summary ``` But here we go again. We have to parse the desired elements on this page and then what if we wanted to follow other links or set up a general function to search IMDB for other movies of various genres, titles, directors, etc. --- So as an example on how this works. Paste the URL into any web browser. You must supply your key for this to work. What you get back is a JSON formatted entry corresponding to ”The GodFather”movie. --- ``` url <- "http://www.omdbapi.com/?apikey=f7c004c&t=The+Godfather" ``` ``` library(RJSONIO) url <- "http://www.omdbapi.com/?apikey=f7c004c&t=The+Godfather" # Fetch the URL via fromJSON movie <- fromJSON("http://www.omdbapi.com/?apikey=f7c004c&t=The+Godfather") # We get back a list which is much easier to process than raw JSON or XML str(movie) ``` ``` ## List of 25 ## $ Title : chr "The Godfather" ## $ Year : chr "1972" ## $ Rated : chr "R" ## $ Released : chr "24 Mar 1972" ## $ Runtime : chr "175 min" ## $ Genre : chr "Crime, Drama" ## $ Director : chr "Francis Ford Coppola" ## $ Writer : chr "Mario Puzo, Francis Ford Coppola" ## $ Actors : chr "Marlon Brando, Al Pacino, James Caan" ## $ Plot : chr "The Godfather follows Vito Corleone, Don of the Corleone family, as he passes the mantle to his unwilling son, Michael." ## $ Language : chr "English, Italian, Latin" ## $ Country : chr "United States" ## $ Awards : chr "Won 3 Oscars. 31 wins & 30 nominations total" ## $ Poster : chr "https://m.media-amazon.com/images/M/MV5BM2MyNjYxNmUtYTAwNi00MTYxLWJmNWYtYzZlODY3ZTk3OTFlXkEyXkFqcGdeQXVyNzkwMjQ"| __truncated__ ## $ Ratings :List of 3 ## ..$ : Named chr [1:2] "Internet Movie Database" "9.2/10" ## .. ..- attr(*, "names")= chr [1:2] "Source" "Value" ## ..$ : Named chr [1:2] "Rotten Tomatoes" "97%" ## .. ..- attr(*, "names")= chr [1:2] "Source" "Value" ## ..$ : Named chr [1:2] "Metacritic" "100/100" ## .. ..- attr(*, "names")= chr [1:2] "Source" "Value" ## $ Metascore : chr "100" ## $ imdbRating: chr "9.2" ## $ imdbVotes : chr "1,742,506" ## $ imdbID : chr "tt0068646" ## $ Type : chr "movie" ## $ DVD : chr "11 May 2004" ## $ BoxOffice : chr "$134,966,411" ## $ Production: chr "N/A" ## $ Website : chr "N/A" ## $ Response : chr "True" ``` ``` movie$Plot ``` ``` ## [1] "The Godfather follows Vito Corleone, Don of the Corleone family, as he passes the mantle to his unwilling son, Michael." 
``` ``` sapply(movie$Ratings,unlist) ``` ``` ## [,1] [,2] [,3] ## Source "Internet Movie Database" "Rotten Tomatoes" "Metacritic" ## Value "9.2/10" "97%" "100/100" ``` Let’s Get all the Episodes for Season 1 of Game of Thrones ``` url <- "http://www.omdbapi.com/?apikey=f7c004c&t=Game%20of%20Thrones&Season=1" movie <- fromJSON(url) str(movie,1) ``` ``` ## List of 5 ## $ Title : chr "Game of Thrones" ## $ Season : chr "1" ## $ totalSeasons: chr "8" ## $ Episodes :List of 10 ## $ Response : chr "True" ``` ``` episodes <- data.frame(do.call(rbind,movie$Episodes),stringsAsFactors = FALSE) episodes ``` ``` ## Title Released Episode imdbRating imdbID ## 1 Winter Is Coming 2011-04-17 1 9.1 tt1480055 ## 2 The Kingsroad 2011-04-24 2 8.8 tt1668746 ## 3 Lord Snow 2011-05-01 3 8.7 tt1829962 ## 4 Cripples, Bastards, and Broken Things 2011-05-08 4 8.8 tt1829963 ## 5 The Wolf and the Lion 2011-05-15 5 9.1 tt1829964 ## 6 A Golden Crown 2011-05-22 6 9.2 tt1837862 ## 7 You Win or You Die 2011-05-29 7 9.2 tt1837863 ## 8 The Pointy End 2011-06-05 8 9.0 tt1837864 ## 9 Baelor 2011-06-12 9 9.6 tt1851398 ## 10 Fire and Blood 2011-06-19 10 9.5 tt1851397 ``` 4\.2 The omdbapi package ------------------------ Wait a minute. Looks like someone created an R package that wraps all this for us. It is called omdbapi ``` # Use devtools to install devtools::install_github("hrbrmstr/omdbapi") ``` ``` library(omdbapi) # The first time you use this you will be prompted to enter your # API key movie_df <- search_by_title("Star Wars", page = 2) (movie_df <- movie_df[,-5]) ``` ``` ## Title Year imdbID Type ## 1 Solo: A Star Wars Story 2018 tt3778644 movie ## 2 Star Wars: The Clone Wars 2008–2020 tt0458290 series ## 3 Star Wars: The Clone Wars 2008 tt1185834 movie ## 4 Star Wars: Rebels 2014–2018 tt2930604 series ## 5 Star Wars: Clone Wars 2003–2005 tt0361243 series ## 6 Star Wars: The Bad Batch 2021– tt12708542 series ## 7 The Star Wars Holiday Special 1978 tt0193524 movie ## 8 Star Wars: Visions 2021– tt13622982 series ## 9 Robot Chicken: Star Wars 2007 tt1020990 movie ## 10 Star Wars: Knights of the Old Republic 2003 tt0356070 game ``` ``` # Get lots of info on The GodFather (gf <- find_by_title("The GodFather")) ``` ``` ## # A tibble: 3 × 25 ## Title Year Rated Released Runtime Genre Director Writer Actors Plot Language Country Awards Poster Ratings Metascore imdbRating ## <chr> <chr> <chr> <date> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <list> <chr> <dbl> ## 1 The … 1972 R 1972-03-24 175 min Crim… Francis… Mario… Marlo… The … English… United… Won 3… https… <named… 100 9.2 ## 2 The … 1972 R 1972-03-24 175 min Crim… Francis… Mario… Marlo… The … English… United… Won 3… https… <named… 100 9.2 ## 3 The … 1972 R 1972-03-24 175 min Crim… Francis… Mario… Marlo… The … English… United… Won 3… https… <named… 100 9.2 ## # … with 8 more variables: imdbVotes <dbl>, imdbID <chr>, Type <chr>, DVD <date>, BoxOffice <chr>, Production <chr>, Website <chr>, ## # Response <chr> ``` ``` # Get the actors from the GodFather get_actors((gf)) ``` ``` ## [1] "Marlon Brando" "Al Pacino" "James Caan" ``` 4\.3 RSelenium -------------- Sometimes we interact with websites that use Javascript to load more text or comments in a user forum. Here is an example of that. Look at <https://www.dailystrength.org/group/dialysis> which is a website associated with people wanting to share information about dialysis. If you check the bottom of the pag you will see a button. 
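The full RSelenium script for this page appears earlier in this chapter. One practical detail: the "load more" button usually has to be clicked several times to expose older posts, and each click needs a moment for the extra content to load. Here is a minimal sketch of that loop, assuming the same `remDr` session and the same `#load-more-discussions` selector used in that script (the number of clicks and the pause length are arbitrary choices):

```
# Click the "show more" button several times, pausing between clicks so the
# newly loaded discussions have time to render before the next click.
for (i in 1:5) {
  loadmorebutton <- remDr$findElement(using = 'css selector',
                                      "#load-more-discussions")
  loadmorebutton$clickElement()
  Sys.sleep(2)   # give the page a couple of seconds to load more posts
}
```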
Getting Cleaning and Wrangling Data
steviep42.github.io
https://steviep42.github.io/webscraping/book/bagofwords.html
Chapter 5 Bag of Words Sentiment Analysis ========================================= Once we have a collection of text it’s interesting to figure out what it might mean or infer \- if anything at all. In text analysis and NLP (Natural Language Processing) we talk about a “Bag of Words” to describe a collection or “corpus” of unstructured text. What do we do with a “bag of words” ? * Extract meaning from collections of text (without reading !) * Detect and analyze patterns in unstructured textual collections * Use Natural Language Processing techniques to reach conclusions * Discover what ideas occur in text and how they might be linked * Determine if the discovered patterns can be used to predict behavior * Identify interesting ideas that might otherwise be ignored 5\.1 Workflow ------------- * Identify and Obtain text (e.g. websites, Twitter, Databases, PDFs, surveys) * Create a text “Corpus” \- a structure that contains the raw text * Apply transformations: + Normalize case (convert to lower case) + Remove punctuation and stopwords + Remove domain specific stopwords * Perform Analysis and Visualizations (word frequency, tagging, wordclouds) * Do Sentiment Analysis R has Packages to Help. These are just some of them: * QDAP \- Quantitative Discourse Analysis Package * tm \- text mining applications within R * tidytext \- Text Mining using dplyr, ggplot2 and other tidyverse tools * SentimentAnalysis \- For Sentiment Analysis However, consider that: * Some of these are easier to use than others * Some can be kind of a problem to install (e.g. qdap) * They all offer similar capabilities * We’ll look at tidytext 5\.2 Simple Example ------------------- Find the URL for Lincoln’s March 4, 1865 Speech: ``` url <- "https://millercenter.org/the-presidency/presidential-speeches/march-4-1865-second-inaugural-address" library(rvest) lincoln_doc <- read_html(url) %>% html_nodes(".view-transcript") %>% html_text() lincoln_doc ``` ``` ## [1] "TranscriptFellow-Countrymen: At this second appearing to take the oath of the Presidential office there is less occasion for an extended address than there was at the first. Then a statement somewhat in detail of a course to be pursued seemed fitting and proper. Now, at the expiration of four years, during which public declarations have been constantly called forth on every point and phase of the great contest which still absorbs the attention and engrosses the energies of the nation, little that is new could be presented. The progress of our arms, upon which all else chiefly depends, is as well known to the public as to myself, and it is, I trust, reasonably satisfactory and encouraging to all. With high hope for the future, no prediction in regard to it is ventured.On the occasion corresponding to this four years ago all thoughts were anxiously directed to an impending civil war. All dreaded it, all sought to avert it. While the inaugural address was being delivered from this place, devoted altogether to saving the Union without war, insurgent agents were in the city seeking to destroy it without war-seeking to dissolve the Union and divide effects by negotiation. Both parties deprecated war, but one of them would make war rather than let the nation survive, and the other would accept war rather than let it perish, and the war came.One-eighth of the whole population were colored slaves, not distributed generally over the Union. but localized in the southern part of it. These slaves constituted a peculiar and powerful interest.
All knew that this interest was somehow the cause of the war. To strengthen, perpetuate, and extend this interest was the object for which the insurgents would rend the Union even by war, while the Government claimed no right to do more than to restrict the territorial enlargement of it. Neither party expected for the war the magnitude or the duration which it has already attained. Neither anticipated that the cause of the conflict might cease with or even before the conflict itself should cease. Each looked for an easier triumph, and a result less fundamental and astounding. Both read the same Bible and pray to the same God, and each invokes His aid against the other. It may seem strange that any men should dare to ask a just God's assistance in wringing their bread from the sweat of other men's faces, but let us judge not, that we be not judged. The prayers of both could not be answered. That of neither has been answered fully. The Almighty has His own purposes. \"Woe unto the world because of offenses; for it must needs be that offenses come, but woe to that man by whom the offense cometh.\" If we shall suppose that American slavery is one of those offenses which, in the providence of God, must needs come, but which, having continued through His appointed time, He now wills to remove, and that He gives to both North and South this terrible war as the woe due to those by whom the offense came, shall we discern therein any departure from those divine attributes which the believers in a living God always ascribe to Him? Fondly do we hope, fervently do we pray, that this mighty scourge of war may speedily pass away. Yet, if God wills that it continue until all the wealth piled by the bondsman's two hundred and fifty years of unrequited toil shall be sunk, and until every drop of blood drawn with the lash shall be paid by another drawn with the sword, as was said three thousand years ago, so still it must be said \"the judgments of the Lord are true and righteous altogether.\"With malice toward none, with charity for all, with firmness in the fight as God gives us to see the right, let us strive on to finish the work we are in, to bind up the nation's wounds, to care for him who shall have borne the battle and for his widow and his orphan, to do all which may achieve and cherish a just and lasting peace among ourselves and with all nations." ``` There are probably lots of words that don’t really “matter” or contribute to the “real” meaning of the speech. ``` word_vec <- unlist(strsplit(lincoln_doc," ")) word_vec[1:20] ``` ``` ## [1] "TranscriptFellow-Countrymen:" "" "At" "this" ## [5] "second" "appearing" "to" "take" ## [9] "the" "oath" "of" "the" ## [13] "Presidential" "office" "there" "is" ## [17] "less" "occasion" "for" "an" ``` ``` sort(table(word_vec),decreasing = TRUE)[1:10] ``` ``` ## word_vec ## the to and of that for be in it a ## 54 26 24 22 11 9 8 8 8 7 ``` How do we remove all the uninteresting words ? 
We could do it manually ``` # Remove all punctuation marks word_vec <- gsub("[[:punct:]]","",word_vec) stop_words <- c("the","to","and","of","the","for","in","it", "a","this","which","by","is","an","hqs","from", "that","with","as") for (ii in 1:length(stop_words)) { for (jj in 1:length(word_vec)) { if (stop_words[ii] == word_vec[jj]) { word_vec[jj] <- "" } } } word_vec <- word_vec[word_vec != ""] sort(table(word_vec),decreasing = TRUE)[1:10] ``` ``` ## word_vec ## war all be we but God shall was do let ## 11 8 8 6 5 5 5 5 4 4 ``` ``` word_vec[1:30] ``` ``` ## [1] "TranscriptFellowCountrymen" "At" "second" "appearing" ## [5] "take" "oath" "Presidential" "office" ## [9] "there" "less" "occasion" "extended" ## [13] "address" "than" "there" "was" ## [17] "at" "first" "Then" "statement" ## [21] "somewhat" "detail" "course" "be" ## [25] "pursued" "seemed" "fitting" "proper" ## [29] "Now" "at" ``` 5\.3 tidytext ------------- So the tidytext package provides functions to convert your body of text into individual **tokens**, which then simplifies the removal of less meaningful words and the creation of word frequency counts. The first thing you do is to create a data frame where there is one line for each body of text. In this case, since we have only one long string of text, this will be a one\-line data frame. ``` library(tidytext) library(tidyr) text_df <- data_frame(line = 1:length(lincoln_doc), text = lincoln_doc) text_df ``` ``` ## # A tibble: 1 × 2 ## line text ## <int> <chr> ## 1 1 "TranscriptFellow-Countrymen: At this second appearing to take the oath of the Presidential office there is less occasion f… ``` The next step is to break up each of the text lines (we have only 1\) into individual rows, one for each word. We also want to count the number of times that each word appears. This is known as **tokenizing** the data frame. ``` token_text <- text_df %>% unnest_tokens(word, text) # Let's now count them token_text %>% count(word,sort=TRUE) ``` ``` ## # A tibble: 339 × 2 ## word n ## <chr> <int> ## 1 the 58 ## 2 to 27 ## 3 and 24 ## 4 of 22 ## 5 it 13 ## 6 that 12 ## 7 war 12 ## 8 all 10 ## 9 for 9 ## 10 in 9 ## # … with 329 more rows ``` But we need to get rid of the “stop words.” It’s a good thing that the **tidytext** package has a way to filter out the common words that do not significantly contribute to the meaning of the overall text. The **stop\_words** data frame is built into **tidytext**. Take a look to see some of the words contained therein: ``` data(stop_words) # Sample 40 random stop words stop_words %>% sample_n(40) ``` ``` ## # A tibble: 40 × 2 ## word lexicon ## <chr> <chr> ## 1 do onix ## 2 where SMART ## 3 contains SMART ## 4 him snowball ## 5 seeming SMART ## 6 ended onix ## 7 following SMART ## 8 area onix ## 9 across onix ## 10 ordered onix ## # … with 30 more rows ``` ``` # Now remove stop words from the document tidy_text <- token_text %>% anti_join(stop_words) ``` ``` ## Joining, by = "word" ``` ``` # This could also be done by the following. I point this out only because some people react # negatively to "joins" although fully understanding what joins are can only help you since # much of what the dplyr package does is based on SQL type joins.
tidy_text <- token_text %>% filter(!word %in% stop_words$word) tidy_text %>% count(word,sort=TRUE) ``` ``` ## # A tibble: 193 × 2 ## word n ## <chr> <int> ## 1 war 12 ## 2 god 5 ## 3 union 4 ## 4 offenses 3 ## 5 woe 3 ## 6 address 2 ## 7 ago 2 ## 8 altogether 2 ## 9 answered 2 ## 10 cease 2 ## # … with 183 more rows ``` ``` tidy_text %>% count(word,sort=TRUE) ``` ``` ## # A tibble: 193 × 2 ## word n ## <chr> <int> ## 1 war 12 ## 2 god 5 ## 3 union 4 ## 4 offenses 3 ## 5 woe 3 ## 6 address 2 ## 7 ago 2 ## 8 altogether 2 ## 9 answered 2 ## 10 cease 2 ## # … with 183 more rows ``` ``` tidy_text %>% count(word, sort = TRUE) %>% filter(n > 2) %>% mutate(word = reorder(word, n)) %>% ggplot(aes(word, n)) + geom_col() + xlab(NULL) + coord_flip() ``` 5\.4 Back To The PubMed Example ------------------------------- We have around 935 abstracts that we mess with based on our work using the **easyPubMed** package ``` # Create a data frame out of the cleaned up abstracts library(tidytext) library(dplyr) text_df <- data_frame(line = 1:length(my_abstracts), text = my_abstracts) token_text <- text_df %>% unnest_tokens(word, text) # Many of these words aren't helpful token_text %>% count(total=word,sort=TRUE) ``` ``` ## # A tibble: 6,936 × 2 ## total n ## <chr> <int> ## 1 the 3062 ## 2 of 2896 ## 3 and 2871 ## 4 in 1915 ## 5 to 1884 ## 6 a 1373 ## 7 dialysis 1365 ## 8 patients 1335 ## 9 home 1281 ## 10 with 1035 ## # … with 6,926 more rows ``` ``` # Now remove stop words data(stop_words) tidy_text <- token_text %>% anti_join(stop_words) # This could also be done by the following. I point this out only because some people react # negatively to "joins" although fully understanding what joins are can only help you since # much of what the dplyr package does is based on SQL type joins. tidy_text <- token_text %>% filter(!word %in% stop_words$word) # Arrange the text by descending word frequency tidy_text %>% count(word, sort = TRUE) ``` ``` ## # A tibble: 6,460 × 2 ## word n ## <chr> <int> ## 1 dialysis 1365 ## 2 patients 1335 ## 3 home 1281 ## 4 hemodialysis 674 ## 5 hd 463 ## 6 hhd 440 ## 7 patient 395 ## 8 pd 303 ## 9 renal 279 ## 10 study 268 ## # … with 6,450 more rows ``` Some of the most frequently occurring words are in fact “dialysis,” “patients” so maybe we should consider them to be stop words also since we already know quite well that the overall theme is, well, dialysis and kidneys. There are also synonymns and abbreviations that are somewhat redundant such as “pdd,”“pd,”“hhd” so let’s eliminate them also. ``` tidy_text <- token_text %>% filter(!word %in% c(stop_words$word,"dialysis","patients","home","kidney", "hemodialysis","haemodialysis","patient","hhd", "pd","peritoneal","hd","renal","study","care", "ci","chd","nhd","disease","treatment")) tidy_text %>% count(word, sort = TRUE) ``` ``` ## # A tibble: 6,441 × 2 ## word n ## <chr> <int> ## 1 therapy 193 ## 2 conventional 191 ## 3 survival 191 ## 4 center 186 ## 5 compared 180 ## 6 clinical 175 ## 7 nocturnal 171 ## 8 outcomes 171 ## 9 quality 171 ## 10 data 161 ## # … with 6,431 more rows ``` Let’s do some plotting of these words ``` library(ggplot2) tidy_text %>% count(word, sort = TRUE) %>% filter(n > 120) %>% mutate(word = reorder(word, n)) %>% ggplot(aes(word, n)) + geom_col() + xlab(NULL) + coord_flip() ``` Okay, it looks like there are numbers in there which might be useful. I suspect that the “95” is probably associated with the idea of a confidence interval. But there are other references to numbers. 
``` grep("^[0-9]{1,3}$",tidy_text$word)[1:20] ``` ``` ## [1] 9 273 275 284 288 293 296 305 308 387 388 554 614 671 679 680 682 744 758 762 ``` ``` tidy_text_nonum <- tidy_text[grep("^[0-9]{1,3}$",tidy_text$word,invert=TRUE),] ``` Okay well I think maybe we have some reasonable data to examine. As you might have realized by now, manipulating data to get it “clean” can be tedious and frustrating though it is an inevitable part of the process. ``` tidy_text_nonum %>% count(word, sort = TRUE) %>% filter(n > 120) %>% mutate(word = reorder(word, n)) %>% ggplot(aes(word, n)) + geom_col() + xlab(NULL) + coord_flip() ``` ### 5\.4\.1 How Do You Feel ? The next step is to explore what some of these words might mean. The **tidytext** package has four dictionaries that help you figure out what sentiment is being expressed by your data frame. ``` # NRC Emotion Lexicon from Saif Mohammad and Peter Turney get_sentiments("nrc") %>% sample_n(20) ``` ``` ## # A tibble: 20 × 2 ## word sentiment ## <chr> <chr> ## 1 disgrace negative ## 2 undying anticipation ## 3 warn fear ## 4 independence anticipation ## 5 judiciary anticipation ## 6 doll joy ## 7 glorify joy ## 8 alien disgust ## 9 purify joy ## 10 flop disgust ## 11 sanguine positive ## 12 beastly negative ## 13 blackness negative ## 14 wear trust ## 15 neutral trust ## 16 affluence positive ## 17 invade anger ## 18 alienation anger ## 19 deceit disgust ## 20 contemptible negative ``` ``` # the sentiment lexicon from Bing Liu and collaborators get_sentiments("bing") %>% sample_n(20) ``` ``` ## # A tibble: 20 × 2 ## word sentiment ## <chr> <chr> ## 1 nurturing positive ## 2 saintliness positive ## 3 split negative ## 4 object negative ## 5 mistrust negative ## 6 insolvent negative ## 7 extravagant negative ## 8 thinner positive ## 9 bombastic negative ## 10 disapointment negative ## 11 immoderate negative ## 12 dungeons negative ## 13 revengefully negative ## 14 mournful negative ## 15 sagacity positive ## 16 upgraded positive ## 17 adequate positive ## 18 spendy negative ## 19 awfully negative ## 20 displaced negative ``` ``` # Tim Loughran and Bill McDonald get_sentiments("loughran") %>% sample_n(20) ``` ``` ## # A tibble: 20 × 2 ## word sentiment ## <chr> <chr> ## 1 duress negative ## 2 opposes negative ## 3 resigns negative ## 4 limitation negative ## 5 claimant litigious ## 6 cancel negative ## 7 honored positive ## 8 incompatibility negative ## 9 premiere positive ## 10 variably uncertainty ## 11 reassessments negative ## 12 devolved negative ## 13 lingering negative ## 14 unpredicted negative ## 15 legalized litigious ## 16 obstructing negative ## 17 forfeited negative ## 18 foreclosure negative ## 19 prejudicing negative ## 20 gained positive ``` ``` # Pull out words that correspond to joy nrc_joy <- get_sentiments("nrc") %>% filter(sentiment == "joy") nrc_joy ``` ``` ## # A tibble: 689 × 2 ## word sentiment ## <chr> <chr> ## 1 absolution joy ## 2 abundance joy ## 3 abundant joy ## 4 accolade joy ## 5 accompaniment joy ## 6 accomplish joy ## 7 accomplished joy ## 8 achieve joy ## 9 achievement joy ## 10 acrobat joy ## # … with 679 more rows ``` So we will use the **nrc** sentiment dictionary to see the “sentiment” expressed in our abstracts. 
``` bing_word_counts <- tidy_text_nonum %>% inner_join(get_sentiments("nrc")) %>% count(word,sentiment,sort=TRUE) ``` ``` ## Joining, by = "word" ``` Plot the positive vs negative words ``` bing_word_counts %>% group_by(sentiment) %>% top_n(10) %>% ungroup() %>% mutate(word = reorder(word, n)) %>% ggplot(aes(word, n, fill = sentiment)) + geom_col(show.legend = FALSE) + facet_wrap(~sentiment, scales = "free_y") + labs(y = "Contribution to sentiment", x = NULL) + coord_flip() ``` ``` ## Selecting by n ``` Let’s create a word cloud ``` library(wordcloud) # tidy_text_nonum %>% count(word) %>% with(wordcloud(word,n,max.words=90,scale=c(4,.5),colors=brewer.pal(8,"Dark2"))) ``` 5\.5 BiGrams ------------ Let’s look at bigrams. We need to go back to the cleaned abstracts and pair words to get phrases that might be suggestive of some sentiment ``` text_df <- data_frame(line = 1:length(my_abstracts), text = my_abstracts) dialysis_bigrams <- text_df %>% unnest_tokens(bigram, text, token = "ngrams", n = 2) dialysis_bigrams %>% count(bigram, sort = TRUE) ``` ``` ## # A tibble: 41,738 × 2 ## bigram n ## <chr> <int> ## 1 in the 382 ## 2 of the 310 ## 3 home dialysis 300 ## 4 home hemodialysis 279 ## 5 of home 195 ## 6 peritoneal dialysis 193 ## 7 associated with 174 ## 8 home hd 153 ## 9 home haemodialysis 144 ## 10 in center 144 ## # … with 41,728 more rows ``` But we have to filter out stop words ``` library(tidyr) bigrams_sep <- dialysis_bigrams %>% separate(bigram,c("word1","word2"),sep=" ") stop_list <- c(stop_words$word,"dialysis","patients","home","kidney", "hemodialysis","haemodialysis","treatment","patient","hhd", "pd","peritoneal","hd","renal","study","care", "ci","chd","nhd","esrd","lt","95","0.001") bigrams_filtered <- bigrams_sep %>% filter(!word1 %in% stop_list) %>% filter(!word2 %in% stop_list) bigram_counts <- bigrams_filtered %>% count(word1, word2, sort = TRUE) bigrams_united <- bigrams_filtered %>% unite(bigram, word1, word2, sep = " ") bigrams_united %>% count(bigram, sort = TRUE) %>% print(n=25) ``` ``` ## # A tibble: 11,842 × 2 ## bigram n ## <chr> <int> ## 1 replacement therapy 71 ## 2 vascular access 65 ## 3 technique failure 54 ## 4 confidence interval 41 ## 5 left ventricular 39 ## 6 blood pressure 36 ## 7 short daily 35 ## 8 clinical outcomes 33 ## 9 thrice weekly 30 ## 10 technique survival 29 ## 11 hazard ratio 26 ## 12 quality improvement 26 ## 13 adverse events 22 ## 14 6 months 21 ## 15 access related 21 ## 16 arteriovenous fistula 21 ## 17 12 months 19 ## 18 ventricular mass 18 ## 19 3 times 15 ## 20 buttonhole cannulation 15 ## 21 cost effective 15 ## 22 observational studies 15 ## 23 retrospective cohort 15 ## 24 cost effectiveness 14 ## 25 daily life 14 ## # … with 11,817 more rows ``` ``` library(tidyquant) bigram_counts %>% filter(n > 30) %>% ggplot(aes(x = reorder(word1, -n), y = reorder(word2, -n), fill = n)) + geom_tile(alpha = 0.8, color = "white") + scale_fill_gradientn(colours = c(palette_light()[[1]], palette_light()[[2]])) + coord_flip() + theme_tq() + theme(legend.position = "right") + theme(axis.text.x = element_text(angle = 45, vjust = 1, hjust = 1)) + labs(x = "first word in pair", y = "second word in pair") ```
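Before leaving the bag of words approach, it can also be useful to roll the word\-level sentiment up into a single score per abstract. The following is a minimal sketch that reuses the tidytext pattern from above and assumes `my_abstracts` is still in memory; it scores each abstract as the number of positive words minus the number of negative words under the bing lexicon:

```
library(dplyr)
library(tidyr)
library(tidytext)

# One row per abstract, then one row per word, then attach the bing lexicon
abstract_sentiment <- data_frame(line = 1:length(my_abstracts),
                                 text = my_abstracts) %>%
  unnest_tokens(word, text) %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(line, sentiment) %>%
  spread(sentiment, n, fill = 0) %>%   # columns: negative, positive
  mutate(score = positive - negative) %>%
  arrange(desc(score))

head(abstract_sentiment)   # abstracts with the most "positive" vocabulary
```

Keep in mind that in a clinical corpus a low score often reflects vocabulary (words like “failure” or “adverse”) rather than a genuinely negative message, so these scores are best treated as a rough screen.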
Getting Cleaning and Wrangling Data
brouwern.github.io
https://brouwern.github.io/lbrb/installing-packages.html
Chapter 3 Installing *R* packages ================================= **By**: Avril Coghlan. **Adapted, edited \& expanded**: Nathan Brouwer under the Creative Commons 3\.0 Attribution License [(CC BY 3\.0\)](https://creativecommons.org/licenses/by/3.0/). *R* is a programming language, and **packages** (aka **libraries**) are bundles of software built using *R*. Most sessions using *R* involve using additional *R* packages. This is especially true for bioinformatics and computational biology. > **NOTE**: If you are working in an RStudio Cloud environment organized by someone else (e.g. a course instructor), they likely are taking care of many of the package management issues. The following information is still useful to be familiar with. 3\.1 Downloading packages with the RStudio IDE ---------------------------------------------- There is a point\-and\-click interface for installing *R* packages in RStudio. There is a brief introduction to downloading packages on this site: [http://web.cs.ucla.edu/\~gulzar/rstudio/](http://web.cs.ucla.edu/~gulzar/rstudio/) I’ve summarized it here: 1. Click on the “Packages” tab in the bottom\-right section and then click on “Install”. The following dialog box will appear. 2. In the “Install Packages” dialog, write the package name you want to install under the Packages field and then click install. This will install the package you searched for or give you a list of matching packages based on the text you entered. 3\.2 Downloading packages with the function `install.packages()` ---------------------------------------------------------------- The easiest way to install a package if you know its name is to use the *R* function `install.packages()`. Note that it might be better to call this “download.packages” since after you install it, you also have to load it! Frequently I will include `install.packages(...)` at the beginning of a chapter the first time we use a package to make sure the package is downloaded. Note, however, that if you have already downloaded the package, running `install.packages(...)` will download a new copy. Packages do get updated from time to time, but it’s best to re\-run `install.packages(...)` only occasionally. We’ll download a package used for plotting called `ggplot2`, which stands for “Grammar of Graphics.” `ggplot2` was developed by Dr. [Hadley Wickham](http://hadley.nz/), who is now the Chief Scientist at RStudio. To download `ggplot2`, run the following command: ``` install.packages("ggplot2") # note the " " ``` Often when you download a package you’ll see a fair bit of angry\-looking red text, and sometimes other things will pop up. Usually there’s nothing of interest here, but sometimes you need to read it carefully for hints about why something didn’t work. 3\.3 Using packages after they are downloaded --------------------------------------------- To actually make the functions in a package accessible you need to use the `library()` command. Note that this is *not* in quotes. ``` library(ggplot2) # note: NO " " ```
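If you want to avoid re\-downloading a package you already have, a common idiom is to check whether it is installed first. This is a small sketch using base R’s `requireNamespace()`; the package name is just the `ggplot2` example from above:

```
# Only download ggplot2 if it is not already installed, then load it
if (!requireNamespace("ggplot2", quietly = TRUE)) {
  install.packages("ggplot2")
}
library(ggplot2)
```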
Chapter 4 Installing Bioconductor
=================================

**By**: Avril Coghlan. **Adapted, edited and expanded**: Nathan Brouwer under the Creative Commons 3\.0 Attribution License [(CC BY 3\.0\)](https://creativecommons.org/licenses/by/3.0/), including details on installing Bioconductor and the common prompts and error messages that appear during installation.

4\.1 Bioconductor
-----------------

*R* **packages** (aka “libraries”) can live in many places. Most are accessed via **CRAN**, the **Comprehensive R Archive Network**. The bioinformatics and computational biology community also has its own package hosting system called [Bioconductor](https://www.bioconductor.org). *R* has played an important part in the development and application of bioinformatics techniques in the 21st century. Bioconductor 1\.0 was released in 2002 with 15 packages. As of winter 2021, there are almost 2000 packages in the current release!

> **NOTE**: If you are working in an RStudio Cloud environment organized by someone else (e.g. a course instructor), they likely are taking care of most package management issues, including setting up Bioconductor. The following information is still useful to be familiar with.

To interface with Bioconductor you need the [BiocManager](https://cran.r-project.org/web/packages/BiocManager/vignettes/BiocManager.html) package. The Bioconductor team has put BiocManager on CRAN to allow you to set up interactions with Bioconductor. See the [BiocManager documentation](https://cran.r-project.org/web/packages/BiocManager/vignettes/BiocManager.html) for more information. Note that if you have an old version of R you will need to update it to interact with Bioconductor.

4\.2 Installing BiocManager
---------------------------

BiocManager can be installed using the `install.packages()` command.

```
install.packages("BiocManager") # Remember the " "; don't worry about the red text
```

Once downloaded, BiocManager needs to be explicitly loaded into your active R session using `library()`:

```
library(BiocManager) # no quotes; again, ignore the red text
```

Individual Bioconductor packages can then be downloaded using the `install()` command. An essential package is `Biostrings`. To install it, type:

```
BiocManager::install("Biostrings")
```
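As a small aside, `install()` also accepts a vector of package names, so several Bioconductor packages can be installed in one call. In this sketch the second package name is just an example; it isn’t used elsewhere in this chapter.

```
# A minimal sketch: install several Bioconductor packages at once.
# "GenomicRanges" is included purely as an example of a second package.
BiocManager::install(c("Biostrings", "GenomicRanges"))
```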
4\.3 The ins and outs of package installation
---------------------------------------------

**IMPORTANT**: Bioconductor has many **dependencies** \- other packages which it relies on. When you install Bioconductor packages you may need to update these packages. If something seems to not be working during this process, restart R and begin the Bioconductor installation process again until things work. Below I discuss the series of prompts I had to deal with while re\-installing Biostrings while editing this chapter.

### 4\.3\.1 Updating other packages when downloading a package

When I re\-installed `Biostrings` while writing this I was given a HUGE block of red text that contained something like what’s shown below (this is only about 1/3 of the actual output!):

```
'getOption("repos")' replaces Bioconductor standard repositories, see '?repositories' for details
replacement repositories: CRAN: https://cran.rstudio.com/
Bioconductor version 3.11 (BiocManager 1.30.16), R 4.0.5 (2021-03-31)
Old packages: 'ade4', 'ape', 'aster', 'bayestestR', 'bio3d', 'bitops', 'blogdown', 'bookdown',
  'brio', 'broom', 'broom.mixed', 'broomExtra', 'bslib', 'cachem', 'callr', 'car', 'circlize',
  'class', 'cli', 'cluster', 'colorspace', 'corrplot', 'cpp11', 'curl', 'devtools', 'DHARMa',
  'doBy', 'dplyr', 'DT', 'e1071', 'ellipsis', 'emmeans', 'emojifont', 'extRemes', 'fansi',
  'flextable', 'forecast', 'formatR', 'gap', 'gargle', 'gert', 'GGally'
```

Hidden at the bottom was a prompt: `"Update all/some/none? [a/s/n]:"` It’s a little vague, but what it wants me to do is type in `a`, `s` or `n` and press enter to tell it what to do. I almost always choose `a`, though this may take a while to update everything.

### 4\.3\.2 Packages “from source”

You are likely to get lots of random\-looking feedback from R when doing Bioconductor\-related installations. Look carefully for any prompts at the very last line. While updating `Biostrings` I was told: “*There are binary versions available but the source versions are later:*” and given a table of packages. I was then asked “*Do you want to install from sources the packages which need compilation? (Yes/no/cancel)*” I almost always choose “no”.

### 4\.3\.3 More on angry red text

After the prompt about packages from source, R proceeded to download a lot of updates to packages, which took a few minutes. Lots of red text scrolled by, but this is normal.

4\.4 Actually loading a package
-------------------------------

Again, to actually load the `Biostrings` package into your active R session requires the `library()` command:

```
library(Biostrings)
```

As you might expect, there’s more red text scrolling up my screen! I can tell that it actually worked because at the end of all the red stuff is the R prompt of “\>” and my cursor.
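If you would rather avoid the interactive prompts described in section 4\.3, one option (a minimal sketch using the `update` and `ask` arguments of `BiocManager::install()`) is to tell the installer up front not to try updating other packages:

```
# A minimal sketch, assuming BiocManager is already installed (section 4.2).
# update = FALSE skips updating old packages; ask = FALSE suppresses prompts.
BiocManager::install("Biostrings", update = FALSE, ask = FALSE)
library(Biostrings) # then load the package as usual
```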
Chapter 5 A Brief introduction to R
===================================

**By**: Avril Coghlan. **Adapted, edited and expanded**: Nathan Brouwer under the Creative Commons 3\.0 Attribution License [(CC BY 3\.0\)](https://creativecommons.org/licenses/by/3.0/).

This chapter provides a brief introduction to R. At the end are links to additional resources for getting started with R.

5\.1 Vocabulary
---------------

* scalar
* vector
* list
* class
* numeric
* character
* assignment
* elements of an object
* indices
* attributes of an object
* argument of a function

5\.2 R functions
----------------

* \<\-
* \[ ]
* $
* table()
* function
* c()
* log10()
* help(), ?
* help.search()
* RSiteSearch()
* mean()
* return()
* q()

5\.3 Interacting with R
-----------------------

You will type *R* commands into the RStudio **console** in order to carry out analyses in *R*. In the RStudio console you will see the R prompt, which starts with the symbol “\>”. The “\>” will always be there at the beginning of each new command \- don’t try to delete it! Moreover, you never need to type it. We type the **commands** needed for a particular task after this prompt. The command is carried out by *R* after you hit the Return key.

Once you have started R, you can start typing commands into the RStudio console, and the results will be calculated immediately, for example:

```
2*3
```

```
## [1] 6
```

Note that prior to the output of “6” it shows “\[1]”. Now subtraction:

```
10-3
```

```
## [1] 7
```

Again, prior to the output of “7” it shows “\[1]”.

*R* can act like a basic calculator that you type commands in to. You can also use it like a more advanced scientific calculator and create **variables** that store information. All variables created by R are called **objects**. In R, we assign values to variables using `<-`, an arrow\-looking operator called the **assignment operator**. For example, we can **assign** the value 2\*3 to the variable x using the command:

```
x <- 2*3
```

To view the contents of any R object, just type its name, press enter, and the contents of that R object will be displayed:

```
x
```

```
## [1] 6
```

5\.4 Variables in R
-------------------

There are several different types of objects in R with fancy math names, including **scalars**, **vectors**, **matrices** (singular: **matrix**), **arrays**, **dataframes**, **tables**, and **lists**. The **scalar** variable x above is one example of an R object. While a scalar variable such as x has just one element, a **vector** consists of several elements. The elements in a vector are all of the same **type** (e.g. numbers or alphabetic characters), while **lists** may include elements such as characters as well as numeric quantities.

Vectors and dataframes are the most common variables you’ll use. You’ll also encounter matrices often, and lists are ubiquitous in R, but beginning users often don’t encounter them because they remain behind the scenes.

### 5\.4\.1 Vectors

To create a vector, we can use the `c()` (combine) function. For example, to create a vector called `myvector` that has elements with values 8, 6, 9, 10, and 5, we type:

```
myvector <- c(8, 6, 9, 10, 5) # note: commas between each number!
```

To see the contents of the variable `myvector`, we can just type its name and press enter:

```
myvector
```

```
## [1] 8 6 9 10 5
```

### 5\.4\.2 Vector indexing

The `[1]` is the **index** of the first **element** in the vector. We can **extract** any element of the vector by typing the vector name with the index of that element given in **square brackets** `[...]`. For example, to get the value of the 4th element in the vector `myvector`, we type:

```
myvector[4]
```

```
## [1] 10
```
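As a small additional example, the index inside the square brackets can itself be a vector built with `c()`, which extracts several elements at once:

```
# A minimal sketch, reusing myvector from above.
# c(2, 4) selects the 2nd and 4th elements in one step.
myvector[c(2, 4)]
```

```
## [1]  6 10
```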
### 5\.4\.3 Character vectors

Vectors can contain letters, such as those designating nucleic acid bases:

```
my.seq <- c("A","T","C","G")
```

They can also contain multi\-letter **strings**:

```
my.oligos <- c("ATCGC","TTTCGC","CCCGCG","GGGCGC")
```

### 5\.4\.4 Lists

**NOTE**: *Below is a discussion of lists in R. This is excellent information, but not necessary if this is your very first time using R.*

In contrast to a vector, a **list** can contain elements of different types, for example, both numbers and letters. A list can even include other variables such as a vector. The `list()` function is used to create a list. For example, we could create a list `mylist` by typing:

```
mylist <- list(name="Charles Darwin", wife="Emma Darwin", myvector)
```

We can then print out the contents of the list `mylist` by typing its name:

```
mylist
```

```
## $name
## [1] "Charles Darwin"
## 
## $wife
## [1] "Emma Darwin"
## 
## [[3]]
## [1] 8 6 9 10 5
```

The **elements** in a list are numbered, and can be referred to using **indices**. We can extract an element of a list by typing the list name with the index of the element given in double **square brackets** (in contrast to a vector, where we only use single square brackets). We can extract the second element from `mylist` by typing:

```
mylist[[2]] # note the double square brackets [[...]]
```

```
## [1] "Emma Darwin"
```

As a baby step towards our next task, we can wrap index values in the `c()` command like this:

```
mylist[[c(2)]] # note the double square brackets [[...]]
```

```
## [1] "Emma Darwin"
```

The number `2` and `c(2)` mean the same thing.

Now, we can extract the second AND third elements from `mylist`. First, we put the indices 2 and 3 into a vector, `c(2,3)`, then wrap that vector in single square brackets: `[c(2,3)]`. (Single brackets return a smaller list; double brackets only pull out one element at a time.) All together it looks like this:

```
mylist[c(2,3)] # note the single brackets
```

```
## $wife
## [1] "Emma Darwin"
## 
## [[2]]
## [1] 8 6 9 10 5
```

Elements of lists may also be named, resulting in a **named list**. The elements may then be referred to by giving the list name, followed by “$”, followed by the element name. For example, mylist$name is the same as mylist\[\[1]] and mylist$wife is the same as mylist\[\[2]]:

```
mylist$wife
```

```
## [1] "Emma Darwin"
```

We can find out the names of the named elements in a list by using the `attributes()` function, for example:

```
attributes(mylist)
```

```
## $names
## [1] "name" "wife" ""
```

When you use the `attributes()` function to find the named elements of a list variable, the named elements are always listed under a heading “$names”. Therefore, we see that the named elements of the list variable `mylist` are called “name” and “wife”, and we can retrieve their values by typing mylist$name and mylist$wife, respectively.
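As a brief aside, the `names()` function is a shortcut that returns just this “$names” information without the other attributes; the unnamed third element shows up as an empty string:

```
# A minimal sketch, reusing mylist from above.
names(mylist)
```

```
## [1] "name" "wife" ""
```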
### 5\.4\.5 Tables

Another type of object that you will encounter in R is a **table**. The `table()` function allows you to total up or tabulate the number of times a value occurs within a vector. Tables are typically used on vectors containing **character data**, such as letters, words, or names, but they can also work on numeric data.

#### 5\.4\.5\.1 Tables \- The basics

If we make a vector variable `bases` containing the bases of a DNA molecule, we can use the `table()` function to produce a **table variable** that contains the number of occurrences of each possible nucleotide:

```
bases <- c("A", "T", "A", "A", "T", "C", "G", "C", "G")
```

Now make the table:

```
table(bases)
```

```
## bases
## A C G T 
## 3 2 2 2
```

We can store the table variable produced by the function `table()`, and call the stored table “bases.table”, by typing:

```
bases.table <- table(bases)
```

Tables also work on vectors containing numbers. First, a vector of numbers:

```
numeric.vector <- c(1,1,1,1,3,4,4,4,4)
```

Second, a table, showing how many times each number occurs:

```
table(numeric.vector)
```

```
## numeric.vector
## 1 3 4 
## 4 1 4
```

#### 5\.4\.5\.2 Tables \- further details

To access elements in a table variable, you need to use double square brackets, just like accessing elements in a list. For example, to access the fourth element in the table bases.table (the number of Ts in the sequence), we type:

```
bases.table[[4]] # double brackets!
```

```
## [1] 2
```

Alternatively, you can use the name of the fourth element in the table (“T”) to find the value of that table element:

```
bases.table[["T"]]
```

```
## [1] 2
```

5\.5 Arguments
--------------

Functions in R usually require **arguments**, which are input variables (i.e. objects) that are **passed** to them and that they then carry out some operation on. For example, the `log10()` function is passed a number, and it then calculates the log to the base 10 of that number:

```
log10(100)
```

```
## [1] 2
```

There’s a more generic function, `log()`, where we pass it not only a number to take the log of, but also the specific **base** of the logarithm. To take the log base 10 with the `log()` function we do this:

```
log(100, base = 10)
```

```
## [1] 2
```

We can also take logs with other bases, such as 2:

```
log(100, base = 2)
```

```
## [1] 6.643856
```
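One more small note: if you call `log()` without specifying `base`, R defaults to the natural logarithm (base *e*), which is easy to mix up with base 10:

```
# A minimal sketch: log() with no base argument gives the natural log.
log(100)
```

```
## [1] 4.60517
```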
5\.6 Help files with `help()` and `?`
-------------------------------------

In *R*, you can get help about a particular function by using the `help()` function. For example, if you want help about the `log10()` function, you can type:

```
help("log10")
```

When you use the `help()` function, a box or web page will show up in one of the panes of RStudio with information about the function that you asked for help with. You can also use the `?` next to the function like this:

```
?log10
```

Help files are a mixed bag in R, and it can take some getting used to them. An excellent overview of this is Kieran Healy’s [“How to read an R help page.”](https://socviz.co/appendix.html)

5\.7 Searching for functions with `help.search()` and `RSiteSearch()`
---------------------------------------------------------------------

If you are not sure of the name of a function, but think you know part of its name, you can search for the function name using the `help.search()` and `RSiteSearch()` functions. The `help.search()` function searches the R packages you already have installed for functions that may be related to some topic you’re interested in. `RSiteSearch()` searches *all* R functions (including those in packages that you haven’t yet installed) for functions related to the topic you are interested in.

For example, if you want to know if there is a function to calculate the standard deviation (SD) of a set of numbers, you can search for the names of all installed functions containing the word “deviation” in their description by typing:

```
help.search("deviation")
```

Among the functions that are found is the function `sd()` in the `stats` package (an R package that comes with the base R installation), which is used for calculating the standard deviation.

Now, instead of searching just the packages we have on our computer, let’s search all R packages on CRAN. Let’s look for things related to DNA. Note that `RSiteSearch()` doesn’t provide output within RStudio, but rather opens up your web browser to display the results.

```
RSiteSearch("DNA")
```

The results of the `RSiteSearch()` function will be hits to descriptions of R functions, as well as to R mailing list discussions of those functions.

5\.8 More on functions
----------------------

We can perform computations with R using objects such as scalars and vectors. For example, to calculate the average of the values in the vector `myvector` (i.e. the average of 8, 6, 9, 10 and 5\), we can use the `mean()` function:

```
mean(myvector) # note: no " "
```

```
## [1] 7.6
```

We have been using built\-in R functions such as mean(), length(), print(), plot(), etc.

### 5\.8\.1 Writing your own functions

**NOTE**: *Writing your own functions is an advanced skill. New users can skip this section.*

We can also create our own functions in R to do calculations that we want to carry out often on different input data sets. For example, we can create a function to calculate the value of 20 plus the square of some input number:

```
myfunction <- function(x) { return(20 + (x*x)) }
```

This function will calculate the square of a number (x), and then add 20 to that value. The `return()` statement returns the calculated value. Once you have typed in this function, the function is then available for use. For example, we can use the function for different input numbers (e.g. 10, 25\):

```
myfunction(10)
```

```
## [1] 120
```

5\.9 Quitting R
---------------

To quit R either close the program, or type:

```
q()
```

5\.10 Links and Further Reading
-------------------------------

Some links are included here for further reading. For a more in\-depth introduction to R, a good online tutorial is available on the “Kickstarting R” website, cran.r\-project.org/doc/contrib/Lemon\-kickstart. There is another nice (slightly more in\-depth) tutorial to R available on the “Introduction to R” website, cran.r\-project.org/doc/manuals/R\-intro.html. [Chapter 3](https://learningstatisticswithr.com/book/introR.html) of Danielle Navarro’s book is an excellent intro to the basics of R.