12\.2 A maximum\-likelihood approach
------------------------------------
In order to be able to extend regression modeling to predictor variables other than metric variables (so\-called generalized linear regression models, see Chapter [15](Chap-04-04-GLM.html#Chap-04-04-GLM)), the geometric approach needs to be abandoned in favor of a likelihood\-based approach. The likelihood\-based approach tries to find coefficients that explain the observed data most plausibly.
### 12\.2\.1 A likelihood\-based model
There are two equivalent formulations of a (simple) linear regression model using a likelihood\-based approach. The first is more explicit, showing clearly that the model assumes that for each observation \\(y\_i\\) there is an error term \\(\\epsilon\_i\\), which is an iid sample from a Normal distribution. (Notice that the likelihood\-based model assumes an additional parameter \\(\\sigma\\), the standard deviation of the error terms.)
\\\[
\\text{likelihood\-based regression }
\\text{\[explicit version]}
\\]
\\\[
\\begin{aligned}
\\xi \& \= X \\beta \\\\
y\_i \& \= \\xi\_i \+ \\epsilon\_i \\\\
\\epsilon\_i \& \\sim \\text{Normal}(0, \\sigma) \\\\
\\end{aligned}
\\]
The second, equivalent version of this writes this more compactly, suppressing the explicit mentioning of iid error terms:
\\\[
\\text{likelihood\-based regression }
\\text{\[compact version]}
\\]
\\\[
\\begin{aligned}
y\_i \& \\sim \\text{Normal}((X \\beta)\_i, \\sigma)
\\end{aligned}
\\]
### 12\.2\.2 Finding the MLE\-solution with `optim`
We can use `optim` to find maximum likelihood estimates for the simple linear regression of `murder_rate` predicted in terms of `unemployment` like so:
```
# data to be explained / predicted
y <- murder_data %>% pull(murder_rate)
# data to use for prediction / explanation
x <- murder_data %>% pull(unemployment)
# function to calculate negative log-likelihood
get_nll = function(y, x, beta_0, beta_1, sd) {
if (sd <= 0) {return( Inf )}
yPred = beta_0 + x * beta_1
nll = -dnorm(y, mean = yPred, sd = sd, log = T)
sum(nll)
}
# finding MLE
fit_lh = optim(par = c(0, 1, 1),
fn = function(par) {
get_nll(y, x, par[1], par[2], par[3])
}
)
# output the results
message(
"Best fitting parameter values:",
"\n\tIntercept: ", fit_lh$par[1] %>% round(2),
"\n\tSlope: ", fit_lh$par[2] %>% round(2),
"\nNegative log-likelihood for best fit: ", fit_lh$value %>% round(2)
)
```
```
## Best fitting parameter values:
## Intercept: -28.52
## Slope: 7.08
## Negative log-likelihood for best fit: 59.9
```
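As a quick sanity check (a hedged addition, not part of the original analysis): the third parameter returned by `optim` is the maximum\-likelihood estimate of the standard deviation, which should closely match the closed\-form value `sqrt(RSS / n)` computed from the fitted regression line.
```
# hedged sanity check (added): the ML estimate of the standard deviation
# should be close to the closed-form value sqrt(RSS / n)
y_pred <- fit_lh$par[1] + fit_lh$par[2] * x
c(optim_sd = fit_lh$par[3], closed_form_sd = sqrt(mean((y - y_pred)^2)))
```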
### 12\.2\.3 Finding the MLE\-solution with `glm`
R also has a built\-in way of approaching simple linear regression with a maximum\-likelihood approach, namely by using the function `glm` (generalized linear model).
Notice that the output looks slightly different from that of `lm`.
```
fit_glm <- glm(murder_rate ~ unemployment, data = murder_data)
fit_glm
```
```
##
## Call: glm(formula = murder_rate ~ unemployment, data = murder_data)
##
## Coefficients:
## (Intercept) unemployment
## -28.53 7.08
##
## Degrees of Freedom: 19 Total (i.e. Null); 18 Residual
## Null Deviance: 1855
## Residual Deviance: 467.6 AIC: 125.8
```
### 12\.2\.4 Finding the MLE\-solution with math
It is no coincidence that these fitted values are (modulo numerical imprecision) the same as for the geometric OLS approach.
**Theorem 12\.3 (MLE solution)** The vector \\(\\hat{\\beta} \\in \\mathbb{R}^k\\) maximizing the likelihood of a linear regression model with \\(k\\) predictors is the same as the vector that minimizes the residual sum of squares, namely:
\\\[
\\arg \\max\_{\\beta} \\prod\_{i \= 1}^n \\text{Normal}(y\_i \\mid \\mu \= (X \\beta)\_i, \\sigma) \= (X^T X)^{\-1} X^Ty
\\]
*Proof*. Using the more explicit formulation of likelihood\-based regression, we can rewrite the likelihood function in terms of the probability of “sampling” error terms \\(\\epsilon\_i\\) for each \\(y\_i\\) in such a way that \\(\\epsilon\_i \= y\_i \- \\xi\_i \= y\_i \- (X \\beta)\_i\\):
\\\[
\\begin{align\*}
LH(\\beta) \& \= \\prod\_{i \= 1}^n \\text{Normal}(\\epsilon\_i \\mid \\mu \= 0, \\sigma) \\\\
\& \= \\prod\_{i\=1}^{n}\\frac{1}{\\sqrt{2\\pi} \\sigma} \\exp\\left\[{\-\\frac{1}{2}\\left(\\frac{\\epsilon\_i^2}{\\sigma^2}\\right)}\\right] \& \\text{\[by def. of normal distr.]}
\\end{align\*}
\\]
Since we are only interested in the maximum of this function, we can also look for the maximum of \\(\\log LH(\\beta)\\) because the logarithm is a strictly monotone increasing function. This is useful because the logarithm can then be rewritten as a sum.
\\\[
\\begin{align}
LLH(\\beta)\&\=\\log \\left(LH(\\beta)\\right)\\\\
\&\=\-\\frac{n}{2} \\log(2\\pi)\-\\frac{n}{2} \\log(\\sigma^2\)\-\\frac{1}{2\\sigma^2} \\sum\_{i\=1}^n(\\epsilon\_i)^2
\\tag{2\.7}
\\end{align}
\\]
Since only the last summand depends on \\(\\beta\\), and since we can drop the positive factor \\(\\frac{1}{2\\sigma^2}\\) for finding a maximum, we obtain:
\\\[
\\arg \\max\_\\beta LLH(\\beta) \= \\arg \\max\_\\beta \\left( \- \\sum\_{i\=1}^n(\\epsilon\_i)^2 \\right)
\\]
If we substitute \\(\\epsilon\_i\\) and multiply with \\(\-1\\) to find the minimum, we see that we are back at the original problem of finding the OLS solution:
\\\[
\\arg \\min\_\\beta \\left(\-LLH(\\beta)\\right) \= \\arg \\min\_\\beta \\sum\_{i\=1}^n(y\_i \- (X \\beta)\_i)^2
\\]
Notice that this result holds independently of \\(\\sigma\\): the terms involving \\(\\sigma\\) only contribute a constant offset and a positive scaling factor, neither of which affects where the maximum lies.
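To see the theorem at work, here is a minimal sketch (an addition to the text, reusing the vectors `x` and `y` from the `optim` example above) that computes the closed\-form solution directly; the resulting values should match the `optim` and `glm` fits up to numerical imprecision.
```
# closed-form MLE / OLS solution (X^T X)^{-1} X^T y
X <- cbind(intercept = 1, unemployment = x)   # design matrix with a column of ones
beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y  # 'solve' computes the matrix inverse
beta_hat
```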
**Exercise 13\.3**
Let’s assume that following the MLE approach, we obtained \\(\\beta\_0 \= 1\\), \\(\\beta\_1 \= 2\\) and \\(\\sigma \= 0\.5\\). For \\(x\_i \= 0\\), which \\(\\xi\_i\\) value will maximize the likelihood?
Solution
Since \\(y\_i \\sim \\text{Normal} (\\xi\_i , \\sigma)\\) with \\(\\xi\_i \= \\beta\_0 \+ \\beta\_1 x\_i\\), the likelihood of an observation \\(y\_i\\) is highest at the mean of the normal distribution, i.e., at \\(\\xi\_i \= \\beta\_0 \+ \\beta\_1 \\cdot 0 \= 1\\).
12\.3 A Bayesian approach
-------------------------
The Bayesian approach to linear regression just builds on the likelihood\-based approach of the last section, to which it adds priors for the model parameters \\(\\beta\\) (a vector of regression coefficients) and \\(\\sigma\\) (the standard deviation of the normal distribution).
The next Chapter [13](Chap-04-02-Bayes-regression-practice.html#Chap-04-02-Bayes-regression-practice) introduces ways of conveniently sampling from Bayesian regression models with variable specifications of these model priors using the R package `brms`.
This section introduces a Bayesian regression model with non\-informative priors which extends the non\-informative priors model for inferring the parameters of a normal distribution, which was introduced in section [9\.4](ch-03-04-parameter-estimation-normal.html#ch-03-04-parameter-estimation-normal).
The Bayesian non\-informative priors regression model uses the same likelihood function as the likelihood\-based model from before and assumes essentially the same non\-informative priors as the model from section [9\.4](ch-03-04-parameter-estimation-normal.html#ch-03-04-parameter-estimation-normal).
Concretely, it assumes an (improper) flat distribution over regression coefficients, and it assumes that the variance \\(\\sigma^2\\) is log\-uniformly distributed, which is equivalent to an (improper) prior density on \\(\\sigma^2\\) that is proportional to \\(1 / \\sigma^2\\).
So, for a regression problem with \\(k\\) predictors, predictor matrix \\(X\\) of size \\(n \\times (k\+1\)\\) and dependent data \\(y\\) of size \\(n\\), we have:
\\\[
\\begin{aligned}
\\beta\_j \& \\sim \\mathrm{Uniform}(\-\\infty, \\infty) \\ \\ \\ \\text{for all } 0 \\le j \\le k \\\\
\\log(\\sigma^2\) \& \\sim \\mathrm{Uniform}(\-\\infty, \\infty) \\\\
y\_i \& \\sim \\text{Normal}((X \\beta)\_i, \\sigma)
\\end{aligned}
\\]
Using this prior, we can calculate a closed\-form of the posterior to sample from (for details see Gelman et al. ([2014](#ref-gelman2014)) Chap. 14\).
The posterior has the general form:
\\\[
P(\\beta, \\sigma^2 \\mid y) \\propto P(\\sigma^2 \\mid y) \\ P(\\beta \\mid \\sigma^2, y)
\\]
Without going into details here, the posterior distribution of the variance \\(\\sigma^2\\) is an inverse\-\\(\\chi^2\\):
\\\[
\\sigma^2 \\mid y \\sim \\text{Inv\-}\\chi^2(n\-k, \\color{gray}{\\text{ some\-complicated\-term}})
\\]
The posterior of the regression coefficients is a (multivariate) normal distribution:
\\\[
\\beta \\mid \\sigma^2, y \\sim \\text{MV\-Normal}(\\hat{\\beta}, \\color{gray}{\\text{ some\-complicated\-term}})
\\]
What is interesting to note is that the mean of the posterior distribution of regression coefficients is exactly the optimal solution for ordinary least\-squares regression and the maximum likelihood estimate:
\\\[
\\hat{\\beta} \= (X^T \\ X)^{\-1}\\ X^Ty
\\]
This is not surprising given that the mean of a normal distribution is also its mode and that the MAP for a non\-informative prior coincides with the MLE.
The `aida` package provides the convenience function `aida::get_samples_regression_noninformative` for sampling from the posterior of the Bayesian non\-informative priors model. We use this function below but show it explicitly first:
```
get_samples_regression_noninformative <- function(
X, # design matrix
y, # dependent variable
n_samples = 1000
)
{
if(is.null(colnames(X))) {
stop("Design matrix X must have meaningful column names for coefficients.")
}
n <- length(y)
k <- ncol(X)
# calculating the formula from Gelman et al
# NB 'solve' computes the inverse of a matrix
beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y
V_beta <- solve(t(X) %*% X)
# 'sample co-variance matrix'
s_squared <- 1 / (n - k) * t(y - (X %*% beta_hat)) %*% (y - (X %*% beta_hat))
# sample from posterior of variance
samples_sigma_squared <- extraDistr::rinvchisq(
n = n_samples,
nu = n - k,
tau = s_squared
)
# sample full joint posterior triples
samples_posterior <- map_df(
seq(n_samples),
function(i) {
s <- mvtnorm::rmvnorm(1, beta_hat, V_beta * samples_sigma_squared[i])
colnames(s) = colnames(X)
as_tibble(s) %>%
mutate(sigma = samples_sigma_squared[i] %>% sqrt())
}
)
return(samples_posterior)
}
```
Let’s apply this function to the running example of this chapter:
```
# variables for regression
y <- murder_data$murder_rate
x <- murder_data$unemployment
# the predictor 'intercept' is just a
# column vector of ones of the same length as y
int <- rep(1, length(y))
# create predictor matrix with values of all explanatory variables
# (here only intercept and slope for unemployment)
X <- matrix(c(int, x), ncol = 2)
colnames(X) <- c("intercept", "slope")
# collect samples with convenience function
samples_Bayes_regression <- aida::get_samples_regression_noninformative(X, y, 10000)
```
The tibble `samples_Bayes_regression` contains 10,000 samples from the posterior. Let’s have a look at the first couple of samples:
```
head(samples_Bayes_regression)
```
```
## # A tibble: 6 × 3
## intercept slope sigma
## <dbl> <dbl> <dbl>
## 1 -40.8 8.74 4.40
## 2 -26.1 6.70 6.03
## 3 -37.2 8.30 4.08
## 4 -30.1 7.09 7.03
## 5 -29.8 7.13 5.08
## 6 -22.3 6.14 5.05
```
Remember that each sample is a triple, one value for each of the model’s parameters.
We can also look at some summary statistics, using another convenience function from the `aida` package:
```
rbind(
aida::summarize_sample_vector(samples_Bayes_regression$intercept, "intercept"),
aida::summarize_sample_vector(samples_Bayes_regression$slope, "slope"),
aida::summarize_sample_vector(samples_Bayes_regression$sigma, "sigma")
)
```
```
## # A tibble: 3 × 4
## Parameter `|95%` mean `95%|`
## <chr> <dbl> <dbl> <dbl>
## 1 intercept -42.7 -28.5 -14.5
## 2 slope 5.08 7.08 9.07
## 3 sigma 3.59 5.32 7.15
```
Here’s a density plot of the (marginal) posteriors for each parameter value:
```
samples_Bayes_regression %>%
pivot_longer(cols = everything(), values_to = "sample", names_to = "parameter") %>%
mutate(parameter = factor(parameter, levels = c('intercept', 'slope', 'sigma'))) %>%
ggplot(aes(x = sample)) +
geom_density(fill = "lightgray", alpha = 0.5) +
facet_wrap(~parameter, scales = "free")
```
While these results are approximate, because they are affected by random sampling, they also convey a sense of uncertainty: given the data and the model, how certain should we be that, say, the slope is really at 7\.08?
We could, for instance, now test the hypothesis that the factor `unemployment` does not contribute at all to predicting the value of the variable `murder_rate`.
This could be addressed as the point\-valued hypothesis that the slope coefficient is exactly equal to zero.
Since the 95% credible interval clearly excludes this value, we might (based on this binary decision logic) now say that based on the model and the data there is reason to believe that `unemployment` *does* contribute to predicting `murder_rate`, in fact, that the higher a city’s unemployment rate, the higher we should expect its murder rate to be.
(Remember: this is not a claim about any kind of causal relation!)
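As a small addition to the text, the directional and point\-valued claims mentioned here can also be probed directly from the samples; a hedged sketch (using `samples_Bayes_regression` from above):
```
# proportion of posterior samples with a positive slope and the bounds of
# the 95% credible interval (both computed directly from the samples)
samples_Bayes_regression %>%
  summarize(
    prob_slope_positive = mean(slope > 0),
    CI_lower = quantile(slope, 0.025),
    CI_upper = quantile(slope, 0.975)
  )
```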
13\.1 Simple linear regression with `brms`
------------------------------------------
The main function of the `brms` package is `brm` (short for **B**ayesian **R**egression **M**odel). It behaves very similarly to the `glm` function we saw above.[59](#fn59) Here is an example of the current case study based on the [world temperature data set](app-93-data-sets-temperature.html#app-93-data-sets-temperature):
```
fit_temperature <- brm(
# specify what to explain in terms of what
# using the formula syntax
formula = avg_temp ~ year,
# which data to use
data = aida::data_WorldTemp
)
```
The formula syntax `y ~ x` tells R that we want to explain or predict the dependent variable `y` in terms of associated measurements of `x`, as stored in the data set (`tibble` or `data.frame`) supplied in the function call as `data`.
The object returned by this function call is a special\-purpose object of the class `brmsfit`. If we print this object to the screen we get a summary (which we can also produce with the explicit call `summary(fit_temperature)`).
```
fit_temperature
```
```
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: avg_temp ~ year
## Data: aida::data_WorldTemp (Number of observations: 269)
## Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup draws = 4000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -3.51 0.61 -4.68 -2.30 1.00 3961 2594
## year 0.01 0.00 0.01 0.01 1.00 3967 2626
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 0.41 0.02 0.37 0.44 1.00 1394 1605
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
```
This output tells us which model we fitted and it states some properties of the MCMC sampling routine used to obtain samples from the posterior distribution.
The most important pieces of information for drawing conclusions from this analysis are the summaries for the estimated parameters, here called “Intercept” (the \\(\\beta\_0\\) of the regression model), “year” (the slope coefficient \\(\\beta\_1\\) for the `year` column in the data) and “sigma” (the standard deviation of the Gaussian error function around the central predictor).
The “Estimate” shown here for each parameter is its posterior mean.
The columns “l\-95% CI” and “u\-95% CI” give the 95% inner quantile range of the marginal posterior distribution for each parameter.
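As a compact alternative (a hedged addition, assuming a standard `brms` installation), the function `brms::fixef` returns the posterior mean, estimation error and 95% credible bounds for all population\-level coefficients in a single matrix:
```
# posterior summaries for all population-level ("fixed") coefficients
brms::fixef(fit_temperature)
```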
13\.2 Extracting posterior samples
----------------------------------
The function `brms::posterior_samples` extracts the samples from the posterior which are part of the `brmsfit` object.[60](#fn60)
```
post_samples_temperature <- brms::posterior_samples(fit_temperature) %>% select(-lp__,-lprior)
head(post_samples_temperature)
```
```
## b_Intercept b_year sigma
## 1 -4.328078 0.006692733 0.4100904
## 2 -4.202760 0.006633079 0.4050540
## 3 -2.887432 0.005926658 0.4006999
## 4 -4.580320 0.006821945 0.3996143
## 5 -2.587499 0.005782539 0.3916289
## 6 -4.247891 0.006654168 0.4081150
```
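A hedged side note (not from the original text): in more recent versions of `brms`, `posterior_samples` is deprecated in favor of the `posterior`\-package interface; a comparable tibble of draws could be obtained like so (exact meta\-columns may differ across versions):
```
# extract posterior draws via the posterior-package interface
brms::as_draws_df(fit_temperature) %>%
  select(b_Intercept, b_year, sigma) %>%
  head()
```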
These extracted samples can be used as before, e.g., to compute our own summary tibble:
```
map_dfr(post_samples_temperature, aida::summarize_sample_vector) %>%
mutate(Parameter = colnames(post_samples_temperature[1:3]))
```
```
## # A tibble: 3 × 4
## Parameter `|95%` mean `95%|`
## <chr> <dbl> <dbl> <dbl>
## 1 b_Intercept -4.69 -3.51 -2.32
## 2 b_year 0.00563 0.00627 0.00689
## 3 sigma 0.373 0.405 0.440
```
Or for manual plotting:[61](#fn61)
```
post_samples_temperature %>%
pivot_longer(cols = everything()) %>%
ggplot(aes(x = value)) +
geom_density() +
facet_wrap(~name, scales = "free")
```
13\.3 \[Excursion:] Inspecting the underlying Stan code
-------------------------------------------------------
Under the hood, the `brms` package automatically creates Stan code, runs it and computes useful additional information for regression modeling around the `stan_fit` object.
Here’s how we can inspect the precise model that `brms` set up for us and ran:
```
brms::stancode(fit_temperature)
```
```
## // generated with brms 2.18.0
## functions {
## }
## data {
## int<lower=1> N; // total number of observations
## vector[N] Y; // response variable
## int<lower=1> K; // number of population-level effects
## matrix[N, K] X; // population-level design matrix
## int prior_only; // should the likelihood be ignored?
## }
## transformed data {
## int Kc = K - 1;
## matrix[N, Kc] Xc; // centered version of X without an intercept
## vector[Kc] means_X; // column means of X before centering
## for (i in 2:K) {
## means_X[i - 1] = mean(X[, i]);
## Xc[, i - 1] = X[, i] - means_X[i - 1];
## }
## }
## parameters {
## vector[Kc] b; // population-level effects
## real Intercept; // temporary intercept for centered predictors
## real<lower=0> sigma; // dispersion parameter
## }
## transformed parameters {
## real lprior = 0; // prior contributions to the log posterior
## lprior += student_t_lpdf(Intercept | 3, 8.3, 2.5);
## lprior += student_t_lpdf(sigma | 3, 0, 2.5)
## - 1 * student_t_lccdf(0 | 3, 0, 2.5);
## }
## model {
## // likelihood including constants
## if (!prior_only) {
## target += normal_id_glm_lpdf(Y | Xc, Intercept, b, sigma);
## }
## // priors including constants
## target += lprior;
## }
## generated quantities {
## // actual population-level intercept
## real b_Intercept = Intercept - dot_product(means_X, b);
## }
```
Even if the Stan code itself is not entirely transparent, a few interesting observations can be gleaned:
1. `brms` automatically centers the predictor values, but returns fits for the non\-centered coefficients
2. by default, the prior for slope coefficients is a completely uninformative one (every value is equally likely)
13\.4 Setting priors
--------------------
Bayesian models require priors for all parameters.
The function `brms::prior_summary` shows which priors a model fitted with `brms` has (implicitly) assumed.
```
prior_summary(fit_temperature)
```
```
## prior class coef group resp dpar nlpar lb ub source
## (flat) b default
## (flat) b year (vectorized)
## student_t(3, 8.3, 2.5) Intercept default
## student_t(3, 0, 2.5) sigma 0 default
```
This output tells us that `brms` used a Student’s \\(t\\) distribution for the intercept and the standard deviation.[62](#fn62)
It also shows us that all slope coefficients (abbreviated here as “b”) have a flat (non\-informative) prior.
If we want to change the prior for any model parameter, or family of model parameters, we can use the `prior` argument in the `brm` function, which requires a special type of input using `brms`’ `prior()` function.
The syntax for distributions inside the `prior()` follows that of Stan, as documented in the [Stan function reference](https://mc-stan.org/docs/2_25/functions-reference/index.html).
The example below sets the prior for the slope coefficient to a very narrow Student’s \\(t\\) distribution with location `-0.01` and scale `0.001`.
```
fit_temperature_skeptical <- brm(
# specify what to explain in terms of what
# using the formula syntax
formula = avg_temp ~ year,
# which data to use
data = aida::data_WorldTemp,
# hand-craft priors for slope
prior = prior(student_t(1, -0.01, 0.001), coef = year)
)
```
This prior is a *skeptical prior* in the sense that it assumes a negative slope to be more likely, that is, it leans towards the assumption that the world has been getting colder over the years.
Comparing the summary statistics for the original fit:
```
map_dfr(post_samples_temperature, aida::summarize_sample_vector) %>%
mutate(Parameter = colnames(post_samples_temperature[1:3]))
```
```
## # A tibble: 3 × 4
## Parameter `|95%` mean `95%|`
## <chr> <dbl> <dbl> <dbl>
## 1 b_Intercept -4.69 -3.51 -2.32
## 2 b_year 0.00563 0.00627 0.00689
## 3 sigma 0.373 0.405 0.440
```
against those of the new fit using skeptical priors:
```
post_samples_temperature_skeptical <- brms::posterior_samples(fit_temperature_skeptical) %>%
select(-lp__,-lprior)
map_dfr(post_samples_temperature_skeptical,
aida::summarize_sample_vector) %>%
mutate(Parameter = colnames(post_samples_temperature_skeptical[1:3]))
```
```
## # A tibble: 3 × 4
## Parameter `|95%` mean `95%|`
## <chr> <dbl> <dbl> <dbl>
## 1 b_Intercept -4.65 -3.49 -2.21
## 2 b_year 0.00559 0.00626 0.00688
## 3 sigma 0.372 0.406 0.439
```
we see that the data has overturned the initial skeptical prior, suggesting that the evidence provided in the data for the belief that the slope coefficient is positive is stronger than the original (maybe hypothetical) assumption to the contrary.
**Exercise 13\.1**
What do you expect to happen to the estimate of the intercept when using a very strong prior on the slope coefficient for `year`, e.g., a normal distribution with a mean of 5 and a standard deviation of .01?
Solution
We should expect the posterior of the slope for `year` to be much higher than the original estimate, much closer to 5\.
The reason is that the normal distribution is much less “willing” to allow outliers and so constrains the fit much more strongly towards the mean of the prior than the Student’s \\(t\\) distribution.
Notice that with slope values close to 5, the estimates for the intercept and standard deviation also change (in ridiculous ways).
```
fit_temperature_ridiculous <- brm(
# specify what to explain in terms of what
# using the formula syntax
formula = avg_temp ~ year,
# which data to use
data = aida::data_WorldTemp,
# hand-craft priors for slope
prior = prior(normal(5, 0.01), coef = year)
)
post_samples_temperature_ridiculous <- brms::posterior_samples(fit_temperature_ridiculous) %>%
select(-lp__,-lprior)
map_dfr(post_samples_temperature_ridiculous,
aida::summarize_sample_vector) %>%
mutate(Parameter = colnames(post_samples_temperature_ridiculous[1:3]))
```
```
## # A tibble: 3 × 4
## Parameter `|95%` mean `95%|`
## <chr> <dbl> <dbl> <dbl>
## 1 b_Intercept -9446. -9407. -9369.
## 2 b_year 4.97 4.99 5.01
## 3 sigma 355. 386. 422.
```
13\.5 Posterior predictions
---------------------------
The function `brms::posterior_predict` returns samples from the posterior predictive distribution of a `brms_fit` object.
For example, the code below yields 4000 sampled predictions for each of the 269 `year` values in the world temperature data set.
```
samples_post_pred_temperature <- brms::posterior_predict(fit_temperature)
dim(samples_post_pred_temperature)
```
```
## [1] 4000 269
```
The function `brms::posterior_predict` can also be used to sample from the posterior predictive distribution of a fitted regression model for new values of the model’s predictors.
If we are interested in predictions of average world surface temperature for the years 2025 and 2040, all we need to do is supply a data frame (or tibble) with the predictor values of interest as an argument.
```
# create a tibble with new predictor values
X_new <- tribble(
~ "year", 2025, 2040
)
# get sample predictions from the Bayesian model
post_pred_new <- brms::posterior_predict(fit_temperature, X_new)
# get a (Bayesian) summary for these posterior samples
rbind(
aida::summarize_sample_vector(post_pred_new[,1], "2025"),
aida::summarize_sample_vector(post_pred_new[,2], "2040")
)
```
```
## # A tibble: 2 × 4
## Parameter `|95%` mean `95%|`
## <chr> <dbl> <dbl> <dbl>
## 1 2025 8.43 9.19 9.99
## 2 2040 8.50 9.30 10.1
```
13\.6 Testing hypotheses
------------------------
The `brms` package also contains a useful function to address hypotheses about model parameters.
The function `brms::hypothesis` can compute Bayes factors for point\-valued hypotheses using the Savage\-Dickey method.
It also computes a binary test of whether a point\-valued hypothesis is credible based on inclusion in a Bayesian credible interval.
For interval\-valued hypotheses \\(\\theta \\in \[a;b]\\), the function `brms::hypothesis` computes the posterior odds (called *evidence ratio* in the context of this function):[63](#fn63)
\\\[
\\frac{P(\\theta \\in \[a;b] \\mid D)}{P(\\theta \\not \\in \[a;b] \\mid D)}
\\]
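As an illustration of what this ratio amounts to (a hedged sketch with a hypothetical vector `theta_samples` of posterior samples and illustrative interval bounds; not part of the original text), the evidence ratio can be approximated by counting samples:
```
# approximate the evidence ratio for theta in [a; b] from posterior samples;
# for a one-sided hypothesis like "theta > 0", set a = 0 and b = Inf
a <- 0
b <- Inf
p_in  <- mean(theta_samples >= a & theta_samples <= b)
p_out <- 1 - p_in
p_in / p_out  # evidence ratio (posterior odds)
```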
Computing Bayes factors for point\-valued hypotheses with `brms::hypothesis` requires proper priors for all parameters that are part of the hypothesis.
It also requires taking samples from the priors of parameters.[64](#fn64)
So, here is a function call of `brms::brm` which (i) specifies a reasonably unconstrained but proper prior for the slope coefficient for `year` and (ii) also collects samples from the prior (by setting the option `sample_prior = "yes"`):
```
fit_temperature_weakinfo <- brm(
# specify what to explain in terms of what
# using the formula syntax
formula = avg_temp ~ year,
# which data to use
data = aida::data_WorldTemp,
# weakly informative prior for slope
prior = prior(student_t(1, 0, 1), coef = year),
# option to sample from priors as well
# (necessary for calculating BFs with Savage-Dickey)
sample_prior = 'yes',
# increase number of iterations (for precision of estimates)
iter = 20000
)
```
Before addressing hypotheses about the slope parameter for `year`, let’s remind ourselves of the summary statistics for the posterior:
```
brms::posterior_samples(fit_temperature_weakinfo) %>%
pull(b_year) %>%
aida::summarize_sample_vector()
```
```
## # A tibble: 1 × 4
## Parameter `|95%` mean `95%|`
## <chr> <dbl> <dbl> <dbl>
## 1 "" 0.00565 0.00627 0.00689
```
The main “research hypothesis” of interest is whether the slope for `year` is credibly positive.
This is an interval\-valued hypothesis and we can test it like so:
```
hypothesis(fit_temperature_weakinfo, "year > 0")
```
```
## Hypothesis Tests for class b:
## Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob Star
## 1 (year) > 0 0.01 0 0.01 0.01 Inf 1 *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
```
The table shows the estimate for the slope of `year`, together with an estimated error, lower and upper bounds of a credible interval (95% by default).
All of these numbers are rounded.
It also shows the “Evidence ratio” which, for an interval\-valued hypothesis is *not* the Bayes factor, but the posterior odds (see above).
In the present case, an evidence ratio of `Inf` means that all posterior samples for the slope coefficient were positive.
This is also expressed in the posterior probability (“Post.Prob” in the table) for the proposition that the interval\-valued hypothesis is true (given data and model).
The following tests a point\-valued hypothesis:
```
hypothesis(fit_temperature_weakinfo, "year = 0.005")
```
```
## Hypothesis Tests for class b:
## Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
## 1 (year)-(0.005) = 0 0 0 0 0 0.17 0.15
## Star
## 1 *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
```
For this point\-valued hypothesis, the estimate (and associated error and credible interval) are calculated as a comparison against 0, as shown in the “Hypothesis” column.
The evidence ratio given in the results table is the Bayes factor of the point\-valued hypothesis against the embedding model (the full regression model with the prior we specified), as calculated by the Savage\-Dickey method.
As before, the posterior probability is also shown.
The “Star” in this table indicates that the point\-valued hypothesis is excluded from the computed credible interval, so that \- if we adopted the (controversial) binary decision logic discussed in Chapter [11](ch-03-07-hypothesis-testing-Bayes.html#ch-03-07-hypothesis-testing-Bayes) \- we would reject the tested hypothesis.
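To make the Savage\-Dickey logic more concrete, here is a rough, hedged re\-computation by hand (an addition to the text): the Bayes factor in favor of the point\-valued hypothesis is the posterior density at the test value divided by the prior density at that value. The result will only approximately match the output above, because the posterior density is approximated here with a simple kernel density estimate.
```
# Savage-Dickey by hand: posterior density at 0.005 over prior density at 0.005
post_slope <- brms::posterior_samples(fit_temperature_weakinfo) %>% pull(b_year)
d_post <- density(post_slope)                            # kernel density estimate
post_at_point  <- approx(d_post$x, d_post$y, xout = 0.005)$y
prior_at_point <- brms::dstudent_t(0.005, df = 1, mu = 0, sigma = 1)
post_at_point / prior_at_point                           # approximate Bayes factor
```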
14\.1 Single two\-level predictor
---------------------------------
Let’s revisit the data from the [Simon task](app-93-data-sets-simon-task.html#app-93-data-sets-simon-task).
Just like in chapter [11](ch-03-07-hypothesis-testing-Bayes.html#ch-03-07-hypothesis-testing-Bayes), we will be looking at the hypothesis that, among all correct responses, the mean reaction times for the congruent condition are lower than those of the incongruent condition.
```
# extract just the currently relevant columns
# from the data set
data_ST_excerpt <- aida::data_ST %>%
filter(correctness == "correct") %>%
select(RT, condition)
# show the first couple of lines
head(data_ST_excerpt, 5)
```
```
## # A tibble: 5 × 2
## RT condition
## <dbl> <chr>
## 1 735 incongruent
## 2 557 incongruent
## 3 455 congruent
## 4 376 congruent
## 5 626 incongruent
```
Notice that this tibble contains the data in a tidy format, i.e., each row contains a pair of associated measurements.
We want to explain or predict the variable `RT` in terms of the variable `condition`.
The variable `RT` is a metric measurement.
But the variable `condition` is a categorical variable with two category levels.
Before we head on, let’s look at the data (again).
Here’s a visualization of the distribution of RTs in each condition:
```
data_ST_excerpt %>%
ggplot(aes(x = condition, y = RT, color = condition, fill = condition)) +
geom_violin() +
theme(legend.position = "none")
```
The means for both conditions are:
```
data_ST_excerpt %>%
group_by(condition) %>%
summarize(mean_RT = mean(RT))
```
```
## # A tibble: 2 × 2
## condition mean_RT
## <chr> <dbl>
## 1 congruent 453.
## 2 incongruent 477.
```
The difference between the means of conditions is:
```
data_ST_excerpt %>% filter(condition == "incongruent") %>% pull(RT) %>% mean() -
data_ST_excerpt %>% filter(condition == "congruent") %>% pull(RT) %>% mean()
```
```
## [1] 23.63348
```
While numerically this difference seems high, the question remains whether this difference is, say, big enough to earn our trust.
We address this question here using posterior inference based on a regression model.
Notice that we simply use the same formula syntax as before: we want a model that explains `RT` in terms of `condition`.
```
fit_brms_ST <- brm(
formula = RT ~ condition,
data = data_ST_excerpt
)
```
Let’s inspect the summary information for the posterior samples, which we do here using the `summary` function for the `brms_fit` object from which we extract information only about the `fixed` effects, showing the mean (here called “Estimate”) and indicators of the lower and upper 95% inner quantile.
```
summary(fit_brms_ST)$fixed[,c("l-95% CI", "Estimate", "u-95% CI")]
```
```
## l-95% CI Estimate u-95% CI
## Intercept 450.91698 452.87440 454.7599
## conditionincongruent 21.02123 23.66309 26.2741
```
We see that the model inferred a value for an “Intercept” variable and for another variable called “conditionincongruent”.
What are these?
If you look back at the empirically inferred means, you will see that the mean estimate for “Intercept” corresponds to the mean of RTs in the “congruent” condition and that the mean estimate for the variable “conditionincongruent” closely matches the computed difference between the means of the two conditions.
And, indeed, that is what this regression model is doing for us.
Using a uniform formula syntax, `brms` has set up a regression model in which a predictor, given as a character (string) column, was internally coerced somehow into a format that produced an estimate for the mean of one condition and an estimate for the difference between conditions.
How do these results come about?
And why are the variables returned by `brms` called “Intercept” and “conditionincongruent”?
In order to use the simple linear regression model, the categorical predictor \\(x\\) has been coded as either \\(0\\) or \\(1\\).
Concretely, `brms` has introduced a new predictor variable, call it `new_predictor`, which has value \\(0\\) for the “congruent” condition and \\(1\\) for the “incongruent” condition.
By default, `brms` chooses the level that is alphanumerically first as the so\-called **reference level**, assigning to it the value \\(0\\).
Here, that’s “congruent”.
The result would look like this:
```
data_ST_excerpt %>%
mutate(new_predictor = ifelse(condition == "congruent", 0, 1)) %>%
head(5)
```
```
## # A tibble: 5 × 3
## RT condition new_predictor
## <dbl> <chr> <dbl>
## 1 735 incongruent 1
## 2 557 incongruent 1
## 3 455 congruent 0
## 4 376 congruent 0
## 5 626 incongruent 1
```
Now, with this new numeric coding of the predictor, we can calculate the linear regression model as usual:
\\\[
\\begin{aligned}
\\xi\_i \& \= \\beta\_0 \+ \\beta\_1 x\_i \& y\_i \& \\sim \\text{Normal}(\\mu \= \\xi\_i, \\sigma)
\\end{aligned}
\\]
As a consequence, the linear model’s intercept parameter \\(\\beta\_0\\) can be interpreted as the predicted mean of the reference level: if for some \\(i\\) we have \\(x\_i \= 0\\), then the linear predictor is just \\(\\xi\_i \= \\beta\_0\\), so the intercept \\(\\beta\_0\\) will be fitted to the mean of the reference level. If for some \\(i\\) we have \\(x\_i \= 1\\) instead, the predicted value is computed as \\(\\xi\_i \= \\beta\_0 \+ \\beta\_1\\), so that the slope term \\(\\beta\_1\\) effectively plays the role of the difference \\(\\delta\\) between the group means.
The upshot is that we can conceive of a **\\(t\\)\-test as a special case of a linear regression model!**
Schematically, we can represent this coding scheme for coefficients like so:
```
## # A tibble: 2 × 3
## condition x_0 x_1
## <chr> <dbl> <dbl>
## 1 congruent 1 0
## 2 incongruent 1 1
```
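To see this coding scheme at work without a full Bayesian fit, here is a hedged sketch (an addition to the text) using base R’s `lm`, whose maximum\-likelihood estimates should be very close to the posterior means reported above:
```
# 0/1 coding by hand: the intercept recovers the mean RT of the reference
# level ('congruent'), the slope recovers the difference between conditions
data_ST_excerpt %>%
  mutate(new_predictor = ifelse(condition == "congruent", 0, 1)) %>%
  lm(RT ~ new_predictor, data = .) %>%
  coef()
```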
**Exercise 14\.1**
For the given data below, compute (or guess) the MLEs of the regression coefficients. Choose the appropriate 0/1 encoding of group information.
We have two groups, and three measurements of \\(y\\) for each:
groupA: (1,0,2\) and groupB: (10,13,7\)
Solution
For \\(\\xi\_i \= \\beta\_0 \+ \\beta\_1 x\_i\\), let \\(x\_i \=0\\) if the data point is from groupA and \\(x\_i\=1\\) if it’s from groupB. Then the mean of groupA is computed by the intercept \\(\\mu\_A \= \\beta\_0\\) and the mean of groupB is computed as the sum of the intercept and the slope \\(\\mu\_B \= \\beta\_0 \+ \\beta\_1\\). Since \\(\\mu\_A \= 1\\) and \\(\\mu\_B \= 10\\), we can guess that \\(\\beta\_0 \= 1\\) and \\(\\beta\_1 \= 10 \- 1 \= 9\\).
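A minimal numerical check of this solution with base R’s `lm` (using the data given in the exercise):
```
# groupA: (1, 0, 2); groupB: (10, 13, 7); 0/1 coding with groupA as reference
y_ex <- c(1, 0, 2, 10, 13, 7)
x_ex <- c(0, 0, 0, 1, 1, 1)
coef(lm(y_ex ~ x_ex))  # intercept = 1 (mean of A), slope = 9 (difference B - A)
```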
14\.2 Single multi\-level predictor
-----------------------------------
The 0/1 coding scheme above works fine for a single categorical predictor value with two levels.
It is possible to use linear regression also for categorical predictors with more than two levels.
Only, in that case, there are quite a few more reasonable **contrast coding** schemes, i.e., ways to choose numbers to encode the levels of the predictor.
The [mental chronometry data](app-93-data-sets-mental-chronometry.html#app-93-data-sets-mental-chronometry) has a single categorical predictor, called `block`, with three levels, called “reaction”, “goNoGo” and “discrimination”.
We are interested in regressing reaction times, stored in variable `RT`, against `block`.
Our main question of interest is whether these inequalities are supported by the data:
\\\[
\\text{RT in 'reaction'} \<
\\text{RT in 'goNoGo'} \<
\\text{RT in 'discrimination'}
\\]
So we are interested in the \\(\\delta\\)s, so to speak, between ‘reaction’ and ‘goNoGo’ and between ‘discrimination’ and ‘goNoGo’.
Let’s consider only the data relevant for our current purposes:
```
# select the relevant columns
data_MC_excerpt <- aida::data_MC_cleaned %>%
select(RT, block)
# show the first couple of lines
data_MC_excerpt %>%
head(5)
```
```
## # A tibble: 5 × 2
## RT block
## <dbl> <ord>
## 1 311 reaction
## 2 269 reaction
## 3 317 reaction
## 4 325 reaction
## 5 240 reaction
```
Here are the means of the reaction times for different `block` levels:
```
data_MC_excerpt %>%
group_by(block) %>%
summarize(mean_RT = mean(RT))
```
```
## # A tibble: 3 × 2
## block mean_RT
## <ord> <dbl>
## 1 reaction 300.
## 2 goNoGo 427.
## 3 discrimination 488.
```
And here is a plot of the distribution of measurements in each block:
To fit this model with `brms`, we need a simple function call with the formula `RT ~ block` that precisely describes what we are interested in, namely explaining reaction times as a function of the experimental condition:
```
fit_brms_mc <- brm(
formula = RT ~ block,
data = data_MC_excerpt
)
```
To inspect the posterior fits of this model, we can extract the relevant summary statistics as before:
```
summary(fit_brms_mc)$fixed[,c("l-95% CI", "Estimate", "u-95% CI")]
```
```
## l-95% CI Estimate u-95% CI
## Intercept 400.98876 404.90896 408.87119
## block.L 127.21126 132.80967 138.33805
## block.Q -34.71398 -27.28713 -19.86181
```
Notice that there is an intercept term, as before.
This corresponds to the mean reaction time of the reference level, which is again set based on alphanumeric ordering, so corresponding to “discrimination”.
There are two slope coefficients, one for the difference between the reference level and “goNoGo” and another for the difference between the reference level and the “reaction” condition.
Both slope coefficients are estimated to be credibly negative, suggesting that the “discrimination” condition indeed had the highest mean reaction times.
This answers one half of the comparisons we are interested in:
\\\[
\\text{RT in 'reaction'} \<
\\text{RT in 'goNoGo'} \<
\\text{RT in 'discrimination'}
\\]
Unfortunately, it is not directly possible to read off information about the second comparison we care about, namely the comparison between “reaction” and “goNoGo”.
And here is where we see the point of **contrast coding** pop up for the first time.
We would like to encode predictor levels ideally in such a way that we can read off (test) the hypotheses we care about directly.
In other words, if possible, we would like to have parameters in our model in the form of slope coefficients, which directly encode the \\(\\delta\\)s, so to speak, that we want to test.[65](#fn65)
In the case at hand, all we need to do is change the reference level.
If the reference level is the “middle category” (as per our ordered hypothesis), the two slopes will express the contrasts we care about.
To change the reference level, we only need to make `block` a factor and order its levels manually, like so:
```
data_MC_excerpt <- data_MC_excerpt %>%
mutate(block_reordered = factor(block, levels = c("goNoGo", "reaction", "discrimination")))
```
We then run another Bayesian regression model, regressing `RT` against `block_reordered`.
```
fit_brms_mc_reordered <- brm(
formula = RT ~ block_reordered,
data = data_MC_excerpt
)
```
And inspect the summary of the posterior samples for the relevant coefficients:
```
summary(fit_brms_mc_reordered)$fixed[,c("l-95% CI", "Estimate", "u-95% CI")]
```
```
## l-95% CI Estimate u-95% CI
## Intercept 401.16332 404.89229 408.64559
## block_reordered.L 35.94682 42.79215 49.73385
## block_reordered.Q 122.74315 128.61676 134.46136
```
Now the “Intercept” corresponds to the new reference level “goNoGo”.
And the two slope coefficients give the differences to the other two levels.
Which numeric encoding leads to this result?
In formulaic terms, we have three coefficients \\(\\beta\_0, \\dots, \\beta\_2\\).
The predicted mean value for observation \\(i\\) is \\(\\xi\_i \= \\beta\_0 \+ \\beta\_1 x\_{i1} \+ \\beta\_2 x\_{i2}\\).
We assign numeric value \\(1\\) for predictor \\(x\_1\\) when the observation is from the “reaction” block.
We assign numeric value \\(1\\) for predictor \\(x\_2\\) when the observation is from the “discrimination” block.
Schematically, what we now have is:
```
## # A tibble: 3 × 4
## block x_0 x_1 x_2
## <chr> <dbl> <dbl> <dbl>
## 1 goNoGo 1 0 0
## 2 reaction 1 1 0
## 3 discrimination 1 0 1
```
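A hedged sketch (an addition to the text) that builds these two 0/1 predictors by hand and fits an ordinary linear model; the intercept should recover the mean RT of the reference level “goNoGo” and the two slopes the differences to the other two blocks:
```
# manual 0/1 coding with 'goNoGo' as the reference level
data_MC_excerpt %>%
  mutate(
    x_1 = ifelse(block == "reaction", 1, 0),
    x_2 = ifelse(block == "discrimination", 1, 0)
  ) %>%
  lm(RT ~ x_1 + x_2, data = .) %>%
  coef()
```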
As we may have expected, the 95% inter\-quantile range for both slope coefficients (which, given the amount of data we have, is almost surely almost identical to the 95% HDI) does not include 0 by a very wide margin.
We could therefore conclude that, based on a Bayesian approach to hypothesis testing in terms of posterior estimation, the reaction times of conditions are credibly different.
The coding of levels in terms of a reference level is called *treatment coding*, or also *dummy coding*.
The video included at the beginning of this chapter discusses further contrast coding schemes, and also shows in more detail how a coding scheme translates into “directly testable” hypotheses.
**Exercise 14\.2**
Suppose that there are three groups, A, B, and C as levels of your predictor. You want the regression intercept to be the mean of group A. You want the first slope to be the difference between the means of group B and group A. And, you want the second slope to be the difference between the mean of C and B. How do you numerically encode these contrasts in terms of numeric predictor values?
Solution
Schematically, like this:
```
## # A tibble: 3 × 4
## group x_0 x_1 x_2
## <chr> <dbl> <dbl> <dbl>
## 1 A 1 0 0
## 2 B 1 1 0
## 3 C 1 1 1
```
As group A is the reference category, \\(\\beta\_0\\) expresses the mean of group A. The mean of group B is \\(\\beta\_0 \+ \\beta\_1\\), so we need \\((x\_{i1} \=1 , x\_{i2} \= 0\)\\) for any \\(i\\) which is of group B. In the treatment coding from the text above, the mean of group C would be given by \\(\\beta\_0 \+ \\beta\_2\\). However, the value we need now is \\(\\beta\_0 \+ \\beta\_1 \+ \\beta\_2\\), so \\((x\_{i1} \=1 , x\_{i2} \= 1\)\\).
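A quick numerical check of this scheme with hypothetical, made\-up data (group means of 2, 11 and 21 for A, B and C):
```
# hypothetical data with group means 2 (A), 11 (B), 21 (C)
d_ex <- tibble(
  group = rep(c("A", "B", "C"), each = 2),
  y     = c(1, 3, 10, 12, 20, 22)
) %>%
  mutate(
    x_1 = ifelse(group %in% c("B", "C"), 1, 0),
    x_2 = ifelse(group == "C", 1, 0)
  )
coef(lm(y ~ x_1 + x_2, data = d_ex))
# intercept = 2 (mean A), x_1 = 9 (mean B - mean A), x_2 = 10 (mean C - mean B)
```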
14\.3 Multiple predictors
-------------------------
Factorial designs, which have more than one categorical predictor variable, are common in experimental psychology.
Any contrast coding scheme usable for encoding a single categorical predictor can, in principle, also be used when there are multiple categorical predictors.
But having multiple categorical predictors also requires some additional considerations relating to how (the model assumes that) different predictors might or might not interact with one another.
Here is an informal example.
Suppose that we have metric measurements of how tasty a snack is perceived to be.
There are two categorical factors that we want to use to predict the average tastiness of a snack.
The first predictor is `mayo` and we encode it numerically as: 0 if the dish does not contain mayonnaise and 1 if it does.
The second predictor is `chocolate` and we encode it similarly as: 0 if the dish does not contain chocolate and 1 if it does.
Suppose we estimate these two slope coefficients (one for `mayo` and one for `chocolate`) for our imaginary data set and find that both are credibly positive.
That means that there is reason to believe that, all else equal, when we find `mayo` in a snack we may expect it to be rated as more tasty, and, all else equal, when we find `chocolate` in a snack we may also expect it to be rated as more tasty.[66](#fn66)
But what about a dish with *both* `mayo` *and* `chocolate`?
Maybe we can agree to assume for the sake of argument that, on average, snacks containing both `mayo` and `chocolate` are *not* rated as tasty at all.
Or, at least, we might want to include in our model the possibility that the combination of `mayo` and `chocolate` has a different effect than the sum of the contributions of (i) `mayo` on its own and (ii) `chocolate` on its own.
That is why, when we have multiple categorical predictors, we also often want to include yet another type of slope coefficient, so\-called **interaction terms**, that capture how the combination of different factor levels from different categorical predictors, well, interact.
If you like a more precise characterization at this moment already (although an example below will make things hopefully much clearer), we could say that, in the context of a linear regression model, an interaction between levels of several predictors is a (potential) deviation from the sum of all of the additive effects of the individual predictor levels in isolation.
To make this more precise, let us consider the example of the [politeness data](app-93-data-sets-politeness.html#app-93-data-sets-politeness).[67](#fn67)
The to\-be\-predicted data are measurements of voice pitch in a \\(2 \\times 2\\) factorial design, with factors `gender` and `context`.
The factor `gender` has (sadly only) two levels: “male” and “female”.
The factor `context` has two levels, namely “informal” for informal speech situations and “polite” for polite speech situations.
Let us first load the data \& inspect it.
```
politeness_data <- aida::data_polite
politeness_data %>% head(5)
```
```
## # A tibble: 5 × 5
## subject gender sentence context pitch
## <chr> <chr> <chr> <chr> <dbl>
## 1 F1 F S1 pol 213.
## 2 F1 F S1 inf 204.
## 3 F1 F S2 pol 285.
## 4 F1 F S2 inf 260.
## 5 F1 F S3 pol 204.
```
The research hypotheses of interest are:
1. **H1: (gender)**: the voice pitch of male speakers is lower than that of female speakers;
2. **H2: (context)**: the voice pitch of speech in polite contexts is lower than in informal contexts; and
3. **H3: (interaction)**: the effect of context (\= the difference of voice pitch between polite and informal context; as mentioned in the second hypothesis) is larger for female speakers than for male speakers.
The first two hypotheses are statements related to what is often called **main effects**, namely differences between levels of a single categorical predictor, averaging over all levels of any other categorical predictor.
Consequently, we could also rephrase this as saying: “We expect a main effect of gender (H1\) and a main effect of context (H2\)”, thereby omitting only the direction of the difference between the respective factor levels.
The third hypothesis is a more convoluted formulation about the interaction of the two categorical predictors.
To understand hypotheses about main effects and interactions better, at least in the easiest case of a \\(2 \\times 2\\) factorial design, it is useful to consider stylized diagrams, like in Figure [14\.1](Chap-04-03-predictors-multiple-predictors.html#fig:04-03-2x2-hypotheses), which show how the data would look if main effects or various interaction relations are present or absent.
Concretely, the panels in Figure [14\.1](Chap-04-03-predictors-multiple-predictors.html#fig:04-03-2x2-hypotheses) depict the following types of situations:
* **A**: no main effect (neither gender nor context) and no interaction;
* **B**: main effect of gender, no main effect of context and no interaction;
* **C**: main effect of context, no main effect of gender and no interaction;
* **D**: main effects of both context and gender but no interaction;
* **E**: main effects of both context and gender with an interaction amplifying the strength of the main effect of context for the female category; and
* **F**: as in E but with a different kind of interaction (effect reversal).
Notice that the type of situation shown in panel E is the expectation derivable from the conjunction of the hypotheses H1\-H3 formulated above: we predict/expect main effects for both predictors (in the direction shown in panel E) and we expect the effect of context to be stronger for female speakers than for male speakers.
Figure 14\.1: Schematic representation of the presence/absence of main effects and (different kinds of) interactions. The situations shown are as follows: A: no main effect (neither gender nor context) and no interaction; B: main effect of gender only w/ no interaction; C: main effect of context only w/ no interaction; D: main effects of both context and gender but no interaction; E: main effects of both context and gender with an interaction amplifying the strength of the main effect of context for the female category (this is the situation envisaged by hypotheses 1\-3 from the main text); F: as in E but with a different kind of interaction (effect reversal).
Let us now take a look at the actual data:
Judging from visual inspection, we might say that the empirical data most resembles panel D in Figure [14\.1](Chap-04-03-predictors-multiple-predictors.html#fig:04-03-2x2-hypotheses).
It looks as if there might be a rather strong effect of gender.
The measurements in the female category seem (on average) higher than in the male category.
Also, there might well be a main effect of context.
Probably the voice pitch in informal contexts is higher than in polite contexts, but we cannot be as sure as for a potential main effect of gender.
It is very difficult to discern whether the data supports the hypothesized interaction.
In the following, we are therefore going to test these hypotheses (more or less directly) with two different kinds of coding schemes: treatment coding and sum coding.
### 14\.3\.1 Treatment coding
In a \\(2 \\times 2\\) factorial design there are essentially four combinations of factor levels (so\-called **design cells**).
For the politeness data, these are female speakers in informal contexts, female speakers in polite contexts, male speakers in informal contexts and male speakers in polite contexts.
Different coding schemes exist by means of which different comparisons of means of design cells (or single factors) can be probed.
A simple coding scheme for differences in our \\(2 \\times 2\\) design is shown in Figure [14\.2](Chap-04-03-predictors-multiple-predictors.html#fig:Chap-04-02-beyond-simple-regression-factorial-coefficients).
This is a straightforward extension of *treatment coding* for the single predictors introduced previously which additionally includes a potential interaction.
Figure 14\.2: Regression coefficients for a factorial design (using so\-called ‘treatment coding’).
The coding scheme in Figure [14\.2](Chap-04-03-predictors-multiple-predictors.html#fig:Chap-04-02-beyond-simple-regression-factorial-coefficients) considers the cell “female\+informal” as the reference level and therefore models its mean as intercept \\(\\beta\_0\\).
We then have a slope term \\(\\beta\_{\\text{pol}}\\) which encodes the difference between female pitch in informal and female pitch in polite contexts.
Analogous reasoning holds for \\(\\beta\_{\\text{male}}\\).
Finally, we also include a so\-called **interaction term**, denoted as \\(\\beta\_{\\text{pol\&male}}\\) in Figure [14\.2](Chap-04-03-predictors-multiple-predictors.html#fig:Chap-04-02-beyond-simple-regression-factorial-coefficients).
The interaction term quantifies how much a change away from the reference level in both variables differs from the sum of unilateral changes.
Another way of describing what the interaction term \\(\\beta\_{\\text{pol\&male}}\\) captures is that it represents the difference in the effect that the manipulation of context has on female as opposed to male speakers.
To see this, notice that the “extent of the effect of context”, i.e., the decrease in pitch between informal and polite contexts, for female speakers is:
\\\[
\\text{eff\_context\_on\_female} \= \\beta\_0 \- (\\beta\_0 \+ \\beta\_\\text{pol}) \= \- \\beta\_\\text{pol}
\\]
The bigger this number, the larger, so to speak, “the effect of context on female speakers”.
The effect of context on male speakers’ pitch is correspondingly:
\\\[
\\text{eff\_context\_on\_male} \= (\\beta\_0 \+ \\beta\_\\text{male}) \- (\\beta\_0 \+ \\beta\_\\text{male} \+ \\beta\_{\\text{pol}} \+ \\beta\_\\text{pol\&male}) \= \- \\beta\_{\\text{pol}} \- \\beta\_\\text{pol\&male}
\\]
Therefore, the difference in the effect of context between female and male speakers is:
\\\[\\text{eff\_context\_on\_female} \- \\text{eff\_context\_on\_male} \= \\beta\_\\text{pol\&male}\\]
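To make the mapping from coefficients to design cells concrete, here is a minimal sketch (not part of the original analysis; the toy factor levels and their ordering are assumptions chosen so that “female\+informal” is the reference cell) which shows the design matrix R constructs under default treatment coding:
```
# sketch: design matrix under default treatment coding for a 2x2 design
toy_design <- expand.grid(
  gender  = factor(c("F", "M"), levels = c("F", "M")),
  context = factor(c("inf", "pol"), levels = c("inf", "pol"))
)
model.matrix(~ gender * context, data = toy_design)
```
Each row of this matrix shows which of the four coefficients contribute to the predicted mean of the corresponding design cell.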
How do these model coefficients help address the research hypotheses we formulated above?
The interaction term \\(\\beta\_\\text{pol\&male}\\) directly relates to hypothesis 3 above, namely that the context\-effect is larger for female speakers than for male speakers.
In other words, we can express H3 as the parameter\-based hypothesis that:
\\\[\\textbf{H3: (interaction)} \\ \\ \\ \\ \\beta\_\\text{pol\&male} \> 0\\]
The other two hypotheses are not directly expressible as a statement involving a single coefficient.
But they can be expressed as a complex hypothesis involving more than one coefficient of the model.
Hypothesis H1 states that the pitch of male speakers (averaging over context types) is lower than that of female speakers (averaging over context types).
This translates directly into the following statement (where the LHS/RHS is the average pitch of male/female speakers):
\\\[
\\frac{1}{2} (\\beta\_0 \+ \\beta\_\\text{male} \+ \\beta\_0 \+ \\beta\_\\text{male} \+ \\beta\_\\text{pol} \+ \\beta\_\\text{pol\&male}) \<
\\frac{1}{2} (\\beta\_0 \+ \\beta\_0 \+ \\beta\_\\text{pol})
\\]
This can be simplified to:
\\\[
\\textbf{H1: (gender)} \\ \\ \\ \\ \\beta\_\\text{male} \+ \\frac{1}{2} \\beta\_\\text{pol\&male} \< 0
\\]
Similar reasoning leads to the following formulation of hypothesis H2 concerning a main effect of factor context:
\\\[
\\textbf{H2: (context)} \\ \\ \\ \\ \\beta\_\\text{pol} \+ \\frac{1}{2} \\beta\_\\text{pol\&male} \< 0
\\]
To test these hypotheses, we can fit a regression model with this coding scheme using the formula `pitch ~ gender * context`.
Importantly, the star `*` between the explanatory variables `gender` and `context` indicates that we also want to include the interaction term.[68](#fn68)
```
fit_brms_politeness <- brm(
# model 'pitch' as a function of 'gender' and 'context',
# also including the interaction between `gender` and `context`
formula = pitch ~ gender * context,
data = politeness_data
)
```
The output below lists Bayesian summary statistics for the (marginal) posteriors of the model parameters indicated in Figure [14\.2](Chap-04-03-predictors-multiple-predictors.html#fig:Chap-04-02-beyond-simple-regression-factorial-coefficients).
```
summary(fit_brms_politeness)$fixed[,c("l-95% CI", "Estimate", "u-95% CI")]
```
```
## l-95% CI Estimate u-95% CI
## Intercept 245.65872 261.02993 276.800178
## genderM -138.41661 -116.53009 -93.764509
## contextpol -49.44973 -27.69013 -4.936747
## genderM:contextpol -15.79220 16.16356 46.823184
```
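Before turning to formal hypothesis tests, here is a quick arithmetic sanity check (a sketch based on the rounded posterior means shown above, not part of the original text): the composite quantities relevant for H1 and H2 under treatment coding can be computed directly and should match the estimates reported by `brms::hypothesis` below.
```
# sketch: composite quantities for H1 and H2 from the posterior means above
beta_male     <- -116.53009
beta_pol      <-  -27.69013
beta_pol_male <-   16.16356
beta_male + 0.5 * beta_pol_male   # H1 quantity, roughly -108.45
beta_pol  + 0.5 * beta_pol_male   # H2 quantity, roughly -19.61
```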
The function `brms::hypothesis` can test the relevant hypotheses based on the fitted model object stored in `fit_brms_politeness`.
Starting with H1, we find very strong support for a main effect of gender:
```
brms::hypothesis(fit_brms_politeness, "genderM + 0.5 * genderM:contextpol < 0")
```
```
## Hypothesis Tests for class b:
## Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (genderM+0.5*gend... < 0 -108.45 8.13 -121.85 -94.97 Inf
## Post.Prob Star
## 1 1 *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
```
As for H2, we also find very strong evidence in support of a belief in a main effect of context:
```
brms::hypothesis(fit_brms_politeness, "contextpol + 0.5 * genderM:contextpol < 0")
```
```
## Hypothesis Tests for class b:
## Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (contextpol+0.5*g... < 0 -19.61 8.13 -33.14 -6.24 128.03
## Post.Prob Star
## 1 0.99 *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
```
In contrast, based on the data and the model, there is at best very mildly suggestive evidence in favor of the third hypothesis according to which female speakers are more susceptible to pitch differences induced by different context types.
```
brms::hypothesis(fit_brms_politeness, "genderM:contextpol > 0")
```
```
## Hypothesis Tests for class b:
## Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (genderM:contextpol) > 0 16.16 15.94 -9.44 41.91 5.71
## Post.Prob Star
## 1 0.85
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
```
We can interpret this as saying that, given model and data, it is plausible to think that male speakers had lower voice pitch than female speakers (averaging over both context types).
We may also conclude that given model and data, it is plausible to think that voice pitch was lower in polite contexts than informal contexts (averaged over both levels of factor `gender`).
The posterior of the interaction term `genderM:contextpol` gives us no indication that 0, or any value near it, is implausible.
This can be interpreted as saying that there is no indication, given model and data, to believe that male speakers’ voice pitch changes differently from informal to polite contexts than female speakers’ voice pitch does.
**Exercise 14\.3**
Based on the estimate given above, what is the mean estimate for male speakers speaking in informal contexts?
Solution
The mean estimate for male speakers speaking in informal contexts is given by \\(\\beta\_0 \+\\beta\_{\\text{male}} \= 261\.02993 \-116\.53009 \\approx 144\\).
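Extending this calculation, here is a small sketch (using the rounded posterior means reported above) that reconstructs all four design-cell means under treatment coding:
```
# sketch: design-cell means from the posterior means reported above
b_0        <- 261.02993
b_male     <- -116.53009
b_pol      <-  -27.69013
b_pol_male <-   16.16356
c(
  female_informal = b_0,
  female_polite   = b_0 + b_pol,
  male_informal   = b_0 + b_male,
  male_polite     = b_0 + b_male + b_pol + b_pol_male
)
```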
### 14\.3\.2 Sum coding
Treatment coding allowed us to *directly* test H3 in terms of a single coefficient, but hypotheses about so\-called “main effects” (H1 and H2\) cannot be directly read off a single coefficient’s posterior.
As hypotheses about main effects are natural and common in experimental psychology, another coding scheme is very popular, namely **sum coding**.[69](#fn69)
Figure [14\.3](Chap-04-03-predictors-multiple-predictors.html#fig:Chap-04-03-coefficients-sum-coding) shows how the mean of each design cell in our \\(2\\times2\\) design is expressed in terms of four regression coefficients.
Parameter \\(\\beta\_0\\) is called “intercept” as usual, but encodes the so\-called **grand mean**, i.e. the mean value of all data observations.
To see this, just sum all of the four terms in Figure [14\.3](Chap-04-03-predictors-multiple-predictors.html#fig:Chap-04-03-coefficients-sum-coding) and divide by 4: the result is \\(\\beta\_0\\).
The parameters \\(\\beta\_\\text{male}\\) and \\(\\beta\_\\text{pol}\\) are slope coefficients, but they now encode the deviation from the grand mean.
For example, \\(\\beta\_\\text{male}\\) encodes the difference between (i) the average pitch of all measurements taken from male participants and (ii) the grand mean.
Finally, the interaction coefficient \\(\\beta\_\\text{pol\&male}\\) serves the same function as in treatment coding, namely to make room for a difference in the strength of the effect of one predictor, e.g., context, across the levels of the other predictor, e.g., gender.
Figure 14\.3: Regression coefficients for a factorial design (using so\-called ‘sum coding’).
It is then clear that under sum coding, the hypotheses H1 and H2, which target main effects, can be straightforwardly stated as inequalities concerning single coefficients, namely:
\\\[
\\textbf{H1: (gender)} \\ \\ \\ \\ \\beta\_\\text{male} \< 0
\\]
\\\[
\\textbf{H2: (context)} \\ \\ \\ \\ \\beta\_\\text{pol} \< 0
\\]
What is less obvious is that the interaction term, as defined under sum coding, still directly expresses the interaction hypothesis H3\.
To see this, calculate as before:
\\\[
\\begin{align\*}
\& \\text{eff\_context\_on\_female} \\\\
\& \= (\\beta\_0 \- \\beta\_\\text{male} \- \\beta\_\\text{pol} \+ \\beta\_\\text{pol\&male}) \- (\\beta\_0 \- \\beta\_\\text{male} \+ \\beta\_\\text{pol} \- \\beta\_\\text{pol\&male}) \\\\
\& \= \- 2 \\beta\_\\text{pol} \+ 2 \\beta\_\\text{pol\&male}
\\end{align\*}
\\]
The effect of context on male speakers’ pitch is:
\\\[
\\begin{align\*}
\& \\text{eff\_context\_on\_male} \\\\
\& \= (\\beta\_0 \+ \\beta\_\\text{male} \- \\beta\_\\text{pol} \- \\beta\_\\text{pol\&male}) \- (\\beta\_0 \+ \\beta\_\\text{male} \+ \\beta\_\\text{pol} \+ \\beta\_\\text{pol\&male}) \\\\
\& \= \- 2 \\beta\_\\text{pol} \- 2 \\beta\_\\text{pol\&male}
\\end{align\*}
\\]
Consequently, the difference in the effect of context between female and male speakers under sum coding is expressed as:
\\\[\\text{eff\_context\_on\_female} \- \\text{eff\_context\_on\_male} \= 4 \\beta\_\\text{pol\&male}\\]
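To double-check this bit of algebra numerically, here is a small sketch with arbitrary, made-up coefficient values (not estimates from the data), encoding the sum-coded cell means from Figure 14\.3:
```
# sketch: verify the sum-coding algebra with arbitrary coefficient values
b_0 <- 200; b_male <- -50; b_pol <- -10; b_pol_male <- 4
# cell mean as a function of the +/-1 codes for 'male' and 'polite'
cell <- function(male, pol) {
  b_0 + male * b_male + pol * b_pol + male * pol * b_pol_male
}
eff_context_on_female <- cell(-1, -1) - cell(-1, +1)  # informal minus polite
eff_context_on_male   <- cell(+1, -1) - cell(+1, +1)  # informal minus polite
(eff_context_on_female - eff_context_on_male) == 4 * b_pol_male  # TRUE
```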
To implement sum coding for use in `brms`, R provides the functions `contrasts` and `contr.sum`.
Here is an example.
```
# make predictors 'factors' b/c that's required for contrast coding
# also: change order to match coding assumed in the main text
data_polite <- aida::data_polite %>%
mutate(
gender = factor(gender, levels = c('M', 'F')),
context = factor(context, levels = c('pol', 'inf'))
)
# apply 'sum' contrasts
contrasts(data_polite$context) <- contr.sum(2)
contrasts(data_polite$gender) <- contr.sum(2)
# add intelligible name to the new contrast coding
colnames(contrasts(data_polite$context)) <- ":polite"
colnames(contrasts(data_polite$gender)) <- ":male"
# run brms as usual
fit_brms_politeness_sum <- brm(
pitch ~ gender * context,
data_polite
)
```
We can inspect the coefficients as usual:
```
summary(fit_brms_politeness_sum)$fixed[, c("l-95% CI", "Estimate" ,"u-95% CI")]
```
```
## l-95% CI Estimate u-95% CI
## Intercept 185.021259 192.952242 200.709895
## gender:male -61.627917 -54.157114 -46.267964
## context:polite -17.512988 -9.639643 -1.855826
## gender:male:context:polite -3.851784 3.870962 11.573682
```
The summary statistics for the posterior already directly address all three hypotheses in question, but we should compare our previous results to the full results of using `brms::hypothesis` also for the sum\-coded analysis.
```
# testing H1
brms::hypothesis(fit_brms_politeness_sum, "gender:male < 0")
```
```
## Hypothesis Tests for class b:
## Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
## 1 (gender:male) < 0 -54.16 3.92 -60.56 -47.88 Inf 1
## Star
## 1 *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
```
```
# testing H2
brms::hypothesis(fit_brms_politeness_sum, "context:polite < 0")
```
```
## Hypothesis Tests for class b:
## Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (context:polite) < 0 -9.64 3.97 -16.08 -3.05 113.29
## Post.Prob Star
## 1 0.99 *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
```
```
# testing H3
brms::hypothesis(fit_brms_politeness_sum, "gender:male:context:polite > 0")
```
```
## Hypothesis Tests for class b:
## Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (gender:male:cont... > 0 3.87 3.93 -2.64 10.24 5.33
## Post.Prob Star
## 1 0.84
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
```
Since we did not specify any (informative) priors, which could have altered results slightly between the treatment\- and sum\-coded regression models, we find (modulo sampling imprecision) the same “evidence ratios” and posterior probabilities for these hypotheses.
The overall conclusions are therefore exactly the same: evidence for both main effects; no evidence for an interaction.
15\.2 Logistic regression
-------------------------
Suppose \\(y \\in \\{0,1\\}^n\\) is a vector of \\(n\\) binary outcomes, and \\(X\\) a predictor matrix for a linear regression model.
A Bayesian logistic regression model has the following form:
\\\[
\\begin{align\*}
\\beta \& \\sim \\text{some prior} \\\\
\\xi \& \= X \\beta \&\& \\text{\[linear predictor]} \\\\
\\eta\_i \& \= \\text{logistic}(\\xi\_i) \&\& \\text{\[predictor of central tendency]} \\\\
y\_i \& \\sim \\text{Bernoulli}(\\eta\_i) \&\& \\text{\[likelihood]} \\\\
\\end{align\*}
\\]
The logistic function, which serves as the (inverse) link function, is a function in \\(\\mathbb{R} \\rightarrow \[0;1]\\), i.e., from the reals to the unit interval.
It is defined as:
\\\[\\text{logistic}(\\xi\_i) \= (1 \+ \\exp(\-\\xi\_i))^{\-1}\\]
Its shape is a sigmoid (an S\-shaped curve).
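A minimal sketch for plotting it (assuming `ggplot2` is loaded; this is not the book’s original plotting code):
```
# sketch: plot the logistic function over a range of linear-predictor values
logistic <- function(xi) 1 / (1 + exp(-xi))
ggplot(data.frame(xi = c(-6, 6)), aes(x = xi)) +
  stat_function(fun = logistic) +
  labs(x = "linear predictor", y = "logistic value")
```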
We use the [Simon task data](app-93-data-sets-simon-task.html#app-93-data-sets-simon-task) as an example application.
So far we only tested the first of two hypotheses about the Simon task data, namely the hypothesis relating to reaction times.
The second hypothesis which arose in the context of the Simon task refers to the accuracy of answers, i.e., the proportion of “correct” choices:
\\\[
\\text{Accuracy}\_{\\text{correct},\\ \\text{congruent}} \> \\text{Accuracy}\_{\\text{correct},\\ \\text{incongruent}}
\\]
Notice that `correctness` is a binary categorical variable.
Therefore, we use logistic regression to test this hypothesis.
Here is how to set up a logistic regression model with `brms`.
The only thing that is new here is that we specify explicitly the likelihood function and the (inverse!) link function.[70](#fn70)
This is done using the syntax `family = bernoulli(link = "logit")`.
For later hypothesis testing we also use proper priors and take samples from the prior as well.
```
fit_brms_ST_Acc = brm(
# regress 'correctness' against 'condition'
formula = correctness ~ condition,
# specify link and likelihood function
family = bernoulli(link = "logit"),
# which data to use
data = aida::data_ST %>%
# 'reorder' answer categories (making 'correct' the target to be explained)
mutate(correctness = correctness == 'correct'),
# weakly informative priors (slightly conservative)
# for `class = 'b'` (i.e., all slopes)
prior = prior(student_t(1, 0, 2), class = 'b'),
# also collect samples from the prior (for point-valued testing)
sample_prior = 'yes',
# take more than the usual samples (for numerical stability of testing)
iter = 20000
)
```
The Bayesian summary statistics of the posterior samples of values for regression coefficients are:
```
summary(fit_brms_ST_Acc)$fixed[,c("l-95% CI", "Estimate", "u-95% CI")]
```
```
## l-95% CI Estimate u-95% CI
## Intercept 3.1067020 3.2042928 3.3059530
## conditionincongruent -0.8496013 -0.7260651 -0.6050912
```
What do these specific numerical estimates for coefficients mean?
The mean estimate for the linear predictor \\(\\xi\_\\text{cong}\\) for the “congruent” condition is roughly 3\.204\.
The mean estimate for the linear predictor \\(\\xi\_\\text{inc}\\) for the “incongruent” condition is roughly \\(3\.204 \- 0\.726 \\approx 2\.478\\).
The predictors of central tendency corresponding to these linear predictors are:
\\\[
\\begin{align\*}
\\eta\_\\text{cong} \& \= \\text{logistic}(3\.204\) \\approx 0\.961 \\\\
\\eta\_\\text{incon} \& \= \\text{logistic}(2\.478\) \\approx 0\.923
\\end{align\*}
\\]
These central estimates for the latent proportion of “correct” answers in each condition tightly match the empirically observed proportion of “correct” answers in the data:
```
proportions_correct_ST <- aida::data_ST %>%
group_by(condition, correctness) %>%
dplyr::count() %>%
group_by(condition) %>%
mutate(proportion_correct = (n / sum(n)) %>% round(3)) %>%
filter( correctness == "correct") %>%
select(-n, -correctness)
proportions_correct_ST
```
```
## # A tibble: 2 × 2
## # Groups: condition [2]
## condition proportion_correct
## <chr> <dbl>
## 1 congruent 0.961
## 2 incongruent 0.923
```
Testing hypotheses for a logistic regression model works exactly the same way as for a standard regression model.
And so, we find very strong support for hypothesis 2, suggesting that, given model and data, there is reason to believe that the accuracy in incongruent trials is lower than in congruent trials.
```
brms::hypothesis(fit_brms_ST_Acc, "conditionincongruent < 0")
```
```
## Hypothesis Tests for class b:
## Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (conditionincongr... < 0 -0.73 0.06 -0.83 -0.62 Inf
## Post.Prob Star
## 1 1 *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.
```
16\.2 Quantifying evidence against a null\-model with *p*\-values
-----------------------------------------------------------------
All prominent frequentist approaches to statistical hypothesis testing (see Section [16\.1](ch-05-01-frequentist-testing-overview.html#ch-05-01-frequentist-testing-overview)) agree that if empirical observations are sufficiently *un*likely from the point of view of the null hypothesis \\(H\_0\\), this should be treated (in some way or other) as evidence *against* the null hypothesis.
A measure of how unlikely the data is in the light of \\(H\_0\\) is the \\(p\\)\-value.[73](#fn73)
To preview the main definition and intuition (to be worked out in detail hereafter), let’s first consider a verbal and then a mathematical formulation.
**Definition \\(p\\)\-value.** The \\(p\\)\-value associated with observed data \\(D\_\\text{obs}\\) gives the probability, derived from the assumption that \\(H\_0\\) is true, of observing an outcome for the chosen test statistic that is at least as extreme evidence against \\(H\_0\\) as the observed outcome.
Formally, the \\(p\\)\-value of observed data \\(D\_\\text{obs}\\) is:
\\\[
p\\left(D\_{\\text{obs}}\\right) \= P\\left(T^{\|H\_0} \\succeq^{H\_{0,a}} t\\left(D\_{\\text{obs}}\\right)\\right)
\\]
where \\(t \\colon \\mathcal{D} \\rightarrow \\mathbb{R}\\) is a **test statistic** which picks out a relevant summary statistic of each potential data observation, \\(T^{\|H\_0}\\) is the **sampling distribution**, namely the random variable derived from test statistic \\(t\\) and the assumption that \\(H\_0\\) is true, and \\(\\succeq^{H\_{0,a}}\\) is a linear order on the image of \\(t\\) such that \\(t(D\_1\) \\succeq^{H\_{0,a}} t(D\_2\)\\) expresses that test value \\(t(D\_1\)\\) is at least as extreme evidence *against* \\(H\_0\\) as test value \\(t(D\_2\)\\) when compared to an alternative hypothesis \\(H\_a\\).[74](#fn74)
A few aspects of this definition are particularly important (and subsequent text is dedicated to making these aspects more comprehensible):
1. this is a frequentist approach in the sense that probabilities are entirely based on (hypothetical) repetitions of the assumed data\-generating process, which assumes that \\(H\_0\\) is true;
2. the test statistic *t* plays a fundamental role and should be chosen such that:
* it must necessarily select exactly those aspects of the data that matter to our research question,
* it should optimally make it possible to derive a closed\-form (approximation) of \\(T\\),[75](#fn75) and
* it would be desirable (but not necessary) to formulate \\(t\\) in such a way that the comparison relation \\(\\succeq^{H\_{0,a}}\\) coincides with a simple comparison of numbers: \\(t(D\_1\) \\succeq^{H\_{0,a}} t(D\_2\)\\) iff \\(t(D\_1\) \\ge t(D\_2\)\\);
3. there is an assumed data\-generating model buried inside notation \\(T^{\|H\_0}\\); and
4. the notion of “more extreme evidence against \\(H\_0\\)”, captured in comparison relation \\(\\succeq^{H\_{0,a}}\\) depends on our epistemic purposes, i.e., what research question we are ultimately interested in.[76](#fn76)
The remainder of this section will elaborate on all of these points. It is important to mention that especially the third aspect (that there is an implicit data\-generating model “inside of” classical hypothesis tests) is not something that receives a lot of emphasis in traditional statistics textbooks. Many textbooks do not even mention the assumptions implicit in a given test. Here we will not only stress key assumptions behind a test but present all of the assumptions behind classical tests in a graphical model, similar to what we did for Bayesian models. This arguably makes all implicit assumptions maximally transparent in a concise and lucid representation. It will also help see parallels between Bayesian and frequentist approaches, thereby helping to see both as more of the same rather than as something completely different. In order to cash in this model\-based approach, the following sections will therefore introduce new graphical tools to communicate the data\-generating model implicit in the classical tests we cover.
### 16\.2\.1 Frequentist null\-models
We start with the Binomial Model because it is the simplest and perhaps most intuitive case. We work out what a \\(p\\)\-value is for data for this model and introduce the new graphical language to communicate “frequentist models” in the following. We also introduce the notions of *test statistic* and *sampling distribution* based on a case that should be very intuitive, if not familiar.
The Binomial Model was covered before from a Bayesian point of view, where we represented it using graphical notation like in Figure [16\.2](ch-03-05-hypothesis-p-values.html#fig:ch-03-04-Binomial-Model-repeated) (repeated from before). Remember that this is a model to draw inferences about a coin’s bias \\(\\theta\\) based on observations of outcomes of flips of that coin. The Bayesian modeling approach treated the number of observed heads \\(k\\) and the number of flips in total \\(N\\) as given, and the coin’s bias parameter \\(\\theta\\) as latent.
Figure 16\.2: The Binomial Model (repeated from before) for a Bayesian approach to parameter inference/testing.
Actually, this way of writing the Binomial Model is a shortcut. It glosses over each individual data observation (whether the \\(i\\)\-th coin flip was heads or tails) and jumps directly to the most relevant summary statistic of how many of the \\(N\\) flips were heads. This might, of course, be just the relevant level of analysis. If our assumption is true that the outcome of each coin flip is independent of any other flip, and given our goal to learn something about \\(\\theta\\), all that really matters is \\(k\\). But we can also rewrite the Bayesian model from Figure [16\.2](ch-03-05-hypothesis-p-values.html#fig:ch-03-04-Binomial-Model-repeated) as the equivalent extended model in Figure [16\.3](ch-03-05-hypothesis-p-values.html#fig:ch-03-04-Binomial-Model-extended). In the latter representation, the individual outcomes of each flip are represented as \\(x\_i \\in \\{0,1\\}\\). Each individual outcome is sampled from a [Bernoulli distribution](selected-discrete-distributions-of-random-variables.html#app-91-distributions-bernoulli). Based on the whole vector of \\(x\_i\\)\-s and our knowledge of \\(N\\), we derive the **test statistic** \\(k\\), which maps each observation (a vector \\(x\\) of zeros and ones) to a single number \\(k\\) (the number of heads in the vector). Notice that the node for \\(k\\) has a solid double edge, indicating that it follows deterministically from its parent nodes. This is why we can think of \\(k\\) as a sample from a random variable constructed from “raw data” observations \\(x\\).
Figure 16\.3: The Binomial Model for a Bayesian approach, extended to show ‘raw observations’ and the ‘summary statistic’ implicitly used.
Compare this latter representation in Figure [16\.3](ch-03-05-hypothesis-p-values.html#fig:ch-03-04-Binomial-Model-extended) with the frequentist Binomial Model in Figure [16\.4](ch-03-05-hypothesis-p-values.html#fig:ch-03-04-Binomial-Model-frequentist). The frequentist model treats the number of observations \\(N\\) as observed, just like the Bayesian model. But it also fixes a specific value for the coin’s bias \\(\\theta\\). This is where the (point\-valued) null hypothesis comes in. For purposes of analysis, we fix the value of the relevant unobservable latent parameter to a specific value (because we do not want to assign probabilities to latent parameters, but we still like to talk about probabilities somehow). In our graphical model in Figure [16\.4](ch-03-05-hypothesis-p-values.html#fig:ch-03-04-Binomial-Model-frequentist), the node for the coin’s bias is shaded (\= treated as known) but also has a dotted second edge to indicate that this is where our null hypothesis assumption kicks in. We then treat the data vector \\(x\\) and, with it, the associated test statistic \\(k\\) as unobserved. The data we actually observed will, of course, come in at some point. But the frequentist model leaves the observed data out at first in order to bring in the kinds of probabilities frequentist approaches feel comfortable with: probabilities derived from (hypothetical) repetitions of chance events. So, the frequentist model can now make statements about the likelihood of (raw) data \\(x\\) and values of the derived summary statistic \\(k\\) based on the assumption that the null hypothesis is true. Indeed, for the case at hand, we already know that the **sampling distribution**, i.e., the distribution of values for \\(k\\) given \\(\\theta\_0\\) is the [Binomial distribution](selected-discrete-distributions-of-random-variables.html#app-91-distributions-binomial).
Figure 16\.4: The Binomial Model for a frequentist binomial test.
Let’s take a step back. The frequentist model for the binomial case considers (“raw”) data of the form \\(\\langle x\_1, \\dots, x\_N \\rangle\\) where each \\(x\_i \\in \\{0,1\\}\\) indicates whether the \\(i\\)\-th flip was a success (\= heads, \= 1\) or a failure (\= tails, \= 0\). We identify the set of all binary vectors of length \\(N\\) as the set of hypothetical data that we could, in principle, observe in a fictitious repetition of this data\-generating process. \\(\\mathcal{D}^{\|H\_0}\\) is then the random variable that assigns each potential observation \\(D \= \\langle x\_1, \\dots, x\_N \\rangle\\) the probability with which it would occur if \\(H\_0\\) (\= a specific value of \\(\\theta\\)) is true. In our case, that is:
\\\[P(\\mathcal{D}^{\|H\_0} \= \\langle x\_1, \\dots, x\_N \\rangle) \= \\prod\_{i\=1}^N \\text{Bernoulli}(x\_i, \\theta\_0\)\\]
The model does not work with this raw data and its implied distribution (represented by random variable \\(\\mathcal{D}^{\|H\_0}\\)); instead, it uses a (very natural!) **test statistic** \\(t \\colon \\langle x\_1, \\dots, x\_N \\rangle \\mapsto \\sum\_{i\=1}^N x\_i\\). The **sampling distribution** for this model is therefore the distribution of values for the derived measure \\(k\\), a distribution that follows from the distribution of the raw data (\\(\\mathcal{D}^{\|H\_0}\\)) and this particular test statistic \\(t\\). In its most general form, we write the sampling distribution as \\(T^{\|H\_0} \= t(\\mathcal{D}^{\|H\_0})\\).[77](#fn77) It just so happens (what a relief!) that we know how to express \\(T^{\|H\_0}\\) in a mathematically very concise fashion. It’s just the Binomial distribution, so that \\(k \\sim \\text{Binomial}(\\theta\_0, N)\\). (Notice how the sampling distribution is really a function of \\(\\theta\_0\\), i.e., the null hypothesis, and also of \\(N\\).)
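To make the relation between raw data, test statistic, and sampling distribution more tangible, here is a small simulation sketch (purely illustrative; the values \\(N \= 24\\) and \\(\\theta\_0 \= 0\.5\\) anticipate the running example below):
```
# sketch: approximate the sampling distribution of k by simulating 'raw' data
# under H0 and applying the test statistic t(x) = sum(x)
N <- 24
theta_0 <- 0.5
k_samples <- replicate(10000, sum(rbinom(N, size = 1, prob = theta_0)))
# simulated frequency of k = 12 vs. the exact Binomial probability
mean(k_samples == 12)
dbinom(12, size = N, prob = theta_0)
```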
### 16\.2\.2 One\- vs. two\-sided \\(p\\)\-values
After seeing a frequentist null model and learning about notions like “test statistic” and “sampling distribution”, let’s explore what a \\(p\\)\-value is based on the frequentist Binomial Model. Our running example will be the 24/7 case, where \\(N \= 24\\) and \\(k \= 7\\). Notice that we are glossing over the “raw” data immediately and work with the value of the test statistic of the observed data directly: \\(t(D\_{\\text{obs}}) \= 7\\).
Remember that, by the definition given above, \\(p(D\_{\\text{obs}})\\) is the probability of observing a value of the test statistic that is at least as extreme evidence against \\(H\_0\\) as \\(t(D\_{\\text{obs}})\\), under the assumption that \\(H\_0\\) is true:
\\\[
p(D\_{\\text{obs}}) \= P(T^{\|H\_0} \\succeq^{H\_{0,a}} t(D\_{\\text{obs}}))
\\]
To fill this with life, we need to set a null hypothesis, i.e., a value \\(\\theta\_0\\) of coin bias \\(\\theta\\), that we would like to collect evidence *against*. A fixed \\(H\_0\\) will directly fix \\(T^{\|H\_0}\\), but we will have to put extra thought into how to conceptualize \\(\\succeq^{H\_{0,a}}\\) for any given \\(H\_0\\). To make exactly this clearer is the job of this section. Specifically, we will look at what is standardly called a **two\-sided \\(p\\)\-value** and a **one\-sided \\(p\\)\-value**.
The difference lies in whether we are testing a point\-valued or an interval\-based null hypothesis.
So, let’s suppose that we want to test the following null hypotheses:
* Is the coin fair (\\(\\theta \= 0\.5\\))?
* Is the coin biased towards heads (\\(\\theta \> 0\.5\\))?
In the case of testing for fairness (\\(\\theta \= 0\.5\\)), the pair of null hypothesis and alternative hypothesis are:
\\\[
\\begin{aligned}
H\_0 \\colon \\theta \= 0\.5 \&\& H\_a \\colon \\theta \\neq 0\.5
\\end{aligned}
\\]
The case for testing the null hypothesis \\(\\theta \> 0\.5\\) is slightly more convoluted.
The frequentist construction of a null model strictly requires point\-valued assumptions about all model parameters.
Otherwise, subjective priors would sneak in.
(NB: Even the assumption of equal probability of parameter values, as in a non\-informative prior, *is* a biased and subjective assumption, according to frequentism.)
We therefore actually test the point\-valued null hypothesis \\(\\theta \= 0\.5\\), but we contrast it with a different alternative hypothesis, which is now one\-sided:
\\\[
\\begin{aligned}
H\_0 \\colon \\theta \= 0\.5 \&\& H\_a \\colon \\theta \< 0\.5
\\end{aligned}
\\]
**Case \\(\\theta \= 0\.5\\).** To begin with, assume that we want to address the question of whether the coin is fair.
Figure [16\.5](ch-03-05-hypothesis-p-values.html#fig:ch-03-04-testing-binomial-sampling-distribution) shows the sampling distribution of the test statistic \\(k\\).
The probability of the observed value of the sampling statistic is shown in red.
Figure 16\.5: Sampling distribution (here: Binomial distribution) and the probability associated with observed data \\(k\=7\\) highlighted in red, for \\(N \= 24\\) coin flips, under the assumption of a null hypothesis \\(\\theta \= 0\.5\\).
The question we need to settle to obtain a \\(p\\)\-value is how to interpret \\(\\succeq^{H\_{0,a}}\\) for this case.
To do this, we need to decide which alternative values of \\(k\\) would count as equally or more extreme evidence *against* the chosen null hypothesis when compared to the specified alternative hypothesis.
The obvious approach is to use the probability of any value of the test statistic \\(k\\) directly and say that observing \\(D\_1\\) counts as at least as extreme evidence against \\(H\_0\\) as observing \\(D\_2\\), \\(t(D\_1\) \\succeq^{H\_{0,a}} t(D\_2\)\\), iff the probability of observing the test statistic associated with \\(D\_1\\) is at least as unlikely as observing \\(D\_2\\): \\(P(T^{\|H\_0} \= t(D\_1\)) \\le P(T^{\|H\_0} \= t(D\_2\))\\). To calculate the \\(p\\)\-value in this way, we therefore need to sum up the probabilities of all values \\(k\\) under the Binomial distribution (with parameters \\(N\=24\\) and \\(\\theta \= \\theta\_0 \= 0\.5\\)) that are no larger than the value of the observed \\(k \= 7\\). In mathematical language:[78](#fn78)
\\\[
p(k) \= \\sum\_{k' \= 0}^{N} \[\\text{Binomial}(k', N, \\theta\_0\) \<\= \\text{Binomial}(k, N, \\theta\_0\)] \\ \\text{Binomial}(k', N, \\theta\_0\)
\\]
In code, we calculate this \\(p\\)\-value as follows:
```
# exact p-value for k = 7 with N = 24 and null hypothesis theta = 0.5
k_obs <- 7
N <- 24
theta_0 <- 0.5
tibble( lh = dbinom(0:N, N, theta_0) ) %>%
filter( lh <= dbinom(k_obs, N, theta_0) ) %>%
pull(lh) %>% sum %>% round(5)
```
```
## [1] 0.06391
```
Figure [16\.6](ch-03-05-hypothesis-p-values.html#fig:ch-03-04-testing-binomial-p-value) shows the values that need to be summed over in red.
Figure 16\.6: Sampling distribution (Binomial likelihood function) and two\-sided \\(p\\)\-value for the observation of \\(k\=7\\) successes in \\(N \= 24\\) coin flips, under the assumption of a null hypothesis \\(\\theta \= 0\.5\\).
Of course, R also has a built\-in function for a Binomial test. We can use it to verify that we get the same result for the \\(p\\)\-value:
```
binom.test(
x = 7, # observed successes
n = 24, # total no. of observations
p = 0.5 # null hypothesis
)
```
```
##
## Exact binomial test
##
## data: 7 and 24
## number of successes = 7, number of trials = 24, p-value = 0.06391
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.1261521 0.5109478
## sample estimates:
## probability of success
## 0.2916667
```
**Exercise 16\.1: Output of R’s `binom.test`**
Look at the output of the above call to R’s `binom.test` function.
Which pieces of information in that output make sense to you (given your current knowledge) and which do not?
Solution
The output given first states what was computed, namely an *exact binomial test*.
You should understand what a *binomial test* is.
The additional adjective *exact* refers to the fact that we did not use any approximation to get at the shown \\(p\\)\-value.
Next, we see the data repeated and the calculated \\(p\\)\-value, which we have seen how to calculate by hand.
The output then also names the alternative hypothesis, just as the text previously explained, making clear that this is a two\-sided \\(p\\)\-value.
Then comes something which you do not yet know about: the notion of a 95% confidence interval will be covered later in this chapter.
Finally, the output also gives the maximum likelihood estimate of the theta parameter.
Together with the 95% confidence interval, the test result therefore also reports the most common frequentist estimators, point\-values (MLE) and interval\-valued (95% confidence interval) for the parameter of interest.
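As a side note (a sketch, not part of the original text), these estimators can also be extracted directly from the object returned by `binom.test`:
```
# sketch: extract the point estimate (MLE) and 95% confidence interval
test_result <- binom.test(x = 7, n = 24, p = 0.5)
test_result$estimate   # MLE: 7/24, roughly 0.292
test_result$conf.int   # 95% confidence interval
```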
**Case \\(\\theta \> 0\.5\\).** Let’s now look at the case where we want to test whether the coin is biased towards heads \\(\\theta \> 0\.5\\).
As explained above, we need a point\-valued assumption for the coin bias \\(\\theta\\) to set up a frequentist model and retrieve a sampling distribution for the relevant test statistic.
We choose \\(\\theta\_{0} \= 0\.5\\) as the point\-valued null hypothesis, because *if* we get a high measure of the evidence against the hypothesis \\(\\theta\_{0} \= 0\.5\\) (in a comparison against the alternative \\(\\theta \< 0\.5\\)), we can discredit the whole interval\-based hypothesis \\(\\theta \> 0\.5\\), because any other value of \\(\\theta\\) bigger than 0\.5 would give an even smaller (at most as high) \\(p\\)\-value.
In other words, we pick the single value for the comparison which is most favorable *for* the hypothesis \\(\\theta \> 0\.5\\) when compared against \\(\\theta \< 0\.5\\), so that when *even that* value is discredited, the whole hypothesis \\(\\theta \> 0\.5\\) is discredited.
But even though we use the same null\-value of \\(\\theta\_0 \= 0\.5\\), the calculation of the \\(p\\)\-value will be different from the case we looked at previously.
It will be one\-sided.
The reason lies in a change to what we should consider more extreme evidence against this interval\-valued null hypothesis, i.e., the interpretation of \\(\\succeq^{H\_{0,a}}\\).
Look at Figure [16\.7](ch-03-05-hypothesis-p-values.html#fig:ch-03-04-testing-binomial-p-value-one-sided).
As before we see the Bernoulli likelihood function derived from the point\-value null hypothesis.
The \\(k\\)\-value observed is \\(k\=7\\).
Again we need to ask: which values of \\(k\\) would constitute equal or more evidence against the null hypothesis *when compared against the alternative hypothesis, which is now \\(\\theta \< 0\.5\\)*?
Unlike in the previous, two\-sided case, observing large values of \\(k\\), e.g., larger than 12, even if they are unlikely for the point\-valued hypothesis \\(\\theta\_0 \= 0\.5\\), does not constitute evidence against the interval\-valued hypothesis we are interested in.
So therefore, we disregard the contribution of the right\-hand side in Figure [16\.6](ch-03-05-hypothesis-p-values.html#fig:ch-03-04-testing-binomial-p-value) to arrive at a picture like in Figure [16\.7](ch-03-05-hypothesis-p-values.html#fig:ch-03-04-testing-binomial-p-value-one-sided).
Figure 16\.7: Sampling distribution (Binomial likelihood function) and one\-sided \\(p\\)\-value for the observation of \\(k\=7\\) successes in \\(N \= 24\\) coin flips, under the assumption of a null hypothesis \\(\\theta \= 0\.5\\) compared against the alternative hypothesis \\(\\theta \< 0\.5\\).
The associated \\(p\\)\-value with this so\-called **one\-sided test** is consequently:
```
k_obs <- 7
N <- 24
theta_0 <- 0.5
# exact p-value for k = 7 with N = 24 and null hypothesis theta > 0.5
dbinom(0:k_obs, N, theta_0) %>% sum %>% round(5)
```
```
## [1] 0.03196
```
We can double\-check against the built\-in function `binom.test` when we ask for a one\-sided test:
```
binom.test(
x = 7, # observed successes
n = 24, # total no. of observations
p = 0.5, # null hypothesis
alternative = "less" # the alternative to compare against is theta < 0.5
)
```
```
##
## Exact binomial test
##
## data: 7 and 24
## number of successes = 7, number of trials = 24, p-value = 0.03196
## alternative hypothesis: true probability of success is less than 0.5
## 95 percent confidence interval:
## 0.0000000 0.4787279
## sample estimates:
## probability of success
## 0.2916667
```
### 16\.2\.3 Significance \& categorical decisions
Fisher’s early writings suggest that he considered \\(p\\)\-values as quantitative measures of strength of evidence against the null hypothesis.
What should be done or concluded on the basis of such a quantitative measure would then depend on further careful case\-by\-case deliberation.
In contrast, the Neyman\-Pearson approach, as well as the presently practiced hybrid NHST approach, uses \\(p\\)\-values to check, in a rigid, conventionalized manner, whether a test result is noteworthy in a categorical, not a quantitative, way.
More on the Neyman\-Pearson approach in Section [16\.4](ch-03-04-hypothesis-significance-errors.html#ch-03-04-hypothesis-significance-errors).
Fixing an \\(\\alpha\\)\-level of significance (with common values \\(\\alpha \\in \\{0\.05, 0\.01, 0\.001\\}\\)), we say that a test result is **statistically significant** (at level \\(\\alpha\\)) if the \\(p\\)\-value of the observed data is lower than the specified \\(\\alpha\\).
The significance of a test result, as a categorical measure, can then be further interpreted as a trigger for decision making.
Commonly, a significant test result is interpreted as the signal to reject the null hypothesis, i.e., to speak and act as if it was false.
Importantly, a non\-significant test result at some \\(\\alpha\\)\-level is *not* to be treated as evidence in favor of the null hypothesis.[79](#fn79)
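To make the decision rule concrete, here is a minimal sketch (our own code, using the two \\(p\\)\-values computed earlier for the 24/7 data: 0\.06391 for the two\-sided test and 0\.03196 for the one\-sided test) of how the categorical decision plays out at \\(\\alpha \= 0\.05\\):
```
# p-values computed earlier for k = 7, N = 24
p_two_sided <- 0.06391   # null theta = 0.5, alternative theta != 0.5
p_one_sided <- 0.03196   # null theta = 0.5, alternative theta < 0.5
alpha <- 0.05
p_two_sided < alpha      # FALSE: not significant, we fail to reject H_0
p_one_sided < alpha      # TRUE:  significant, we reject H_0 at alpha = 0.05
```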
**Exercise 16\.2: Significance \& errors**
If the \\(p\\)\-value is larger than a *prespecified* significance threshold \\(\\alpha\\) (e.g., \\(\\alpha \= 0\.05\\)), we…
1. …accept \\(H\_0\\).
2. …reject \\(H\_0\\) in favor of \\(H\_a\\).
3. …fail to reject \\(H\_0\\).
Solution
Statement c. is correct.
### 16\.2\.4 How (not) to interpret *p*\-values
Though central to much of frequentist statistics, \\(p\\)\-values are frequently misinterpreted, even by seasoned scientists ([Haller and Krauss 2002](#ref-HallerKrauss2002:Misinterpretati)). To repeat, the \\(p\\)\-value measures the probability of observing, if the null hypothesis is correct, a value of the test statistic that is (in a specific, contextually specified sense) at least as extreme as the value of the test statistic that we assign to the observed data. We can therefore treat \\(p\\)\-values as a measure of evidence *against* the null hypothesis. And if we want to be even more precise, we interpret this as evidence against the whole assumed data\-generating process, a central part of which is the null hypothesis.
The \\(p\\)\-value is *not* a statement about the probability of the null hypothesis given the data. So, it is *not* something like \\(P(H\_0 \\mid D)\\). The latter is a very appealing notion, but it is one that the frequentist denies herself access to. It can also only be computed based on some consideration of prior plausibility of \\(H\_0\\) in relation to some alternative hypothesis. Indeed, \\(P(H\_0 \\mid D)\\) is an irreducibly subjective, Bayesian notion.
**Exercise 16\.3: \\(p\\)\-values**
1. Which statement(s) about \\(p\\)\-values is/are true?
The \\(p\\)\-value is…
1. …the probability that the null hypothesis \\(H\_0\\) is true.
2. …the probability that the alternative hypothesis \\(H\_a\\) is true.
3. …the probability, derived from the assumption that \\(H\_0\\) is true, of obtaining an outcome for the chosen test statistic that is the exact same as the observed outcome.
4. …a measure of evidence in favor of \\(H\_0\\).
5. …the probability, derived from the assumption that \\(H\_0\\) is true, of obtaining an outcome for the chosen test statistic that is the same as the observed outcome or more extreme evidence for \\(H\_a\\).
6. …a measure of evidence against \\(H\_0\\).
Solution
Statements e. and f. are correct.
### 16\.2\.5 \[Excursion] Distribution of \\(p\\)\-values
A result that might seem surprising at first is that if the null hypothesis is true, the distribution of \\(p\\)\-values is uniform. This, however, is intuitive on second thought. Mathematically it is a direct consequence of the **Probability Integral Transform Theorem**.
**Theorem 16\.1 (Probability Integral Transform)** If \\(X\\) is a continuous random variable with cumulative distribution function \\(F\_X\\), the random variable \\(Y \= F\_X(X)\\) is uniformly distributed over the interval \\(\[0;1]\\), i.e., \\(y \\sim \\text{Uniform}(0,1\)\\).
Proof
*Proof*. Notice that the cumulative distribution function of a standard uniform distribution \\(y \\sim \\text{Uniform}(0,1\)\\) is linear with intercept 0 and slope 1 on the interval \\(\[0;1]\\). It therefore suffices to show that \\(F\_Y(y) \= y\\) for all \\(y \\in \[0;1]\\).
\\\[
\\begin{aligned}
F\_Y(y) \& \= P(Y \\le y) \&\& \[\\text{def. of cumulative distribution}] \\\\
\& \= P(F\_X(X) \\le y) \&\& \[\\text{by construction / assumption}] \\\\
\& \= P(X \\le F^{\-1}\_X(y)) \&\& \[\\text{applying inverse cumulative function}] \\\\
\& \= F\_X(F^{\-1}\_X(y)) \&\& \[\\text{def. of cumulative distribution}] \\\\
\& \= y \&\& \[\\text{inverses cancel out}] \\\\
\\end{aligned}
\\]
Seeing the uniform distribution of \\(p\\)\-values (under a true null hypothesis) helps appreciate how the \\(\\alpha\\)\-level of significance is related to long\-term error control. If the null hypothesis is true, the probability of a significant test result is exactly the significance level.
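To see the uniformity claim at work, here is a small simulation sketch (our own code, not from the text) for a continuous case: a one\-sample \\(z\\)\-test with known standard deviation, where the null hypothesis is in fact true. The simulated \\(p\\)\-values are approximately uniform, and the proportion of significant results is close to \\(\\alpha \= 0\.05\\).
```
# simulate p-values of a z-test (H0: mu = 0, known sd = 1, n = 30) under a true H0
n_sim <- 10000
p_values <- replicate(n_sim, {
  x <- rnorm(30, mean = 0, sd = 1)   # data generated under H0
  z <- mean(x) / (1 / sqrt(30))      # z-statistic
  2 * pnorm(-abs(z))                 # two-sided p-value
})
mean(p_values < 0.05)                # close to 0.05
hist(p_values)                       # roughly flat over [0;1]
```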
16\.3 \[Excursion] Central Limit Theorem
----------------------------------------
The previous sections expanded on the notion of a \\(p\\)\-value and showed how to calculate \\(p\\)\-values for different kinds of research questions for data from repeated Bernoulli trials (\= coin flips).
We saw that a natural test statistic is the number of observed successes \\(k\\), and that its sampling distribution is known precisely: the sampling distribution for the frequentist Binomial Model as we set it up *is* the Binomial distribution. Unfortunately, there are models and types of data for which the sampling distribution is not known precisely. In these cases, frequentist statistics works with approximations to the true sampling distribution. These approximations get better the more data is observed, i.e., they are limit approximations that hold when the amount of observed data goes towards infinity. For small samples, the error might be substantial. Rules of thumb have become conventional guides for judging when (not) to use a given approximation. Which (approximation of a) sampling distribution to use needs to be decided on a case\-by\-case basis.
To establish that a particular distribution is a good approximation of the true sampling distribution, the most important formal result is the *Central Limit Theorem* (CLT). In rough terms, the CLT says that, under certain conditions, we can use a normal distribution as an approximation of the sampling distribution.
To appreciate the CLT, let’s start with another seminal result, the **Law of Large Numbers**, which we had already relied on when we discussed a sample\-based approach to representing probability distributions. For example, the Law of Large Numbers justifies why the mean of a (large) set of samples from a random variable is a good approximation of that variable’s true mean (which is, e.g., what licenses using the mean of MCMC samples as the most prominent Bayesian point estimate of a posterior).
**Theorem 16\.2 (Law of Large Numbers)** Let \\(X\_1, \\dots, X\_n\\) be a sequence of \\(n\\) independent and identically distributed random variables with equal mean, such that \\(\\mathbb{E}\_{X\_i} \= \\mu\_X\\) for all \\(1 \\le i \\le n\\).[80](#fn80) As the number of samples \\(n\\) goes to infinity, the mean of a tuple of samples, one from each \\(X\_i\\), converges almost surely to \\(\\mu\_X\\):
\\\[ P \\left(\\lim\_{n \\rightarrow \\infty} \\frac{1}{n} \\sum\_{i \= 1}^n X\_i \= \\mu\_X \\right) \= 1 \\]
Computer simulations make the point and usefulness of this fact easier to appreciate:
```
# sample from a standard normal distribution (mean = 0, sd = 1)
samples <- rnorm(100000)
# running mean of the first n samples (for n = 100, 110, 120, ...) & plot
tibble(
n = seq(100, length(samples), by = 10)
) %>%
group_by(n) %>%
mutate(
mu = mean(samples[1:n])
) %>%
ggplot(aes(x = n, y = mu)) +
geom_line()
```
For practical purposes, think of the Central Limit Theorem as an extension of the Law of Large Numbers. While the latter tells us that, as \\(n \\rightarrow \\infty\\), the mean of repeated samples from a random variable \\(X\\) converges to the mean of \\(X\\), the Central Limit Theorem tells us something about the distribution of our estimate of \\(X\\)’s mean. The Central Limit Theorem tells us that the sampling distribution of the mean approximates a normal distribution for a large enough sample size.
**Theorem 16\.3 (Central Limit Theorem)** Let \\(X\_1, \\dots, X\_n\\) be a sequence of \\(n\\) independent and identically distributed random variables with equal mean \\(\\mathbb{E}\_{X\_i} \= \\mu\_X\\) and equal finite variance \\(\\text{Var}(X\_i) \= \\sigma\_X^2\\) for all \\(1 \\le i \\le n\\).[81](#fn81) The random variable \\(S\_n\\), which captures the distribution of the sample mean for any \\(n\\), is:
\\\[ S\_n \= \\frac{1}{n} \\sum\_{i\=1}^n X\_i \\]
As the number of samples \\(n\\) goes to infinity, the random variable \\(\\sqrt{n} (S\_n \- \\mu\_X)\\) converges in distribution to a normal distribution with mean 0 and standard deviation \\(\\sigma\_X\\).
A proof of the CLT is not trivial, and we will omit it here. We will only point to the CLT when justifying approximations of sampling distributions, e.g., for the case of Pearson’s \\(\\chi^2\\)\-test.
Below you can explore the effect of different sample sizes and numbers of samples on the sampling distribution of the mean. Play around with the values and note how with increasing sample size and number of samples…
1. …the sample mean approximates the population mean (Law of Large Numbers).
2. …the distribution of sample means approximates a normal distribution (Central Limit Theorem).
To be able to simulate the CLT, we first need a population to sample from. In the drop\-down menu below, you can choose how the population should be distributed, where the parameter values are fixed (e.g., if you choose “normally distributed”, the population will be distributed according to \\(N(\\mu \= 4, \\sigma \= 1\)\\)).[82](#fn82) Also try out the custom option to appreciate that both concepts hold for *every* distribution.
In variable `sample_size`, you can specify how many samples you want to take from the population. `number_of_samples` denotes how many samples of size `sample_size` are taken. E.g., if `number_of_samples = 5` and `sample_size = 3`, we would repeat the process of taking three samples from the population a total of five times. The output will show the population and sampling distribution with their means.
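As a minimal, non\-interactive sketch of the same simulation (our own code; the variable names mirror the description above, and the uniform population on \\(\[0;10]\\) is our choice), note how the sample means cluster around the population mean of 5 and are approximately normally distributed:
```
sample_size       <- 30      # observations per sample
number_of_samples <- 5000    # how many sample means to collect
# population: Uniform(0, 10), i.e., mean 5 and variance 100 / 12 (clearly non-normal)
sample_means <- replicate(
  number_of_samples,
  mean(runif(sample_size, min = 0, max = 10))
)
mean(sample_means)           # close to the population mean of 5 (Law of Large Numbers)
# distribution of sample means vs. the normal approximation suggested by the CLT
tibble(sample_mean = sample_means) %>%
  ggplot(aes(x = sample_mean)) +
  geom_histogram(aes(y = after_stat(density)), bins = 50) +
  stat_function(
    fun  = dnorm,
    args = list(mean = 5, sd = sqrt(100 / 12) / sqrt(sample_size))
  )
```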
16\.5 Confidence intervals
--------------------------
The most commonly used interval estimate in frequentist analyses is the *confidence interval*. Although (frequentist) confidence intervals *can* coincide with (subjectivist) credible intervals in specific cases, they generally do not. And even when confidence and credible intervals yield the same numerical results, these notions are fundamentally different and ought not to be confused.
Let’s look at credible intervals to establish the proper contrast. Recall that part of the definition of a credible interval for a posterior distribution over \\(\\theta\\), captured here notationally in terms of a random variable \\(\\Theta\\), was the probability \\(P(l \\le \\Theta \\le u)\\) that the value realized by random variable \\(\\Theta\\) lies in the interval \\(\[l;u]\\). This statement makes no sense to the frequentist. There cannot be any non\-trivial value for \\(P(l \\le \\Theta \\le u)\\). The true value of \\(\\theta\\) is either in the interval \\(\[l;u]\\) or it is not. To speak of a probability that \\(\\theta\\) is in \\(\[l;u]\\) is to appeal to an ill\-formed concept of probability which the frequentist denies.
In order to give an interval estimate nonetheless, the frequentist appeals to probabilities that she can accept: probabilities derived from (hypothetical) repetitions of a genuine random event with objectively observable outcomes. Let \\(\\mathcal{D}\\) be the random variable that captures the probability with which data \\(\\mathcal{D}\=D\\) is realized. We obtain a pair of derived random variables \\(X\_l\\) and \\(X\_u\\) from a pair of functions \\(g\_{l,u} \\colon d \\mapsto \\mathbb{R}\\). A **\\(\\gamma\\%\\) confidence interval** for observed data \\(D\_{\\text{obs}}\\) is the interval \\(\[g\_l(D\_{\\text{obs}}), g\_u(D\_{\\text{obs}})]\\) whenever functions \\(g\_{l,u}\\) are constructed in such a way that
\\\[
\\begin{aligned}
P(X\_l \\le \\theta\_{\\text{true}} \\le X\_u) \= \\frac{\\gamma}{100}
\\end{aligned}
\\]
where \\(\\theta\_{\\text{true}}\\) is the unknown but fixed true value of \\(\\theta\\). In more intuitive words, a confidence interval is the outcome of a special construction (the functions \\(g\_{l,u}\\)) such that, when this procedure is applied repeatedly to outcomes of the assumed data\-generating process, the true value of parameter \\(\\theta\\) lies inside the computed confidence interval in exactly \\(\\gamma\\)% of the cases.
It is easier to think of the definition of a confidence interval in terms of computer code and sampling (see Figure [16\.9](ch-05-01-frequentist-testing-confidence-intervals.html#fig:03-03-estimation-confidence-interval-scheme)). Suppose Grandma gives you computer code, a `magic_function` which takes as input data observations, and returns an interval estimate for the parameter of interest. We sample a value for the parameter of interest repeatedly and consider it the “true parameter” for the time being. For each sampled “true parameter”, we generate data repeatedly. We apply Grandma’s `magic_function`, obtain an interval estimate, and check if the true value that triggered the whole process is included in the interval. Grandma’s `magic_function` is a \\(\\gamma\\%\\) confidence interval if the proportion of inclusions (the checkmarks in Figure [16\.9](ch-05-01-frequentist-testing-confidence-intervals.html#fig:03-03-estimation-confidence-interval-scheme)) is \\(\\gamma\\%\\).
Figure 16\.9: Schematic representation of what a confidence interval does: think of it as a magic function that returns intervals that contain the true value in \\(\\gamma\\) percent of the cases.
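Here is a small simulation sketch (our own code) of exactly this scheme, using the 95% confidence interval returned by R’s `binom.test` as the “magic function” for binomial data with \\(N \= 100\\):
```
set.seed(1234)
n_sim <- 2000
covered <- replicate(n_sim, {
  theta_true <- runif(1)                          # sample a "true" parameter value
  k <- rbinom(1, size = 100, prob = theta_true)   # generate data under that value
  ci <- binom.test(k, n = 100)$conf.int           # the "magic function": a 95% CI
  ci[1] <= theta_true & theta_true <= ci[2]       # is the true value inside?
})
mean(covered)   # roughly 0.95 (in fact at least 0.95, since this construction is conservative)
```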
In some complex cases, the frequentist analyst relies on functions \\(g\_{l}\\) and \\(g\_{u}\\) that are easy to compute but only approximately satisfy the condition \\(P(X\_l \\le \\theta\_{\\text{true}} \\le X\_u) \= \\frac{\\gamma}{100}\\). For example, we might use an
asymptotically correct calculation, based on the observation that, if \\(n\\) grows to infinity, the binomial distribution approximates a normal distribution. We can then calculate a confidence interval *as if* our binomial distribution actually was a normal distribution. If \\(n\\) is not large enough, this will be increasingly imprecise. Rules of thumb are used to decide how big \\(n\\) has to be to involve at best a tolerable amount of imprecision (see the Info Box below).
For our running example (\\(k \= 7\\), \\(n\=24\\)), the rule of thumb mentioned in the Info Box below recommends *not* using the asymptotic calculation. If we did nonetheless, we would get a confidence interval of \\(\[0\.110; 0\.474]\\). For the binomial distribution, a more reliable calculation also exists, which yields \\(\[0\.126; 0\.511]\\) for the running example. (We can use numeric simulation to explore how good/bad a particular approximate calculation is, as shown in the next section.) The more reliable construction, the so\-called *exact method*, implemented in the function `binom.confint` of R package `binom`, revolves around the close relationship between confidence intervals and \\(p\\)\-values. (To foreshadow a later discussion: the exact \\(\\gamma\\%\\) confidence interval is the set of all parameter values for which an exact (binomial) test does not yield a significant test result at the level \\(\\alpha \= 1\-\\frac{\\gamma}{100}\\).)
**Asymptotic approximation of a binomial confidence interval using a normal distribution.**
Let \\(X\\) be the random variable that follows the binomial distribution, i.e., that gives the probability of seeing \\(k\\) successes in \\(n\\) flips. For large \\(n\\), the distribution of \\(X\\) approximates a normal distribution with mean \\(\\mu \= n \\ \\theta\\) and standard deviation \\(\\sigma \= \\sqrt{n \\ \\theta \\ (1 \- \\theta)}\\). Consider the standardized random variable \\(U\\):
\\\[U \= \\frac{X \- \\mu}{\\sigma} \= \\frac{X \- n \\ \\theta}{\\sqrt{n \\ \\theta \\ (1\-\\theta)}}\\]
Let \\(\\hat{P}\\) be the random variable that captures the distribution of our maximum likelihood estimates for an observed outcome \\(k\\):
\\\[\\hat{P} \= \\frac{X}{n}\\]
Since \\(X \= \\hat{P} \\ n\\) we obtain:
\\\[U \= \\frac{\\hat{P} \\ n \- n \\ \\theta}{\\sqrt{n \\ \\theta \\ (1\-\\theta)}}\\]
We now look at the probability that \\(U\\) is realized to lie in a symmetric interval \\(\[\-c,c]\\), centered around zero — a probability which we require to match our confidence level:
\\\[P(\-c \\le U \\le c) \= \\frac{\\gamma}{100}\\]
Since \\(U\\) is approximately standard normal, \\(c\\) is the corresponding quantile of the standard normal distribution (e.g., \\(c \\approx 1\.96\\) for \\(\\gamma \= 95\\)). We now expand the definition of \\(U\\) in terms of \\(\\hat{P}\\), equate \\(\\hat{P}\\) with the current best estimate \\(\\hat{p} \= \\frac{k}{n}\\) based on the observed \\(k\\), and rearrange terms, yielding the asymptotic approximation of a binomial confidence interval:
\\\[\\left \[ \\hat{p} \- \\frac{c}{n} \\ \\sqrt{n \\ \\hat{p} \\ (1\-\\hat{p})} ; \\ \\
\\hat{p} \+ \\frac{c}{n} \\ \\sqrt{n \\ \\hat{p} \\ (1\-\\hat{p})} \\right ]\\]
This approximation is conventionally considered precise enough when the following *rule of thumb* is met:
\\\[n \\ \\hat{p} \\ (1 \- \\hat{p}) \> 9\\]
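As a sketch (our own code, not from the text), we can plug the running example \\(k \= 7\\), \\(n \= 24\\) into this formula, with \\(c \\approx 1\.96\\) for a 95% interval, and compare against the exact method mentioned above (which assumes that the `binom` package is installed):
```
k <- 7
n <- 24
p_hat <- k / n
c_crit <- qnorm(0.975)    # the value c for a 95% confidence level
# asymptotic (normal-approximation) interval: approx. [0.110; 0.474]
p_hat + c(-1, 1) * c_crit / n * sqrt(n * p_hat * (1 - p_hat))
# rule of thumb: not met here, so the approximation is not recommended
n * p_hat * (1 - p_hat) > 9
# the 'exact' construction: approx. [0.126; 0.511]
binom::binom.confint(x = k, n = n, conf.level = 0.95, methods = "exact")
```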
### 16\.5\.1 Relation of *p*\-values to confidence intervals
There is a close relation between \\(p\\)\-values and confidence intervals.[84](#fn84) For a two\-sided test of a null hypothesis \\(H\_0 \\colon \\theta \= \\theta\_0\\), with alternative hypothesis \\(H\_a \\colon \\theta \\neq \\theta\_0\\), it holds for all possible data observations \\(D\\) that
\\\[ p(D) \< \\alpha \\ \\ \\text{iff} \\ \\ \\theta\_0 \\not \\in \\text{CI}(D) \\]
where \\(\\text{CI}(D)\\) is the \\((1\-\\alpha) \\cdot 100\\%\\) confidence interval constructed for data \\(D\\).
This connection is intuitive when we think about long\-term error. Decisions to reject the null hypothesis are false in exactly \\((\\alpha \\cdot 100\)\\%\\) of the cases when the null hypothesis is true. The definition of a confidence interval relied on exactly the same idea: the true value should lie outside a \\((1\-\\alpha) \\cdot 100\\%\\) confidence interval in exactly \\((\\alpha \\cdot 100\)\\%\\) of the cases. (Of course, this is only a vague and intuitively appealing argument based on the overall rate, not any particular case.)
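For the running example (\\(k \= 7\\), \\(N \= 24\\)), we can check numerically (a sketch with our own code) that the two perspectives agree: the two\-sided test is not significant at \\(\\alpha \= 0\.05\\), and, correspondingly, \\(\\theta\_0 \= 0\.5\\) lies inside the 95% confidence interval.
```
test_result <- binom.test(x = 7, n = 24, p = 0.5)
test_result$p.value < 0.05    # FALSE: not significant at alpha = 0.05
test_result$conf.int          # approx. [0.126; 0.511], which contains 0.5
```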
**Exercise 16\.4: \\(p\\)\-value, confidence interval, interpretation etc.**
Suppose that we have reason to believe that a coin is biased to land heads. A hypothesis test should shed light on this belief. We toss the coin \\(N \= 10\\) times and observe \\(k \= 8\\) heads. We set \\(\\alpha \= 0\.05\\).
1. What is an appropriate null hypothesis, what is an appropriate alternative hypothesis?
Solution
\\(H\_0\\): \\(\\theta\_0 \= 0\.5\\), \\(H\_a\\): \\(\\theta \> 0\.5\\).
2. Which alternative values of \\(k\\) provide more extreme evidence against \\(H\_0\\)?
Solution
Values greater than 8 (we conduct a one\-sided hypothesis test).
3. The 95% confidence interval ranges between 0\.493 and 1\.0\. Based on this information, decide whether the \\(p\\)\-value is significant or non\-significant. Why?
Solution
The \\(p\\)\-value is non\-significant because the value of the null hypothesis \\(H\_0\\): \\(\\theta\_0 \= 0\.5\\) is contained within the 95% CI. Hence, it is not sufficiently unlikely that the observed outcome was generated by a fair coin.
4. Below is the probability mass function of the [Binomial distribution](selected-discrete-distributions-of-random-variables.html#app-91-distributions-binomial) (our sampling distribution). The probability of obtaining exactly \\(k\\) successes in \\(N\\) independent trials is defined as: \\\[P(X \= k)\=\\binom{N}{k}p^k(1\-p)^{N\-k},\\]
where \\(\\binom{N}{k}\=\\frac{N!}{k!(N\-k)!}\\) is the Binomial coefficient. Given the formula above, calculate the \\(p\\)\-value (by hand) associated with our test statistic \\(k\\), under the assumption that \\(H\_0\\) is true.
Solution
As this is a one\-sided test, we look at those values of \\(k\\) that provide more extreme evidence against \\(H\_0\\). We therefore compute the probability of at least 8 heads, given that \\(H\_0\\) is true:
\\\[
P(X\\geq8\)\=P(X\=8\)\+P(X\=9\)\+P(X\=10\)\\\\
P(X\=8\)\=\\binom{10}{8}0\.5^8(1\-0\.5\)^{10\-8}\=45\\cdot0\.5^{10}\\\\
P(X\=9\)\=\\binom{10}{9}0\.5^9(1\-0\.5\)^{10\-9}\=10\\cdot0\.5^{10}\\\\
P(X\=10\)\=\\binom{10}{10}0\.5^{10}(1\-0\.5\)^{10\-10}\=1\\cdot0\.5^{10}\\\\
P(X\\geq8\)\=0\.5^{10}(45\+10\+1\)\\approx 0\.0547
\\]
5. Based on your result in d., decide whether we should reject the null hypothesis.
Solution
As we have a non\-significant \\(p\\)\-value (\\(p\>\\alpha\\)), we fail to reject the null hypothesis. Hence, we do not have evidence in favor of the hypothesis that the coin is biased to land heads.
6. Use R’s built\-in function for a Binomial test to check your results.
Solution
```
binom.test(
x = 8, # observed successes
n = 10, # total no. of observations
p = 0.5, # null hypothesis
alternative = "greater" # alternative hypothesis
)
```
```
##
## Exact binomial test
##
## data: 8 and 10
## number of successes = 8, number of trials = 10, p-value = 0.05469
## alternative hypothesis: true probability of success is greater than 0.5
## 95 percent confidence interval:
## 0.4930987 1.0000000
## sample estimates:
## probability of success
## 0.8
```
7. The \\(p\\)\-value is affected by the sample size \\(N\\). Try out different values for \\(N\\) while keeping the proportion of successes constant to 80%. What do you notice with regard to the \\(p\\)\-value?
Solution
With a larger sample size (but the same 80% proportion of successes), the \\(p\\)\-value is smaller than for 10 coin flips. It takes only a few more observations to cross the significance threshold, allowing us to reject \\(H\_0\\). Rejecting is, of course, only the correct decision if the null hypothesis is in fact false. NB: Don’t collect more data *after* you have observed the \\(p\\)\-value! The sample size should be fixed prior to data collection and not increased afterwards.
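As a quick illustration of the last point (our own code): keeping the proportion of successes at 80% but doubling the sample size to \\(N \= 20\\), \\(k \= 16\\), already yields a significant result.
```
binom.test(
  x = 16,                   # 80% successes, as before
  n = 20,                   # doubled sample size
  p = 0.5,
  alternative = "greater"
)$p.value                   # approx. 0.006, i.e., below alpha = 0.05
```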
16\.6 Selected tests
--------------------
This section captures a selection of commonly used frequentist tests.
### 16\.6\.1 Pearson’s \\(\\chi^2\\)\-tests
There are many tests that use the [\\(\\chi^2\\)\-distribution](selected-continuous-distributions-of-random-variables.html#app-91-distributions-chi2) as an (approximate) sampling distribution. But given its relevance and historical prominence, the name “\\(\\chi^2\\)\-test” is usually interpreted to refer to one of several flavors of what we could more specifically call “Pearson’s \\(\\chi^2\\)\-test”.
We will look at two flavors here. Pearson’s \\(\\chi^2\\)\-test for **goodness of fit** tests whether an observed vector of counts is well explained by a given vector of predicted proportion. Pearson’s \\(\\chi^2\\)\-test for **independence** tests whether a (two\-dimensional) table of counts could plausibly have been generated by a process of independently selecting the column and the row category. We will explain how both of these tests work based on an application of the [BLJM data](app-93-data-sets-BLJM.html#app-93-data-sets-BLJM), which we load as usual:
```
data_BLJM_processed <- aida::data_BLJM
```
The focus is on the counts of music\-subject choices:
```
BLJM_associated_counts <- data_BLJM_processed %>%
select(submission_id, condition, response) %>%
pivot_wider(names_from = condition, values_from = response) %>%
# drop the Beach-vs-Mountain condition
select(-BM) %>%
dplyr::count(JM,LB)
BLJM_associated_counts
```
```
## # A tibble: 4 × 3
## JM LB n
## <chr> <chr> <int>
## 1 Jazz Biology 38
## 2 Jazz Logic 26
## 3 Metal Biology 20
## 4 Metal Logic 18
```
Remember that the lecturer’s bold conjecture was that a preference for Logic over Biology goes together with a preference for Metal over Jazz. The visualization suggests that there might be such a trend, but the (statistical) jury is still out as to whether this conjecture has empirical support.
#### 16\.6\.1\.1 Pearson’s \\(\\chi^2\\)\-test for goodness of fit
“Goodness of fit” is a term used in model checking (a.k.a. model criticism, model validation, …). In such a context, tests for goodness\-of\-fit investigate whether a model’s predictions are compatible with the observed data. Pearson’s \\(\\chi^2\\)\-test for goodness of fit does exactly this for categorical data.
Categorical data is data where each data observation falls into one of several unordered categories. If we have \\(k\\) such categories, a **prediction vector** \\(\\vec{p} \= \\langle p\_1, \\dots, p\_k \\rangle\\) is a probability vector of length \\(k\\) such that \\(p\_i\\) gives the probability with which a single data observation falls into the \\(i\\)\-th category. The likelihood of a single data observation is given by the [Categorical distribution](selected-discrete-distributions-of-random-variables.html#app-91-distributions-categorical), and the likelihood of \\(N\\) data observations is given by the [Multinomial distribution](selected-discrete-distributions-of-random-variables.html#app-91-distributions-multinomial). These are generalizations of the Bernoulli and Binomial distributions, which expand the case of two unordered categories to more than two unordered categories.
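As a brief illustration (our own code), this is what a single hypothetical data set of \\(N \= 102\\) observations (the total count in the BLJM data used below) sampled from a flat prediction vector over \\(k \= 4\\) categories looks like:
```
# one sample of counts from a Multinomial distribution with a flat prediction vector
rmultinom(n = 1, size = 102, prob = rep(1 / 4, 4))
```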
The BLJM data supplies us with categorical data. Here is the vector of counts of how many participants selected a given music\+subject pair:
```
# add category names
BLJM_associated_counts <- BLJM_associated_counts %>%
mutate(
category = str_c(
BLJM_associated_counts %>% pull(LB),
"-",
BLJM_associated_counts %>% pull(JM)
)
)
counts_BLJM_choice_pairs_vector <- BLJM_associated_counts %>% pull(n)
names(counts_BLJM_choice_pairs_vector) <- BLJM_associated_counts %>% pull(category)
counts_BLJM_choice_pairs_vector
```
```
## Biology-Jazz Logic-Jazz Biology-Metal Logic-Metal
## 38 26 20 18
```
Figure [16\.10](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-BLJM-count-pairs-plot) shows a crude plot of these counts, together with a baseline prediction of equal proportion in each category.
Figure 16\.10: Observed counts of choice pairs of music\+subject preference in the BLJM data.
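A rough sketch (our own code, not necessarily how the figure was produced) of such a plot, with the flat\-baseline expectation of \\(102 / 4 \= 25\.5\\) observations per cell added as a dashed line:
```
BLJM_associated_counts %>%
  ggplot(aes(x = category, y = n)) +
  geom_col() +
  geom_hline(
    yintercept = sum(BLJM_associated_counts$n) / 4,
    linetype = "dashed"
  )
```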
Pearson’s \\(\\chi^2\\)\-test for goodness of fit allows us to test whether this data could plausibly have been generated by (a model whose predictions are given by) a prediction vector \\(\\vec{p} \= \\langle p\_1, \\dots, p\_4 \\rangle\\), where \\(p\_1\\) would be the predicted probability of a choice pair “Biology\-Jazz” occurring for a single participant, and so on. Frequently, this test is used to check whether an equal baseline distribution could have generated the data. We do that here, too. We form the null hypothesis that \\(\\vec{p} \= \\vec{p}\_0\\) with \\(p\_{0i} \= \\frac{1}{4}\\) for all categories \\(i\\).
Figure [16\.11](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-chi2-model-goodness) shows a graphical representation of the model implicitly assumed in the background for a Pearson’s \\(\\chi^2\\)\-test for goodness of fit. The model assumes that the observed vector of counts (like our `counts_BLJM_choice_pairs_vector` from above) follows a Multinomial distribution.[85](#fn85) Each vector of (hypothetical) data is associated with a test statistic, called \\(\\chi^2\\), which sums over the standardized squared deviations of the observed counts from the predicted baseline in each cell (in the notation of the code below: \\(\\chi^2 \= \\sum\_i (n\_i \- e\_i)^2 / e\_i\\), where \\(n\_i\\) is the observed and \\(e\_i\\) the expected count in cell \\(i\\)). It can be shown that, if the number of observations \\(N\\) is large enough, the sampling distribution of the \\(\\chi^2\\) test statistic is approximated well enough by the [\\(\\chi^2\\)\-distribution](selected-continuous-distributions-of-random-variables.html#app-91-distributions-chi2) with \\(k\-1\\) degrees of freedom (where \\(k\\) is the number of categories).[86](#fn86) Notice that this is an asymptotic approximation, which is only good enough when there are enough observations (just as in the CLT). A common rule of thumb is that the test is applicable only if no more than 20% of all cells have an expected frequency below 5, i.e., \\(np\_i \\ge 5\\) should hold for (nearly) all \\(i\\) in Figure [16\.11](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-chi2-model-goodness).
Figure 16\.11: Graphical representation of Pearson’s \\(\\chi^2\\)\-test for goodness of fit (testing a vector of predicted proportion).
We can compute the \\(\\chi^2\\)\-value associated with the observed data \\(t(D\_{obs})\\) as follows:
```
# observed counts
n <- counts_BLJM_choice_pairs_vector
# proportion predicted
p <- rep(1/4, 4)
# expected number in each cell
e <- sum(n) * p
# chi-squared for observed data
chi2_observed <- sum((n - e)^2 * 1/e)
chi2_observed
```
```
## [1] 9.529412
```
We can then compare this value to the sampling distribution, which is a \\(\\chi^2\\)\-distribution with \\(k\-1 \= 3\\) degrees of freedom. We compute the \\(p\\)\-value associated with our data as the tail of the sampling distribution, as also shown in Figure [16\.12](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-chi2-plot):[87](#fn87)
```
p_value_BLJM <- 1 - pchisq(chi2_observed, df = 3)
```
Figure 16\.12: Sampling distribution for a Pearson’s \\(\\chi^2\\)\-test of goodness of fit (\\(\\chi^2\\)\-distribution with \\(k\-1 \= 3\\) degrees of freedom), testing a flat baseline null hypothesis based on the BLJM data.
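Since the \\(\\chi^2\\)\-distribution is only an approximation of the true sampling distribution, we can also approximate the \\(p\\)\-value directly by Monte Carlo simulation: repeatedly sample hypothetical count vectors from the Multinomial null model, compute their \\(\\chi^2\\)\-values, and check how often these are at least as extreme as the observed value. This is a minimal sketch (the number of samples is an arbitrary choice); incidentally, `chisq.test` offers a similar simulation via its `simulate.p.value` argument.
```
# Monte Carlo approximation of the p-value (sketch)
n_samples <- 100000
N_total <- sum(counts_BLJM_choice_pairs_vector)
p_null <- rep(1/4, 4)
e <- N_total * p_null
# sample hypothetical count vectors under the null hypothesis
hypothetical_counts <- rmultinom(n = n_samples, size = N_total, prob = p_null)
# chi-squared value for each hypothetical data set
chi2_samples <- colSums((hypothetical_counts - e)^2 / e)
# proportion of sampled values at least as extreme as the observed one
mean(chi2_samples >= chi2_observed)
```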
Of course, these calculations can also be performed by using a built\-in R function, namely `chisq.test`:
```
counts_BLJM_choice_pairs_vector <- BLJM_associated_counts %>% pull(n)
chisq.test(counts_BLJM_choice_pairs_vector)
```
```
##
## Chi-squared test for given probabilities
##
## data: counts_BLJM_choice_pairs_vector
## X-squared = 9.5294, df = 3, p-value = 0.02302
```
The common interpretation of our calculations would be to say that the test yielded a significant result, at least at the significance level of \\(\\alpha \= 0\.05\\). In a research paper, we might report these results roughly as follows:
> Observed counts deviated significantly from what is expected if each category (here: pair of music\+subject choice) was equally likely (\\(\\chi^2\\)\-test, with \\(\\chi^2 \\approx 9\.53\\), \\(df \= 3\\) and \\(p \\approx 0\.023\\)).
Notice that this test is an “omnibus test of difference”. We can conclude from a significant test result that the whole vector of observations is unlikely to have been generated by chance. Still, we cannot conclude from this result (without doing anything else) why, where or how the observations deviated from the assumed prediction vector. Looking at the plot of the data in Figure [16\.10](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-BLJM-count-pairs-plot) above, it seems intuitive to think that Metal is disproportionally disfavored and that the combination of Biology and Jazz looks particularly outliery when compared to the baseline expectation.
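A common informal follow\-up is to inspect the Pearson residuals, i.e., the signed, standardized deviations of each cell count from its expectation; cells with large absolute residuals contribute most to the overall \\(\\chi^2\\)\-value. This is an exploratory sketch, not a formal post\-hoc test:
```
# Pearson residuals: (observed - expected) / sqrt(expected) for each cell
chisq.test(counts_BLJM_choice_pairs_vector)$residuals
```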
#### 16\.6\.1\.2 Pearson’s \\(\\chi^2\\)\-test of independence
The previous test of goodness of fit does not allow us to address the lecturer’s conjecture that a preference of Metal over Jazz goes with a preference of Logic over Biology. A slightly different kind of \\(\\chi^2\\)\-test is better suited for this. In Pearson’s \\(\\chi^2\\)\-test of independence, we look at a two\-dimensional table of correlated data observations, like this one:
```
BLJM_table <- BLJM_associated_counts %>%
select(-category) %>%
pivot_wider(names_from = LB, values_from = n)
BLJM_table
```
```
## # A tibble: 2 × 3
## JM Biology Logic
## <chr> <int> <int>
## 1 Jazz 38 26
## 2 Metal 20 18
```
For easier computation and compatibility with the function `chisq.test`, we handle the same data but stored as a matrix:
```
counts_BLJM_choice_pairs_matrix <- matrix(
counts_BLJM_choice_pairs_vector,
nrow = 2,
byrow = T
)
rownames(counts_BLJM_choice_pairs_matrix) <- c("Jazz", "Metal")
colnames(counts_BLJM_choice_pairs_matrix) <- c("Biology", "Logic")
counts_BLJM_choice_pairs_matrix
```
```
## Biology Logic
## Jazz 38 26
## Metal 20 18
```
Pearson’s \\(\\chi^2\\)\-test of independence addresses the question of whether two\-dimensional tabular count data like the above could plausibly have been generated by a prediction vector \\(\\vec{p}\\), which results from the assumption that the realizations of row\- and column\-choices are [stochastically independent](Chap-03-01-probability-conditional.html#Chap-03-01-probability-independence). If row\- and column\-choices are independent, the probability of seeing an outcome result in cell \\(ij\\) is the probability of realizing row \\(i\\) times the probability of realizing column \\(j\\). So, under an independence assumption, we expect a matrix (and a resulting vector) of expected counts like this:
```
# number of observations in total
N <- sum(counts_BLJM_choice_pairs_matrix)
# marginal proportions observed in the data
# the following is the vector r in the model graph
row_prob <- counts_BLJM_choice_pairs_matrix %>% rowSums() / N
# the following is the vector c in the model graph
col_prob <- counts_BLJM_choice_pairs_matrix %>% colSums() / N
# table of expected observations under independence assumption
# NB: %o% is the outer product of vectors
BLJM_expectation_matrix <- (row_prob %o% col_prob) * N
BLJM_expectation_matrix
```
```
## Biology Logic
## Jazz 36.39216 27.60784
## Metal 21.60784 16.39216
```
```
# the following is the vector p in the model graph
BLJM_expectation_vector <- as.vector(BLJM_expectation_matrix)
BLJM_expectation_vector
```
```
## [1] 36.39216 21.60784 27.60784 16.39216
```
Figure [16\.13](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-chi2-model-independence) shows a graphical representation of the \\(\\chi^2\\)\-test of independence. The main difference from the previous test of goodness of fit is that we no longer fix an arbitrary prediction vector \\(\\vec{p}\\) in advance, but derive \\(\\vec{p}\\) deterministically from the independence assumption *and* the best estimates (based on the data at hand) of the row\- and column probabilities.
Figure 16\.13: Graphical representation of Pearson’s \\(\\chi^2\\)\-test for independence.
We can compute the observed \\(\\chi^2\\)\-test statistic and the \\(p\\)\-value as follows, using \\((r\-1)(c\-1) \= 1\\) degree of freedom for a table with \\(r \= 2\\) rows and \\(c \= 2\\) columns:
```
chi2_observed <- sum(
(counts_BLJM_choice_pairs_matrix - BLJM_expectation_matrix)^2 /
BLJM_expectation_matrix
)
p_value_BLJM <- 1 - pchisq(q = chi2_observed, df = 1)
round(p_value_BLJM, 5)
```
```
## [1] 0.50615
```
Figure [16\.14](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-chi2-plot-independence) shows the sampling distribution, the value of the test statistic for the observed data and the \\(p\\)\-value.
Figure 16\.14: Sampling distribution for a Pearson’s \\(\\chi^2\\) test of independence (\\(\\chi^2\\)\-distribution with \\(1\\) degree of freedom), testing a flat baseline null hypothesis based on the BLJM data.
We can also use the built\-in function `chisq.test` in R to obtain this result more efficiently:
```
chisq.test(
# supply data as a matrix, not as a vector, for a test of independence
counts_BLJM_choice_pairs_matrix,
# do not use the default correction (because we didn't introduce it)
correct = FALSE
)
```
```
##
## Pearson's Chi-squared test
##
## data: counts_BLJM_choice_pairs_matrix
## X-squared = 0.44202, df = 1, p-value = 0.5061
```
With a \\(p\\)\-value of about 0\.5061, we should conclude that there is no indication of strong evidence *against* the assumption of independence. Consequently, there is no evidence *in favor* of the lecturer’s conjecture of dependence of musical and academic preferences. In a research paper, we might report this result as follows:
> A \\(\\chi^2\\)\-test of independence did not yield a significant test result (\\(\\chi^2\\)\-test, with \\(\\chi^2 \\approx 0\.44\\), \\(df \= 1\\) and \\(p \\approx 0\.5\\)). Therefore, we cannot claim to have found any evidence for the research hypothesis of dependence.
**Exercise 16\.5: \\(\\chi^2\\)\-test of independence**
Let us assume that there are two unordered categorical variables \\(A\\) and \\(B\\). Categorical variable \\(A\\) has two levels \\(a\_1\\) and \\(a\_2\\). Categorical variable \\(B\\) has three levels \\(b\_1\\), \\(b\_2\\) and \\(b\_3\\). Let us further assume that the (marginal) probabilities of a choice from categories \\(A\\) or \\(B\\) are as follows:
\\\[
P(A\=a\_i)\=\\begin{cases}
0\.3 \&\\textbf{if \\(i\=1\\)} \\\\
0\.7 \&\\textbf{if \\(i\=2\\)}
\\end{cases}
\\quad P(B\=b\_i)\=\\begin{cases}
0\.2 \&\\textbf{if \\(i\=1\\)}\\\\
0\.3 \&\\textbf{if \\(i\=2\\)}\\\\
0\.5 \&\\textbf{if \\(i\=3\\)}
\\end{cases}
\\]
1. If observations of pairs of instances from categories \\(A\\) and \\(B\\) are stochastically independent, what would the expected joint probability of each pair of potential observations be?
Solution
| | \\(b\_1\\) | \\(b\_2\\) | \\(b\_3\\) |
| --- | --- | --- | --- |
| \\(a\_1\\) | .3 \\(\\times\\) .2 \= .06 | .3 \\(\\times\\) .3 \= .09 | .3 \\(\\times\\) .5 \= .15 |
| \\(a\_2\\) | .7 \\(\\times\\) .2 \= .14 | .7 \\(\\times\\) .3 \= .21 | .7 \\(\\times\\) .5 \= .35 |
2. Imagine you observe the following table of counts for each pair of instances of categories \\(A\\) and \\(B\\):
| | \\(b\_1\\) | \\(b\_2\\) | \\(b\_3\\) |
| --- | --- | --- | --- |
| \\(a\_1\\) | 1 | 26 | 3 |
| \\(a\_2\\) | 19 | 4 | 47 |
Which of the \\(p\\)\-values given below would you expect to see when feeding this table into a
Pearson \\(\\chi^2\\)\-test of independence? (only one correct answer)
1. \\(p \\approx 1\\)
2. \\(p \\approx 0\.5\\)
3. \\(p \\approx 0\\)
4. I expect no result because the test is not suitable for this kind of data.
Solution
The correct answer is \\(p \\approx 0\\).
3. Explain the answer you gave in the previous part in at most three concise sentences.
Solution
As the marginal proportions of observed counts for the table in b. equal the marginal probabilities given above, the joint probability table in a. actually gives the predicted probabilities under the assumption of independence. Comparing prediction against observed proportion (obtained by dividing the table in b. by the total count of 100\), we see severe divergences, especially in the middle column.
### 16\.6\.2 *z*\-test
The Central Limit Theorem tells us that, given enough data, we can treat means of repeated samples from any arbitrary probability distribution as approximately normally distributed. Notice in addition that if \\(X\\) and \\(Y\\) are independent random variables, each following a normal distribution, then so is \\(Z \= X \- Y\\) (see also the [chapter on the normal distribution](selected-continuous-distributions-of-random-variables.html#app-91-distributions-normal)). It now becomes clear how research questions about means and differences between means (e.g., in the Mental Chronometry experiment) can be addressed, at least approximately: We conduct tests that hinge on a sampling distribution which is a normal distribution (usually a standard normal distribution).
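A quick simulation illustrates the claim about differences of independent normal random variables; the parameter values below are arbitrary choices for illustration:
```
# samples from two independent normal distributions (arbitrary parameters)
x <- rnorm(100000, mean = 3, sd = 2)
y <- rnorm(100000, mean = 1, sd = 1)
z <- x - y
# the difference should again be (approximately) normal,
# with mean 3 - 1 = 2 and standard deviation sqrt(2^2 + 1^2) (roughly 2.24)
c(mean = mean(z), sd = sd(z))
```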
The \\(z\\)\-test is perhaps the simplest of a family of tests that rely on normality of the sampling distribution. Unfortunately, what makes it so simple is also what makes it inapplicable in a wide range of cases. The \\(z\\)\-test assumes that a quantity that is normally distributed has an unknown mean (to be inferred by testing), but it also assumes that the *variance is known*. Since we do not know the variance in most cases of practical relevance, the \\(z\\)\-test needs to be replaced by a more adequate test, usually a test from the \\(t\\)\-test family, to be discussed below.
We start with the \\(z\\)\-test nonetheless because of the added benefit to our understanding. Figure [16\.15](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-z-test-model) shows the model that implicitly underlies a \\(z\\)\-test. It checks whether the data \\(\\vec{x}\\), which are assumed to be normally distributed with known \\(\\sigma\\), could have been generated by a hypothesized mean \\(\\mu \= \\mu\_0\\). The sampling distribution of the derived test statistic \\(z\\) is a standard normal distribution.
Figure 16\.15: Graphical representation of a \\(z\\)\-test.
We know that IQ test results are normally distributed around a mean of 100 with a standard deviation of 15\. This holds when the sample is representative of the whole population. But suppose we have reason to believe that the sample is from CogSci students. The standard deviation in a sample from CogSci students might still plausibly be fixed to 15, but we’d like to test the assumption that *this* sample was generated by a mean \\(\\mu \= 100\\), our null hypothesis.
For illustration, suppose we observed the following data set of IQ test results:
```
# fictitious IQ data
IQ_data <- c(87, 91, 93, 97, 100, 101, 103, 104,
104, 105, 105, 106, 108, 110, 111,
112, 114, 115, 119, 121)
mean(IQ_data)
```
```
## [1] 105.3
```
The mean of this data set is 105\.3\. Suspicious!
Following the model in Figure [16\.15](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-z-test-model), we calculate the value of the test statistic for the observed data.
```
# number of observations
N <- length(IQ_data)
# null hypothesis to test
mu_0 <- 100
# standard deviation (known/assumed as true)
sd <- 15
z_observed <- (mean(IQ_data) - mu_0) / (sd / sqrt(N))
z_observed %>% round(4)
```
```
## [1] 1.5802
```
We focus on a one\-sided \\(p\\)\-value because our “research” hypothesis is that CogSci students have, on average, a higher IQ. Since we observed a mean of 105\.3 in the data, which is higher than the value of 100 assumed under the null hypothesis, we test the null hypothesis \\(\\mu \= 100\\) against an alternative hypothesis that assumes that the data was generated by a mean *bigger* than 100 (which is exactly our research hypothesis).
As before, we can then compute the \\(p\\)\-value by checking the area under the sampling distribution, here a standard normal, in the appropriate way. Figure [16\.16](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-z-test) shows this result graphically.
```
p_value_IQ_data_ztest <- 1 - pnorm(z_observed)
p_value_IQ_data_ztest %>% round(6)
```
```
## [1] 0.057036
```
Figure 16\.16: Sampling distribution for a \\(z\\)\-test, testing the null hypothesis based on the assumption that the IQ\-data was generated by \\(\\mu \= 100\\) (with assumed/known \\(\\sigma\\)).
We can also use a ready\-made function for the \\(z\\)\-test. However, as the \\(z\\)\-test is so uncommon, it is not built into core R. We need to rely on the `BSDA` package to find the function `z.test`.
```
BSDA::z.test(x = IQ_data, mu = 100, sigma.x = 15, alternative = "greater")
```
```
##
## One-sample z-Test
##
## data: IQ_data
## z = 1.5802, p-value = 0.05704
## alternative hypothesis: true mean is greater than 100
## 95 percent confidence interval:
## 99.78299 NA
## sample estimates:
## mean of x
## 105.3
```
The conclusion to be drawn from this test could be formulated in a research report as follows:
> We tested the null hypothesis of a mean equal to 100, assuming a known standard deviation of 15, in a one\-sided \\(z\\)\-test against the alternative hypothesis that the data was generated by a mean greater than 100 (our research hypothesis). The test was not significant (\\(N \= 20\\), \\(z \\approx 1\.5802\\), \\(p \\approx 0\.05704\\)), giving us no indication of strong evidence against the assumption that the mean is at most 100\.
### 16\.6\.3 *t*\-tests
In most practical applications where a \\(z\\)\-test might be useful, the standard deviation is not known. If unknown, it should also not lightly be fixed by clever guess\-work. This is where the family of \\(t\\)\-tests comes in. We will look at two examples of these: the one\-sample \\(t\\)\-test, which compares one set of samples to a fixed mean, and the two\-sample \\(t\\)\-test, which compares the means of two sets of samples.
#### 16\.6\.3\.1 One\-sample \\(t\\)\-test
The simplest example of this family, namely a \\(t\\)\-test for one metric vector \\(\\vec{x}\\) of normally distributed observations, tests the null hypothesis that \\(\\vec{x}\\) was generated by some \\(\\mu \= \\mu\_0\\) (just like the \\(z\\)\-test). However, unlike the \\(z\\)\-test, a one\-sample \\(t\\)\-test does not assume that the standard deviation is known. It rather uses the observed data to obtain an estimate for this parameter. More concretely, a one\-sample \\(t\\)\-test for \\(\\vec{x}\\) estimates the standard deviation in the usual way (see Chapter [5](Chap-02-03-summary-statistics.html#Chap-02-03-summary-statistics)):
\\\[\\hat{\\sigma}\_x \= \\sqrt{\\frac{1}{n\-1} \\sum\_{i\=1}^n (x\_i \- \\mu\_{\\vec{x}})^2}\\]
Figure [16\.17](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-t-test-model-one-population) shows a graphical representation of a one\-sample \\(t\\)\-test model. The light shading of the node for the standard deviation indicates that this parameter is estimated from the observed data. Importantly, the distribution of the test statistic \\(t\\) is no longer well approximated by a normal distribution when the sample size is low. It is better captured by a [Student’s \\(t\\) distribution](selected-continuous-distributions-of-random-variables.html#app-91-distributions-students-t).
Figure 16\.17: Graphical representation of the model underlying a frequentist one\-sample \\(t\\)\-test. Notice that the lightly shaded node for the standard deviation represents that the value for this parameter is estimated from the data.
Let’s revisit our IQ\-data set from above to calculate a \\(t\\)\-test. Using a \\(t\\)\-test implies that we are now assuming that the standard deviation is actually unknown. We can calculate the value of the test statistic for the observed data and use this to compute a \\(p\\)\-value, much like in the case of the \\(z\\)\-test before.
```
N <- length(IQ_data)
# fix the null hypothesis
mean_0 <- 100
# unlike in a z-test, we use the sample to estimate the SD
sigma_hat <- sd(IQ_data)
t_observed <- (mean(IQ_data) - mean_0) / sigma_hat * sqrt(N)
t_observed %>% round(4)
```
```
## [1] 2.6446
```
We calculate the relevant one\-sided \\(p\\)\-value using the cumulative distribution function `pt` of the \\(t\\)\-distribution.
```
p_value_t_test_IQ <- 1 - pt(t_observed, df = N - 1)
p_value_t_test_IQ %>% round(6)
```
```
## [1] 0.007992
```
Figure 16\.18: Sampling distribution for a \\(t\\)\-test, testing the null hypothesis that the IQ\-data was generated by \\(\\mu \= 100\\) (with unknown \\(\\sigma\\)).
Compare these calculations against the built\-in function `t.test`:
```
t.test(x = IQ_data, mu = 100, alternative = "greater")
```
```
##
## One Sample t-test
##
## data: IQ_data
## t = 2.6446, df = 19, p-value = 0.007992
## alternative hypothesis: true mean is greater than 100
## 95 percent confidence interval:
## 101.8347 Inf
## sample estimates:
## mean of x
## 105.3
```
These results could be stated in a research report much like so:
> We tested the null hypothesis of a mean equal to 100, assuming an unknown standard deviation, using a one\-sided, one\-sample \\(t\\)\-test against the alternative hypothesis that the data was generated by a mean greater than 100 (our research hypothesis). The significant test result (\\(N \= 20\\), \\(t \\approx 2\.6446\\), \\(p \\approx 0\.007992\\)) suggests that the data provides strong evidence against the assumption that the mean is not bigger than 100\.
Notice that the conclusions we draw from the previous \\(z\\)\-test and this one\-sample \\(t\\)\-test are quite different. Why is this so? Well, it is because we (cheekily) chose a data set `IQ_data` that was actually *not* generated by a normal distribution with a standard deviation of 15, contrary to what we said about IQ\-scores normally having this standard deviation. The assumption about \\(\\sigma\\) fed into the \\(z\\)\-test was (deliberately!) wrong. The result of the \\(t\\)\-test, at least for this example, is better. The data in `IQ_data` are actually samples from \\(\\text{Normal}(105,10\)\\). This demonstrates why the one\-sample \\(t\\)\-test is usually preferred over a \\(z\\)\-test: unshakable, true knowledge of \\(\\sigma\\) is very rare.
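The following simulation sketch (with made\-up settings) illustrates this point more systematically: if data sets of size 20 are repeatedly generated under the null hypothesis with a true mean of 100 but a true standard deviation of 10, a \\(z\\)\-test that wrongly fixes \\(\\sigma \= 15\\) rejects far less often than the nominal \\(\\alpha \= 0\.05\\) (and correspondingly loses power), while the \\(t\\)\-test, which estimates \\(\\sigma\\) from the data, stays close to the nominal level.
```
set.seed(1234)
n_sims <- 10000
N <- 20
alpha <- 0.05
p_z <- numeric(n_sims)
p_t <- numeric(n_sims)
for (i in 1:n_sims) {
  # data generated under the null hypothesis (mu = 100), but with true sd = 10
  x <- rnorm(N, mean = 100, sd = 10)
  # z-test with the (wrong) assumption sigma = 15
  z <- (mean(x) - 100) / (15 / sqrt(N))
  p_z[i] <- 1 - pnorm(z)
  # one-sample t-test, estimating the standard deviation from the data
  t_stat <- (mean(x) - 100) / (sd(x) / sqrt(N))
  p_t[i] <- 1 - pt(t_stat, df = N - 1)
}
# proportion of (false) rejections; should be close to alpha for a calibrated test
c(z_test = mean(p_z < alpha), t_test = mean(p_t < alpha))
```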
#### 16\.6\.3\.2 Two\-sample \\(t\\)\-test (for unpaired data with equal variance and unequal sample sizes)
The “mother of all experimental designs” compares two groups of measurements. We give a drug to one group of patients, a placebo to another. We take a metric measure (say, blood sugar level) and ask whether there is a difference between these two groups. Section [9](ch-03-04-parameter-estimation.html#ch-03-04-parameter-estimation) introduced the \\(T\\)\-Test Model for a Bayesian approach. Here, we look at a corresponding model for a frequentist approach, a so\-called two\-sample \\(t\\)\-test. There are different kinds of such two\-sample \\(t\\)\-tests. The differences lie, e.g., in whether we assume that both groups have equal variance, in whether the sample sizes are the same in both groups, or in whether observations are paired (e.g., as in a within\-subjects design, where we get two measurements from each participant, one from each condition/group). Here, we focus on unpaired data (as from a between\-subjects design), assume equal variance but (possibly) unequal sample sizes. The case we look at is the [avocado data](app-93-data-sets-avocado.html#app-93-data-sets-avocado), where we want to specifically investigate whether the weekly average price of organically grown avocados is higher than that of conventionally grown avocados.[88](#fn88)
We here consider the preprocessed avocado data set (see Appendix Chapter [D.5](app-93-data-sets-avocado.html#app-93-data-sets-avocado) for details on how this preprocessing was performed).
```
avocado_data <- aida::data_avocado
```
Remember that the distribution of prices looks as follows:
A graphical representation of the two\-sample \\(t\\)\-test (for unpaired data with equal variance and unequal sample sizes), which we will apply to this case, is shown in Figure [16\.19](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-t-test-model-two-populations). The model assumes that we have two vectors of metric measurements \\(\\vec{x}\_A\\) and \\(\\vec{x}\_B\\), with length \\(n\_A\\) and \\(n\_B\\), respectively. These are the price measures for conventionally grown and for organically grown avocados. The model assumes that measures in both \\(\\vec{x}\_A\\) and \\(\\vec{x}\_B\\) are i.i.d. samples from a normal distribution. The mean of one group (group \\(B\\) in the graph) is assumed to be some unknown \\(\\mu\\). Interestingly, this parameter will cancel out eventually: the approximation of the sampling distribution turns out to be independent of this parameter.[89](#fn89) The mean of the other group (group \\(A\\) in the graph) is computed as \\(\\mu \+ \\delta\\), so with some additive parameter \\(\\delta\\) indicating the difference between means of these groups. This \\(\\delta\\) is the main parameter of interest for inferences regarding hypotheses concerning differences between groups. Finally, the model assumes that both groups have the same standard deviation, an estimate of which is derived from the data (in a rather convoluted looking formula that is not important for our introductory concerns). As indicated in Figure [16\.19](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-t-test-model-two-populations), the sampling distribution for this model is an instance of Student’s \\(t\\)\-distribution with mean 0, standard deviation 1 and degrees of freedom \\(\\nu\\) given as \\(n\_A \+ n\_B \- 2\\).
Figure 16\.19: Graphical representation of the model underlying a frequentist two\-population \\(t\\)\-test (for unpaired data with equal variance and unequal sample sizes). Notice that the light shading of the node for the standard deviation indicates that the value for this parameter is estimated from the data.
Figure [16\.19](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-t-test-model-two-populations) gives us the template to compute the value of the test statistic for the observed data:
```
# fix the null hypothesis: no difference between groups
delta_0 <- 0
# data (group A)
x_A <- avocado_data %>%
filter(type == "organic") %>% pull(average_price)
# data (group B)
x_B <- avocado_data %>%
filter(type == "conventional") %>% pull(average_price)
# sample mean for organic (group A)
mu_A <- mean(x_A)
# sample mean for conventional (group B)
mu_B <- mean(x_B)
# numbers of observations
n_A <- length(x_A)
n_B <- length(x_B)
# pooled estimate of the standard error of the difference between means
sigma_AB <- sqrt(
( ((n_A - 1) * sd(x_A)^2 + (n_B - 1) * sd(x_B)^2 ) /
(n_A + n_B - 2) ) * (1/n_A + 1/n_B)
)
t_observed <- (mu_A - mu_B - delta_0) / sigma_AB
t_observed
```
```
## [1] 105.5878
```
We can use the value of the test statistic for the observed data to compute a one\-sided \\(p\\)\-value, as before. Notice that we use a one\-sided test because we hypothesize that organically grown avocados are more expensive, not just that they have a different price (more expensive or cheaper).
```
p_value_t_test_avocado <- 1 - pt(q = t_observed, df = n_A + n_B - 2)
p_value_t_test_avocado
```
```
## [1] 0
```
Owing to number imprecision, the calculated \\(p\\)\-value comes up as a flat zero. We have a lot of data, and the task of defending that conventionally grown avocados are not less expensive than organically grown is very tough. This also shows in the corresponding picture in Figure [16\.20](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-t-test-two-sample).
Figure 16\.20: Sampling distribution for a two\-sample \\(t\\)\-test, testing the null hypothesis of no difference between groups, based on the avocado data.
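If an actual number is desired instead of a flat zero, the \\(p\\)\-value can be computed on the logarithmic scale, where `pt` avoids the numerical underflow; a small sketch reusing the quantities computed above:
```
# upper-tail p-value on the natural-log scale (avoids numerical underflow)
log_p_value <- pt(q = t_observed, df = n_A + n_B - 2,
                  lower.tail = FALSE, log.p = TRUE)
log_p_value
```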
We can also, of course, calculate this test result with the built\-in function `t.test`:
```
t.test(
x = x_A, # first vector of data measurements
y = x_B, # second vector of data measurements
paired = FALSE, # measurements are to be treated as unpaired
var.equal = TRUE, # we assume equal variance in both groups
mu = 0 # NH is delta = 0 (name 'mu' is misleading!)
)
```
```
##
## Two Sample t-test
##
## data: x_A and x_B
## t = 105.59, df = 18247, p-value < 2.2e-16
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.4867522 0.5051658
## sample estimates:
## mean of x mean of y
## 1.653999 1.158040
```
The result could be reported as follows:
> We conducted a two\-sample \\(t\\)\-test of differences of means (unpaired samples, equal variance, unequal sample sizes) to compare the average weekly price of conventionally grown avocados to that of organically grown avocados. The test yielded a significant result (\\(N\_A \= 9123\\), \\(N\_B \= 9126\\), \\(t \\approx 105\.59\\), \\(p \\approx 0\\)), providing evidence against the null hypothesis that organically grown avocados are not more expensive than conventionally grown ones.
**Exercise 16\.6: Two\-sample \\(t\\)\-test**
Your fellow student is skeptical of her flatmate’s claim that pizzas from place \\(A\\) have a smaller diameter than place \\(B\\) (both pizzerias have just one pizza size, namely \\(\\varnothing\\ 32\\ cm\\)). She decides to test that claim with a two\-sample \\(t\\)\-test and sets \\(H\_0: \\mu\_A \= \\mu\_B\\) (\\(\\delta \= 0\\)), \\(H\_a: \\mu\_A \< \\mu\_B\\), \\(\\alpha \= 0\.05\\). She then asks your class to always measure the pizza’s diameter if ordered from one of the two places. At the end of the semester, she has the following table:
| | Pizzeria \\(A\\) | Pizzeria \\(B\\) |
| --- | --- | --- |
| mean | 30\.9 | 31\.8 |
| standard deviation | 2\.3 | 2 |
| sample size | 38 | 44 |
1. How many degrees of freedom \\(\\nu\\) are there?
Solution
\\(\\nu \= n\_A\+n\_B\-2 \= 38\+44\-2 \= 80\\) degrees of freedom.
2. Given the table above, calculate the test statistic \\(t\\).
Solution
\\\[
\\hat{\\sigma}\=\\sqrt{\\frac{(n\_A\-1\)\\hat{\\sigma}\_A^2\+(n\_B\-1\)\\hat{\\sigma}^2\_B}{n\_A\+n\_B\-2}(\\frac{1}{n\_A}\+\\frac{1}{n\_B})}\\\\
\\hat{\\sigma}\=\\sqrt{\\frac{37\\cdot2\.3^2\+43\\cdot2^2}{80}(\\frac{1}{38}\+\\frac{1}{44})}\\approx 0\.47\\\\
t\=((\\bar{x}\_A\-\\bar{x}\_B)\-\\delta)\\cdot\\frac{1}{\\hat{\\sigma}}\\\\
t\=\\frac{30\.9\-31\.8}{0\.47}\\approx \-1\.91
\\]
3. Look at this so\-called [t table](http://www.ttable.org/) and determine the critical value to be exceeded in order to get a statistically significant result. NB: We are looking for the critical value that is on the *left* side of the distribution. So, in order to have a statistically significant result, the test statistic from b. has to be smaller than the *negated* critical value in the table.
Solution
The critical value is \-1\.664\.
4. Compare the test statistic from b. with the critical value from c. and interpret the result.
Solution
The calculated test statistic from b. is smaller than the critical value, so the result is statistically significant at \\(\\alpha \= 0\.05\\). The fellow student should reject the null hypothesis of equal pizza diameters.
### 16\.6\.4 ANOVA
ANOVA is short for “analysis of variance”.
It’s an umbrella term for a number of different models centered around testing the influence of one or several categorical predictors on a metric measurement.
In previous sections, we have summoned regression models for this task.
This is indeed the more modern and preferred approach, especially when the regression modeling also takes random effects (so\-called hierarchical modeling) into account.
Nonetheless, it is good to have a basic understanding of ANOVAs, as they are featured prominently in a lot of published research papers, whose findings are still relevant.
Also, in some areas of empirical science, ANOVAs are still commonly used.
Here we are just going to cover the most basic type of ANOVA, which is called a *one\-way ANOVA*.
A one\-way ANOVA is, in regression jargon, a suitable approach for the case of a single categorical predictor with more than two levels (otherwise a \\(t\\)\-test would be enough) and a metric dependent variable.
For illustration we will here consider a fictitious case of metric measurement for three groups: A, B, and C.
These groups are levels of a categorical predictor `group`.
We want to address the research question of whether the means of the measurements of groups A, B and C could plausibly be identical.
The main idea behind analysis of variance is *not* to look at the means of measurements to be compared, but rather to compare the *between\-group variances* to the *within\-group variances*.
Whence the name “analysis of variance”.
While mathematically complex, the idea is quite intuitive.
Figure [16\.21](ch-03-05-hypothesis-testing-tests.html#fig:ch-05-01-examples-F-score) shows four different (made\-up) data sets, each with different measurements for groups A, B and C.
It also shows the “pooled data”, i.e., the data from all three groups combined.
What is also shown in each panel is the so\-called F\-statistic, which is a number derived from a sample in the following way.
We have \\(k \\ge 2\\) groups of metric observations.
For group \\(1 \\le j \\le k\\), there are \\(n\_j\\) observations.
Let \\(x\_{ij}\\) be the observation \\(1 \\le i \\le n\_j\\) for group \\(1 \\le j \\le k\\).
Let \\(\\bar{x}\_j \= \\frac{1}{n\_j} \\sum\_{i \= 1}^{n\_j} x\_{ij}\\) be the mean of group \\(j\\) and let \\(\\bar{\\bar{x}} \= \\frac{1}{N} \\sum\_{j\=1}^k \\sum\_{i\=1}^{n\_j} x\_{ij}\\) be the grand mean of all data points, where \\(N \= \\sum\_{j\=1}^k n\_j\\) is the total number of observations.
The **between\-group variance** measures how much, on average, the mean of each group deviates from the grand mean of all data points (where distance is squared distance, as usual):
\\\[
\\hat{\\sigma}\_{\\mathrm{between}} \= \\frac{\\sum\_{j\=1}^k n\_j (\\bar{x}\_j \- \\bar{\\bar{x}})^2}{k\-1}
\\]
The **within\-group variance** is a measure of the average variance of the data points inside of each group:
\\\[
\\hat{\\sigma}\_{\\mathrm{within}} \= \\frac{\\sum\_{j\=1}^k \\sum\_{i\=1}^{n\_j} (x\_{ij} \- \\bar{x}\_j)^2}{\\sum\_{i\=1}^k (n\_i \- 1\)}
\\]
Now, if the means of different groups are rather different from each other, the between\-group variance should be high.
But absolute numbers may be misleading, so we need to scale the between\-group variance also by how much variance we see, on average, in each group, i.e., the within\-group variance.
That is why the \\(F\\)\-statistic is defined as:
\\\[
F \= \\frac{\\hat{\\sigma}\_{\\mathrm{between}}}{\\hat{\\sigma}\_{\\mathrm{within}}}
\\]
For illustration, Figure [16\.21](ch-03-05-hypothesis-testing-tests.html#fig:ch-05-01-examples-F-score) shows four different scenarios with associated measures of \\(F\\).
Figure 16\.21: Different examples of metric measurements for three groups (A, B, C), shown here together with a plot of the combined (\= pooled) data. We see that, as the means of measurements go apart, so does the ratio of between\-group variance and within\-group variance.
It can be shown that, under the assumption that the \\(k\\) groups have identical means, the sampling distribution of the \\(F\\) statistic follows an [\\(F\\)\-distribution](selected-continuous-distributions-of-random-variables.html#app-91-distributions-F) with appropriate parameters (which is, unsurprisingly, the distribution constructed for exactly this purpose):
\\\[
F \\sim F\\mathrm{\\text{\-}distribution}\\left(k \- 1, \\sum\_{i\=1}^k (n\_i \- 1\) \\right)
\\]
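We can convince ourselves of this claim with a small simulation (a sketch with arbitrary group sizes): if all groups are sampled from one and the same normal distribution, the \\(F\\)\-values computed with the formulas above should follow the stated \\(F\\)\-distribution, so that \\(F\\) exceeds that distribution’s 95% quantile in roughly 5% of the simulated cases.
```
set.seed(1234)
n_sims <- 5000
k <- 3                   # number of groups
n_j <- c(15, 14, 16)     # group sizes (arbitrary)
F_samples <- replicate(n_sims, {
  # all groups share the same mean -> the null hypothesis of equal means is true
  groups <- lapply(n_j, function(n) rnorm(n, mean = 0, sd = 1))
  grand_mean <- mean(unlist(groups))
  between <- sum(n_j * (sapply(groups, mean) - grand_mean)^2) / (k - 1)
  within <- sum(sapply(groups, function(x) sum((x - mean(x))^2))) / sum(n_j - 1)
  between / within
})
# proportion of simulated F-values above the 95% quantile of the F-distribution
mean(F_samples > qf(0.95, df1 = k - 1, df2 = sum(n_j - 1)))
```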
The complete frequentist model of a one\-way ANOVA is shown in Figure [16\.22](ch-03-05-hypothesis-testing-tests.html#fig:ch-05-01-ANOVA-onway-model).
Notice that the null hypothesis of equal means is not shown explicitly, but rather only a single mean \\(\\mu\\) is shown, which functions as the mean for all groups.
Figure 16\.22: Graphical representation of the model underlying a one\-way ANOVA.
Let’s consider some concrete, but fictitious data for a full example:
```
# fictitious data
x_A <- c(78, 43, 60, 60, 60, 50, 57, 58, 64, 64, 56, 62, 66, 53, 59)
x_B <- c(52, 53, 51, 49, 64, 60, 45, 50, 55, 65, 76, 62, 62, 45)
x_C <- c(78, 66, 74, 57, 75, 64, 64, 53, 63, 60, 79, 68, 68, 47, 63, 67)
# number of observations in each group
n_A <- length(x_A)
n_B <- length(x_B)
n_C <- length(x_C)
# in tibble form
anova_data <- tibble(
condition = c(
rep("A", n_A),
rep("B", n_B),
rep("C", n_C)
),
value = c(x_A, x_B, x_C)
)
```
Here’s a plot of this data:
We want to know whether it is plausible to entertain the idea that the means of these three groups are identical.
We can calculate the one\-way ANOVA explicitly as follows, following the calculations described in Figure [16\.22](ch-03-05-hypothesis-testing-tests.html#fig:ch-05-01-ANOVA-onway-model):
```
# compute grand_mean
grand_mean <- anova_data %>% pull(value) %>% mean()
# compute degrees of freedom (parameters to F-distribution)
df1 <- 2
df2 <- n_A + n_B + n_C - 3
# between-group variance
between_group_variance <- 1/df1 *
(
n_A * (mean(x_A) - grand_mean)^2 +
n_B * (mean(x_B) - grand_mean)^2 +
n_C * (mean(x_C) - grand_mean)^2
)
# within-group variance
within_group_variance <- 1/df2 *
(
sum((x_A - mean(x_A))^2) +
sum((x_B - mean(x_B))^2) +
sum((x_C - mean(x_C))^2)
)
# test statistic of observed data
F_observed <- between_group_variance / within_group_variance
# retrieving the p-value (using the F-distribution)
p_value_anova <- 1 - pf(F_observed, 2, n_A + n_B + n_C - 3)
p_value_anova %>% round(4)
```
```
## [1] 0.0172
```
Compare this to the result of calling R’s built\-in function `aov`:
```
aov(formula = value ~ condition, anova_data) %>% summary()
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## condition 2 640.8 320.4 4.485 0.0172 *
## Residuals 42 3000.3 71.4
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
To report these results, we could use a statement like this:
> Based on a one\-way ANOVA, we find evidence against the assumption of equal means across all groups (\\(F(2, 42\) \\approx 4\.485\\), \\(p \\approx 0\.0172\\)).
### 16\.6\.5 Linear regression
Significance testing for linear regression parameters follows the same logic as for other models as well.
In particular, it can be shown that the relevant test statistic for ML\-estimates of regression coefficients \\(\\hat\\beta\_i\\), under the assumption that the true model has \\(\\beta\_i \= 0\\), follows a \\(t\\)\-distribution.
We can run a linear regression model (with a Gaussian noise function) using the built\-in function `glm` (for “generalized linear model”):
```
fit_murder_mle <- glm(
formula = murder_rate ~ low_income,
data = aida::data_murder
)
```
If we inspect a summary for the model fit, we see the results of a \\(t\\)\-test, one for each coefficient, based on the null\-hypothesis that this coefficient’s true value is 0\.
```
summary(fit_murder_mle)
```
```
##
## Call:
## glm(formula = murder_rate ~ low_income, data = aida::data_murder)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -9.1663 -2.5613 -0.9552 2.8887 12.3475
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -29.901 7.789 -3.839 0.0012 **
## low_income 2.559 0.390 6.562 3.64e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for gaussian family taken to be 30.38125)
##
## Null deviance: 1855.20 on 19 degrees of freedom
## Residual deviance: 546.86 on 18 degrees of freedom
## AIC: 128.93
##
## Number of Fisher Scoring iterations: 2
```
So, in the case of the `murder_data`, we would conclude that there is strong evidence *against* the assumption that the data could have been generated by a model whose slope parameter for `low_income` is set to 0\.
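To connect this output to the sampling\-distribution logic used throughout this chapter, we can recompute the reported (two\-sided) \\(p\\)\-values by hand from the estimates, their standard errors, and the residual degrees of freedom; a small sketch based on the fitted object from above:
```
# coefficient table: estimates, standard errors, t-values and p-values
coefs <- summary(fit_murder_mle)$coefficients
# t-value = estimate / standard error (under the null hypothesis beta_i = 0)
t_values <- coefs[, "Estimate"] / coefs[, "Std. Error"]
# two-sided p-values from a t-distribution with the residual degrees of freedom
p_values <- 2 * pt(-abs(t_values), df = fit_murder_mle$df.residual)
round(p_values, 6)
```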
### 16\.6\.6 Likelihood\-Ratio Test
The likelihood\-ratio (LR) test is a very popular frequentist method of model comparison.
The LR\-test assimilates model comparison to frequentist hypothesis testing.
It defines a suitable test statistic and supplies an approximation of the sampling distribution.
The LR\-test first and foremost applies to the comparison of **nested models**, but there are results about how approximate results can be obtained when comparing non\-nested models with an LR\-test ([Vuong 1989](#ref-Vuong1989:Likelihood-Rati)).
A frequentist model \\(M\_i\\) is **nested** inside another frequentist model \\(M\_j\\) iff \\(M\_i\\) can be obtained from \\(M\_j\\) by fixing at least one of \\(M\_j\\)’s free parameters to a specific value.
If \\(M\_i\\) is nested under \\(M\_j\\), \\(M\_i\\) is called the **nested model**, and \\(M\_j\\) is called the **nesting model** or the **encompassing model**.
Obviously, the nested model is simpler (of lower complexity) than the nesting model.
For example, we had the two\-parameter exponential model of forgetting previously in Chapter [10](Chap-03-06-model-comparison.html#Chap-03-06-model-comparison):
\\\[
\\begin{aligned}
P(D \= \\langle k, N \\rangle \\mid \\langle a, b\\rangle) \& \= \\text{Binom}(k,N, a \\exp (\-bt)), \\ \\ \\ \\ \\text{where } a,b\>0
\\end{aligned}
\\]
We wanted to explain the following “forgetting data”:
```
# time after memorization (in seconds)
t <- c(1, 3, 6, 9, 12, 18)
# proportion (out of 100) of correct recall
y <- c(.94, .77, .40, .26, .24, .16)
# number of observed correct recalls (out of 100)
obs <- y * 100
```
An example of a model that is nested under this two\-parameter model is the following one\-parameter model, which fixes \\(a \= 1\.1\\).
\\\[
\\begin{aligned}
P(D \= \\langle k, N \\rangle \\mid b) \& \= \\text{Binom}(k,N, 1\.1 \\ \\exp (\-bt)), \\ \\ \\ \\ \\text{where } b\>0
\\end{aligned}
\\]
Here’s an ML\-estimation for the nested model (the best fit for the nesting model `bestExpo` was obtained in Chapter [10](Chap-03-06-model-comparison.html#Chap-03-06-model-comparison)):
```
nLL_expo_nested <- function(b) {
# calculate predicted recall rates for given parameters
theta <- 1.1 * exp(-b * t) # one-param exponential model
# avoid edge cases of infinite log-likelihood
theta[theta <= 0.0] <- 1.0e-4
theta[theta >= 1.0] <- 1 - 1.0e-4
# return negative log-likelihood of data
- sum(dbinom(x = obs, prob = theta, size = 100, log = T))
}
bestExpo_nested <- optim(
nLL_expo_nested,
par = 0.5,
method = "Brent",
lower = 0,
upper = 20
)
bestExpo_nested
```
```
## $par
## [1] 0.1372445
##
## $value
## [1] 19.21569
##
## $counts
## function gradient
## NA NA
##
## $convergence
## [1] 0
##
## $message
## NULL
```
The LR\-test looks at the likelihood ratio of the nested model \\(M\_0\\) over the encompassing model \\(M\_1\\) using the following test statistic:
\\\[\\text{LR}(M\_1, M\_0\) \= \-2\\log \\left(\\frac{P\_{M\_0}(D\_\\text{obs} \\mid \\hat{\\theta}\_0\)}{P\_{M\_1}(D\_\\text{obs} \\mid \\hat{\\theta}\_1\)}\\right)\\]
We can calculate the value of this test statistic for the current example as follows:
```
# optim minimized the *negative* log-likelihood, so $value is -log P(D_obs | theta_hat);
# hence LR = -2 * (log L_nested - log L_nesting) = 2 * nLL_nested - 2 * nLL_nesting
LR_observed <- 2 * bestExpo_nested$value - 2 * bestExpo$value
LR_observed
```
```
## [1] 1.098429
```
If the simpler (nested) model is true, the sampling distribution of this test statistic approaches a \\(\\chi^2\\)\-distribution with \\(d\\) degrees of freedom as the amount of data increases.
The degrees of freedom \\(d\\) are given by the difference in free parameters, i.e., the number of parameters the nested model fixes to specific values, but which are free in the nesting model.
We can therefore calculate the \\(p\\)\-value for the LR\-test for our current example like so:
```
p_value_LR_test <- 1 - pchisq(LR_observed, 1)
p_value_LR_test
```
```
## [1] 0.2946111
```
The \\(p\\)\-value of this test quantifies the evidence against the assumption that the data was generated by the simpler model.
A significant test result would therefore indicate that it would be surprising if the data was generated by the simpler model.
This is usually taken as evidence in favor of the more complex, nesting model.
Given the current \\(p\\)\-value \\(p \\approx 0\.2946\\), we would conclude that there is no strong evidence against the simpler model.
Often this may lead researchers to favor the nested model due to its simplicity: the data at hand does not seem to warrant the added complexity of the nesting model, so the nested model seems to suffice.
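For models fitted with `glm`, R’s `anova` function provides a convenient way to run such (approximate) likelihood\-ratio\-style comparisons of nested models. Here is a hedged sketch, reusing the regression on the murder data from the previous section and comparing it against an intercept\-only model that is nested inside it:
```
# nested model: intercept only (the slope for low_income is fixed to 0)
fit_murder_nested <- glm(
  formula = murder_rate ~ 1,
  data = aida::data_murder
)
# compare nested and nesting model; test = "LRT" requests a likelihood-ratio-style test
anova(fit_murder_nested, fit_murder_mle, test = "LRT")
```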
**Exercise 16\.7**
TRUE OR FALSE?
1. The nested model usually has more free parameters than the nesting model.
2. When we perform the LR\-test, we initially assume that the nested model is more plausible.
3. An LR\-test can only compare the nested model with nesting models.
4. If the LR\-test result has a \\(p\\)\-value equal to 1\.0, one can conclude that it’s a piece of evidence in favor of the simpler model.
Solution
1. False
2. True
3. False
4. True
Categorical data is data where each data observation falls into one of several unordered categories. If we have \\(k\\) such categories, a **prediction vector** \\(\\vec{p} \= \\langle p\_1, \\dots, p\_k \\rangle\\) is a probability vector of length \\(k\\) such that \\(p\_i\\) gives the probability with which a single data observation falls into the \\(i\\)\-th category. The likelihood of a single data observation is given by the [Categorical distribution](selected-discrete-distributions-of-random-variables.html#app-91-distributions-categorical), and the likelihood of \\(N\\) data observations is given by the [Multinomial distribution](selected-discrete-distributions-of-random-variables.html#app-91-distributions-multinomial). These are generalizations of the Bernoulli and Binomial distributions, which expand the case of two unordered categories to more than two unordered categories.
The BLJM data supplies us with categorical data. Here is the vector of counts of how many participants selected a given music\+subject pair:
```
# add category names
BLJM_associated_counts <- BLJM_associated_counts %>%
mutate(
category = str_c(
BLJM_associated_counts %>% pull(LB),
"-",
BLJM_associated_counts %>% pull(JM)
)
)
counts_BLJM_choice_pairs_vector <- BLJM_associated_counts %>% pull(n)
names(counts_BLJM_choice_pairs_vector) <- BLJM_associated_counts %>% pull(category)
counts_BLJM_choice_pairs_vector
```
```
## Biology-Jazz Logic-Jazz Biology-Metal Logic-Metal
## 38 26 20 18
```
Figure [16\.10](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-BLJM-count-pairs-plot) shows a crude plot of these counts, together with a baseline prediction of equal proportion in each category.
Figure 16\.10: Observed counts of choice pairs of music\+subject preference in the BLJM data.
Pearson’s \\(\\chi^2\\)\-test for goodness of fit allows us to test whether this data could plausibly have been generated by (a model whose predictions are given by) a prediction vector \\(\\vec{p} \= \\langle p\_1, \\dots, p\_4 \\rangle\\), where \\(p\_1\\) would be the predicted probability of a choice pair “Biology\-Jazz” occurring for a single participant, and so on. Frequently, this test is used to check whether an equal baseline distribution could have generated the data. We do that here, too. We form the null hypothesis that \\(\\vec{p} \= \\vec{p}\_0\\) with \\(p\_{0i} \= \\frac{1}{4}\\) for all categories \\(i\\).
Figure [16\.11](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-chi2-model-goodness) shows a graphical representation of the model implicitly assumed in the background for a Pearson’s \\(\\chi^2\\)\-test for goodness of fit. The model assumes that the observed vector of counts (like our `counts_BLJM_choice_pairs_vector` from above) follows a Multinomial distribution.[85](#fn85) Each vector of (hypothetical) data is associated with a test statistic, called \\(\\chi^2\\), which sums over the standardized squared deviation of the observed counts from the predicted baseline in each cell. It can be shown that, if the number of observations \\(N\\) is large enough, the sampling distribution of the \\(\\chi^2\\) test statistic is approximated well enough by the [\\(\\chi^2\\)\-distribution](selected-continuous-distributions-of-random-variables.html#app-91-distributions-chi2) with \\(k\-1\\) degrees of freedom (where \\(k\\) is the number of categories).[86](#fn86) Notice that the approximation by a \\(\\chi^2\\)\-distribution hinges on an approximation, which is only met when there are enough samples (just as we needed in the CLT). A rule\-of\-thumb is that at most 20% of all cells should have expected frequencies below 5 in order for the test to be applicable, i.e., \\(np\_i \< 5\\) for all \\(i\\) in Figure [16\.11](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-chi2-model-goodness).
Figure 16\.11: Graphical representation of Pearson’s \\(\\chi^2\\)\-test for goodness of fit (testing a vector of predicted proportion).
We can compute the \\(\\chi^2\\)\-value associated with the observed data \\(t(D\_{obs})\\) as follows:
```
# observed counts
n <- counts_BLJM_choice_pairs_vector
# proportion predicted
p <- rep(1/4, 4)
# expected number in each cell
e <- sum(n) * p
# chi-squared for observed data
chi2_observed <- sum((n - e)^2 * 1/e)
chi2_observed
```
```
## [1] 9.529412
```
We can then compare this value to the sampling distribution, which is a \\(\\chi^2\\)\-distribution with \\(k\-1 \= 3\\) degrees of freedom. We compute the \\(p\\)\-value associated with our data as the tail of the sampling distribution, as also shown in Figure [16\.12](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-chi2-plot):[87](#fn87)
```
p_value_BLJM <- 1 - pchisq(chi2_observed, df = 3)
```
Figure 16\.12: Sampling distribution for a Pearson’s \\(\\chi^2\\)\-test of goodness of fit (\\(\\chi^2\\)\-distribution with \\(k\-1 \= 3\\) degrees of freedom), testing a flat baseline null hypothesis based on the BLJM data.
Of course, these calculations can also be performed by using a built\-in R function, namely `chisq.test`:
```
counts_BLJM_choice_pairs_vector <- BLJM_associated_counts %>% pull(n)
chisq.test(counts_BLJM_choice_pairs_vector)
```
```
##
## Chi-squared test for given probabilities
##
## data: counts_BLJM_choice_pairs_vector
## X-squared = 9.5294, df = 3, p-value = 0.02302
```
The common interpretation of our calculations would be to say that the test yielded a significant result, at least at the significance level of \\(\\alpha \= 0\.5\\). In a research paper, we might report these results roughly as follows:
> Observed counts deviated significantly from what is expected if each category (here: pair of music\+subject choice) was equally likely (\\(\\chi^2\\)\-test, with \\(\\chi^2 \\approx 9\.53\\), \\(df \= 3\\) and \\(p \\approx 0\.023\\)).
Notice that this test is an “omnibus test of difference”. We can conclude from a significant test result that the whole vector of observations is unlikely to have been generated by chance. Still, we cannot conclude from this result (without doing anything else) why, where or how the observations deviated from the assumed prediction vector. Looking at the plot of the data in Figure [16\.10](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-BLJM-count-pairs-plot) above, it seems intuitive to think that Metal is disproportionally disfavored and that the combination of Biology and Jazz looks particularly outliery when compared to the baseline expectation.
#### 16\.6\.1\.2 Pearson’s \\(\\chi^2\\)\-test of independence
The previous test of goodness of fit does not allow us to address the lecturer’s conjecture that a preference of Metal over Jazz goes with a preference of Logic over Biology. A slightly different kind of \\(\\chi^2\\)\-test is better suited for this. In Pearson’s \\(\\chi^2\\)\-test of independence, we look at a two\-dimensional table of correlated data observations, like this one:
```
BLJM_table <- BLJM_associated_counts %>%
select(-category) %>%
pivot_wider(names_from = LB, values_from = n)
BLJM_table
```
```
## # A tibble: 2 × 3
## JM Biology Logic
## <chr> <int> <int>
## 1 Jazz 38 26
## 2 Metal 20 18
```
For easier computation and compatibility with the function `chisq.test`, we handle the same data but stored as a matrix:
```
counts_BLJM_choice_pairs_matrix <- matrix(
counts_BLJM_choice_pairs_vector,
nrow = 2,
byrow = T
)
rownames(counts_BLJM_choice_pairs_matrix) <- c("Jazz", "Metal")
colnames(counts_BLJM_choice_pairs_matrix) <- c("Biology", "Logic")
counts_BLJM_choice_pairs_matrix
```
```
## Biology Logic
## Jazz 38 26
## Metal 20 18
```
Pearson’s \\(\\chi^2\\)\-test of independence addresses the question of whether two\-dimensional tabular count data like the above could plausibly have been generated by a prediction vector \\(\\vec{p}\\), which results from the assumption that the realizations of row\- and column\-choices are [stochastically independent](Chap-03-01-probability-conditional.html#Chap-03-01-probability-independence). If row\- and column\-choices are independent, the probability of seeing an outcome result in cell \\(ij\\) is the probability of realizing row \\(i\\) times the probability of realizing column \\(j\\). So, under an independence assumption, we expect a matrix and a resulting vector of choice proportions like this:
```
# number of observations in total
N <- sum(counts_BLJM_choice_pairs_matrix)
# marginal proportions observed in the data
# the following is the vector r in the model graph
row_prob <- counts_BLJM_choice_pairs_matrix %>% rowSums() / N
# the following is the vector c in the model graph
col_prob <- counts_BLJM_choice_pairs_matrix %>% colSums() / N
# table of expected observations under independence assumption
# NB: %o% is the outer product of vectors
BLJM_expectation_matrix <- (row_prob %o% col_prob) * N
BLJM_expectation_matrix
```
```
## Biology Logic
## Jazz 36.39216 27.60784
## Metal 21.60784 16.39216
```
```
# the following is the vector p in the model graph
BLJM_expectation_vector <- as.vector(BLJM_expectation_matrix)
BLJM_expectation_vector
```
```
## [1] 36.39216 21.60784 27.60784 16.39216
```
#### 16\.6\.1\.1 Pearson’s \\(\\chi^2\\)\-test for goodness of fit
“Goodness of fit” is a term used in model checking (a.k.a. model criticism, model validation, …). In such a context, tests for goodness\-of\-fit investigate whether a model’s predictions are compatible with the observed data. Pearson’s \\(\\chi^2\\)\-test for goodness of fit does exactly this for categorical data.
Categorical data is data where each data observation falls into one of several unordered categories. If we have \\(k\\) such categories, a **prediction vector** \\(\\vec{p} \= \\langle p\_1, \\dots, p\_k \\rangle\\) is a probability vector of length \\(k\\) such that \\(p\_i\\) gives the probability with which a single data observation falls into the \\(i\\)\-th category. The likelihood of a single data observation is given by the [Categorical distribution](selected-discrete-distributions-of-random-variables.html#app-91-distributions-categorical), and the likelihood of \\(N\\) data observations is given by the [Multinomial distribution](selected-discrete-distributions-of-random-variables.html#app-91-distributions-multinomial). These are generalizations of the Bernoulli and Binomial distributions, which expand the case of two unordered categories to more than two unordered categories.
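For concreteness, here is a minimal sketch of how such likelihoods can be computed and sampled in base R; the counts and the flat prediction vector used here are made up purely for illustration:
```
# likelihood of a made-up vector of counts over four categories
# under a flat prediction vector (Multinomial distribution)
dmultinom(x = c(10, 20, 30, 40), prob = rep(1 / 4, 4))
# a single simulated vector of counts for 100 observations
# under the same flat prediction vector
rmultinom(n = 1, size = 100, prob = rep(1 / 4, 4))
```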
The BLJM data supplies us with categorical data. Here is the vector of counts of how many participants selected a given music\+subject pair:
```
# add category names
BLJM_associated_counts <- BLJM_associated_counts %>%
mutate(
category = str_c(
BLJM_associated_counts %>% pull(LB),
"-",
BLJM_associated_counts %>% pull(JM)
)
)
counts_BLJM_choice_pairs_vector <- BLJM_associated_counts %>% pull(n)
names(counts_BLJM_choice_pairs_vector) <- BLJM_associated_counts %>% pull(category)
counts_BLJM_choice_pairs_vector
```
```
## Biology-Jazz Logic-Jazz Biology-Metal Logic-Metal
## 38 26 20 18
```
Figure [16\.10](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-BLJM-count-pairs-plot) shows a crude plot of these counts, together with a baseline prediction of equal proportion in each category.
Figure 16\.10: Observed counts of choice pairs of music\+subject preference in the BLJM data.
Pearson’s \\(\\chi^2\\)\-test for goodness of fit allows us to test whether this data could plausibly have been generated by (a model whose predictions are given by) a prediction vector \\(\\vec{p} \= \\langle p\_1, \\dots, p\_4 \\rangle\\), where \\(p\_1\\) would be the predicted probability of a choice pair “Biology\-Jazz” occurring for a single participant, and so on. Frequently, this test is used to check whether an equal baseline distribution could have generated the data. We do that here, too. We form the null hypothesis that \\(\\vec{p} \= \\vec{p}\_0\\) with \\(p\_{0i} \= \\frac{1}{4}\\) for all categories \\(i\\).
Figure [16\.11](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-chi2-model-goodness) shows a graphical representation of the model implicitly assumed in the background for a Pearson’s \\(\\chi^2\\)\-test for goodness of fit. The model assumes that the observed vector of counts (like our `counts_BLJM_choice_pairs_vector` from above) follows a Multinomial distribution.[85](#fn85) Each vector of (hypothetical) data is associated with a test statistic, called \\(\\chi^2\\), which sums over the standardized squared deviation of the observed counts from the predicted baseline in each cell. It can be shown that, if the number of observations \\(N\\) is large enough, the sampling distribution of the \\(\\chi^2\\) test statistic is approximated well enough by the [\\(\\chi^2\\)\-distribution](selected-continuous-distributions-of-random-variables.html#app-91-distributions-chi2) with \\(k\-1\\) degrees of freedom (where \\(k\\) is the number of categories).[86](#fn86) Notice that the \\(\\chi^2\\)\-distribution is only an approximation of the true sampling distribution, which is adequate only when there are enough samples (just as for the CLT). A common rule of thumb is that the test is applicable only if at most 20% of all cells have expected frequencies below 5, i.e., \\(np\_i \\geq 5\\) should hold for at least 80% of the cells \\(i\\) in Figure [16\.11](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-chi2-model-goodness).
Figure 16\.11: Graphical representation of Pearson’s \\(\\chi^2\\)\-test for goodness of fit (testing a vector of predicted proportion).
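For the data at hand, this rule of thumb is clearly met: with 102 observations spread over four equally likely categories, the expected count is 25.5 in every cell. A quick one-liner to check this (using the count vector constructed above):
```
# expected counts under the flat baseline prediction vector;
# all cells are well above 5, so the rule of thumb is satisfied
sum(counts_BLJM_choice_pairs_vector) * rep(1 / 4, 4)
```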
We can compute the \\(\\chi^2\\)\-value associated with the observed data \\(t(D\_{obs})\\) as follows:
```
# observed counts
n <- counts_BLJM_choice_pairs_vector
# proportion predicted
p <- rep(1/4, 4)
# expected number in each cell
e <- sum(n) * p
# chi-squared for observed data
chi2_observed <- sum((n - e)^2 * 1/e)
chi2_observed
```
```
## [1] 9.529412
```
We can then compare this value to the sampling distribution, which is a \\(\\chi^2\\)\-distribution with \\(k\-1 \= 3\\) degrees of freedom. We compute the \\(p\\)\-value associated with our data as the tail of the sampling distribution, as also shown in Figure [16\.12](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-chi2-plot):[87](#fn87)
```
p_value_BLJM <- 1 - pchisq(chi2_observed, df = 3)
```
Figure 16\.12: Sampling distribution for a Pearson’s \\(\\chi^2\\)\-test of goodness of fit (\\(\\chi^2\\)\-distribution with \\(k\-1 \= 3\\) degrees of freedom), testing a flat baseline null hypothesis based on the BLJM data.
Of course, these calculations can also be performed by using a built\-in R function, namely `chisq.test`:
```
counts_BLJM_choice_pairs_vector <- BLJM_associated_counts %>% pull(n)
chisq.test(counts_BLJM_choice_pairs_vector)
```
```
##
## Chi-squared test for given probabilities
##
## data: counts_BLJM_choice_pairs_vector
## X-squared = 9.5294, df = 3, p-value = 0.02302
```
The common interpretation of our calculations would be to say that the test yielded a significant result at the conventional significance level of \\(\\alpha \= 0\.05\\). In a research paper, we might report these results roughly as follows:
> Observed counts deviated significantly from what is expected if each category (here: pair of music\+subject choice) was equally likely (\\(\\chi^2\\)\-test, with \\(\\chi^2 \\approx 9\.53\\), \\(df \= 3\\) and \\(p \\approx 0\.023\\)).
Notice that this test is an “omnibus test of difference”. We can conclude from a significant test result that the whole vector of observations is unlikely to have been generated by the assumed prediction vector (here: the flat baseline). Still, we cannot conclude from this result (without doing anything else) why, where or how the observations deviated from the assumed prediction vector. Looking at the plot of the data in Figure [16\.10](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-BLJM-count-pairs-plot) above, it seems intuitive to think that Metal is disproportionately disfavored and that the combination of Biology and Jazz stands out particularly strongly against the baseline expectation.
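One informal way to explore where the observed counts deviate most from the baseline is to inspect the Pearson residuals returned by `chisq.test`; this is just an exploratory sketch, not a formal post-hoc test:
```
# Pearson residuals: (observed - expected) / sqrt(expected);
# cells with large absolute values deviate most from the flat baseline
chisq.test(counts_BLJM_choice_pairs_vector)$residuals
```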
#### 16\.6\.1\.2 Pearson’s \\(\\chi^2\\)\-test of independence
The previous test of goodness of fit does not allow us to address the lecturer’s conjecture that a preference of Metal over Jazz goes with a preference of Logic over Biology. A slightly different kind of \\(\\chi^2\\)\-test is better suited for this. In Pearson’s \\(\\chi^2\\)\-test of independence, we look at a two\-dimensional table of correlated data observations, like this one:
```
BLJM_table <- BLJM_associated_counts %>%
select(-category) %>%
pivot_wider(names_from = LB, values_from = n)
BLJM_table
```
```
## # A tibble: 2 × 3
## JM Biology Logic
## <chr> <int> <int>
## 1 Jazz 38 26
## 2 Metal 20 18
```
For easier computation and compatibility with the function `chisq.test`, we handle the same data but stored as a matrix:
```
counts_BLJM_choice_pairs_matrix <- matrix(
counts_BLJM_choice_pairs_vector,
nrow = 2,
byrow = T
)
rownames(counts_BLJM_choice_pairs_matrix) <- c("Jazz", "Metal")
colnames(counts_BLJM_choice_pairs_matrix) <- c("Biology", "Logic")
counts_BLJM_choice_pairs_matrix
```
```
## Biology Logic
## Jazz 38 26
## Metal 20 18
```
Pearson’s \\(\\chi^2\\)\-test of independence addresses the question of whether two\-dimensional tabular count data like the above could plausibly have been generated by a prediction vector \\(\\vec{p}\\), which results from the assumption that the realizations of row\- and column\-choices are [stochastically independent](Chap-03-01-probability-conditional.html#Chap-03-01-probability-independence). If row\- and column\-choices are independent, the probability of seeing an outcome result in cell \\(ij\\) is the probability of realizing row \\(i\\) times the probability of realizing column \\(j\\). So, under an independence assumption, we expect a matrix and a resulting vector of choice proportions like this:
```
# number of observations in total
N <- sum(counts_BLJM_choice_pairs_matrix)
# marginal proportions observed in the data
# the following is the vector r in the model graph
row_prob <- counts_BLJM_choice_pairs_matrix %>% rowSums() / N
# the following is the vector c in the model graph
col_prob <- counts_BLJM_choice_pairs_matrix %>% colSums() / N
# table of expected observations under independence assumption
# NB: %o% is the outer product of vectors
BLJM_expectation_matrix <- (row_prob %o% col_prob) * N
BLJM_expectation_matrix
```
```
## Biology Logic
## Jazz 36.39216 27.60784
## Metal 21.60784 16.39216
```
```
# the following is the vector p in the model graph
BLJM_expectation_vector <- as.vector(BLJM_expectation_matrix)
BLJM_expectation_vector
```
```
## [1] 36.39216 21.60784 27.60784 16.39216
```
Figure [16\.13](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-chi2-model-independence) shows a graphical representation of the \\(\\chi^2\\)\-test of independence. The main difference to the previous test of goodness of fit is that we do no longer just fix any\-old prediction vector \\(\\vec{p}\\), but consider \\(\\vec{p}\\) the deterministic results of independence *and* the best estimates (based on the data at hand) of the row\- and column probabilities.
Figure 16\.13: Graphical representation of Pearson’s \\(\\chi^2\\)\-test for independence.
We can compute the observed \\(\\chi^2\\)\-test statistic and the \\(p\\)\-value as follows:
```
chi2_observed <- sum(
(counts_BLJM_choice_pairs_matrix - BLJM_expectation_matrix)^2 /
BLJM_expectation_matrix
)
p_value_BLJM <- 1 - pchisq(q = chi2_observed, df = 1)
round(p_value_BLJM, 5)
```
```
## [1] 0.50615
```
Figure [16\.14](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-chi2-plot-independence) shows the sampling distribution, the value of the test statistic for the observed data and the \\(p\\)\-value.
Figure 16\.14: Sampling distribution for a Pearson’s \\(\\chi^2\\) test of independence (\\(\\chi^2\\)\-distribution with \\(1\\) degree of freedom), testing a flat baseline null hypothesis based on the BLJM data.
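To convince ourselves that the \\(\\chi^2\\)\-distribution with one degree of freedom is a reasonable approximation of the sampling distribution in this case, we could also approximate the latter by Monte Carlo simulation. The following sketch (reusing `N`, `row_prob`, `col_prob` and `chi2_observed` from above) is not part of the standard test procedure, just a sanity check:
```
# approximate the sampling distribution of the test statistic under
# independence by repeated sampling from a Multinomial distribution
chi2_samples <- replicate(10000, {
  # simulate a 2x2 table of counts under the independence assumption
  counts_sim <- matrix(
    rmultinom(n = 1, size = N, prob = as.vector(row_prob %o% col_prob)),
    nrow = 2
  )
  # re-estimate the expected counts from the simulated table
  expected_sim <- (rowSums(counts_sim) %o% colSums(counts_sim)) / N
  sum((counts_sim - expected_sim)^2 / expected_sim)
})
# Monte Carlo approximation of the p-value;
# this should be close to the value computed above
mean(chi2_samples >= chi2_observed)
```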
We can also use the built\-in function `chisq.test` in R to obtain this result more efficiently:
```
chisq.test(
# supply data as a matrix, not as a vector, for a test of independence
counts_BLJM_choice_pairs_matrix,
# do not use the default correction (because we didn't introduce it)
correct = FALSE
)
```
```
##
## Pearson's Chi-squared test
##
## data: counts_BLJM_choice_pairs_matrix
## X-squared = 0.44202, df = 1, p-value = 0.5061
```
With a \\(p\\)\-value of about 0\.5061, we should conclude that there is no indication of strong evidence *against* the assumption of independence. Consequently, there is no evidence *in favor* of the lecturer’s conjecture of dependence of musical and academic preferences. In a research paper, we might report this result as follows:
> A \\(\\chi^2\\)\-test of independence did not yield a significant test result (\\(\\chi^2\\)\-test, with \\(\\chi^2 \\approx 0\.44\\), \\(df \= 1\\) and \\(p \\approx 0\.5\\)). Therefore, we cannot claim to have found any evidence for the research hypothesis of dependence.
**Exercise 16\.5: \\(\\chi^2\\)\-test of independence**
Let us assume that there are two unordered categorical variables \\(A\\) and \\(B\\). Categorical variable \\(A\\) has two levels \\(a\_1\\) and \\(a\_2\\). Categorical variable \\(B\\) has three levels \\(b\_1\\), \\(b\_2\\) and \\(b\_3\\). Let us further assume that the (marginal) probabilities of a choice from categories \\(A\\) or \\(B\\) are as follows:
\\\[
P(A\=a\_i)\=\\begin{cases}
0\.3 \&\\textbf{if \\(i\=1\\)} \\\\
0\.7 \&\\textbf{if \\(i\=2\\)}
\\end{cases}
\\quad P(B\=b\_i)\=\\begin{cases}
0\.2 \&\\textbf{if \\(i\=1\\)}\\\\
0\.3 \&\\textbf{if \\(i\=2\\)}\\\\
0\.5 \&\\textbf{if \\(i\=3\\)}
\\end{cases}
\\]
1. If observations of pairs of instances from categories \\(A\\) and \\(B\\) are stochastically independent, what would the expected joint probability of each pair of potential observations be?
Solution
| | \\(b\_1\\) | \\(b\_2\\) | \\(b\_3\\) |
| --- | --- | --- | --- |
| \\(a\_1\\) | .3 \\(\\times\\) .2 \= .06 | .3 \\(\\times\\) .3 \= .09 | .3 \\(\\times\\) .5 \= .15 |
| \\(a\_2\\) | .7 \\(\\times\\) .2 \= .14 | .7 \\(\\times\\) .3 \= .21 | .7 \\(\\times\\) .5 \= .35 |
2. Imagine you observe the following table of counts for each pair of instances of categories \\(A\\) and \\(B\\):
| | \\(b\_1\\) | \\(b\_2\\) | \\(b\_3\\) |
| --- | --- | --- | --- |
| \\(a\_1\\) | 1 | 26 | 3 |
| \\(a\_2\\) | 19 | 4 | 47 |
Which of the \\(p\\)\-values given below would you expect to see when feeding this table into a Pearson \\(\\chi^2\\)\-test of independence? (only one correct answer)
1. \\(p \\approx 1\\)
2. \\(p \\approx 0\.5\\)
3. \\(p \\approx 0\\)
4. I expect no result because the test is not suitable for this kind of data.
Solution
The correct answer is \\(p \\approx 0\\).
3. Explain the answer you gave in the previous part in at most three concise sentences.
Solution
As the marginal proportions of observed counts for the table in b. equal the marginal probabilities given above, the joint probability table in a. actually gives the predicted probabilities under the assumption of independence. Comparing prediction against observed proportion (obtained by dividing the table in b. by the total count of 100\), we see severe divergences, especially in the middle column.
### 16\.6\.2 *z*\-test
The Central Limit Theorem tells us that, given enough data, we can treat means of repeated samples from any arbitrary probability distribution as approximately normally distributed. Notice in addition that if \\(X\\) and \\(Y\\) are random variables following a normal distribution, then so is \\(Z \= X \- Y\\) (see also the [chapter on the normal distribution](selected-continuous-distributions-of-random-variables.html#app-91-distributions-normal)). It now becomes clear how research questions about means and differences between means (e.g., in the Mental Chronometry experiment) can be addressed, at least approximately: We conduct tests that hinge on a sampling distribution which is a normal distribution (usually a standard normal distribution).
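Here is a quick illustrative simulation of the latter point; the particular means and standard deviations are arbitrary:
```
# if X and Y are independent and normally distributed, so is Z = X - Y;
# its mean is the difference of means, its variance the sum of variances
x <- rnorm(100000, mean = 10, sd = 2)
y <- rnorm(100000, mean = 4, sd = 3)
z <- x - y
c(mean_z = mean(z), sd_z = sd(z), sd_theoretical = sqrt(2^2 + 3^2))
```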
The \\(z\\)\-test is perhaps the simplest of a family of tests that rely on normality of the sampling distribution. Unfortunately, what makes it so simple is also what makes it inapplicable in a wide range of cases. The \\(z\\)\-test assumes that a quantity that is normally distributed has an unknown mean (to be inferred by testing), but it also assumes that the *variance is known*. Since we do not know the variance in most cases of practical relevance, the \\(z\\)\-test needs to be replaced by a more adequate test, usually a test from the \\(t\\)\-test family, to be discussed below.
We start with the \\(z\\)\-test nonetheless because of the added benefit to our understanding. Figure [16\.15](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-z-test-model) shows the model that implicitly underlies a \\(z\\)\-test. It checks whether the data \\(\\vec{x}\\), which are assumed to be normally distributed with known \\(\\sigma\\), could have been generated by a hypothesized mean \\(\\mu \= \\mu\_0\\). The sampling distribution of the derived test statistic \\(z\\) is a standard normal distribution.
Figure 16\.15: Graphical representation of a \\(z\\)\-test.
We know that IQ test results are normally distributed around a mean of 100 with a standard deviation of 15\. This holds when the sample is representative of the whole population. But suppose we have reason to believe that the sample is from CogSci students. The standard deviation in a sample from CogSci students might still plausibly be fixed to 15, but we’d like to test the assumption that *this* sample was generated by a mean \\(\\mu \= 100\\), our null hypothesis.
For illustration, suppose we observed the following data set of IQ test results:
```
# fictitious IQ data
IQ_data <- c(87, 91, 93, 97, 100, 101, 103, 104,
104, 105, 105, 106, 108, 110, 111,
112, 114, 115, 119, 121)
mean(IQ_data)
```
```
## [1] 105.3
```
The mean of this data set is 105\.3\. Suspicious!
Following the model in Figure [16\.15](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-z-test-model), we calculate the value of the test statistic for the observed data.
```
# number of observations
N <- length(IQ_data)
# null hypothesis to test
mu_0 <- 100
# standard deviation (known/assumed as true)
sd <- 15
z_observed <- (mean(IQ_data) - mu_0) / (sd / sqrt(N))
z_observed %>% round(4)
```
```
## [1] 1.5802
```
We focus on a one\-sided \\(p\\)\-value because our “research” hypothesis is that CogSci students have, on average, a higher IQ. Since the observed mean of 105\.3 is higher than the value of 100 assumed under the null hypothesis, we test the null hypothesis \\(\\mu \= 100\\) against the alternative hypothesis that the data was generated by a mean *bigger* than 100 (which is exactly our research hypothesis).
As before, we can then compute the \\(p\\)\-value by checking the area under the sampling distribution, here a standard normal, in the appropriate way. Figure [16\.16](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-z-test) shows this result graphically.
```
p_value_IQ_data_ztest <- 1 - pnorm(z_observed)
p_value_IQ_data_ztest %>% round(6)
```
```
## [1] 0.057036
```
Figure 16\.16: Sampling distribution for a \\(z\\)\-test, testing the null hypothesis based on the assumption that the IQ\-data was generated by \\(\\mu \= 100\\) (with assumed/known \\(\\sigma\\)).
We can also use a ready\-made function for the \\(z\\)\-test. However, as the \\(z\\)\-test is so uncommon, it is not built into core R. We need to rely on the `BSDA` package to find the function `z.test`.
```
BSDA::z.test(x = IQ_data, mu = 100, sigma.x = 15, alternative = "greater")
```
```
##
## One-sample z-Test
##
## data: IQ_data
## z = 1.5802, p-value = 0.05704
## alternative hypothesis: true mean is greater than 100
## 95 percent confidence interval:
## 99.78299 NA
## sample estimates:
## mean of x
## 105.3
```
The conclusion to be drawn from this test could be formulated in a research report as follows:
> We tested the null hypothesis of a mean equal to 100, assuming a known standard deviation of 15, in a one\-sided \\(z\\)\-test against the alternative hypothesis that the data was generated by a mean greater than 100 (our research hypothesis). The test was not significant (\\(N \= 20\\), \\(z \\approx 1\.5802\\), \\(p \\approx 0\.05704\\)), giving us no indication of strong evidence against the assumption that the mean is at most 100\.
### 16\.6\.3 *t*\-tests
In most practical applications where a \\(z\\)\-test might be useful, the standard deviation is not known. If unknown, it should also not lightly be fixed by clever guess\-work. This is where the family of \\(t\\)\-tests comes in. We will look at two examples of these: the one\-sample \\(t\\)\-test, which compares one set of samples to a fixed mean, and the two\-sample \\(t\\)\-test, which compares the means of two sets of samples.
#### 16\.6\.3\.1 One\-sample \\(t\\)\-test
The simplest example of this family, namely a \\(t\\)\-test for one metric vector \\(\\vec{x}\\) of normally distributed observations, tests the null hypothesis that \\(\\vec{x}\\) was generated by some \\(\\mu \= \\mu\_0\\) (just like the \\(z\\)\-test). However, unlike the \\(z\\)\-test, a one\-sample \\(t\\)\-test does not assume that the standard deviation is known. It rather uses the observed data to obtain an estimate for this parameter. More concretely, a one\-sample \\(t\\)\-test for \\(\\vec{x}\\) estimates the standard deviation in the usual way (see Chapter [5](Chap-02-03-summary-statistics.html#Chap-02-03-summary-statistics)):
\\\[\\hat{\\sigma}\_x \= \\sqrt{\\frac{1}{n\-1} \\sum\_{i\=1}^n (x\_i \- \\mu\_{\\vec{x}})^2}\\]
Figure [16\.17](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-t-test-model-one-population) shows a graphical representation of a one\-sample \\(t\\)\-test model. The light shading of the node for the standard deviation indicates that this parameter is estimated from the observed data. Importantly, the distribution of the test statistic \\(t\\) is no longer well approximated by a normal distribution when the sample size is low. It is better captured by a [Student’s \\(t\\) distribution](selected-continuous-distributions-of-random-variables.html#app-91-distributions-students-t).
Figure 16\.17: Graphical representation of the model underlying a frequentist one\-sample \\(t\\)\-test. Notice that the lightly shaded node for the standard deviation represents that the value for this parameter is estimated from the data.
Let’s revisit our IQ\-data set from above to calculate a \\(t\\)\-test. Using a \\(t\\)\-test implies that we are now assuming that the standard deviation is actually unknown. We can calculate the value of the test statistic for the observed data and use this to compute a \\(p\\)\-value, much like in the case of the \\(z\\)\-test before.
```
N <- length(IQ_data)
# fix the null hypothesis
mean_0 <- 100
# unlike in a z-test, we use the sample to estimate the SD
sigma_hat <- sd(IQ_data)
t_observed <- (mean(IQ_data) - mean_0) / sigma_hat * sqrt(N)
t_observed %>% round(4)
```
```
## [1] 2.6446
```
We calculate the relevant one\-sided \\(p\\)\-value using the cumulative distribution function `pt` of the \\(t\\)\-distribution.
```
p_value_t_test_IQ <- 1 - pt(t_observed, df = N - 1)
p_value_t_test_IQ %>% round(6)
```
```
## [1] 0.007992
```
Figure 16\.18: Sampling distribution for a \\(t\\)\-test, testing the null hypothesis that the IQ\-data was generated by \\(\\mu \= 100\\) (with unknown \\(\\sigma\\)).
Compare these calculations against the built\-in function `t.test`:
```
t.test(x = IQ_data, mu = 100, alternative = "greater")
```
```
##
## One Sample t-test
##
## data: IQ_data
## t = 2.6446, df = 19, p-value = 0.007992
## alternative hypothesis: true mean is greater than 100
## 95 percent confidence interval:
## 101.8347 Inf
## sample estimates:
## mean of x
## 105.3
```
These results could be stated in a research report much like so:
> We tested the null hypothesis of a mean equal to 100, assuming an unknown standard deviation, using a one\-sided, one\-sample \\(t\\)\-test against the alternative hypothesis that the data was generated by a mean greater than 100 (our research hypothesis). The significant test result (\\(N \= 20\\), \\(t \\approx 2\.6446\\), \\(p \\approx 0\.007992\\)) suggests that the data provides strong evidence against the assumption that the mean is not bigger than 100\.
Notice that the conclusions we draw from the previous \\(z\\)\-test and this one\-sample \\(t\\)\-test are quite different. Why is this so? Well, it is because we (cheekily) chose a data set `IQ_data` that was actually *not* generated by a normal distribution with a standard deviation of 15, contrary to what we said about IQ\-scores normally having this standard deviation. The assumption about \\(\\sigma\\) fed into the \\(z\\)\-test was (deliberately!) wrong. The result of the \\(t\\)\-test, at least for this example, is better. The data in `IQ_data` are actually samples from \\(\\text{Normal}(105,10\)\\). This demonstrates why the one\-sample \\(t\\)\-test is usually preferred over a \\(z\\)\-test: unshakable, true knowledge of \\(\\sigma\\) is very rare.
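A quick side check makes the problem tangible: the sample standard deviation of the IQ data is nowhere near the value of 15 that the \\(z\\)\-test took for granted.
```
# sample standard deviation of the IQ data (roughly 9, far below 15)
sd(IQ_data) %>% round(2)
```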
#### 16\.6\.3\.2 Two\-sample \\(t\\)\-test (for unpaired data with equal variance and unequal sample sizes)
The “mother of all experimental designs” compares two groups of measurements. We give a drug to one group of patients, a placebo to another. We take a metric measure (say, blood sugar level) and ask whether there is a difference between these two groups. Section [9](ch-03-04-parameter-estimation.html#ch-03-04-parameter-estimation) introduced the \\(T\\)\-Test Model for a Bayesian approach. Here, we look at a corresponding model for a frequentist approach, a so\-called two\-sample \\(t\\)\-test. There are different kinds of such two\-sample \\(t\\)\-tests. The differences lie, e.g., in whether we assume that both groups have equal variance, in whether the sample sizes are the same in both groups, or in whether observations are paired (e.g., as in a within\-subjects design, where we get two measurements from each participant, one from each condition/group). Here, we focus on unpaired data (as from a between\-subjects design), assume equal variance but (possibly) unequal sample sizes. The case we look at is the [avocado data](app-93-data-sets-avocado.html#app-93-data-sets-avocado), where we want to specifically investigate whether the weekly average price of organically grown avocados is higher than that of conventionally grown avocados.[88](#fn88)
We here consider the preprocessed avocado data set (see Appendix Chapter [D.5](app-93-data-sets-avocado.html#app-93-data-sets-avocado) for details on how this preprocessing was performed).
```
avocado_data <- aida::data_avocado
```
Remember that the distribution of prices looks as follows:
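The plot itself appears in the rendered chapter; a minimal sketch that would produce a similar plot (assuming `ggplot2` is loaded) is:
```
# density of weekly average prices, split by type of avocado
avocado_data %>%
  ggplot(aes(x = average_price, fill = type)) +
  geom_density(alpha = 0.5)
```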
A graphical representation of the two\-sample \\(t\\)\-test (for unpaired data with equal variance and unequal sample sizes), which we will apply to this case, is shown in Figure [16\.19](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-t-test-model-two-populations). The model assumes that we have two vectors of metric measurements \\(\\vec{x}\_A\\) and \\(\\vec{x}\_B\\), with length \\(n\_A\\) and \\(n\_B\\), respectively. These are the price measures for conventionally grown and for organically grown avocados. The model assumes that measures in both \\(\\vec{x}\_A\\) and \\(\\vec{x}\_B\\) are i.i.d. samples from a normal distribution. The mean of one group (group \\(B\\) in the graph) is assumed to be some unknown \\(\\mu\\). Interestingly, this parameter will cancel out eventually: the approximation of the sampling distribution turns out to be independent of this parameter.[89](#fn89) The mean of the other group (group \\(A\\) in the graph) is computed as \\(\\mu \+ \\delta\\), so with some additive parameter \\(\\delta\\) indicating the difference between means of these groups. This \\(\\delta\\) is the main parameter of interest for inferences regarding hypotheses concerning differences between groups. Finally, the model assumes that both groups have the same standard deviation, an estimate of which is derived from the data (in a rather convoluted looking formula that is not important for our introductory concerns). As indicated in Figure [16\.19](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-t-test-model-two-populations), the sampling distribution for this model is an instance of Student’s \\(t\\)\-distribution with mean 0, standard deviation 1 and degrees of freedom \\(\\nu\\) given as \\(n\_A \+ n\_B \- 2\\).
Figure 16\.19: Graphical representation of the model underlying a frequentist two\-population \\(t\\)\-test (for unpaired data with equal variance and unequal sample sizes). Notice that the light shading of the node for the standard deviation indicates that the value for this parameter is estimated from the data.
Figure [16\.19](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-t-test-model-two-populations) gives us the template to compute the value of the test statistic for the observed data:
```
# fix the null hypothesis: no difference between groups
delta_0 <- 0
# data (group A)
x_A <- avocado_data %>%
filter(type == "organic") %>% pull(average_price)
# data (group B)
x_B <- avocado_data %>%
filter(type == "conventional") %>% pull(average_price)
# sample mean for organic (group A)
mu_A <- mean(x_A)
# sample mean for conventional (group B)
mu_B <- mean(x_B)
# numbers of observations
n_A <- length(x_A)
n_B <- length(x_B)
# variance estimate
sigma_AB <- sqrt(
( ((n_A - 1) * sd(x_A)^2 + (n_B - 1) * sd(x_B)^2 ) /
(n_A + n_B - 2) ) * (1/n_A + 1/n_B)
)
t_observed <- (mu_A - mu_B - delta_0) / sigma_AB
t_observed
```
```
## [1] 105.5878
```
We can use the value of the test statistic for the observed data to compute a one\-sided \\(p\\)\-value, as before. Notice that we use a one\-sided test because we hypothesize that organically grown avocados are more expensive, not just that they have a different price (more expensive or cheaper).
```
p_value_t_test_avocado <- 1 - pt(q = t_observed, df = n_A + n_B - 2)
p_value_t_test_avocado
```
```
## [1] 0
```
Owing to number imprecision, the calculated \\(p\\)\-value comes up as a flat zero. We have a lot of data, and the null hypothesis that organically grown avocados are not more expensive than conventionally grown ones is very hard to defend. This also shows in the corresponding picture in Figure [16\.20](ch-03-05-hypothesis-testing-tests.html#fig:ch-03-04-t-test-two-sample).
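If we want at least an order-of-magnitude impression of how small the \\(p\\)\-value really is, one technical workaround is to ask `pt` directly for the logarithm of the upper-tail probability:
```
# natural logarithm of the one-sided p-value;
# exponentiating this number would underflow to zero
pt(q = t_observed, df = n_A + n_B - 2, lower.tail = FALSE, log.p = TRUE)
```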
Figure 16\.20: Sampling distribution for a two\-sample \\(t\\)\-test, testing the null hypothesis of no difference between groups, based on the avocado data.
We can also, of course, calculate this test result with the built\-in function `t.test`:
```
t.test(
x = x_A, # first vector of data measurements
y = x_B, # second vector of data measurements
paired = FALSE, # measurements are to be treated as unpaired
var.equal = TRUE, # we assume equal variance in both groups
mu = 0 # NH is delta = 0 (name 'mu' is misleading!)
)
```
```
##
## Two Sample t-test
##
## data: x_A and x_B
## t = 105.59, df = 18247, p-value < 2.2e-16
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.4867522 0.5051658
## sample estimates:
## mean of x mean of y
## 1.653999 1.158040
```
The result could be reported as follows:
> We conducted a two\-sample \\(t\\)\-test of differences of means (unpaired samples, equal variance, unequal sample sizes) to compare the average weekly price of conventionally grown avocados to that of organically grown avocados. The test result indicates significant evidence against the null hypothesis that conventionally grown avocados are not cheaper (\\(N\_A \= 9123\\), \\(N\_B \= 9126\\), \\(t \\approx 105\.59\\), \\(p \\approx 0\\)).
**Exercise 16\.6: Two\-sample \\(t\\)\-test**
Your fellow student is skeptical of her flatmate’s claim that pizzas from place \\(A\\) have a smaller diameter than place \\(B\\) (both pizzerias have just one pizza size, namely \\(\\varnothing\\ 32\\ cm\\)). She decides to test that claim with a two\-sample \\(t\\)\-test and sets \\(H\_0: \\mu\_A \= \\mu\_B\\) (\\(\\delta \= 0\\)), \\(H\_a: \\mu\_A \< \\mu\_B\\), \\(\\alpha \= 0\.05\\). She then asks your class to always measure the pizza’s diameter if ordered from one of the two places. At the end of the semester, she has the following table:
| | Pizzeria \\(A\\) | Pizzeria \\(B\\) |
| --- | --- | --- |
| mean | 30\.9 | 31\.8 |
| standard deviation | 2\.3 | 2 |
| sample size | 38 | 44 |
1. How many degrees of freedom \\(\\nu\\) are there?
Solution
\\(\\nu \= n\_A\+n\_B\-2 \= 38\+44\-2 \= 80\\) degrees of freedom.
2. Given the table above, calculate the test statistic \\(t\\).
Solution
\\\[
\\hat{\\sigma}\=\\sqrt{\\frac{(n\_A\-1\)\\hat{\\sigma}\_A^2\+(n\_B\-1\)\\hat{\\sigma}^2\_B}{n\_A\+n\_B\-2}(\\frac{1}{n\_A}\+\\frac{1}{n\_B})}\\\\
\\hat{\\sigma}\=\\sqrt{\\frac{37\\cdot2\.3^2\+43\\cdot2^2}{80}(\\frac{1}{38}\+\\frac{1}{44})}\\approx 0\.47\\\\
t\=((\\bar{x}\_A\-\\bar{x}\_B)\-\\delta)\\cdot\\frac{1}{\\hat{\\sigma}}\\\\
t\=\\frac{30\.9\-31\.8}{0\.47}\\approx \-1\.91
\\]
3. Look at this so\-called [t table](http://www.ttable.org/) and determine the critical value to be exceeded in order to get a statistically significant result. NB: We are looking for the critical value that is on the *left* side of the distribution. So, in order to have a statistically significant result, the test statistic from b. has to be smaller than the *negated* critical value in the table.
Solution
The critical value is \-1\.664\.
4. Compare the test statistic from b. with the critical value from c. and interpret the result.
Solution
The calculated test statistic from b. is smaller than the critical value. We therefore know that the \\(p\\)\-value is statistically significant. The fellow student should reject the null hypothesis of equal pizza diameters.
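For readers who like to double-check such hand calculations, here is a small R cross-check of parts b. and c., using the numbers from the table above; small differences from the hand calculation are due to rounding:
```
# pooled standard error, observed test statistic and left-tail critical value
sigma_hat_pizza <- sqrt((37 * 2.3^2 + 43 * 2^2) / 80 * (1 / 38 + 1 / 44))
t_observed_pizza <- (30.9 - 31.8) / sigma_hat_pizza
critical_value <- qt(p = 0.05, df = 80)
c(t_observed = t_observed_pizza, critical_value = critical_value)
```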
### 16\.6\.4 ANOVA
ANOVA is short for “analysis of variance”.
It’s an umbrella term for a number of different models centered around testing the influence of one or several categorical predictors on a metric measurement.
In previous sections, we have used regression models for this task.
This is indeed the more modern and preferred approach, especially when the regression modeling also takes random effects (so\-called hierarchical modeling) into account.
Nonetheless, it is good to have a basic understanding of ANOVAs, as they are featured prominently in a lot of published research papers, whose findings are still relevant.
Also, in some areas of empirical science, ANOVAs are still commonly used.
Here we are just going to cover the most basic type of ANOVA, which is called a *one\-way ANOVA*.
A one\-way ANOVA is, in regression jargon, a suitable approach for the case of a single categorical predictor with more than two levels (otherwise a \\(t\\)\-test would be enough) and a metric dependent variable.
For illustration we will here consider a fictitious case of metric measurement for three groups: A, B, and C.
These groups are levels of a categorical predictor `group`.
We want to address the research question of whether the means of the measurements of groups A, B and C could plausibly be identical.
The main idea behind analysis of variance is *not* to look at the means of measurements to be compared, but rather to compare the *between\-group variances* to the *within\-group variances*.
Whence the name “analysis of variance”.
While mathematically complex, the idea is quite intuitive.
Figure [16\.21](ch-03-05-hypothesis-testing-tests.html#fig:ch-05-01-examples-F-score) shows four different (made\-up) data sets, each with different measurements for groups A, B and C.
It also shows the “pooled data”, i.e., the data from all three groups combined.
What is also shown in each panel is the so\-called F\-statistic, which is a number derived from a sample in the following way.
We have \\(k \\ge 2\\) groups of metric observations.
For group \\(1 \\le j \\le k\\), there are \\(n\_j\\) observations.
Let \\(x\_{ij}\\) be the observation \\(1 \\le i \\le n\_j\\) for group \\(1 \\le j \\le k\\).
Let \\(\\bar{x}\_j \= \\frac{1}{n\_j} \\sum\_{i \= 1}^{n\_j} x\_{ij}\\) be the mean of group \\(j\\) and let \\(\\bar{\\bar{x}} \= \\frac{1}{\\sum\_{j\=1}^{k} n\_j} \\sum\_{j\=1}^k \\sum\_{i\=1}^{n\_j} x\_{ij}\\) be the grand mean of all (pooled) data points.
The **between\-group variance** measures how much, on average, the mean of each group deviates from the grand mean of all data points (where distance is squared distance, as usual):
\\\[
\\hat{\\sigma}\_{\\mathrm{between}} \= \\frac{\\sum\_{j\=1}^k n\_j (\\bar{x}\_j \- \\bar{\\bar{x}})^2}{k\-1}
\\]
The **within\-group variance** is a measure of the average variance of the data points inside of each group:
\\\[
\\hat{\\sigma}\_{\\mathrm{within}} \= \\frac{\\sum\_{j\=1}^k \\sum\_{i\=1}^{n\_j} (x\_{ij} \- \\bar{x}\_j)^2}{\\sum\_{i\=1}^k (n\_i \- 1\)}
\\]
Now, if the means of different groups are rather different from each other, the between\-group variance should be high.
But absolute numbers may be misleading, so we need to scale the between\-group variance also by how much variance we see, on average, in each group, i.e., the within\-group variance.
That is why the \\(F\\)\-statistic is defined as:
\\\[
F \= \\frac{\\hat{\\sigma}\_{\\mathrm{between}}}{\\mathrm{\\hat{\\sigma}\_{\\mathrm{within}}}}
\\]
For illustration, Figure [16\.21](ch-03-05-hypothesis-testing-tests.html#fig:ch-05-01-examples-F-score) shows four different scenarios with associated measures of \\(F\\).
Figure 16\.21: Different examples of metric measurements for three groups (A, B, C), shown here together with a plot of the combined (\= pooled) data. We see that, as the means of measurements go apart, so does the ratio of between\-group variance and within\-group variance.
It can be shown that, under the assumption that the \\(k\\) groups have identical means, the sampling distribution of the \\(F\\) statistic follows an [\\(F\\)\-distribution](selected-continuous-distributions-of-random-variables.html#app-91-distributions-F) with appropriate parameters (which is, unsurprisingly, the distribution constructed for exactly this purpose):
\\\[
F \\sim F\\mathrm{\\text{\-}distribution}\\left(k \- 1, \\sum\_{i\=1}^k (n\_i \- 1\) \\right)
\\]
The complete frequentist model of a one\-way ANOVA is shown in Figure [16\.22](ch-03-05-hypothesis-testing-tests.html#fig:ch-05-01-ANOVA-onway-model).
Notice that the null hypothesis of equal means is not shown explicitly, but rather only a single mean \\(\\mu\\) is shown, which functions as the mean for all groups.
Figure 16\.22: Graphical representation of the model underlying a one\-way ANOVA.
Let’s consider some concrete, but fictitious data for a full example:
```
# fictitious data
x_A <- c(78, 43, 60, 60, 60, 50, 57, 58, 64, 64, 56, 62, 66, 53, 59)
x_B <- c(52, 53, 51, 49, 64, 60, 45, 50, 55, 65, 76, 62, 62, 45)
x_C <- c(78, 66, 74, 57, 75, 64, 64, 53, 63, 60, 79, 68, 68, 47, 63, 67)
# number of observations in each group
n_A <- length(x_A)
n_B <- length(x_B)
n_C <- length(x_C)
# in tibble form
anova_data <- tibble(
condition = c(
rep("A", n_A),
rep("B", n_B),
rep("C", n_C)
),
value = c(x_A, x_B, x_C)
)
```
Here’s a plot of this data:
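The plot itself appears in the rendered chapter; a minimal sketch that would produce a similar plot (assuming `ggplot2` is loaded) is:
```
# raw measurements per group, with group means highlighted
anova_data %>%
  ggplot(aes(x = condition, y = value)) +
  geom_jitter(width = 0.1, alpha = 0.6) +
  stat_summary(fun = mean, geom = "point", color = "firebrick", size = 3)
```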
We want to know whether it is plausible to entertain the idea that the means of these three groups are identical.
We can calculate the one\-way ANOVA explicitly as follows, following the calculations described in Figure [16\.22](ch-03-05-hypothesis-testing-tests.html#fig:ch-05-01-ANOVA-onway-model):
```
# compute grand_mean
grand_mean <- anova_data %>% pull(value) %>% mean()
# compute degrees of freedom (parameters to F-distribution)
df1 <- 2
df2 <- n_A + n_B + n_C - 3
# between-group variance
between_group_variance <- 1/df1 *
(
n_A * (mean(x_A) - grand_mean)^2 +
n_B * (mean(x_B) - grand_mean)^2 +
n_C * (mean(x_C) - grand_mean)^2
)
# within-group variance
within_group_variance <- 1/df2 *
(
sum((x_A - mean(x_A))^2) +
sum((x_B - mean(x_B))^2) +
sum((x_C - mean(x_C))^2)
)
# test statistic of observed data
F_observed <- between_group_variance / within_group_variance
# retrieving the p-value (using the F-distribution)
p_value_anova <- 1 - pf(F_observed, 2, n_A + n_B + n_C - 3)
p_value_anova %>% round(4)
```
```
## [1] 0.0172
```
Compare this to the result of calling R’s built\-in function `aov`:
```
aov(formula = value ~ condition, anova_data) %>% summary()
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## condition 2 640.8 320.4 4.485 0.0172 *
## Residuals 42 3000.3 71.4
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
To report these results, we could use a statement like this:
> Based on a one\-way ANOVA, we find evidence against the assumption of equal means across all groups (\\(F(2, 42\) \\approx 4\.485\\), \\(p \\approx 0\.0172\\)).
### 16\.6\.5 Linear regression
Significance testing for linear regression parameters follows the same logic as for other models as well.
In particular, it can be shown that the relevant test statistic for ML\-estimates of regression coefficients \\(\\hat\\beta\_i\\), under the assumption that the true model has \\(\\beta\_i \= 0\\), follows a \\(t\\)\-distribution.
We can run a linear regression model (with a Gaussian noise function) using the built\-in function `glm` (for “generalized linear model”):
```
fit_murder_mle <- glm(
formula = murder_rate ~ low_income,
data = aida::data_murder
)
```
If we inspect a summary for the model fit, we see the results of a \\(t\\)\-test, one for each coefficient, based on the null\-hypothesis that this coefficient’s true value is 0\.
```
summary(fit_murder_mle)
```
```
##
## Call:
## glm(formula = murder_rate ~ low_income, data = aida::data_murder)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -9.1663 -2.5613 -0.9552 2.8887 12.3475
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -29.901 7.789 -3.839 0.0012 **
## low_income 2.559 0.390 6.562 3.64e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for gaussian family taken to be 30.38125)
##
## Null deviance: 1855.20 on 19 degrees of freedom
## Residual deviance: 546.86 on 18 degrees of freedom
## AIC: 128.93
##
## Number of Fisher Scoring iterations: 2
```
So, in the case of the `murder_data`, we would conclude that there is strong evidence *against* the assumption that the data could have been generated by a model whose slope parameter for `low_income` is set to 0\.
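As an aside, the coefficient table can also be extracted programmatically, which is convenient for reporting; a minimal sketch:
```
# matrix of estimates, standard errors, t-values and p-values
summary(fit_murder_mle)$coefficients
```
Equivalently, `broom::tidy(fit_murder_mle)` returns essentially the same information as a tibble.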
### 16\.6\.6 Likelihood\-Ratio Test
The likelihood\-ratio (LR) test is a very popular frequentist method of model comparison.
The LR\-test assimilates model comparison to frequentist hypothesis testing.
It defines a suitable test statistic and supplies an approximation of the sampling distribution.
The LR\-test first and foremost applies to the comparison of **nested models**, but there are also results showing how approximate comparisons of non\-nested models can be obtained with an LR\-test ([Vuong 1989](#ref-Vuong1989:Likelihood-Rati)).
A frequentist model \\(M\_i\\) is **nested** inside another frequentist model \\(M\_j\\) iff \\(M\_i\\) can be obtained from \\(M\_j\\) by fixing at least one of \\(M\_j\\)’s free parameters to a specific value.
If \\(M\_i\\) is nested under \\(M\_j\\), \\(M\_i\\) is called the **nested model**, and \\(M\_j\\) is called the **nesting model** or the **encompassing model**.
Obviously, the nested model is simpler (of lower complexity) than the nesting model.
For example, we had the two\-parameter exponential model of forgetting previously in Chapter [10](Chap-03-06-model-comparison.html#Chap-03-06-model-comparison):
\\\[
\\begin{aligned}
P(D \= \\langle k, N \\rangle \\mid \\langle a, b\\rangle) \& \= \\text{Binom}(k,N, a \\exp (\-bt)), \\ \\ \\ \\ \\text{where } a,b\>0
\\end{aligned}
\\]
We wanted to explain the following “forgetting data”:
```
# time after memorization (in seconds)
t <- c(1, 3, 6, 9, 12, 18)
# proportion (out of 100) of correct recall
y <- c(.94, .77, .40, .26, .24, .16)
# number of observed correct recalls (out of 100)
obs <- y * 100
```
An example of a model that is nested under this two\-parameter model is the following one\-parameter model, which fixes \\(a \= 1\.1\\).
\\\[
\\begin{aligned}
P(D \= \\langle k, N \\rangle \\mid b) \& \= \\text{Binom}(k,N, 1\.1 \\ \\exp (\-bt)), \\ \\ \\ \\ \\text{where } b\>0
\\end{aligned}
\\]
Here’s an ML\-estimation for the nested model (the best fit for the nesting model `bestExpo` was obtained in Chapter [10](Chap-03-06-model-comparison.html#Chap-03-06-model-comparison)):
```
nLL_expo_nested <- function(b) {
# calculate predicted recall rates for given parameters
theta <- 1.1 * exp(-b * t) # one-param exponential model
# avoid edge cases of infinite log-likelihood
theta[theta <= 0.0] <- 1.0e-4
theta[theta >= 1.0] <- 1 - 1.0e-4
# return negative log-likelihood of data
- sum(dbinom(x = obs, prob = theta, size = 100, log = T))
}
bestExpo_nested <- optim(
nLL_expo_nested,
par = 0.5,
method = "Brent",
lower = 0,
upper = 20
)
bestExpo_nested
```
```
## $par
## [1] 0.1372445
##
## $value
## [1] 19.21569
##
## $counts
## function gradient
## NA NA
##
## $convergence
## [1] 0
##
## $message
## NULL
```
The LR\-test looks at the likelihood ratio of the nested model \\(M\_0\\) over the encompassing model \\(M\_1\\) using the following test statistic:
\\\[\\text{LR}(M\_1, M\_0\) \= \-2\\log \\left(\\frac{P\_{M\_0}(D\_\\text{obs} \\mid \\hat{\\theta}\_0\)}{P\_{M\_1}(D\_\\text{obs} \\mid \\hat{\\theta}\_1\)}\\right)\\]
We can calculate the value of this test statistic for the current example as follows:
```
LR_observed <- 2 * bestExpo_nested$value - 2 * bestExpo$value
LR_observed
```
```
## [1] 1.098429
```
If the simpler (nested) model is true, the sampling distribution of this test statistic approaches a \\(\\chi^2\\)\-distribution with \\(d\\) degrees of freedom as the amount of data grows.
The degrees of freedom \\(d\\) are given by the difference in free parameters, i.e., the number of parameters that the nested model fixes to specific values but which are free in the nesting model.
We can therefore calculate the \\(p\\)\-value for the LR\-test for our current example like so:
```
p_value_LR_test <- 1 - pchisq(LR_observed, 1)
p_value_LR_test
```
```
## [1] 0.2946111
```
The \\(p\\)\-value of this test quantifies the evidence against the assumption that the data was generated by the simpler model.
A significant test result would therefore indicate that it would be surprising if the data was generated by the simpler model.
This is usually taken as evidence in favor of the more complex, nesting model.
Given the current \\(p\\)\-value \\(p \\approx 0\.2946\\), we would conclude that there is no strong evidence against the simpler model.
This often leads researchers to favor the nested model due to its simplicity: the data at hand do not seem to warrant the added complexity of the nesting model, so the nested model seems to suffice.
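The whole procedure can be bundled into a small convenience function. The sketch below is merely illustrative (the helper name `LR_test` is not from the text) and reuses the fitted objects from above:
```
# LR-test from the minimized negative log-likelihoods of the nested and
# the nesting model; df = number of parameters fixed by the nested model
LR_test <- function(nll_nested, nll_nesting, df) {
  LR <- 2 * nll_nested - 2 * nll_nesting
  tibble(LR = LR, df = df, p_value = 1 - pchisq(LR, df))
}
LR_test(bestExpo_nested$value, bestExpo$value, df = 1)
```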
**Exercise 16\.7**
TRUE OR FALSE?
1. The nested model usually has more free parameters than the nesting model.
2. When we perform the LR\-test, we initially assume that the nested model is more plausible.
3. An LR\-test can only compare the nested model with nesting models.
4. If the LR\-test result has a \\(p\\)\-value equal to 1\.0, one can conclude that it’s a piece of evidence in favor of the simpler model.
Solution
1. False
2. True
3. False
4. True
17\.3 MC\-simulated \\(p\\)\-values
-----------------------------------
Let’s reconsider the 24/7 data set, where we have \\(k\=7\\) observations of ‘heads’ in \\(N\=24\\) tosses of a coin.
```
# 24/7 data
k_obs <- 7
n_obs <- 24
```
The question of interest is whether the coin is fair, i.e., whether \\(\\theta\_c \= 0\.5\\).
R’s built\-in function `binom.test` calculates a binomial test and produces a \\(p\\)\-value which is calculated precisely (since this is possible and cheap in this case).
```
binom.test(7,24)
```
```
##
## Exact binomial test
##
## data: 7 and 24
## number of successes = 7, number of trials = 24, p-value = 0.06391
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.1261521 0.5109478
## sample estimates:
## probability of success
## 0.2916667
```
It is also possible to approximate a \\(p\\)\-value by Monte Carlo simulation.
Notice that the definition of a \\(p\\)\-value repeated here from Section [16\.2](ch-03-05-hypothesis-p-values.html#ch-03-05-hypothesis-p-values) is just a statement about the probability that a random variable (from which we can take samples with MC simulation) delivers a value below a fixed threshold:
\\\[
p\\left(D\_{\\text{obs}}\\right) \= P\\left(T^{\|H\_0} \\succeq^{H\_{0,a}} t\\left(D\_{\\text{obs}}\\right)\\right)
\\]
So here goes:
```
# specify how many Monte Carlo samples to take
x_reps <- 500000
# build a vector of likelihoods (= the relevant test statistic)
# for hypothetical data observations, which are
# sampled based on the assumption that H0 is true
lhs <- map_dbl(1:x_reps, function(i) {
# hypothetical data assuming H0 is true
k_hyp <- rbinom(1, size = n_obs, prob = 0.5)
# likelihood of that hypothetical observation
dbinom(k_hyp, size = n_obs, prob = 0.5)
})
# likelihood (= test statistic) of the observed data
lh_obs = dbinom(k_obs, size = n_obs, prob = 0.5)
# proportion of samples with a lower or equal likelihood than
# the observed data
mean(lhs <= lh_obs) %>% show()
```
```
## [1] 0.06414
```
Monte Carlo sampling for \\(p\\)\-value approximation is always possible, even for cases where we cannot rely on known simplifying assumptions.
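Since the approximated \\(p\\)\-value is just the proportion of samples that meet the condition, its Monte Carlo error can be gauged with the usual standard error of a proportion. Here is a quick sketch (not from the original text), reusing the objects defined in the code above:
```
# Monte Carlo standard error of the simulated p-value
# (standard error of a proportion estimated from x_reps samples)
p_hat <- mean(lhs <= lh_obs)
mc_se <- sqrt(p_hat * (1 - p_hat) / x_reps)
round(c(p_hat = p_hat, MC_std_error = mc_se), 5)
```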
17\.4 Bayesian \\(p\\)\-values \& model checking
------------------------------------------------
The previous section showed how to approximate a \\(p\\)\-value with Monte Carlo sampling.
Notice that nothing in this sampling\-based approach hinges on the model having no free parameters.
Indeed, we can similarly approximate so\-called *Bayesian predictive \\(p\\)\-values*.
Bayesian predictive \\(p\\)\-values have a good role to play in Bayesian data analysis: they are one possible tool for *model checking* a.k.a. *model criticism*.
Suppose we have a Bayesian model for the binomial 24/7 data.
The model consists of the usual likelihood function, but also has a prior (maybe from previous research, or maybe obtained from training the model on a training data set):
\\\[
\\theta\_c \\sim \\text{Beta}(11,2\)
\\]
Notice that this is a biased prior, placing more weight on the idea that the coin is biased towards heads.
In model checking we ask whether the given model could be a plausible model for some data at hand.
We are not comparing models, we just “check” or “test” (!) the model as such.
Acing the test doesn’t mean that there could not be much better models.
Failing the test doesn’t mean that we know of a better model (we may just have to do more thinking).
Let’s approximate a Bayesian predictive \\(p\\)\-value for this Bayesian model and the 24/7 data.
The calculations are analogous to those in the previous section.
```
# 24/7 data
k_obs <- 7
n_obs <- 24
# specify how many Monte Carlo samples to take
x_reps <- 500000
# build a vector of likelihoods (= the relevant test statistic)
# for hypothetical data observations, which are
# sampled based on the assumption that the
# Bayesian model to be tested is true
lhs <- map_dbl(1:x_reps, function(i) {
# hypothetical data assuming the model is true
# first sample from the prior
# then sample from the likelihood
theta_hyp <- rbeta(1, 11, 2)
k_hyp <- rbinom(1, size = n_obs, prob = theta_hyp)
# likelihood of that hypothetical observation
dbinom(k_hyp, size = n_obs, prob = theta_hyp)
})
# likelihood (= test statistic) of the observed data
# determined using MC sampling
lh_obs = map_dbl(1:x_reps, function(i){
theta_hyp <- rbeta(1, 11, 2)
dbinom(k_obs, size = n_obs, prob = theta_hyp)
}) %>% mean()
# proportion of samples with a lower or equal likelihood than
# the observed data
mean(lhs <= lh_obs) %>% show()
```
```
## [1] 0.000176
```
This Bayesian predictive \\(p\\)\-value is rather low, suggesting that this model (prior \& likelihood) is *NOT* a good model for the 24/7 data set.
We can use Bayesian \\(p\\)\-values for any Bayesian model, whether built on a prior or posterior distribution.
A common application of Bayesian \\(p\\)\-values in model checking are so\-called **posterior predictive checks**.
We compute a Bayesian posterior for observed data \\(D\_\\text{obs}\\) and then test, via a Bayesian posterior predictive \\(p\\)\-value, whether the trained model is actually a good model for \\(D\_\\text{obs}\\) itself.
If the \\(p\\)\-value is high, that’s no cause for hysterical glee.
It just means that there is no cause for alarm.
If the Bayesian posterior predictive \\(p\\)\-value is very low, the posterior predictive test has failed, and that means that the model, even when trained on the data \\(D\_\\text{obs}\\), is *NOT* a good model of that very data.
The model must miss something crucial about the data \\(D\_\\text{obs}\\).
Better start researching what that is and build a better model if possible.
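To make this concrete for the 24/7 data, here is a sketch of a posterior predictive \\(p\\)\-value. For illustration, it assumes a flat \\(\\text{Beta}(1,1\)\\) prior, so that (by conjugacy) the posterior is \\(\\text{Beta}(1 \+ k, 1 \+ N \- k\)\\); the calculation mirrors the code above, only sampling \\(\\theta\\) from this posterior instead of the prior:
```
# posterior predictive p-value for the 24/7 data
# (illustrative sketch; assumes a flat Beta(1,1) prior, so the posterior
#  is Beta(1 + k_obs, 1 + n_obs - k_obs) by conjugacy)
lhs_post <- map_dbl(1:x_reps, function(i) {
  theta_hyp <- rbeta(1, 1 + k_obs, 1 + n_obs - k_obs)
  k_hyp <- rbinom(1, size = n_obs, prob = theta_hyp)
  dbinom(k_hyp, size = n_obs, prob = theta_hyp)
})
lh_obs_post <- map_dbl(1:x_reps, function(i) {
  theta_hyp <- rbeta(1, 1 + k_obs, 1 + n_obs - k_obs)
  dbinom(k_obs, size = n_obs, prob = theta_hyp)
}) %>% mean()
mean(lhs_post <= lh_obs_post)
```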
Most importantly, these considerations of Bayesian \\(p\\)\-values show that frequentist testing has a clear analog in the Bayesian realm, namely as model checking.
17\.5 Comparing Bayesian and frequentist estimates
--------------------------------------------------
As discussed in Chapter [9](ch-03-04-parameter-estimation.html#ch-03-04-parameter-estimation), parameter estimation is traditionally governed by two measures: (i) a point\-estimate for the best parameter value, and (ii) an interval\-estimate for a range of values that are considered “good enough”. Table [17\.1](ch-05-01-estimation-comparison.html#tab:ch-05-01-estimation-overview) gives the most salient answers that the Bayesian and the frequentist approaches give.
Table 17\.1: Common methods of obtaining point\-valued and interval\-range estimates for parameters, given some data, in frequentist and Bayesian approaches.
| estimate | Bayesian | frequentist |
| --- | --- | --- |
| best value | mean of posterior | maximum likelihood estimate |
| interval range | credible interval (HDI) | confidence interval |
For Bayesians, point\-valued and interval\-based estimates are just summary statistics to efficiently communicate about or reason with the main thing: the full posterior distribution.
For the frequentist, the point\-valued and interval\-based estimates might be all there is.
Computing a full posterior can be very hard.
Computing point\-estimates is usually much simpler.
Yet, all the trouble of having to specify priors, and having to calculate a much more complex mathematical object, can pay off.
An example which is intuitive enough is that of a likelihood function in a multi\-dimensional parameter space where there is an infinite collection of parameter values that maximize the likelihood function (think of a plateau).
Asking a godly oracle for “the” MLE can be disastrously misleading.
The full posterior will show the quirkiness.
In other words, to find an MLE can be an ill\-posed problem where exploring the posterior surface is not.
Practical issues aside, there are also conceptual arguments that can be pinned against each other.
Suppose you do not know the bias of a coin, you flip it once and it lands heads.
The case in mathematical notation: \\(k\=1\\), \\(N\=1\\).
As a frequentist, your “best” estimate of the coin’s bias is that it is 100% rigged: it will *never* land tails.
As a Bayesian with uninformative priors, your “best” estimate is, following the Laplace rule, \\(\\frac{k\+1}{N\+2} \= \\frac{2}{3}\\).
Notice that there might be different notions of what counts as “best” in place.
Still, the frequentist “best” estimate seems rather extreme.
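Spelled out in R (a trivial check of the two point estimates just mentioned):
```
# frequentist MLE vs. Bayesian posterior mean with flat priors (Laplace rule)
# for a single coin flip landing heads
k <- 1
N <- 1
c(MLE = k / N, Laplace = (k + 1) / (N + 2))
```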
What about interval\-ranged estimates?
Which is the better tool, confidence intervals or credible intervals?
– This is hard to answer.
Numerical simulations can help answer these questions.[90](#fn90)
The idea is simple but immensely powerful.
We simulate, repeatedly, a ground\-truth and synthetic results for fictitious experiments, and then we apply the statistical tests/procedures to these fictitious data sets.
Since we know the ground\-truth, we can check which tests/procedures got it right.
Let’s look at a simulation set\-up to compare credible intervals to confidence intervals, the latter of which are calculated by asymptotic approximation or the so\-called exact method (see the info\-box in Section [16\.5](ch-05-01-frequentist-testing-confidence-intervals.html#ch-05-01-frequentist-testing-confidence-intervals)).
To do so, we repeatedly sample a ground\-truth (e.g., a known coin bias \\(\\theta\_{\\text{true}}\\)) from a flat distribution over \\(\[0;1]\\).[91](#fn91)
We then simulate an experiment in a synthetic world with \\(\\theta\_{\\text{true}}\\), using a fixed value of \\(n\\), here taken from the set \\(n \\in \\left \\{ 10, 25, 100, 1000 \\right \\}\\).
We then construct two confidence intervals (one with the approximate and one with the exact method) as well as a 95% credible interval.
For each of the three interval estimates, we check whether the ground\-truth \\(\\theta\_{\\text{true}}\\) is *not* included in the interval.
We calculate the proportion of times such non\-inclusions (errors!) happen for each kind of interval estimate.
The code below implements this and the figure below shows the results based on 10,000 samples of \\(\\theta\_{\\text{true}}\\).
```
# how many "true" thetas to sample
n_samples <- 10000
# sample a "true" theta
theta_true <- runif(n = n_samples)
# create data frame to store results in
results <- expand.grid(
theta_true = theta_true,
n_flips = c(10, 25, 100, 1000)
) %>%
as_tibble() %>%
mutate(
outcome = 0,
norm_approx = 0,
exact = 0,
Bayes_HDI = 0
)
for (i in 1:nrow(results)) {
# sample fictitious experimental outcome for current true theta
results$outcome[i] <- rbinom(
n = 1,
size = results$n_flips[i],
prob = results$theta_true[i]
)
# get CI based on asymptotic Gaussian
norm_approx_CI <- binom::binom.confint(
results$outcome[i],
results$n_flips[i],
method = "asymptotic"
)
results$norm_approx[i] <- !(
norm_approx_CI$lower <= results$theta_true[i] &&
norm_approx_CI$upper >= results$theta_true[i]
)
# get CI based on exact method
exact_CI <- binom::binom.confint(
results$outcome[i],
results$n_flips[i],
method = "exact"
)
results$exact[i] <- !(
exact_CI$lower <= results$theta_true[i] &&
exact_CI$upper >= results$theta_true[i]
)
# get 95% HDI (flat priors)
Bayes_HDI <- binom::binom.bayes(
results$outcome[i],
results$n_flips[i],
type = "highest",
prior.shape1 = 1,
prior.shape2 = 1
)
results$Bayes_HDI[i] <- !(
Bayes_HDI$lower <= results$theta_true[i] &&
Bayes_HDI$upper >= results$theta_true[i]
)
}
results %>%
gather(key = "method", "Type_1", norm_approx, exact, Bayes_HDI) %>%
group_by(method, n_flips) %>%
dplyr::summarize(avg_type_1 = mean(Type_1)) %>%
ungroup() %>%
mutate(
method = factor(
method,
ordered = T,
levels = c("norm_approx", "Bayes_HDI", "exact")
)
) %>%
ggplot(aes(x = as.factor(n_flips), y = avg_type_1, color = method)) +
geom_point(size = 3) + geom_line(aes(group = method), size = 1.3) +
xlab("number of flips per experiment") +
ylab("proportion of exclusions of true theta")
```
These results show a few interesting things.
For one, looking at the error\-level of the exact confidence intervals, we see that the \\(\\alpha\\)\-level of frequentist statistics is an *upper bound* on the amount of error.
For a discrete sample space, the actual error rate can be substantially lower.
Second, the approximate method for computing confidence intervals is off unless the sample size warrants the approximation.
This stresses the importance of caring about when an approximation underlying a frequentist test is (not) warranted.
Thirdly, the Bayesian credible interval has a “perfect match” to the assumed \\(\\alpha\\)\-level for all sample sizes.
However, we must take into account that the simulation assumes that the Bayesian analysis “knows the true prior”.
We have actually sampled the latent parameter \\(\\theta\\) from a uniform distribution; and we have used a flat prior for the Bayesian calculations.
Obviously, the more the prior diverges from the true distribution, and the fewer observations we have, the more errors the Bayesian approach will make.
**Exercise 9\.5**
Pick the correct answer:
The most frequently used point\-estimate of Bayesian parameter estimation looks at…
1. …the median of the posterior distribution.
2. …the maximum likelihood estimate.
3. …the mean of the posterior distribution.
4. …the normalizing constant in Bayes rule.
Solution
Statement c.
is correct.
The most frequently used interval\-based estimate in frequentist approaches is…
1. …the support of the likelihood distribution.
2. …the confidence interval.
3. …the hypothesis interval.
4. …the 95% highest\-density interval of the maximum likelihood estimate.
Solution
Statement b.
is correct.
17\.10 Jeffreys\-Lindley paradox
--------------------------------
Often, Bayesian and frequentist methods yield qualitatively similar results.
But sometimes results diverge.
A prominent case of divergence is known as the Jeffreys\-Lindley paradox.
The case is not really a “paradox” in a strict sense.
It’s a case where the two approaches clearly diverge, and it draws attention to the differences between frequentist and Bayesian testing of point\-valued null hypotheses.
Let’s take the following data.
```
k = 49581
N = 98451
```
The point\-valued null hypothesis is whether the binomial rate is unbiased, so \\(\\theta\_c \= 0\.5\\).
```
binom.test(k, N)$p.value
```
```
## [1] 0.02364686
```
Based on the standard \\(\\alpha\\)\-level of \\(0\.05\\), frequentism thus prescribes to reject \\(H\_0\\).
In contrast, using the Savage\-Dickey method to compute the Bayes factor, we find strong support *in favor of* \\(H\_0\\).
```
dbeta(0.5, k + 1, N - k + 1)
```
```
## [1] 19.21139
```
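The Savage\-Dickey Bayes factor in favor of \\(H\_0\\) is the ratio of posterior to prior density at the null value \\(\\theta\_c \= 0\.5\\). Since the \\(\\text{Beta}(1,1\)\\) prior has density 1 at 0\.5, the one\-liner above already *is* the Bayes factor; spelled out explicitly (a sketch restating the same computation):
```
# Savage-Dickey: Bayes factor in favor of theta_c = 0.5 is the posterior
# density at 0.5 divided by the prior density at 0.5
BF_01 <- dbeta(0.5, k + 1, N - k + 1) / dbeta(0.5, 1, 1)
BF_01
```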
These methods give different results because they *are* conceptually completely different things. There is no genuine paradox.
Frequentist testing is a form of model checking.
The question addressed by the frequentist hypothesis test is whether a model that assumes that \\(\\theta\_c \= 0\.5\\) is such that, if we assume that this model is true, the data above appears surprising.
The Bayesian method used above hinges on the comparison of two models.
The question addressed by the Bayesian comparison\-based hypothesis test is which of two models better predicts the observed data from an *ex ante* point of view (i.e., before having seen the data): the first model assumes that \\(\\theta\_c \= 0\.5\\) and the second model assumes that \\(\\theta\_c \\sim \\text{Beta}(1,1\)\\).
For a large \\(N\\), like in the example at hand, it can be the case that \\(\\theta\_c \= 0\.5\\) is a bad explanation for the data, so that a model\-checking test rejects this null hypothesis.
At the same time, the alternative model with \\(\\theta\_c \\sim \\text{Beta}(1,1\)\\) is *even worse* than the model \\(\\theta\_c \= 0\.5\\), because it puts credence on many values for \\(\\theta\_c\\) that are very, very bad predictors of the data.
None of these considerations lend themselves to a principled argument for or against frequentism or Bayesianism.
The lesson to be learned is that these different approaches ask different questions (about models and data).
The agile data analyst will diligently check each concrete research context for which method is most conducive to gaining the insights relevant for the given purpose.
A.6 Further information on WebPPL
---------------------------------
WebPPL (pronounced “web people”) is a probabilistic programming language embedded in JavaScript. Unlike JavaScript, WebPPL does not support
looping constructs (such as `for` or `while`). Instead, it encourages a functional way of programming, using recursion and higher\-order functions. Please refer to this [tutorial](http://www.problang.org/chapters/app-06-intro-to-webppl.html) for examples and further explanations.
### A.6\.1 Primitives and sampling functions
We can use WebPPL to (easily) sample from probability distributions, many of which are already implemented and ready to use. A full list of built\-in primitive distributions can be found in the [documentation](https://webppl.readthedocs.io/en/master/distributions.html#primitives). If we would like to draw one sample from, say, a standard normal distribution, we could run `sample(Gaussian({mu: 0, sigma: 1}))`. A more convenient expression would be to just use the respective sampling function, in this case `gaussian({mu: 0, sigma: 1})` (notice the lowercase letter in the function name). Sampling functions can be combined with the `repeat()` function to take more than one sample, ultimately leading to better approximations.
Let’s look at a simple example to see how repeated sampling from a primitive distribution works. In the code box below, we take \\(1000\\) samples from a [beta distribution](selected-continuous-distributions-of-random-variables.html#app-91-distributions-beta) with parameters \\(\\alpha \= 4\\) and \\(\\beta \= 6\\) and visualize them (more on this below).
```
viz(repeat(1000, function() {beta({a: 4, b: 6})}));
```
### A.6\.2 Inference with `Infer()`
We might also want to create our own distribution objects (\= probability distributions). For this purpose, the built\-in function `Infer()` comes in pretty handy. It takes as input a function with no arguments and returns a distribution object. The function passed to `Infer()` is the sampling function that should be turned into a distribution object. Additionally, `Infer()` can take on another optional argument, namely the *method* for performing inference. If this argument is not specified, WebPPL will automatically choose a reasonable method for inference. More on this function and different methods [here](https://webppl.readthedocs.io/en/master/inference/index.html).
Here’s an example of how to perform inference using the MCMC method. The example is one of a logistic regression (based on very little data) and the model returns samples from the posterior predictive distribution for a previously unseen data point. Click on the yellowish box to check what the code does and how `Infer()` is used. Please re\-visit Chapter [9\.3\.1](Ch-03-03-estimation-algorithms.html#ch-03-03-MCMC) for more information on MCMC algorithms.
```
// training data
var xs = [-10, -5, 2, 6, 10]
var labels = [false, false, true, true, true]
// new data point to predict a label for
var x_new = 1
///fold:
var model = function() {
// priors of regression parameters
var beta_1 = gaussian(0, 1)
var beta_0 = gaussian(0, 1)
var sigmoid = function(x) {
return 1 / (1 + Math.exp(-1 * (beta_1 * x + beta_0)))
}
map2(
function(x, label) {
factor(Bernoulli({p: sigmoid(x)}).score(label))
},
xs,
labels)
return bernoulli(sigmoid(x_new))
}
viz.auto(Infer({method: 'MCMC', samples: 10000, burn: 2000}, model))
///
```
### A.6\.3 Visualization
WebPPL comes with a major benefit in that it makes plotting as easy as pie. All we have to do is basically wrap the `viz()` function of the `viz`\-package around our data, and depending on the nature of the data (continuous or discrete), WebPPL will automatically come up with a visualization of it. Of course, we can also explicitly tell WebPPL how we want our data to be plotted. Much like in `ggplot`, we just add the (abbreviated) plotting method to the function name. An explicit way of plotting a histogram, for instance, would be to call `viz.hist()`. The supported methods for data visualization are documented [here](https://github.com/probmods/webppl-viz).
In the example below, the data stored in variable `xs` is plotted once with the default `viz()` function and once with the explicit `viz.hist()` function. What do you notice with regard to the output?
```
var xs = [-2, -1, 1, 2, 3, 4, 4, 5];
viz(xs);
viz.hist(xs);
```
### A.6\.4 Installation
You can run WebPPL code directly from within the editor on [webppl.org](http://webppl.org/). If you want to install WebPPL locally, follow the steps below:
1. Install [git](https://git-scm.com/downloads).
2. Install [Node.js](https://nodejs.org/en/).
3. Run `npm install -g webppl` in your command line.
Run `npm update -g webppl` to update your current version of WebPPL.
These steps are also mentioned in the [documentation](https://webppl.readthedocs.io/en/master/installation.html).
### A.6\.5 Usage
Run WebPPL programs locally with `webppl FILE_NAME.wppl`.
### A.6\.6 Keyboard shortcuts (for in\-browser use)
* Press `Ctrl` \+ `Enter` to run code.
* Select code and press the `Tab` key to fix indentations.
* Press `Ctrl` \+ `/` to comment or uncomment code (apparently, this shortcut only works with an English keyboard).
### A.6\.7 Further resources
* [official website](http://webppl.org)
* [documentation](http://docs.webppl.org/en/master/)
* [short introduction tutorial](http://www.problang.org/chapters/app-06-intro-to-webppl.html)
* [Bayesian Data Analysis using Probabilistic Programs: Statistics as pottery](https://mhtess.github.io/bdappl/), a webbook on BDA with WebPPL by MH Tessler
B.1 Selected continuous distributions of random variables
---------------------------------------------------------
### B.1\.1 Normal distribution
One of the most important distribution families is the *Gaussian* or *normal family* because it fits many natural phenomena. Furthermore, the sampling distributions of many estimators depend on the normal distribution either because they are derived from normally distributed random variables or because they can be asymptotically approximated by a normal distribution for large samples (*Central limit theorem*).
Distributions of the normal family are symmetric with range \\((\-\\infty,\+\\infty)\\) and have two parameters \\(\\mu\\) and \\(\\sigma\\), respectively referred to as the *mean* and the *standard deviation* of the normal random variable. These parameters are examples of *location* and *scale* parameters. The normal distribution is located at \\(\\mu\\), and the choice of \\(\\sigma\\) scales its width. The distribution is symmetric, with most observations lying around the central peak \\(\\mu\\) and more extreme values being further away depending on \\(\\sigma\\).
\\\[X \\sim Normal(\\mu,\\sigma) \\ \\ \\text{, or alternatively written as: } \\ \\ X \\sim \\mathcal{N}(\\mu,\\sigma) \\]
Figure [B.1](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-normal-distribution-density) shows the probability density function of three normally distributed random variables with different parameters. Figure [B.2](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-normal-distribution-cumulative) shows the corresponding cumulative function of the three normal distributions.
Figure B.1: Examples of a probability density function of the normal distribution. Numbers in legend represent parameter pairs \\((\\mu, \\sigma)\\).
Figure B.2: The cumulative distribution functions of the normal distributions corresponding to the previous probability density functions.
**Probability density function**
\\\[f(x)\=\\frac{1}{\\sigma\\sqrt{2\\pi}}\\exp\\left(\-0\.5\\left(\\frac{x\-\\mu}{\\sigma}\\right)^2\\right)\\]
**Cumulative distribution function**
\\\[F(x)\=\\int\_{\-\\infty}^{x}f(t)dt\\]
**Expected value** \\(E(X)\=\\mu\\)
**Variance** \\(Var(X)\=\\sigma^2\\)
**Deviation and Coverage**
The normal distribution is often associated with the *68\-95\-99\.7 rule*. The values refer to the probability of a random data point landing within *one*, *two* or *three* standard deviations of the mean (Figure [B.3](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-normal-distribution-coverage) depicts these three intervals). For example, about 68% of the values drawn from a normal distribution are within one standard deviation \\(\\sigma\\) away from the mean \\(\\mu\\).
* \\(P(\\mu\-\\sigma \\leq X \\leq \\mu\+\\sigma) \= 0\.6827\\)
* \\(P(\\mu\-2\\sigma \\leq X \\leq \\mu\+2\\sigma) \= 0\.9545\\)
* \\(P(\\mu\-3\\sigma \\leq X \\leq \\mu\+3\\sigma) \= 0\.9973\\)
Figure B.3: The coverage of a normal distribution.
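These coverage values can be verified directly in R with the standard normal cumulative distribution function (a quick sketch):
```
# probability mass within 1, 2, and 3 standard deviations of the mean
# for a standard normal distribution
sapply(1:3, function(k) pnorm(k) - pnorm(-k))
```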
**Z\-transformation / standardization**
A special case of normally distributed random variables is the *standard normal* distributed variable with \\(\\mu\=0\\) and \\(\\sigma\=1\\): \\(Y\\sim Normal(0,1\)\\). Each normal distribution \\(X\\) can be converted into a standard normal distribution \\(Z\\) by *z\-transformation* (see equation below):
\\\[Z\=\\frac{X\-\\mu}{\\sigma}\\]
The advantage of standardization is that values from different scales can be compared because they become *scale\-independent* by z\-transformation.
**Alternative parameterization**
Often a normal distribution is parameterized in terms of its mean \\(\\mu\\) and variance \\(\\sigma^2\\). This is made explicit by writing \\(X\\sim Normal(\\mu, \\sigma^2\)\\) instead of \\(X\\sim Normal(\\mu, \\sigma)\\).
**Linear transformations**
1. If a normal random variable \\(X\\sim Normal(\\mu, \\sigma^2\)\\) is linearly transformed by \\(Y\=a\*X\+b\\), then the new random variable \\(Y\\) is again normally distributed with \\(Y \\sim Normal(a\\mu\+b,a^2\\sigma^2\)\\).
2. If \\(X\\sim Normal(\\mu\_x, \\sigma\_x^2\)\\) and \\(Y\\sim Normal(\\mu\_y, \\sigma\_y^2\)\\) are normally distributed and independent, then their sum is again normally distributed with \\(X\+Y \\sim Normal(\\mu\_x\+\\mu\_y, \\sigma\_x^2\+\\sigma\_y^2\)\\) (see the quick simulation check below).
#### B.1\.1\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a normal distribution:
```
var mu = 2; // mean
var sigma = 3; // standard deviation
var n_samples = 30000; // number of samples used for approximation
///fold:
viz(repeat(n_samples, function(x) {gaussian({mu: mu, sigma: sigma})}));
///
```
### B.1\.2 Chi\-squared distribution
The \\(\\chi^2\\)\-distribution is widely used in hypothesis testing in inferential statistics because many test statistics are approximately distributed as \\(\\chi^2\\)\-distribution.
The \\(\\chi^2\\)\-distribution is directly related to the standard normal distribution: The sum of the squares of \\(n\\) independent and standard normally distributed random variables \\(X\_1,X\_2,...,X\_n\\) is distributed according to a \\(\\chi^2\\)\-distribution with \\(n\\) *degrees of freedom*:
\\\[Y\=X\_1^2\+X\_2^2\+...\+X\_n^2\.\\]
The \\(\\chi^2\\)\-distribution is a skewed probability distribution with range \\(\[0,\+\\infty)\\) and only one parameter \\(n\\), the *degrees of freedom* (if \\(n\=1\\), then the range is \\((0,\+\\infty)\\)):
\\\[X\\sim \\chi^2(n).\\]
Figure [B.4](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-chi-squared-distribution-density) shows the probability density function of three \\(\\chi^2\\)\-distributed random variables with different values for the parameter. Notice that with increasing degrees of freedom, the \\(\\chi^2\\)\-distribution can be approximated by a normal distribution (for \\(n \\geq 30\\)). Figure [B.5](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-chi-squared-distribution-cumulative) shows the corresponding cumulative function of the three \\(\\chi^2\\)\-density distributions.
Figure B.4: Examples of a probability density function of the chi\-squared distribution.
Figure B.5: The cumulative distribution functions of the chi\-squared distributions corresponding to the previous probability density functions.
**Probability density function**
\\\[f(x)\=\\begin{cases}\\frac{x^{\\frac{n}{2}\-1}e^{\-\\frac{x}{2}}}{2^{\\frac{n}{2}}\\Gamma (\\frac{n}{2})} \&\\textrm{ for }x\>0,\\\\ 0 \&\\textrm{ otherwise.}\\end{cases}\\]
where \\(\\Gamma (\\frac{n}{2})\\) denotes the gamma function.
**Cumulative distribution function**
\\\[F(x)\=\\frac{\\gamma (\\frac{n}{2},\\frac{x}{2})}{\\Gamma (\\frac{n}{2})},\\]
with \\(\\gamma(s,x)\\) being the lower incomplete gamma function:
\\\[\\gamma(s,x)\=\\int\_0^x t^{s\-1}e^{\-t} dt.\\]
**Expected value** \\(E(X)\=n\\)
**Variance** \\(Var(X)\=2n\\)
**Transformations**
The sum of two independent \\(\\chi^2\\)\-distributed random variables \\(X \\sim \\chi^2(m)\\) and \\(Y \\sim \\chi^2(n)\\) is again a \\(\\chi^2\\)\-distributed random variable with \\(X\+Y \\sim \\chi^2(m\+n)\\).
#### B.1\.2\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a \\(\\chi^2\\)\-distribution:
```
var df = 1; // degrees of freedom
var n_samples = 30000; // number of samples used for approximation
///fold:
var chisq = function(nu) {
var y = sample(Gaussian({mu: 0, sigma: 1}));
if (nu == 1) {
return y*y;
} else {
return y*y+chisq(nu-1);
}
}
viz(repeat(n_samples, function(x) {chisq(df)}));
///
```
### B.1\.3 F\-distribution
The F\-distribution, named after R.A. Fisher, is particularly used in regression and variance analysis. It is defined by the ratio of two \\(\\chi^2\\)\-distributed random variables \\(X\\sim \\chi^2(m)\\) and \\(Y\\sim \\chi^2(n)\\), each divided by its degrees of freedom:
\\\[F\=\\frac{\\frac{X}{m}}{\\frac{Y}{n}}.\\]
The F\-distribution is a continuous skewed probability distribution with range \\((0,\+\\infty)\\) and two parameters \\(m\\) and \\(n\\), corresponding to the degrees of freedom of the two \\(\\chi^2\\)\-distributed random variables:
\\\[X \\sim F(m,n).\\]
Figure [B.6](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-F-distribution-density) shows the probability density function of three F\-distributed random variables with different parameter values. For a small number of degrees of freedom, the density is strongly skewed, with most of its mass on the left and a long right tail. When the degrees of freedom increase, the density becomes more and more symmetric. Figure [B.7](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-F-distribution-cumulative) shows the corresponding cumulative function of the three density distributions.
Figure B.6: Examples of a probability density function of the F\-distribution. Pairs of numbers in the legend are parameters \\((m,n)\\).
Figure B.7: The cumulative distribution functions of the F\-distributions corresponding to the previous probability density functions.
**Probability density function**
\\\[f(x)\=m^{\\frac{m}{2}}n^{\\frac{n}{2}} \\cdot \\frac{\\Gamma (\\frac{m\+n}{2})}{\\Gamma (\\frac{m}{2})\\Gamma (\\frac{n}{2})} \\cdot \\frac{x^{\\frac{m}{2}\-1}}{(mx\+n)^{\\frac{m\+n}{2}}} \\textrm{ for } x\>0,\\]
where \\(\\Gamma(x)\\) denotes the gamma function.
**Cumulative distribution function**
\\\[F(x)\=I\\left(\\frac{m \\cdot x}{m \\cdot x\+n},\\frac{m}{2},\\frac{n}{2}\\right),\\]
with \\(I(z,a,b)\\) being the regularized incomplete beta function:
\\\[I(z,a,b)\=\\frac{1}{B(a,b)} \\cdot \\int\_0^z t^{a\-1}(1\-t)^{b\-1} dt.\\]
**Expected value** \\(E(X) \= \\frac{n}{n\-2}\\) (for \\(n \\geq 3\\))
**Variance** \\(Var(X) \= \\frac{2n^2(n\+m\-2\)}{m(n\-4\)(n\-2\)^2}\\) (for \\(n \\geq 5\\))
#### B.1\.3\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on an F\-distribution:
```
var df1 = 12; // degrees of freedom 1
var df2 = 12; // degrees of freedom 2
var n_samples = 30000; // number of samples used for approximation
///fold:
var chisq = function(nu) {
var y = sample(Gaussian({mu: 0, sigma: 1}));
if (nu == 1) {
return y*y;
} else {
return y*y+chisq(nu-1);
}
}
var F = function(nu1, nu2) {
var X = chisq(nu1)/nu1;
var Y = chisq(nu2)/nu2;
return X/Y;
}
viz(repeat(n_samples, function(x) {F(df1, df2)}));
///
```
### B.1\.4 Student’s *t*\-distribution
The Student’s \\(t\\)\-distribution, or just \\(t\\)\-distribution for short, was discovered by William S. Gosset in 1908 ([Vallverdú 2016](#ref-vallverdu2015)), who published his work under the pseudonym “Student”. He worked at the Guinness factory and had to deal with the problem of small sample sizes, where using a normal distribution as an approximation can be too crude. To overcome this problem, Gosset conceived of the \\(t\\)\-distribution. Accordingly, this distribution is used in particular when the sample size is small and the variance unknown, which is often the case in reality. Its shape resembles the normal bell shape and has a peak at zero, but the \\(t\\)\-distribution is a bit lower and wider (bigger tails) than the normal distribution.
The *standard \\(t\\)\-distribution* consists of a standard\-normally distributed random variable \\(X \\sim \\text{Normal}(0,1\)\\) and a \\(\\chi^2\\)\-distributed random variable \\(Y \\sim \\chi^2(n)\\) (\\(X\\) and \\(Y\\) are independent):
\\\[T \= \\frac{X}{\\sqrt{Y / n}}.\\]
The \\(t\\)\-distribution has the range \\((\-\\infty,\+\\infty)\\) and one parameter \\(\\nu\\), the degrees of freedom. The degrees of freedom can be calculated by the sample size \\(n\\) minus one:
\\\[t \\sim \\text{Student\-}t(\\nu \= n \-1\).\\]
Figure [B.8](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-t-distribution-density) shows the probability density function of three \\(t\\)\-distributed random variables with different parameters, and Figure [B.9](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-t-distribution-cumulative) shows the corresponding cumulative functions. Notice that for small degrees of freedom \\(\\nu\\), the \\(t\\)\-distribution has bigger tails. This is because the \\(t\\)\-distribution was specially designed to provide more conservative test results when analyzing small samples. When the degrees of freedom increase, the \\(t\\)\-distribution approaches a normal distribution. For \\(\\nu \\geq 30\\), this approximation is quite good.
Figure B.8: Examples of a probability density function of the \\(t\\)\-distribution.
Figure B.9: The cumulative distribution functions of the \\(t\\)\-distributions corresponding to the previous probability density functions.
**Probability density function**
\\\[ f(x, \\nu)\=\\frac{\\Gamma(\\frac{\\nu\+1}{2})}{\\sqrt{\\nu\\pi} \\cdot \\Gamma(\\frac{\\nu}{2})}\\left(1\+\\frac{x^2}{\\nu}\\right)^{\-\\frac{\\nu\+1}{2}},\\]
with \\(\\Gamma(x)\\) denoting the gamma function.
**Cumulative distribution function**
\\\[F(x, \\nu)\=I\\left(\\frac{x\+\\sqrt{x^2\+\\nu}}{2\\sqrt{x^2\+\\nu}},\\frac{\\nu}{2},\\frac{\\nu}{2}\\right),\\]
where \\(I(z,a,b)\\) denotes the regularized incomplete beta function:
\\\[I(z,a,b)\=\\frac{1}{B(a,b)} \\cdot \\int\_0^z t^{a\-1}(1\-t)^{b\-1} \\text{d}t.\\]
**Expected value** \\(E(X) \= 0\\)
**Variance** \\(Var(X) \= \\frac{\\nu}{\\nu\-2}\\) (for \\(\\nu \> 2\\))
#### B.1\.4\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a \\(t\\)\-distribution:
```
var df = 3; // degrees of freedom
var n_samples = 30000; // number of samples used for approximation
///fold:
var chisq = function(nu) {
var y = sample(Gaussian({mu: 0, sigma: 1}));
if (nu == 1) {
return y*y;
} else {
return y*y+chisq(nu-1);
}
}
var t = function(nu) {
var X = sample(Gaussian({mu: 0, sigma: 1}));
var Y = chisq(nu);
return X/Math.sqrt(Y/nu);
}
viz(repeat(n_samples, function(x) {t(df)}));
///
```
Beyond the standard \\(t\\)\-distribution there are also generalized \\(t\\)\-distributions taking three parameters \\(\\nu\\), \\(\\mu\\) and \\(\\sigma\\), where the latter two are location and scale parameters, analogous to the mean and standard deviation of the normal distribution.
### B.1\.5 Beta distribution
The beta distribution creates a continuous distribution of numbers between 0 and 1\. Therefore, this distribution is useful if the uncertain quantity is bounded by 0 and 1 (or 100%), is continuous, and has a single mode. In Bayesian Data Analysis, the beta distribution has a special standing as prior distribution for a [Bernoulli](selected-discrete-distributions-of-random-variables.html#app-91-distributions-bernoulli) or [binomial](selected-discrete-distributions-of-random-variables.html#app-91-distributions-binomial) likelihood. The reason for this is that a combination of a beta prior and a Bernoulli (or binomial) likelihood results in a posterior distribution with the same form as the beta distribution. Such priors are referred to as *conjugate priors* (see Chapter [9\.1\.3](ch-03-03-estimation-bayes.html#ch-03-04-parameter-estimation-conjugacy)).
A beta distribution has two parameters \\(a\\) and \\(b\\) (sometimes also represented in Greek letters \\(\\alpha\\) and \\(\\beta\\)):
\\\[X \\sim Beta(a,b).\\]
The two parameters can be interpreted as the number of observations made, such that: \\(n\=a\+b\\). If \\(a\\) and \\(b\\) get bigger, the beta distribution gets narrower. If only \\(a\\) gets bigger, the distribution moves rightward, and if only \\(b\\) gets bigger, the distribution moves leftward. As the parameters define the shape of the distribution, they are also called *shape parameters*. A Beta(1,1\) is equivalent to a uniform distribution. Figure [B.10](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-beta-distribution-density) shows the probability density function of four beta distributed random variables with different parameter values. Figure [B.11](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-beta-distribution-cumulative) shows the corresponding cumulative functions.
Figure B.10: Examples of a probability density function of the beta distribution. Pairs of numbers in the legend represent parameters \\((a, b)\\).
Figure B.11: The cumulative distribution functions of the beta distributions corresponding to the previous probability density functions.
**Probability density function**
\\\[f(x)\=\\frac{x^{(a\-1\)} (1\-x)^{(b\-1\)}}{B(a,b)},\\]
where \\(B(a,b)\\) is the beta function:
\\\[B(a,b)\=\\int^1\_0 \\theta^{(a\-1\)} (1\-\\theta)^{(b\-1\)}d\\theta.\\]
**Cumulative distribution function**
\\\[F(x)\=\\frac{B(x;a,b)}{B(a,b)},\\]
where \\(B(x;a,b)\\) is the incomplete beta function:
\\\[B(x;a,b)\=\\int^x\_0 t^{(a\-1\)} (1\-t)^{(b\-1\)} dt,\\]
and \\(B(a,b)\\) the (complete) beta function:
\\\[B(a,b)\=\\int^1\_0 \\theta^{(a\-1\)} (1\-\\theta)^{(b\-1\)}d\\theta.\\]
**Expected value**
Mean: \\(E(X)\=\\frac{a}{a\+b}\\)
Mode: \\(\\omega\=\\frac{(a\-1\)}{a\+b\-2}\\)
**Variance**
Variance: \\(Var(X)\=\\frac{ab}{(a\+b)^2(a\+b\+1\)}\\)
Concentration: \\(\\kappa\=a\+b\\) (related to variance such that the bigger \\(a\\) and \\(b\\) are, the narrower the distribution)
**Reparameterization of the beta distribution**
Sometimes it is helpful (and more intuitive) to write the beta distribution in terms of its mode \\(\\omega\\) and concentration \\(\\kappa\\) instead of \\(a\\) and \\(b\\):
\\\[Beta(a,b)\=Beta(\\omega(\\kappa\-2\)\+1, (1\-\\omega)(\\kappa\-2\)\+1\), \\textrm{ for } \\kappa \> 2\.\\]
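A small helper function makes this reparameterization easy to use in practice; the sketch below is illustrative only (the function name is not from the text):
```
# translate mode (omega) and concentration (kappa) into shape parameters a, b
beta_shapes_from_mode <- function(omega, kappa) {
  stopifnot(kappa > 2, omega >= 0, omega <= 1)
  c(a = omega * (kappa - 2) + 1, b = (1 - omega) * (kappa - 2) + 1)
}
beta_shapes_from_mode(omega = 0.25, kappa = 10)
```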
#### B.1\.5\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a beta distribution:
```
var a = 2; // shape parameter alpha
var b = 4; // shape parameter beta
var n_samples = 30000; // number of samples used for approximation
///fold:
viz(repeat(n_samples, function(x) {beta({a: a, b: b})}));
///
```
### B.1\.6 Uniform distribution
The (continuous) uniform distribution takes values within a specified range \\(a\\) and \\(b\\) that have constant probability. Due to its shape, the distribution is also sometimes called *rectangular distribution*. The uniform distribution is common for random number generation. In Bayesian Data Analysis, it is often used as prior distribution to express *ignorance*. This can be thought of in the following way: When different events are possible, but no (reliable) information exists about their probability of occurrence, the most conservative (and also intuitive) choice would be to assign probability in such a way that all events are equally likely to occur. The uniform distribution models this intuition and generates a completely random number in some interval \\(\[a,b]\\).
The distribution is specified by two parameters: the endpoints \\(a\\) (minimum) and \\(b\\) (maximum).
\\\[X \\sim Uniform(a,b) \\ \\ \\text{or alternatively written as: } \\ \\ \\mathcal{U}(a,b)\\]
When \\(a\=0\\) and \\(b\=1\\), the distribution is referred to as *standard* uniform distribution. Figure [B.12](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-uniform-distribution-density) shows the probability density function of two uniformly distributed random variables with different parameter values. Figure [B.13](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-uniform-distribution-cumulative) shows the corresponding cumulative functions.
Figure B.12: Examples of a probability density function of the uniform distribution. Pairs of numbers in the legend are parameter values \\((a,b)\\).
Figure B.13: The cumulative distribution functions of the uniform distributions corresponding to the previous probability density functions.
**Probability density function**
\\\[f(x)\=\\begin{cases} \\frac{1}{b\-a} \&\\textrm{ for } x \\in \[a,b],\\\\0 \&\\textrm{ otherwise.}\\end{cases}\\]
**Cumulative distribution function**
\\\[F(x)\=\\begin{cases}0 \& \\textrm{ for } x\<a,\\\\\\frac{x\-a}{b\-a} \&\\textrm{ for } a\\leq x \< b,\\\\ 1 \&\\textrm{ for }x \\geq b. \\end{cases}\\]
**Expected value** \\(E(X)\=\\frac{a\+b}{2}\\)
**Variance** \\(Var(X)\=\\frac{(b\-a)^2}{12}\\)
#### B.1\.6\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a uniform distribution:
```
var a = 0; // lower bound
var b = 1; // upper bound (> a)
var n_samples = 30000; // number of samples used for approximation
///fold:
viz(repeat(n_samples, function(x) {uniform({a: a, b: b})}));
///
```
### B.1\.7 Dirichlet distribution
The Dirichlet distribution is a multivariate generalization of the beta distribution: While the beta distribution is a distribution over binomials, the Dirichlet is a distribution over multinomials.
It can be used in any situation where an entity has to necessarily fall into one of \\(n\+1\\) mutually exclusive subclasses, and the goal is to study the proportion of entities belonging to the different subclasses.
The Dirichlet distribution is commonly used as *prior distribution* in Bayesian statistics, as this family is a *conjugate prior* for the categorical distribution and the multinomial distribution.
The Dirichlet distribution \\(\\mathcal{Dir}(\\alpha)\\) is a family of continuous multivariate probability distributions, parameterized by a vector \\(\\alpha\\) of positive reals. Thus, it is a distribution with \\(k\\) positive parameters \\(\\alpha\_1, \\dots, \\alpha\_k\\), defined with respect to a \\(k\\)\-dimensional space.
\\\[X \\sim \\mathcal{Dirichlet}(\\boldsymbol{\\alpha})\\]
The probability density function (see formula below) of the Dirichlet distribution for \\(k\\) random variables is defined on a \\(k\-1\\)\-dimensional probability *simplex* embedded in \\(k\\)\-dimensional space. How does the parameter \\(\\alpha\\) influence the Dirichlet distribution?
* Values of \\(\\alpha\_i\<1\\) can be thought of as anti\-weight that pushes away \\(x\_i\\) toward extremes (see upper left panel of Figure [B.14](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-dirichlet-distribution-density)).
* If \\(\\alpha\_1\=...\=\\alpha\_k\=1\\), then the points are uniformly distributed (see upper right panel).
* Higher values of \\(\\alpha\_i\\) lead to greater “weight” of \\(X\_i\\) and a greater amount of the total “mass” assigned to it (see lower left panel).
* If all \\(\\alpha\_i\\) are equal, the distribution is symmetric (see lower right panel for an asymmetric distribution).
Figure B.14: Examples of a probability density function of the Dirichlet distribution with dimension \\(k\\) for different parameter vectors \\(\\alpha\\).
**Probability density function**
\\\[f(x)\=\\frac{\\Gamma\\left(\\sum\_{i\=1}^{n\+1} \\alpha\_i\\right)}{\\prod\_{i\=1}^{n\+1}\\Gamma(\\alpha\_i)}\\prod\_{i\=1}^{n\+1}p\_i^{\\alpha\_i\-1},\\]
with \\(\\Gamma(x)\\) denoting the gamma function and
\\\[p\_i\=\\frac{X\_i}{\\sum\_{j\=1}^{n\+1}X\_j}, 1\\leq i\\leq n,\\]
where \\(X\_1,X\_2,...,X\_{n\+1}\\) are independent gamma random variables with \\(X\_i \\sim Gamma(\\alpha\_i,1\)\\).
**Expected value** \\(E(p\_i)\=\\frac{\\alpha\_i}{t}, \\textrm{ with } t\=\\sum\_{i\=1}^{n\+1}\\alpha\_i\\)
**Variance** \\(Var(p\_i)\=\\frac{\\alpha\_i(t\-\\alpha\_i)}{t^2(t\+1\)}, \\textrm{ with } t\=\\sum\_{i\=1}^{n\+1}\\alpha\_i\\)
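The gamma\-based construction used in the density above also gives a direct way to sample from a Dirichlet distribution in R; here is an illustrative sketch (the helper name is not from the text):
```
# draw one sample from Dirichlet(alpha) by normalizing independent
# Gamma(alpha_i, 1) draws
rdirichlet_one <- function(alpha) {
  x <- rgamma(length(alpha), shape = alpha, rate = 1)
  x / sum(x)
}
rdirichlet_one(c(1, 1, 5))
```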
#### B.1\.7\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a Dirichlet distribution:
```
var alpha = Vector([1, 1, 5]); // concentration parameter
var n_samples = 1000; // number of samples used for approximation
///fold:
var model = function() {
var dir_sample = dirichlet({alpha: alpha})
return({"x_1" : dir_sample.data["0"], "x_2" : dir_sample.data["1"]})
}
viz(Infer({method : "rejection", samples: n_samples}, model))
///
```
### B.1\.1 Normal distribution
One of the most important distribution families is the *Gaussian* or *normal family* because it fits many natural phenomena. Furthermore, the sampling distributions of many estimators depend on the normal distribution either because they are derived from normally distributed random variables or because they can be asymptotically approximated by a normal distribution for large samples (*Central limit theorem*).
Distributions of the normal family are symmetric with range \\((\-\\infty,\+\\infty)\\) and have two parameters \\(\\mu\\) and \\(\\sigma\\), respectively referred to as the *mean* and the *standard deviation* of the normal random variable. These parameters are examples of *location* and *scale* parameters. The normal distribution is located at \\(\\mu\\), and the choice of \\(\\sigma\\) scales its width. The distribution is symmetric, with most observations lying around the central peak \\(\\mu\\) and more extreme values being further away depending on \\(\\sigma\\).
\\\[X \\sim Normal(\\mu,\\sigma) \\ \\ \\text{, or alternatively written as: } \\ \\ X \\sim \\mathcal{N}(\\mu,\\sigma) \\]
Figure [B.1](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-normal-distribution-density) shows the probability density function of three normally distributed random variables with different parameters. Figure [B.2](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-normal-distribution-cumulative) shows the corresponding cumulative function of the three normal distributions.
Figure B.1: Examples of a probability density function of the normal distribution. Numbers in legend represent parameter pairs \\((\\mu, \\sigma)\\).
Figure B.2: The cumulative distribution functions of the normal distributions corresponding to the previous probability density functions.
**Probability density function**
\\\[f(x)\=\\frac{1}{\\sigma\\sqrt{2\\pi}}\\exp\\left(\-0\.5\\left(\\frac{x\-\\mu}{\\sigma}\\right)^2\\right)\\]
**Cumulative distribution function**
\\\[F(x)\=\\int\_{\-\\infty}^{x}f(t)\\,dt\\]
**Expected value** \\(E(X)\=\\mu\\)
**Variance** \\(Var(X)\=\\sigma^2\\)
**Deviation and Coverage**
The normal distribution is often associated with the *68\-95\-99\.7 rule*. The values refer to the probability of a random data point landing within *one*, *two* or *three* standard deviations of the mean (Figure [B.3](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-normal-distribution-coverage) depicts these three intervals). For example, about 68% of the values drawn from a normal distribution are within one standard deviation \\(\\sigma\\) away from the mean \\(\\mu\\).
* \\(P(\\mu\-\\sigma \\leq X \\leq \\mu\+\\sigma) \= 0\.6827\\)
* \\(P(\\mu\-2\\sigma \\leq X \\leq \\mu\+2\\sigma) \= 0\.9545\\)
* \\(P(\\mu\-3\\sigma \\leq X \\leq \\mu\+3\\sigma) \= 0\.9973\\)
Figure B.3: The coverage of a normal distribution.
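These coverage probabilities can be recomputed directly with R’s cumulative distribution function `pnorm` (a small check, not part of the original hands\-on materials; the parameter values are arbitrary):
```
# coverage of +/- 1, 2, 3 standard deviations of a normal distribution
mu    <- 10   # arbitrary mean
sigma <- 2    # arbitrary standard deviation
coverage <- function(k) {
  pnorm(mu + k * sigma, mean = mu, sd = sigma) -
    pnorm(mu - k * sigma, mean = mu, sd = sigma)
}
round(sapply(1:3, coverage), 4)
```
```
## [1] 0.6827 0.9545 0.9973
```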
**Z\-transformation / standardization**
A special case of a normally distributed random variable is the *standard normal* variable with \\(\\mu\=0\\) and \\(\\sigma\=1\\): \\(Y\\sim Normal(0,1\)\\). Every normally distributed random variable \\(X\\) can be converted into a standard normal random variable \\(Z\\) by the *z\-transformation* shown below:
\\\[Z\=\\frac{X\-\\mu}{\\sigma}\\]
The advantage of standardization is that values from different scales can be compared because they become *scale\-independent* by z\-transformation.
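As a quick illustration (a sketch, not part of the original hands\-on materials; the parameter values are arbitrary), one can standardize samples from an arbitrary normal distribution in R and confirm that the result has mean 0 and standard deviation 1:
```
# standardizing samples from an arbitrary normal distribution
set.seed(1234)
x <- rnorm(1e5, mean = 2, sd = 3)   # arbitrary parameter values
z <- (x - mean(x)) / sd(x)          # (empirical) z-transformation
round(c(mean = mean(z), sd = sd(z)), 3)   # should be 0 and 1
```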
**Alternative parameterization**
Often a normal distribution is parameterized in terms of its mean \\(\\mu\\) and variance \\(\\sigma^2\\). This choice is made explicit by writing \\(X\\sim Normal(\\mu, \\sigma^2\)\\) instead of \\(X\\sim Normal(\\mu, \\sigma)\\).
**Linear transformations**
1. If a normal random variable \\(X\\sim Normal(\\mu, \\sigma^2\)\\) is linearly transformed by \\(Y\=aX\+b\\), then the new random variable \\(Y\\) is again normally distributed with \\(Y \\sim Normal(a\\mu\+b,a^2\\sigma^2\)\\).
2. If \\(X\\sim Normal(\\mu\_x, \\sigma\_x^2\)\\) and \\(Y\\sim Normal(\\mu\_y, \\sigma\_y^2\)\\) are normally distributed and independent, then their sum is again normally distributed with \\(X\+Y \\sim Normal(\\mu\_x\+\\mu\_y, \\sigma\_x^2\+\\sigma\_y^2\)\\). (Both properties are checked by simulation in the short sketch below.)
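Here is a minimal base R simulation (not part of the original hands\-on materials; the parameter values are arbitrary illustrations) that checks both properties numerically:
```
# check property 1: Y = a * X + b is again normal with mean a*mu+b and sd |a|*sigma
set.seed(123)
mu <- 2; sigma <- 3; a <- -1.5; b <- 4      # arbitrary illustration values
x <- rnorm(1e5, mean = mu, sd = sigma)
y <- a * x + b
round(c(mean_sim = mean(y), sd_sim = sd(y)), 2)                    # empirical
round(c(mean_theory = a * mu + b, sd_theory = abs(a) * sigma), 2)  # theoretical

# check property 2: the sum of independent normals is again normal,
# with variance equal to the sum of the variances
x1 <- rnorm(1e5, mean = 1, sd = 2)
x2 <- rnorm(1e5, mean = 3, sd = 4)
round(c(var_sim = var(x1 + x2), var_theory = 2^2 + 4^2), 1)
```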
#### B.1\.1\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a normal distribution:
```
var mu = 2; // mean
var sigma = 3; // standard deviation
var n_samples = 30000; // number of samples used for approximation
///fold:
viz(repeat(n_samples, function(x) {gaussian({mu: mu, sigma: sigma})}));
///
```
### B.1\.2 Chi\-squared distribution
The \\(\\chi^2\\)\-distribution is widely used in hypothesis testing in inferential statistics because many test statistics are approximately distributed as \\(\\chi^2\\)\-distribution.
The \\(\\chi^2\\)\-distribution is directly related to the standard normal distribution: The sum of the squares of \\(n\\) independent and standard normally distributed random variables \\(X\_1,X\_2,...,X\_n\\) is distributed according to a \\(\\chi^2\\)\-distribution with \\(n\\) *degrees of freedom*:
\\\[Y\=X\_1^2\+X\_2^2\+...\+X\_n^2\.\\]
The \\(\\chi^2\\)\-distribution is a skewed probability distribution with range \\(\[0,\+\\infty)\\) and only one parameter \\(n\\), the *degrees of freedom* (if \\(n\=1\\), then the range is \\((0,\+\\infty)\\)):
\\\[X\\sim \\chi^2(n).\\]
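The defining construction above can be checked with a short base R simulation (a sketch, not part of the original hands\-on materials):
```
# simulate a chi-squared variable as the sum of n squared standard normals
set.seed(123)
n <- 5                                   # degrees of freedom (arbitrary choice)
y <- replicate(1e5, sum(rnorm(n)^2))     # construction from the definition
round(c(mean_sim = mean(y), var_sim = var(y)), 2)   # compare to E(X) = n, Var(X) = 2n
```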
Figure [B.4](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-chi-squared-distribution-density) shows the probability density function of three \\(\\chi^2\\)\-distributed random variables with different values for the parameter. Notice that with increasing degrees of freedom, the \\(\\chi^2\\)\-distribution can be approximated by a normal distribution (for \\(n \\geq 30\\)). Figure [B.5](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-chi-squared-distribution-cumulative) shows the corresponding cumulative function of the three \\(\\chi^2\\)\-density distributions.
Figure B.4: Examples of a probability density function of the chi\-squared distribution.
Figure B.5: The cumulative distribution functions of the chi\-squared distributions corresponding to the previous probability density functions.
**Probability density function**
\\\[f(x)\=\\begin{cases}\\frac{x^{\\frac{n}{2}\-1}e^{\-\\frac{x}{2}}}{2^{\\frac{n}{2}}\\Gamma (\\frac{n}{2})} \&\\textrm{ for }x\>0,\\\\ 0 \&\\textrm{ otherwise.}\\end{cases}\\]
where \\(\\Gamma (\\frac{n}{2})\\) denotes the gamma function.
**Cumulative distribution function**
\\\[F(x)\=\\frac{\\gamma\\left(\\frac{n}{2},\\frac{x}{2}\\right)}{\\Gamma\\left(\\frac{n}{2}\\right)},\\]
with \\(\\gamma(s,x)\\) being the lower incomplete gamma function:
\\\[\\gamma(s,x)\=\\int\_0^x t^{s\-1}e^{\-t}\\, dt.\\]
**Expected value** \\(E(X)\=n\\)
**Variance** \\(Var(X)\=2n\\)
**Transformations**
The sum of two \\(\\chi^2\\)\-distributed random variables \\(X \\sim \\chi^2(m)\\) and \\(Y \\sim \\chi^2(n)\\) is again a \\(\\chi^2\\)\-distributed random variable with \\(X\+Y\=\\chi^2(m\+n)\\).
#### B.1\.2\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a \\(\\chi^2\\)\-distribution:
```
var df = 1; // degrees of freedom
var n_samples = 30000; // number of samples used for approximation
///fold:
var chisq = function(nu) {
var y = sample(Gaussian({mu: 0, sigma: 1}));
if (nu == 1) {
return y*y;
} else {
return y*y+chisq(nu-1);
}
}
viz(repeat(n_samples, function(x) {chisq(df)}));
///
```
### B.1\.3 F\-distribution
The F\-distribution, named after R.A. Fisher, is particularly used in regression and variance analysis. It is defined by the ratio of two \\(\\chi^2\\)\-distributed random variables \\(X\\sim \\chi^2(m)\\) and \\(Y\\sim \\chi^2(n)\\), each divided by its degrees of freedom:
\\\[F\=\\frac{\\frac{X}{m}}{\\frac{Y}{n}}.\\]
The F\-distribution is a continuous skewed probability distribution with range \\((0,\+\\infty)\\) and two parameters \\(m\\) and \\(n\\), corresponding to the degrees of freedom of the two \\(\\chi^2\\)\-distributed random variables:
\\\[X \\sim F(m,n).\\]
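The defining ratio can likewise be checked by simulation in base R (a sketch with arbitrary parameter values, not part of the original hands\-on materials):
```
# simulate F(m, n) as a ratio of chi-squared variables, each divided by its df
set.seed(123)
m <- 5; n <- 10                          # arbitrary degrees of freedom
f_sim <- (rchisq(1e5, df = m) / m) / (rchisq(1e5, df = n) / n)
round(c(mean_sim = mean(f_sim), mean_theory = n / (n - 2)), 3)   # valid for n > 2
```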
Figure [B.6](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-F-distribution-density) shows the probability density function of three F\-distributed random variables with different parameter values. For a small number of degrees of freedom, the density is strongly skewed, with most of its mass near zero and a long right tail. As the degrees of freedom increase, the density gets more and more symmetric. Figure [B.7](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-F-distribution-cumulative) shows the corresponding cumulative function of the three density distributions.
Figure B.6: Examples of a probability density function of the F\-distribution. Pairs of numbers in the legend are parameters \\((m,n)\\).
Figure B.7: The cumulative distribution functions of the F\-distributions corresponding to the previous probability density functions.
**Probability density function**
\\\[f(x)\=m^{\\frac{m}{2}}n^{\\frac{n}{2}} \\cdot \\frac{\\Gamma (\\frac{m\+n}{2})}{\\Gamma (\\frac{m}{2})\\Gamma (\\frac{n}{2})} \\cdot \\frac{x^{\\frac{m}{2}\-1}}{(mx\+n)^{\\frac{m\+n}{2}}} \\textrm{ for } x\>0,\\]
where \\(\\Gamma(x)\\) denotes the gamma function.
**Cumulative distribution function**
\\\[F(x)\=I\\left(\\frac{m \\cdot x}{m \\cdot x\+n},\\frac{m}{2},\\frac{n}{2}\\right),\\]
with \\(I(z,a,b)\\) being the regularized incomplete beta function:
\\\[I(z,a,b)\=\\frac{1}{B(a,b)} \\cdot \\int\_0^z t^{a\-1}(1\-t)^{b\-1} dt.\\]
**Expected value** \\(E(X) \= \\frac{n}{n\-2}\\) (for \\(n \\geq 3\\))
**Variance** \\(Var(X) \= \\frac{2n^2(n\+m\-2\)}{m(n\-4\)(n\-2\)^2}\\) (for \\(n \\geq 5\\))
#### B.1\.3\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on an F\-distribution:
```
var df1 = 12; // degrees of freedom 1
var df2 = 12; // degrees of freedom 2
var n_samples = 30000; // number of samples used for approximation
///fold:
var chisq = function(nu) {
var y = sample(Gaussian({mu: 0, sigma: 1}));
if (nu == 1) {
return y*y;
} else {
return y*y+chisq(nu-1);
}
}
var F = function(nu1, nu2) {
var X = chisq(nu1)/nu1;
var Y = chisq(nu2)/nu2;
return X/Y;
}
viz(repeat(n_samples, function(x) {F(df1, df2)}));
///
```
### B.1\.4 Student’s *t*\-distribution
The Student’s \\(t\\)\-distribution, or just \\(t\\)\-distribution for short, was discovered by William S. Gosset in 1908 ([Vallverdú 2016](#ref-vallverdu2015)), who published his work under the pseudonym “Student”. He worked at the Guinness factory and had to deal with the problem of small sample sizes, where using a normal distribution as an approximation can be too crude. To overcome this problem, Gosset conceived of the \\(t\\)\-distribution. Accordingly, this distribution is used in particular when the sample size is small and the variance unknown, which is often the case in reality. Its shape resembles the normal bell shape and has a peak at zero, but the \\(t\\)\-distribution is a bit lower and wider (bigger tails) than the normal distribution.
The *standard \\(t\\)\-distribution* is constructed from a standard\-normally distributed random variable \\(X \\sim \\text{Normal}(0,1\)\\) and an independent \\(\\chi^2\\)\-distributed random variable \\(Y \\sim \\chi^2(n)\\) as the ratio:
\\\[T \= \\frac{X}{\\sqrt{Y / n}}.\\]
The \\(t\\)\-distribution has the range \\((\-\\infty,\+\\infty)\\) and one parameter \\(\\nu\\), the degrees of freedom. The degrees of freedom can be calculated by the sample size \\(n\\) minus one:
\\\[t \\sim \\text{Student\-}t(\\nu \= n \-1\).\\]
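The construction of the \\(t\\)\-distribution from a standard normal and a \\(\\chi^2\\) variable can be reproduced with a short base R simulation (a sketch, not part of the original hands\-on materials):
```
# simulate a t-distributed variable from its definition T = X / sqrt(Y / nu)
set.seed(123)
nu <- 5                                  # degrees of freedom (arbitrary choice)
t_sim <- rnorm(1e5) / sqrt(rchisq(1e5, df = nu) / nu)
round(c(var_sim = var(t_sim), var_theory = nu / (nu - 2)), 3)   # valid for nu > 2
```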
Figure [B.8](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-t-distribution-density) shows the probability density function of three \\(t\\)\-distributed random variables with different parameters, and Figure [B.9](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-t-distribution-cumulative) shows the corresponding cumulative functions. Notice that for small degrees of freedom \\(\\nu\\), the \\(t\\)\-distribution has bigger tails. This is because the \\(t\\)\-distribution was specially designed to provide more conservative test results when analyzing small samples. When the degrees of freedom increase, the \\(t\\)\-distribution approaches a normal distribution. For \\(\\nu \\geq 30\\), this approximation is quite good.
Figure B.8: Examples of a probability density function of the \\(t\\)\-distribution.
Figure B.9: The cumulative distribution functions of the \\(t\\)\-distributions corresponding to the previous probability density functions.
**Probability density function**
\\\[ f(x, \\nu)\=\\frac{\\Gamma(\\frac{\\nu\+1}{2})}{\\sqrt{\\nu\\pi} \\cdot \\Gamma(\\frac{\\nu}{2})}\\left(1\+\\frac{x^2}{\\nu}\\right)^{\-\\frac{\\nu\+1}{2}},\\]
with \\(\\Gamma(x)\\) denoting the gamma function.
**Cumulative distribution function**
\\\[F(x, \\nu)\=I\\left(\\frac{x\+\\sqrt{x^2\+\\nu}}{2\\sqrt{x^2\+\\nu}},\\frac{\\nu}{2},\\frac{\\nu}{2}\\right),\\]
where \\(I(z,a,b)\\) denotes the regularized incomplete beta function:
\\\[I(z,a,b)\=\\frac{1}{B(a,b)} \\cdot \\int\_0^z t^{a\-1}(1\-t)^{b\-1} \\text{d}t.\\]
**Expected value** \\(E(X) \= 0\\) (for \\(\\nu \> 1\\))
**Variance** \\(Var(X) \= \\frac{\\nu}{\\nu\-2}\\) (for \\(\\nu \> 2\\))
#### B.1\.4\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a \\(t\\)\-distribution:
```
var df = 3; // degrees of freedom
var n_samples = 30000; // number of samples used for approximation
///fold:
var chisq = function(nu) {
var y = sample(Gaussian({mu: 0, sigma: 1}));
if (nu == 1) {
return y*y;
} else {
return y*y+chisq(nu-1);
}
}
var t = function(nu) {
var X = sample(Gaussian({mu: 0, sigma: 1}));
var Y = chisq(nu);
return X/Math.sqrt(Y/nu);
}
viz(repeat(n_samples, function(x) {t(df)}));
///
```
Beyond the standard \\(t\\)\-distribution, there are also generalized \\(t\\)\-distributions taking three parameters \\(\\nu\\), \\(\\mu\\) and \\(\\sigma\\), where the latter two are location and scale parameters, analogous to the mean and standard deviation of the normal distribution.
### B.1\.5 Beta distribution
The beta distribution creates a continuous distribution of numbers between 0 and 1\. Therefore, this distribution is useful if the uncertain quantity is bounded by 0 and 1 (or 100%), is continuous, and has a single mode. In Bayesian Data Analysis, the beta distribution has a special standing as prior distribution for a [Bernoulli](selected-discrete-distributions-of-random-variables.html#app-91-distributions-bernoulli) or [binomial](selected-discrete-distributions-of-random-variables.html#app-91-distributions-binomial) likelihood. The reason for this is that a combination of a beta prior and a Bernoulli (or binomial) likelihood results in a posterior distribution with the same form as the beta distribution. Such priors are referred to as *conjugate priors* (see Chapter [9\.1\.3](ch-03-03-estimation-bayes.html#ch-03-04-parameter-estimation-conjugacy)).
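To make the conjugacy property concrete, here is a minimal base R sketch (the numbers are arbitrary; the update rule used is the standard conjugate result for \\(k\\) successes in \\(n\\) Bernoulli trials):
```
# conjugate updating: a Beta(a, b) prior combined with k successes in n trials
# yields a Beta(a + k, b + n - k) posterior
a <- 2; b <- 2         # prior shape parameters (arbitrary choice)
k <- 7; n <- 10        # observed successes and total number of trials
post_a <- a + k
post_b <- b + n - k
c(post_a = post_a, post_b = post_b, post_mean = post_a / (post_a + post_b))
```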
A beta distribution has two parameters \\(a\\) and \\(b\\) (sometimes also represented in Greek letters \\(\\alpha\\) and \\(\\beta\\)):
\\\[X \\sim Beta(a,b).\\]
The two parameters can be interpreted as the number of observations made, such that: \\(n\=a\+b\\). If \\(a\\) and \\(b\\) get bigger, the beta distribution gets narrower. If only \\(a\\) gets bigger, the distribution moves rightward, and if only \\(b\\) gets bigger, the distribution moves leftward. As the parameters define the shape of the distribution, they are also called *shape parameters*. A Beta(1,1\) is equivalent to a uniform distribution. Figure [B.10](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-beta-distribution-density) shows the probability density function of four beta distributed random variables with different parameter values. Figure [B.11](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-beta-distribution-cumulative) shows the corresponding cumulative functions.
Figure B.10: Examples of a probability density function of the beta distribution. Pairs of numbers in the legend represent parameters \\((a, b)\\).
Figure B.11: The cumulative distribution functions of the beta distributions corresponding to the previous probability density functions.
**Probability density function**
\\\[f(x)\=\\frac{x^{a\-1} (1\-x)^{b\-1}}{B(a,b)},\\]
where \\(B(a,b)\\) is the beta function:
\\\[B(a,b)\=\\int^1\_0 \\theta^{(a\-1\)} (1\-\\theta)^{(b\-1\)}d\\theta.\\]
**Cumulative distribution function**
\\\[F(x)\=\\frac{B(x;a,b)}{B(a,b)},\\]
where \\(B(x;a,b)\\) is the incomplete beta function:
\\\[B(x;a,b)\=\\int^x\_0 t^{(a\-1\)} (1\-t)^{(b\-1\)} dt,\\]
and \\(B(a,b)\\) the (complete) beta function:
\\\[B(a,b)\=\\int^1\_0 \\theta^{(a\-1\)} (1\-\\theta)^{(b\-1\)}d\\theta.\\]
**Expected value**
Mean: \\(E(X)\=\\frac{a}{a\+b}\\)
Mode: \\(\\omega\=\\frac{a\-1}{a\+b\-2}\\) (for \\(a, b \> 1\\))
**Variance**
Variance: \\(Var(X)\=\\frac{ab}{(a\+b)^2(a\+b\+1\)}\\)
Concentration: \\(\\kappa\=a\+b\\) (related to variance such that the bigger \\(a\\) and \\(b\\) are, the narrower the distribution)
**Reparameterization of the beta distribution**
Sometimes it is helpful (and more intuitive) to write the beta distribution in terms of its mode \\(\\omega\\) and concentration \\(\\kappa\\) instead of \\(a\\) and \\(b\\):
\\\[Beta(a,b)\=Beta(\\omega(\\kappa\-2\)\+1, (1\-\\omega)(\\kappa\-2\)\+1\), \\textrm{ for } \\kappa \> 2\.\\]
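A small R helper (hypothetical function name, not part of the book’s code) makes this reparameterization concrete by translating a mode \\(\\omega\\) and concentration \\(\\kappa\\) into shape parameters \\(a\\) and \\(b\\):
```
# translate (mode, concentration) into the shape parameters of a beta distribution
beta_params_from_mode <- function(omega, kappa) {   # hypothetical helper name
  stopifnot(kappa > 2, omega > 0, omega < 1)
  c(a = omega * (kappa - 2) + 1,
    b = (1 - omega) * (kappa - 2) + 1)
}
beta_params_from_mode(omega = 0.25, kappa = 10)      # yields a = 3, b = 7
```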
#### B.1\.5\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a beta distribution:
```
var a = 2; // shape parameter alpha
var b = 4; // shape parameter beta
var n_samples = 30000; // number of samples used for approximation
///fold:
viz(repeat(n_samples, function(x) {beta({a: a, b: b})}));
///
```
### B.1\.6 Uniform distribution
The (continuous) uniform distribution assigns constant probability density to all values in a specified interval from \\(a\\) to \\(b\\). Due to its shape, the distribution is also sometimes called *rectangular distribution*. The uniform distribution is common for random number generation. In Bayesian Data Analysis, it is often used as prior distribution to express *ignorance*. This can be thought of in the following way: When different events are possible, but no (reliable) information exists about their probability of occurrence, the most conservative (and also intuitive) choice would be to assign probability in such a way that all events are equally likely to occur. The uniform distribution models this intuition and generates a completely random number in some interval \\(\[a,b]\\).
The distribution is specified by two parameters: the endpoints \\(a\\) (minimum) and \\(b\\) (maximum).
\\\[X \\sim Uniform(a,b) \\ \\ \\text{, or alternatively written as: } \\ \\ \\mathcal{U}(a,b)\\]
When \\(a\=0\\) and \\(b\=1\\), the distribution is referred to as *standard* uniform distribution. Figure [B.12](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-uniform-distribution-density) shows the probability density function of two uniformly distributed random variables with different parameter values. Figure [B.13](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-uniform-distribution-cumulative) shows the corresponding cumulative functions.
Figure B.12: Examples of a probability density function of the uniform distribution. Pairs of numbers in the legend are parameter values \\((a,b)\\).
Figure B.13: The cumulative distribution functions of the uniform distributions corresponding to the previous probability density functions.
**Probability density function**
\\\[f(x)\=\\begin{cases} \\frac{1}{b\-a} \&\\textrm{ for } x \\in \[a,b],\\\\0 \&\\textrm{ otherwise.}\\end{cases}\\]
**Cumulative distribution function**
\\\[F(x)\=\\begin{cases}0 \& \\textrm{ for } x\<a,\\\\\\frac{x\-a}{b\-a} \&\\textrm{ for } a\\leq x \< b,\\\\ 1 \&\\textrm{ for }x \\geq b. \\end{cases}\\]
**Expected value** \\(E(X)\=\\frac{a\+b}{2}\\)
**Variance** \\(Var(X)\=\\frac{(b\-a)^2}{12}\\)
#### B.1\.6\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a uniform distribution:
```
var a = 0; // lower bound
var b = 1; // upper bound (> a)
var n_samples = 30000; // number of samples used for approximation
///fold:
viz(repeat(n_samples, function(x) {uniform({a: a, b: b})}));
///
```
### B.1\.7 Dirichlet distribution
The Dirichlet distribution is a multivariate generalization of the beta distribution: While the beta distribution is a distribution over the success probability of a binomial, the Dirichlet is a distribution over the probability vector of a multinomial.
It can be used in any situation where an entity has to necessarily fall into one of \\(n\+1\\) mutually exclusive subclasses, and the goal is to study the proportion of entities belonging to the different subclasses.
The Dirichlet distribution is commonly used as *prior distribution* in Bayesian statistics, as this family is a *conjugate prior* for the categorical distribution and the multinomial distribution.
The Dirichlet distribution \\(\\mathcal{Dir}(\\alpha)\\) is a family of continuous multivariate probability distributions, parameterized by a vector \\(\\alpha\\) of positive reals. Thus, it is a distribution with \\(k\\) positive parameters \\(\\alpha\_1, ..., \\alpha\_k\\), defined with respect to a \\(k\\)\-dimensional space.
\\\[X \\sim \\mathcal{Dirichlet}(\\boldsymbol{\\alpha})\\]
The probability density function (see formula below) of the Dirichlet distribution for \\(k\\) random variables is defined on the \\((k\-1\)\\)\-dimensional probability *simplex* embedded in a \\(k\\)\-dimensional space. How does the parameter \\(\\alpha\\) influence the Dirichlet distribution?
* Values of \\(\\alpha\_i\<1\\) can be thought of as anti\-weight that pushes away \\(x\_i\\) toward extremes (see upper left panel of Figure [B.14](selected-continuous-distributions-of-random-variables.html#fig:ch-app-01-dirichlet-distribution-density)).
* If \\(\\alpha\_1\=...\=\\alpha\_k\=1\\), then the points are uniformly distributed (see upper right panel).
* Higher values of \\(\\alpha\_i\\) lead to greater “weight” of \\(X\_i\\) and a greater amount of the total “mass” assigned to it (see lower left panel).
* If all \\(\\alpha\_i\\) are equal, the distribution is symmetric; unequal \\(\\alpha\_i\\) yield an asymmetric distribution (see lower right panel for an asymmetric example).
Figure B.14: Examples of a probability density function of the Dirichlet distribution with dimension \\(k\\) for different parameter vectors \\(\\alpha\\).
**Probability density function**
\\\[f(p\_1, ..., p\_k)\=\\frac{\\Gamma\\left(\\sum\_{i\=1}^{k} \\alpha\_i\\right)}{\\prod\_{i\=1}^{k}\\Gamma(\\alpha\_i)}\\prod\_{i\=1}^{k}p\_i^{\\alpha\_i\-1},\\]
with \\(\\Gamma(x)\\) denoting the gamma function. A Dirichlet\-distributed vector can also be constructed by normalizing independent gamma random variables:
\\\[p\_i\=\\frac{X\_i}{\\sum\_{j\=1}^{k}X\_j}, \\quad 1\\leq i\\leq k,\\]
where \\(X\_1,X\_2,...,X\_{k}\\) are independent gamma random variables with \\(X\_i \\sim Gamma(\\alpha\_i,1\)\\).
**Expected value** \\(E(p\_i)\=\\frac{\\alpha\_i}{t}, \\textrm{ with } t\=\\sum\_{i\=1}^{k}\\alpha\_i\\)
**Variance** \\(Var(p\_i)\=\\frac{\\alpha\_i(t\-\\alpha\_i)}{t^2(t\+1\)}, \\textrm{ with } t\=\\sum\_{i\=1}^{k}\\alpha\_i\\)
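The normalized\-gamma construction above suggests a simple way to draw Dirichlet samples in base R (a sketch with a hypothetical helper name, not part of the original hands\-on materials):
```
# sample from a Dirichlet distribution by normalizing independent gamma draws
set.seed(123)
alpha <- c(1, 1, 5)                       # concentration parameters (arbitrary)
r_dirichlet <- function(alpha) {          # hypothetical helper name
  g <- rgamma(length(alpha), shape = alpha, rate = 1)
  g / sum(g)
}
samples <- t(replicate(1e4, r_dirichlet(alpha)))
round(colMeans(samples), 3)               # should approximate alpha / sum(alpha)
round(alpha / sum(alpha), 3)
```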
#### B.1\.7\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a Dirichlet distribution:
```
var alpha = Vector([1, 1, 5]); // concentration parameter
var n_samples = 1000; // number of samples used for approximation
///fold:
var model = function() {
var dir_sample = dirichlet({alpha: alpha})
return({"x_1" : dir_sample.data["0"], "x_2" : dir_sample.data["1"]})
}
viz(Infer({method : "rejection", samples: n_samples}, model))
///
```
B.2 Selected discrete distributions of random variables
-------------------------------------------------------
### B.2\.1 Binomial distribution
The binomial distribution is a useful model for binary decisions where the outcome is a choice between two alternatives (e.g., Yes/No, Left/Right, Present/Absent, Heads/Tails, …). The two outcomes are coded as \\(0\\) (failure) and \\(1\\) (success). Consequently, let the probability of occurrence of the outcome “success” be \\(p\\), then the probability of occurrence of “failure” is \\(1\-p\\).
Consider a coin\-flip experiment with the outcomes “heads” or “tails”. If we flip a coin repeatedly, e.g., 30 times, and the successive trials are independent of each other with constant probability \\(p\\), then the number of “successes” is a binomially distributed discrete random variable with outcomes \\(\\{0,1,2,...,30\\}\\).
The binomial distribution has two parameters “size” and “prob”, often denoted as \\(n\\) and \\(p\\), respectively. The “size” parameter refers to the number of trials and “prob” to the probability of success:
\\\[X \\sim Binomial(n,p).\\]
Figure [B.15](selected-discrete-distributions-of-random-variables.html#fig:ch-app-01-binomial-distribution-mass) shows the probability mass function of three binomially distributed random variables with different parameter values. As stated above, \\(p\\) refers to the probability of success. The higher this probability, the more often we will observe the outcome coded with “1”, so more of the probability mass shifts toward larger counts (and vice versa). The distribution gets more symmetric as the parameter \\(p\\) approaches 0\.5\. Figure [B.16](selected-discrete-distributions-of-random-variables.html#fig:ch-app-01-binomial-distribution-cumulative) shows the corresponding cumulative functions.
Figure B.15: Examples of a probability mass function of the binomial distribution. Numbers in the legend are pairs of parameters \\((n, p)\\).
Figure B.16: The cumulative distribution functions of the binomial distributions corresponding to the previous probability mass functions.
**Probability mass function**
\\\[f(x)\=\\binom{n}{x}p^x(1\-p)^{n\-x},\\]
where \\(\\binom{n}{x}\\) is the binomial coefficient.
**Cumulative function**
\\\[F(x)\=\\sum\_{k\=0}^{x}\\binom{n}{k}p^k(1\-p)^{n\-k}\\]
**Expected value** \\(E(X)\=n \\cdot p\\)
**Variance** \\(Var(X)\=n \\cdot p \\cdot (1\-p)\\)
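The expected value and variance can be verified directly from the probability mass function with base R’s `dbinom` (a small check, not part of the original hands\-on materials; the parameter values are arbitrary):
```
# verify E(X) = n * p and Var(X) = n * p * (1 - p) from the pmf
n <- 30; p <- 0.4                      # arbitrary parameter values
x <- 0:n
probs <- dbinom(x, size = n, prob = p)
c(E_pmf = sum(x * probs), E_formula = n * p)
c(Var_pmf = sum((x - n * p)^2 * probs), Var_formula = n * p * (1 - p))
```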
#### B.2\.1\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a binomial distribution:
```
var p = 0.5; // probability of success
var n = 4; // number of trials (>= 1)
var n_samples = 30000; // number of samples used for approximation
///fold:
viz(repeat(n_samples, function(x) {binomial({p: p, n: n})}));
///
```
### B.2\.2 Multinomial distribution
The multinomial distribution is a generalization of the binomial distribution to the case of \\(n\\) repeated trials: While the binomial distribution can have two outcomes, the multinomial distribution can have multiple outcomes.
Consider an experiment where each trial can result in any of \\(k\\) possible outcomes, outcome \\(i\\) occurring with probability \\(p\_i\\), where \\(i\=1,2,...,k\\) and \\(\\sum\_{i\=1}^kp\_i\=1\\). For \\(n\\) repeated trials, let \\(k\_i\\) denote the number of times outcome \\(i\\) was observed. It follows that \\(\\sum\_{i\=1}^k k\_i\=n\\).
**Probability mass function**
The probability of observing a vector of counts \\(\\mathbf{k}\=\[k\_1,...,k\_k]^T\\) is
\\\[f(\\mathbf{k}\|\\mathbf{p})\=\\binom{n}{k\_1, k\_2,...,k\_k} \\prod\_{i\=1}^k p\_i^{k\_i},\\]
where \\(\\binom{n}{k\_1, k\_2,...,k\_k}\\) is the multinomial coefficient: \\\[\\binom{n}{k\_1, k\_2,...,k\_k}\=\\frac{n!}{k\_1!\\cdot k\_2! \\cdot...\\cdot k\_k!}.\\]
It is a generalization of the binomial coefficient \\(\\binom{n}{k}\\).
**Expected value:** \\(E(k\_i)\=n\\cdot p\_i\\)
**Variance:** \\(Var(k\_i)\=n\\cdot p\_i\\cdot (1\-p\_i)\\)
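The probability mass function can be evaluated in base R with `dmultinom`, or computed by hand via the multinomial coefficient (a small check with arbitrary numbers, not part of the original hands\-on materials):
```
# probability of one particular count vector under a multinomial distribution
k_obs <- c(2, 1, 1)                   # observed counts for three categories (n = 4)
p     <- c(0.5, 0.25, 0.25)           # category probabilities
dmultinom(k_obs, prob = p)            # built-in pmf
# manual computation using the multinomial coefficient
factorial(sum(k_obs)) / prod(factorial(k_obs)) * prod(p^k_obs)
```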
#### B.2\.2\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a multinomial distribution:
```
var ps = [0.25, 0.25, 0.25, 0.25]; // probabilities
var n = 4; // number of trials (>= 1)
var n_samples = 30000; // number of samples used for approximation
///fold:
viz.hist(repeat(n_samples, function(x) {multinomial({ps: ps, n: n})}));
///
```
### B.2\.3 Bernoulli distribution
The Bernoulli distribution is a special case of the binomial distribution with \\(size \= 1\\). The outcome of a Bernoulli random variable is therefore either 0 or 1\. Apart from that, the same information holds as for the binomial distribution.
As the “size” parameter is now negligible, the Bernoulli distribution has only one parameter, the probability of success \\(p\\):
\\\[X \\sim Bern(p).\\]
Figure [B.17](selected-discrete-distributions-of-random-variables.html#fig:ch-app-01-bernoulli-distribution-mass) shows the probability mass function of three Bernoulli distributed random variables with different parameters. Figure [B.18](selected-discrete-distributions-of-random-variables.html#fig:ch-app-01-bernoulli-distribution-cumulative) shows the corresponding cumulative distributions.
Figure B.17: Examples of a probability mass function of the Bernoulli distribution.
Figure B.18: The cumulative distribution functions of the Bernoulli distributions corresponding to the previous probability mass functions.
**Probability mass function**
\\\[f(x)\=\\begin{cases} p \&\\textrm{ if } x\=1,\\\\ 1\-p \&\\textrm{ if } x\=0\.\\end{cases}\\]
**Cumulative function**
\\\[F(x)\=\\begin{cases} 0 \&\\textrm{ if } x \< 0, \\\\ 1\-p \&\\textrm{ if } 0 \\leq x \<1,\\\\1 \&\\textrm{ if } x \\geq 1\.\\end{cases}\\]
**Expected value** \\(E(X)\=p\\)
**Variance** \\(Var(X)\=p \\cdot (1\-p)\\)
#### B.2\.3\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a Bernoulli distribution:
```
var p = 0.5; // probability of success
var n_samples = 30000; // number of samples used for approximation
///fold:
viz(repeat(n_samples, function(x) {bernoulli({p: p})}));
///
```
### B.2\.4 Categorical distribution
The categorical distribution is a generalization of the Bernoulli distribution for categorical random variables: While a Bernoulli distribution is a distribution over two alternatives, the categorical is a distribution over multiple alternatives. For a single trial (e.g., a single die roll), the categorical distribution is equal to the multinomial distribution.
The categorical distribution is parametrized by the probabilities assigned to each event. Let \\(p\_i\\) be the probability assigned to outcome \\(i\\). The set of \\(p\_i\\)’s are the parameters, constrained by \\(\\sum\_{i\=1}^kp\_i\=1\\).
\\\[X \\sim Categorical(\\mathbf{p})\\]
Figure B.19: Examples of a probability mass function of the categorical distribution.
Figure B.20: The cumulative distribution functions of the categorical distributions corresponding to the previous probability mass functions.
**Probability mass function**
\\\[f(x\|\\mathbf{p})\=\\prod\_{i\=1}^kp\_i^{\\{x\=i\\}},\\]
where \\(\\{x\=i\\}\\) evaluates to 1 if \\(x\=i\\), otherwise 0 and \\(\\mathbf{p}\={p\_1,...,p\_k}\\), where \\(p\_i\\) is the probability of seeing event \\(i\\).
**Expected Value** \\(E(\\mathbf{x})\=\\mathbf{p}\\)
**Variance** \\(Var(\\mathbf{x})\=\\mathbf{p}\\cdot(1\-\\mathbf{p})\\)
#### B.2\.4\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a categorical distribution:
```
var ps = [0.5, 0.25, 0.25]; // probabilities
var vs = [1, 2, 3]; // categories
var n_samples = 30000; // number of samples used for approximation
///fold:
viz(repeat(n_samples, function(x) {categorical({ps: ps, vs: vs})}));
///
```
### B.2\.5 Beta\-Binomial distribution
As the name already indicates, the beta\-binomial distribution is a mixture of a binomial and beta distribution. Remember, a binomial distribution is useful to model a binary choice with outcomes “0” and “1”. The binomial distribution has two parameters \\(p\\) and \\(n\\), denoting the probability of success (“1”) and the number of trials, respectively. Furthermore, we assume that the successive trials are independent and \\(p\\) is constant. In a beta\-binomial distribution, \\(p\\) is not anymore assumed to be constant (or fixed) but changes from trial to trial. Thus, a further assumption about the distribution of \\(p\\) is made, and here the beta distribution comes into play: the probability \\(p\\) is assumed to be randomly drawn from a beta distribution with parameters \\(a\\) and \\(b\\).
Therefore, the beta\-binomial distribution has three parameters \\(n\\), \\(a\\) and \\(b\\):
\\\[X \\sim BetaBinom(n,a,b).\\]
For large values of a and b, the distribution approaches a binomial distribution. When \\(a\=1\\) and \\(b\=1\\), the distribution equals a discrete uniform distribution from 0 to \\(n\\). When \\(n \= 1\\), the distribution equals a Bernoulli distribution.
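The mixture construction can be checked with a short base R simulation that first draws \\(p\\) from a beta distribution and then a binomial outcome (a sketch, not part of the original hands\-on materials; the parameter values are arbitrary):
```
# simulate a beta-binomial variable as a binomial with beta-distributed p
set.seed(123)
n <- 10; a <- 2; b <- 4               # arbitrary parameter values
x <- rbinom(3e4, size = n, prob = rbeta(3e4, a, b))
round(c(mean_sim = mean(x), mean_theory = n * a / (a + b)), 3)
```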
Figure [B.21](selected-discrete-distributions-of-random-variables.html#fig:ch-app-01-betabinom-distribution-mass) shows the probability mass function of three beta\-binomial distributed random variables with different parameter values. Figure [B.22](selected-discrete-distributions-of-random-variables.html#fig:ch-app-01-betabinom-distribution-cumulative) shows the corresponding cumulative distributions.
Figure B.21: Examples of a probability mass function of the beta\-binomial distribution. Triples of numbers in the legend represent parameter values \\((n,a,b)\\).
Figure B.22: The cumulative distribution functions of the beta\-binomial distributions corresponding to the previous probability mass functions.
**Probability mass function**
\\\[f(x)\=\\binom{n}{x} \\frac{B(a\+x,b\+n\-x)}{B(a,b)},\\]
where \\(\\binom{n}{x}\\) is the binomial coefficient and \\(B(x)\\) is the beta function (see beta distribution).
**Cumulative function**
\\\[F(x)\=\\begin{cases} 0 \&\\textrm{ if } x\<0,\\\\ \\binom{n}{x} \\frac{B(a\+x,b\+n\-x)}{B(a,b)} {}\_3F\_2(n,a,b) \&\\textrm{ if } 0 \\leq x \< n,\\\\ 1 \&\\textrm{ if } x \\geq n. \\end{cases}\\]
where \\({}\_3F\_2(n,a,b)\\) is the generalized hypergeometric function.
**Expected value** \\(E(X)\=n \\frac{a}{a\+b}\\)
**Variance** \\(Var(X)\=n \\frac{ab}{(a\+b)^2} \\frac{a\+b\+n}{a\+b\+1}\\)
#### B.2\.5\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a beta\-binomial distribution:
```
var a = 1; // shape parameter alpha
var b = 1; // shape parameter beta
var n = 10; // number of trials (>= 1)
var n_samples = 30000; // number of samples used for approximation
///fold:
viz(repeat(n_samples, function(x) {binomial({n: n, p: beta(a, b)})}));
///
```
### B.2\.6 Poisson distribution
A Poisson distributed random variable represents the number of events occurring in a given *time interval*. The Poisson distribution is a limiting case of the binomial distribution when the number of trials becomes very large and the probability of success is small (e.g., the number of car accidents in Osnabrueck in the next month, the number of typing errors on a page, the number of interruptions generated by a CPU during T seconds, etc.).
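The limiting\-case claim can be checked numerically in base R by comparing a binomial with large \\(n\\) and small \\(p\\) to a Poisson with \\(\\lambda \= np\\) (a small check, not part of the original hands\-on materials):
```
# Poisson as a limit of the binomial: large n, small p, with lambda = n * p
lambda <- 5
n <- 10000; p <- lambda / n
x <- 0:15
max(abs(dbinom(x, size = n, prob = p) - dpois(x, lambda = lambda)))  # very small
```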
Events described by a Poisson distribution must fulfill the following conditions: events in non\-overlapping intervals occur independently of each other, no two events occur at exactly the same time, and events occur at a constant average rate.
The Poisson distribution has one parameter, the rate \\(\\lambda\\), sometimes also referred to as *intensity*:
\\\[X \\sim Poisson(\\lambda).\\]
The parameter \\(\\lambda\\) can be thought of as the expected number of events in the time interval. Consequently, changing the rate parameter changes the probability of seeing different numbers of events in one interval. Figure [B.23](selected-discrete-distributions-of-random-variables.html#fig:ch-app-01-poisson-distribution-mass) shows the probability mass function of three Poisson distributed random variables with different parameter values. Notice that the higher \\(\\lambda\\), the more symmetrical the distribution gets. In fact, the Poisson distribution can be approximated by a normal distribution for a rate parameter of \\(\\geq\\) 10\. Figure [B.24](selected-discrete-distributions-of-random-variables.html#fig:ch-app-01-poisson-distribution-cumulative) shows the corresponding cumulative distributions.
Figure B.23: Examples of a probability mass function of the Poisson distribution.
Figure B.24: The cumulative distribution functions of the Poisson distributions corresponding to the previous probability mass functions.
**Probability mass function**
\\\[f(x)\=\\frac{\\lambda^x}{x!}e^{\-\\lambda}\\]
**Cumulative function**
\\\[F(x)\=\\sum\_{k\=0}^{x}\\frac{\\lambda^k}{k!}e^{\-\\lambda}\\]
**Expected value** \\(E(X)\= \\lambda\\)
**Variance** \\(Var(X)\=\\lambda\\)
#### B.2\.6\.1 Hands\-on
Here’s WebPPL code to explore the effect of different parameter values on a Poisson distribution:
```
var lambda = 5; // rate parameter
var n_samples = 30000; // number of samples used for approximation
///fold:
viz(repeat(n_samples, function(x) {poisson({mu: lambda})}));
///
```
D.1 Mental Chronometry
----------------------
### D.1\.1 Nature, origin and rationale of the data
[Franciscus Donders](https://en.wikipedia.org/wiki/Franciscus_Donders) is remembered as one of the first, if not the first, experimental cognitive psychologists. He famously introduced the **subtraction logic**, which compares reaction times across different tasks to infer differences in the complexity of the mental processes involved in these tasks. The Mental Chronometry data set presents the results of an online replication of one such subtraction experiment.
#### D.1\.1\.1 The experiment
Fifty participants were recruited using the crowd\-sourcing platform [Prolific](https://www.prolific.co) and paid for their participation.
In each experimental trial, participants see either a blue square or a blue circle appear on the screen and are asked to respond as quickly as possible. The experiment consists of three parts, presented to all participants in the same order (see below). The parts differ in the response required for the visual stimuli.
1. **Reaction task**
The participant presses the space bar whenever there is a stimulus (square or circle).
*Recorded*: reaction time
2. **Go/No\-Go task**
The participant presses the space bar whenever their target (one of the two stimuli) is on the screen.
*Recorded*: the reaction time and the response
3. **Discrimination task**
The participant presses the **F** key on the keyboard when there is one of the stimuli and the **J** key when there is the other one of the stimuli on the screen.
*Recorded*: the reaction time and the response
The **reaction time** is measured from the onset of the visual stimulus to the button press. The **response** variable records whether the reaction was correct or incorrect.
For each participant, the experiment randomly allocates one shape (circle or square) as the target to be used in both the second and the third task.
The experiment was realized using [\_magpie](https://magpie-ea.github.io/magpie-site/index.html) and can be tried out [here](https://magpie-exp-mental-chronometry.netlify.com).
#### D.1\.1\.2 Theoretical motivation \& hypotheses
We expect that reaction times of correct responses are lowest in the reaction task, higher in the Go/No\-Go task, and highest in the discrimination task.
### D.1\.2 Loading and preprocessing the data
The raw data produced by the online experiment is not particularly tidy. It needs substantial massaging before plotting and analysis.
```
mc_data_raw <- aida::data_MC_raw
glimpse(mc_data_raw)
```
```
## Rows: 3,750
## Columns: 32
## $ submission_id <dbl> 8554, 8554, 8554, 8554, 8554, 8554, 8554, 8554, 8554, 85…
## $ QUD <chr> "Press SPACE when you see a shape on the screen", "Press…
## $ RT <dbl> 376, 311, 329, 270, 284, 311, 269, 317, 325, 240, 262, 2…
## $ age <dbl> 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, …
## $ comments <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ correctness <chr> "correct", "correct", "correct", "correct", "correct", "…
## $ education <chr> "high school / college", "high school / college", "high …
## $ elemSize <dbl> 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 1…
## $ endTime <dbl> 1.570374e+12, 1.570374e+12, 1.570374e+12, 1.570374e+12, …
## $ expected <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ experiment_id <dbl> 68, 68, 68, 68, 68, 68, 68, 68, 68, 68, 68, 68, 68, 68, …
## $ f <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ focalColor <chr> "blue", "blue", "blue", "blue", "blue", "blue", "blue", …
## $ focalNumber <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,…
## $ focalShape <chr> "square", "square", "circle", "square", "circle", "circl…
## $ gender <chr> "female", "female", "female", "female", "female", "femal…
## $ j <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ key1 <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ key2 <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ key_pressed <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ languages <chr> "Right", "Right", "Right", "Right", "Right", "Right", "R…
## $ pause <dbl> 2631, 1700, 1322, 1787, 1295, 2330, 1620, 2460, 1580, 17…
## $ response <chr> "space", "space", "space", "space", "space", "space", "s…
## $ sort <chr> "grid", "grid", "grid", "grid", "grid", "grid", "grid", …
## $ startDate <chr> "Sun Oct 06 2019 15:45:19 GMT+0100 (Hora de verão da Eur…
## $ startTime <dbl> 1.570373e+12, 1.570373e+12, 1.570373e+12, 1.570373e+12, …
## $ stimulus <chr> "square", "square", "circle", "square", "circle", "circl…
## $ target <chr> "square", "square", "circle", "square", "circle", "circl…
## $ timeSpent <dbl> 7.2514, 7.2514, 7.2514, 7.2514, 7.2514, 7.2514, 7.2514, …
## $ total <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,…
## $ trial_number <dbl> 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13…
## $ trial_type <chr> "reaction_practice", "reaction_practice", "reaction_prac…
```
The most pressing problem is that entries in the column `trial_type` contain two logically separate pieces of information: the block (reaction, go/no\-go, discrimination) *and* whether the data comes from a practice trial (which we want to discard) or a main trial (which we want to analyze). We therefore separate this information, perform some further massaging, and finally select a preprocessed data set for further analysis:
```
block_levels <- c("reaction", "goNoGo", "discrimination") # ordering of blocks for plotting, etc.
mc_data_preprocessed <- mc_data_raw %>%
separate(trial_type, c("block", "stage"), sep = "_", remove = FALSE) %>%
mutate(comments = ifelse(is.na(comments), "non given", comments)) %>%
filter(stage == "main") %>%
mutate(
block = factor(block, ordered = T, levels = block_levels),
response = ifelse(is.na(response), "none", response)
) %>%
filter(response != "wait") %>%
rename(
handedness = languages, # variable name is simply wrong
total_time_spent = timeSpent,
shape = focalShape
) %>%
select(
submission_id,
trial_number,
block,
shape,
RT,
handedness,
gender,
total_time_spent,
comments
)
```
### D.1\.3 Cleaning the data
Remember that the criteria for data exclusion should ideally be defined before data collection (or at least inspection). They should definitely never be chosen in such a way as to maximize the “desirability” of an analysis. Data cleaning is not a way of making sure that your favorite research hypothesis “wins”.
Although we have not preregistered any data cleaning regime or analyses for this data set, we demonstrate a frequently used cleaning scheme for reaction time data, which does depend on the data in some sense, but does not require precise knowledge of the data. In particular, we are going to do this:
1. We remove the data from an individual participant \\(X\\) if there is an experimental condition \\(C\\) such that the mean RT of \\(X\\) for condition \\(C\\) is more than 2 standard deviations away from the overall mean RT for condition \\(C\\).
2. From the remaining data, we then remove any individual trial \\(Y\\) if the RT of \\(Y\\) is more than 2 standard deviations away from the mean of experimental condition \\(C\\) (where \\(C\\) is the condition of \\(Y\\), of course).
Notice that in the case at hand, the experimental conditions are the three types of tasks.
#### D.1\.3\.1 Cleaning by\-participant
Our rule for removing data from outlier participants is this:
> We remove the data from an individual participant \\(X\\) if there is an experimental condition \\(C\\) such that the mean RT of \\(X\\) for condition \\(C\\) is more than 2 standard deviations away from the overall mean RT for condition \\(C\\).
This procedure is implemented in this code:
```
# summary stats (means) for participants
d_sum_stats_participants <- mc_data_preprocessed %>%
group_by(submission_id, block) %>%
summarise(
mean_P = mean(RT)
)
# summary stats (means and SDs) for conditions
d_sum_stats_conditions <- mc_data_preprocessed %>%
group_by(block) %>%
summarise(
mean_C = mean(RT),
sd_C = sd(RT)
)
d_sum_stats_participants <-
full_join(
d_sum_stats_participants,
d_sum_stats_conditions,
by = "block"
) %>%
mutate(
outlier_P = abs(mean_P - mean_C) > 2 * sd_C
)
# show outlier participants
d_sum_stats_participants %>% filter(outlier_P == 1) %>% show()
```
```
## # A tibble: 1 × 6
## # Groups: submission_id [1]
## submission_id block mean_P mean_C sd_C outlier_P
## <dbl> <ord> <dbl> <dbl> <dbl> <lgl>
## 1 8505 discrimination 1078. 518. 185. TRUE
```
When plotting the data for this condition and this participant, we see that the high overall mean is not caused by just a single outlier, but by several trials that took longer than 1 second.
```
mc_data_preprocessed %>%
semi_join(
d_sum_stats_participants %>% filter(outlier_P == 1),
by = c("submission_id")
) %>%
ggplot(aes(x = trial_number, y = RT)) +
geom_point()
```
We are then going to exclude this participant’s entire data from all subsequent analysis:[93](#fn93)
```
# exclude all data from participants flagged as outliers
mc_data_cleaned <- mc_data_preprocessed %>%
  filter(!(submission_id %in%
             (d_sum_stats_participants %>% filter(outlier_P) %>% pull(submission_id))))
```
#### D.1\.3\.2 Cleaning by\-trial
Our rule for excluding data from individual trials is:
> From the remaining data, we then remove any individual trial \\(Y\\) if the RT of \\(Y\\) is more than 2 standard deviations away from the mean of experimental condition \\(C\\) (where \\(C\\) is the condition of \\(Y\\), of course). We also remove all trials with reaction times below 100ms.
The following code implements this:
```
# mark individual trials as outliers
mc_data_cleaned <- mc_data_cleaned %>%
full_join(
d_sum_stats_conditions,
by = "block"
) %>%
mutate(
trial_type = case_when(
abs(RT - mean_C) > 2 * sd_C ~ "too far from mean",
RT < 100 ~ "< 100ms",
TRUE ~ "acceptable"
) %>% factor(levels = c("acceptable", "< 100ms", "too far from mean")),
    trial = row_number()  # running index over the remaining rows
)
# visualize outlier trials
mc_data_cleaned %>%
ggplot(aes(x = trial, y = RT, color = trial_type)) +
geom_point(alpha = 0.4) + facet_grid(~block) +
geom_point(alpha = 0.9, data = filter(mc_data_cleaned, trial_type != "acceptable"))
```
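Before excluding anything, the flagged trials can be tallied by reason; this is a quick sketch added here, not part of the original analysis:
```
# sketch: count trials flagged for exclusion, by reason
mc_data_cleaned %>%
  filter(trial_type != "acceptable") %>%
  count(trial_type)
```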
So, we remove 63 individual trials.
```
mc_data_cleaned <- mc_data_cleaned %>%
filter(trial_type == "acceptable")
```
### D.1\.4 Exploration: summary stats \& plots
What’s the distribution of `total_time_spent`, i.e., the time each participant took to complete the whole study?
```
mc_data_cleaned %>%
select(submission_id, total_time_spent) %>%
unique() %>%
ggplot(aes(x = total_time_spent)) +
geom_histogram()
```
There are two participants who took noticeably longer than all the others, but we need not necessarily be concerned about this, because it is not unusual for participants of online experiments to open the experiment and wait before actually starting.
Here are summary statistics for the reaction time measures for each condition (\= block).
```
mc_sum_stats_blocks_cleaned <- mc_data_cleaned %>%
group_by(block) %>%
nest() %>%
summarise(
CIs = map(data, function(d) bootstrapped_CI(d$RT))
) %>%
unnest(CIs)
mc_sum_stats_blocks_cleaned
```
```
## # A tibble: 3 × 4
## block lower mean upper
## <ord> <dbl> <dbl> <dbl>
## 1 reaction 296. 300. 303.
## 2 goNoGo 420. 427. 434.
## 3 discrimination 481. 488. 495.
```
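The helper `bootstrapped_CI` is provided by the `aida` package. As a rough indication of what such a helper computes, here is a minimal sketch of a bootstrapped 95% confidence interval of the mean; this is our own illustration, not the package’s actual implementation:
```
# hypothetical sketch of a bootstrapped 95% CI of the mean (not aida's implementation)
bootstrapped_CI_sketch <- function(x, n_resamples = 1000) {
  # resample the data with replacement and record the mean of each resample
  boot_means <- map_dbl(1:n_resamples, ~ mean(sample(x, size = length(x), replace = TRUE)))
  tibble(
    lower = quantile(boot_means, 0.025),
    mean  = mean(x),
    upper = quantile(boot_means, 0.975)
  )
}
```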
And a plot of the summary:
```
mc_sum_stats_blocks_cleaned %>%
ggplot(aes(x = block, y = mean, fill = block)) +
geom_col() +
geom_errorbar(aes(ymin = lower, ymax = upper), size = 0.3, width = 0.2 ) +
ylab("mean reaction time") + xlab("") +
scale_fill_manual(values = project_colors) +
theme(legend.position = "none")
```
We can also plot the data in a manner that is more revealing of the distribution of measurements in each condition:
```
mc_data_cleaned %>%
ggplot(aes(x = RT, color = block, fill = block)) +
geom_density(alpha = 0.3)
```
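As a final descriptive check of the ordering predicted in D.1.1.2, one could compute by\-participant mean RTs per block and ask how many participants show the predicted pattern. The following is a sketch added here, not part of the original analysis, and its numeric output is not shown:
```
# sketch: proportion of participants whose mean RTs follow reaction < goNoGo < discrimination
mc_data_cleaned %>%
  group_by(submission_id, block) %>%
  summarise(mean_RT = mean(RT), .groups = "drop") %>%
  pivot_wider(names_from = block, values_from = mean_RT) %>%
  summarise(proportion_predicted_order = mean(reaction < goNoGo & goNoGo < discrimination))
```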
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/app-93-data-sets-simon-task.html |
D.2 Simon Task
--------------
---
The [Simon task](https://en.wikipedia.org/wiki/Simon_effect) is a well\-established experimental paradigm designed to study how different properties of a stimulus might interfere during information processing or decision making.
Concretely, the original Simon task investigates whether responses are faster and more accurate when the stimulus to respond to occurs in the same relative location (e.g., right on the screen) as the response button required by that stimulus (e.g., pressing the button `p` on the keyboard).
### D.2\.1 Experiment
You can try out the experiment for yourself [here](https://icnp-exp3.netlify.app/).
#### D.2\.1\.1 Participants
A total of 213 participants took part in an online version of a Simon task.
Participants were students of Cognitive Science at the University of Osnabrück, taking part in courses “Introduction to Cognitive (Neuro\-)Psychology” or “Experimental Psychology Lab Practice” in the summer term of 2019\.
#### D.2\.1\.2 Materials \& Design
Each trial started by showing a fixation cross for 200 ms in the center of the screen. Then, one of two geometrical shapes was shown for 500 ms. The **target shape** was either a blue square or a blue circle. The target shape appeared either on the left or the right of the screen. Each trial determined uniformly at random which shape (square or circle) to show as the target and where on the screen to display it (left or right). Participants were instructed to press the keys `q` (left of keyboard) or `p` (right of keyboard) to identify the kind of shape on the screen. The shape\-key allocation was determined uniformly at random once for each participant at the start of the experiment and remained constant throughout. For example, a participant may have been asked to press `q` for square and `p` for circle.
Trials were categorized as either ‘congruent’ or ‘incongruent’. They were congruent if the location of the stimulus was the same relative location as the response key (e.g., square on the right of the screen, and `p` key to be pressed for square) and incongruent if the stimulus was not in the same relative location as the response key (e.g., square on the right and `q` key to be pressed for square).
In each trial, if no key was pressed within 3 seconds after the appearance of the target shape, a message to please respond faster was displayed on the screen.
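For concreteness, here is a sketch of how the congruency classification could be derived from the raw data columns shown in the results section below. This is our own illustration under the assumption that the columns `p` and `q` store the shape assigned to the respective key; the raw data already contains a ready\-made `condition` column.
```
# hypothetical sketch: derive congruency from the key mapping and the stimulus position
# (assumes `p` / `q` hold the shape assigned to each key; `condition` is already provided)
aida::data_ST_raw %>%
  mutate(
    required_key = ifelse(target_object == p, "p", "q"),
    condition_derived = ifelse(
      (required_key == "p" & target_position == "right") |
        (required_key == "q" & target_position == "left"),
      "congruent", "incongruent"
    )
  ) %>%
  select(target_object, target_position, required_key, condition, condition_derived) %>%
  head()
```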
#### D.2\.1\.3 Procedure
Participants were first welcomed and made familiar with the experiment. They were told to optimize both speed and accuracy. They then practiced the task for 20 trials before starting the main task, which consisted of 100 trials. Finally, the experiment ended with a post\-test survey in which participants were asked for their student IDs and the class they were enrolled in. They were also able to leave any optional comments.
### D.2\.2 Hypotheses
We are interested in the following hypotheses:
#### D.2\.2\.1 Hypothesis 1: Reaction times
If stimulus location interferes with information processing, we expect that it should take longer to make correct responses in the incongruent condition than in the congruent condition. Schematically, our first hypothesis about decision speed is therefore:
\\\[
\\text{RT}\_{\\text{correct},\\ \\text{congruent}} \< \\text{RT}\_{\\text{correct},\\ \\text{incongruent}}
\\]
#### D.2\.2\.2 Hypothesis 2: Accuracy
If stimulus location interferes with information processing, we also expect to see more errors in the incongruent condition than in the congruent condition.
Schematically, our second hypothesis about decision accuracy is therefore:
\\\[
\\text{Accuracy}\_{\\text{correct},\\ \\text{congruent}} \> \\text{Accuracy}\_{\\text{correct},\\ \\text{incongruent}}
\\]
### D.2\.3 Results
#### D.2\.3\.1 Loading and inspecting the data
We load the data and show a summary of the variables stored in the tibble:
```
d <- aida::data_ST_raw
glimpse(d)
```
```
## Rows: 25,560
## Columns: 15
## $ submission_id <dbl> 7432, 7432, 7432, 7432, 7432, 7432, 7432, 7432, 7432, …
## $ RT <dbl> 1239, 938, 744, 528, 706, 547, 591, 652, 627, 485, 515…
## $ condition <chr> "incongruent", "incongruent", "incongruent", "incongru…
## $ correctness <chr> "correct", "correct", "correct", "correct", "correct",…
## $ class <chr> "Intro Cogn. Neuro-Psychology", "Intro Cogn. Neuro-Psy…
## $ experiment_id <dbl> 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52…
## $ key_pressed <chr> "q", "q", "q", "q", "p", "p", "q", "p", "q", "q", "q",…
## $ p <chr> "circle", "circle", "circle", "circle", "circle", "cir…
## $ pause <dbl> 1896, 1289, 1705, 2115, 2446, 2289, 2057, 2513, 1865, …
## $ q <chr> "square", "square", "square", "square", "square", "squ…
## $ target_object <chr> "square", "square", "square", "square", "circle", "cir…
## $ target_position <chr> "right", "right", "right", "right", "left", "right", "…
## $ timeSpent <dbl> 7.565417, 7.565417, 7.565417, 7.565417, 7.565417, 7.56…
## $ trial_number <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,…
## $ trial_type <chr> "practice", "practice", "practice", "practice", "pract…
```
The most important columns in this data set for our purposes are:
* `submission_id`: an ID identifying each participant
* `RT`: the reaction time for each trial
* `condition`: whether the trial was a congruent or an incongruent trial
* `correctness`: whether the answer in the current trial was correct or incorrect
* `trial_type`: whether the data is from a practice or a main test trial
#### D.2\.3\.2 Cleaning the data
We look at outlier\-y behavior at the level of individual participants first, then at the level of individual trials.
##### D.2\.3\.2\.1 Individual\-level error rates \& reaction times
It is conceivable that some participants did not take the task seriously. They may have just fooled around. We will therefore inspect each individual’s response patterns and reaction times. If participants appear to have “misbehaved”, we discard all of their data. (**CAVEAT:** Notice the researcher degrees of freedom in the decision of what counts as “misbehavior”! This is why choices like these are best committed to in advance, e.g., via pre\-registration!)
We can calculate the mean reaction times and the error rates for each participant.
```
d_individual_summary <- d %>%
filter(trial_type == "main") %>% # look at only data from main trials
group_by(submission_id) %>% # calculate the following for each individual
summarize(mean_RT = mean(RT),
error_rate = 1 - mean(ifelse(correctness == "correct", 1, 0)))
head(d_individual_summary)
```
```
## # A tibble: 6 × 3
## submission_id mean_RT error_rate
## <dbl> <dbl> <dbl>
## 1 7432 595. 0.0500
## 2 7433 458. 0.0400
## 3 7434 531. 0.0400
## 4 7435 433. 0.12
## 5 7436 748. 0.0600
## 6 7437 522. 0.12
```
Let’s plot this summary information:
```
d_individual_summary %>%
ggplot(aes(x = mean_RT, y = error_rate)) +
geom_point()
```
Here’s a crude way of flagging outlier participants:
```
d_individual_summary <- d_individual_summary %>%
mutate(outlier = case_when(mean_RT < 350 ~ TRUE,
mean_RT > 750 ~ TRUE,
error_rate > 0.5 ~ TRUE,
TRUE ~ FALSE))
d_individual_summary %>%
ggplot(aes(x = mean_RT, y = error_rate)) +
geom_point() +
geom_point(data = filter(d_individual_summary, outlier == TRUE),
color = "firebrick", shape = "square", size = 5)
```
We then clean the data set in a first step by removing all participants identified as outlier\-y:
```
d <- full_join(d, d_individual_summary, by = "submission_id") # merge the tibbles
d <- filter(d, outlier == FALSE)
message("We excluded ", sum(d_individual_summary$outlier), " participants for suspicious mean RTs and higher error rates.")
```
```
## We excluded 5 participants for suspicious mean RTs and higher error rates.
```
##### D.2\.3\.2\.2 Trial\-level reaction times
It is also conceivable that individual trials resulted in early accidental key presses or were interrupted in some way or another. We therefore look at the overall distribution of RTs and determine what to exclude. (Again, it is important that decisions of what to exclude should ideally be publicly preregistered before data analysis.)
Let’s first plot the overall distribution of RTs.
```
d %>% ggplot(aes(x = RT)) +
geom_histogram() +
geom_jitter(aes(x = RT, y = 1), alpha = 0.3, height = 300)
```
Some very long RTs make this graph rather uninformative.
Let’s therefore exclude all trials that lasted longer than 1 second and also all trials with reaction times under 100 ms.
```
message(
"We exclude ",
nrow(filter(d, RT < 100)) + nrow(filter(d, RT > 1000)),
" trials based on too fast or too slow RTs."
)
# exclude these trials
d <- filter(d, RT > 100 & RT < 1000)
```
Here’s the distribution of RTs after cleaning:
```
d %>% ggplot(aes(x = RT)) +
geom_histogram() +
geom_jitter(aes(x = RT, y = 1), alpha = 0.3, height = 300)
```
Finally, we discard the training trials:
```
d <- filter(d, trial_type == "main")
```
#### D.2\.3\.3 Hypothesis\-driven summary statistics
##### D.2\.3\.3\.1 Hypothesis 1: Reaction times
We are mostly interested in the influence of congruency on the reaction times in the trials where participants gave a correct answer. But here we also look at, for comparison, the reaction times for incorrect trials.
Here is a summary of the means and standard deviations for each condition:
```
d_sum <- d %>%
group_by(correctness, condition) %>%
summarize(mean_RT = mean(RT),
sd_RT = sd(RT))
d_sum
```
```
## # A tibble: 4 × 4
## # Groups: correctness [2]
## correctness condition mean_RT sd_RT
## <chr> <chr> <dbl> <dbl>
## 1 correct congruent 453. 99.6
## 2 correct incongruent 477. 85.1
## 3 incorrect congruent 462 97.6
## 4 incorrect incongruent 393. 78.1
```
Numerically, the reaction times for the correct\-congruent trials are indeed faster than for the correct\-incongruent trials.
Here’s a plot of the reaction times split up by whether the answer was correct and whether the trial was congruent or incongruent.
```
d %>% ggplot(aes(x = RT)) +
geom_jitter(aes(y = 0.0005), alpha = 0.1, height = 0.0005) +
geom_density(fill = "gray", alpha = 0.5) +
geom_vline(data = d_sum,
mapping = aes(xintercept = mean_RT),
color = "firebrick") +
facet_grid(condition ~ correctness)
```
##### D.2\.3\.3\.2 Hypothesis 2: Accuracy
Our second hypothesis is about the proportion of correct answers, comparing the congruent against the incongruent trials.
Here is a summary statistic for the accuracy in both conditions:
```
d %>% group_by(condition) %>%
summarize(acurracy = mean(correctness == "correct"))
```
```
## # A tibble: 2 × 2
## condition acurracy
## <chr> <dbl>
## 1 congruent 0.961
## 2 incongruent 0.923
```
Again, the hypothesis that accuracy is higher in the congruent trials seems to be borne out numerically.
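As an additional descriptive angle on Hypothesis 1, one could also compute a by\-participant congruency effect on correct\-trial reaction times. The following is a sketch added here, not part of the original analysis, and its numeric output is not shown:
```
# sketch: by-participant congruency effect (incongruent minus congruent mean RT, correct trials only)
d %>%
  filter(correctness == "correct") %>%
  group_by(submission_id, condition) %>%
  summarize(mean_RT = mean(RT), .groups = "drop") %>%
  pivot_wider(names_from = condition, values_from = mean_RT) %>%
  mutate(congruency_effect = incongruent - congruent) %>%
  summarize(
    mean_effect = mean(congruency_effect),
    proportion_positive = mean(congruency_effect > 0)
  )
```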
### D.2\.1 Experiment
You can try out the experiment for yourself [here](https://icnp-exp3.netlify.app/).
#### D.2\.1\.1 Participants
A total of 213 participants took part in an online version of a Simon task.
Participants were students of Cognitive Science at the University of Osnabrück, taking part in courses “Introduction to Cognitive (Neuro\-)Psychology” or “Experimental Psychology Lab Practice” in the summer term of 2019\.
#### D.2\.1\.2 Materials \& Design
Each trial started by showing a fixation cross for 200 ms in the center of the screen. Then, one of two geometrical shapes was shown for 500 ms. The **target shape** was either a blue square or a blue circle. The target shape appeared either on the left or right of the screen. Each trial determined uniformly at random which shape (square or circle) to show as target and where on the screen to display it (left or right). Participants were instructed to press keys `q` (left of keyboard) or `p` (right of keyboard) to identify the kind of shape on the screen. The shape\-key allocation happened initially, uniformly at random once for each participant and remained constant throughout the experiment. For example, a participant may have been asked to press `q` for square and `p` for circle.
Trials were categorized as either ‘congruent’ or ‘incongruent’. They were congruent if the location of the stimulus was the same relative location as the response key (e.g., square on the right of the screen, and `p` key to be pressed for square) and incongruent if the stimulus was not in the same relative location as the response key (e.g., square on the right and `q` key to be pressed for square).
In each trial, if no key was pressed within 3 seconds after the appearance of the target shape, a message to please respond faster was displayed on the screen.
#### D.2\.1\.3 Procedure
Participants were first welcomed and made familiar with the experiment. They were told to optimize both speed and accuracy. They then practiced the task for 20 trials before starting the main task, which consisted of 100 trials. Finally, the experiment ended with a post\-test survey in which participants were asked for their student IDs and the class they were enrolled in. They were also able to leave any optional comments.
#### D.2\.1\.1 Participants
A total of 213 participants took part in an online version of a Simon task.
Participants were students of Cognitive Science at the University of Osnabrück, taking part in courses “Introduction to Cognitive (Neuro\-)Psychology” or “Experimental Psychology Lab Practice” in the summer term of 2019\.
#### D.2\.1\.2 Materials \& Design
Each trial started by showing a fixation cross for 200 ms in the center of the screen. Then, one of two geometrical shapes was shown for 500 ms. The **target shape** was either a blue square or a blue circle. The target shape appeared either on the left or right of the screen. Each trial determined uniformly at random which shape (square or circle) to show as target and where on the screen to display it (left or right). Participants were instructed to press keys `q` (left of keyboard) or `p` (right of keyboard) to identify the kind of shape on the screen. The shape\-key allocation happened initially, uniformly at random once for each participant and remained constant throughout the experiment. For example, a participant may have been asked to press `q` for square and `p` for circle.
Trials were categorized as either ‘congruent’ or ‘incongruent’. They were congruent if the location of the stimulus was the same relative location as the response key (e.g., square on the right of the screen, and `p` key to be pressed for square) and incongruent if the stimulus was not in the same relative location as the response key (e.g., square on the right and `q` key to be pressed for square).
In each trial, if no key was pressed within 3 seconds after the appearance of the target shape, a message to please respond faster was displayed on the screen.
#### D.2\.1\.3 Procedure
Participants were first welcomed and made familiar with the experiment. They were told to optimize both speed and accuracy. They then practiced the task for 20 trials before starting the main task, which consisted of 100 trials. Finally, the experiment ended with a post\-test survey in which participants were asked for their student IDs and the class they were enrolled in. They were also able to leave any optional comments.
### D.2\.2 Hypotheses
We are interested in the following hypotheses:
#### D.2\.2\.1 Hypothesis 1: Reaction times
If stimulus location interferes with information processing, we expect that it should take longer to make correct responses in the incongruent condition than in the congruent condition. Schematically, our first hypothesis about decision speed is therefore:
\\\[
\\text{RT}\_{\\text{correct},\\ \\text{congruent}} \< \\text{RT}\_{\\text{correct},\\ \\text{incongruent}}
\\]
#### D.2\.2\.2 Hypothesis 2: Accuracy
If stimulus location interferes with information processing, we also expect to see more errors in the incongruent condition than in the congruent condition.
Schematically, our second hypothesis about decision accuracy is therefore:
\\\[
\\text{Accuracy}\_{\\text{correct},\\ \\text{congruent}} \> \\text{Accuracy}\_{\\text{correct},\\ \\text{incongruent}}
\\]
#### D.2\.2\.1 Hypothesis 1: Reaction times
If stimulus location interferes with information processing, we expect that it should take longer to make correct responses in the incongruent condition than in the congruent condition. Schematically, our first hypothesis about decision speed is therefore:
\\\[
\\text{RT}\_{\\text{correct},\\ \\text{congruent}} \< \\text{RT}\_{\\text{correct},\\ \\text{incongruent}}
\\]
#### D.2\.2\.2 Hypothesis 2: Accuracy
If stimulus location interferes with information processing, we also expect to see more errors in the incongruent condition than in the congruent condition.
Schematically, our second hypothesis about decision accuracy is therefore:
\\\[
\\text{Accuracy}\_{\\text{correct},\\ \\text{congruent}} \> \\text{Accuracy}\_{\\text{correct},\\ \\text{incongruent}}
\\]
### D.2\.3 Results
#### D.2\.3\.1 Loading and inspecting the data
We load the data and show a summary of the variables stored in the tibble:
```
d <- aida::data_ST_raw
glimpse(d)
```
```
## Rows: 25,560
## Columns: 15
## $ submission_id <dbl> 7432, 7432, 7432, 7432, 7432, 7432, 7432, 7432, 7432, …
## $ RT <dbl> 1239, 938, 744, 528, 706, 547, 591, 652, 627, 485, 515…
## $ condition <chr> "incongruent", "incongruent", "incongruent", "incongru…
## $ correctness <chr> "correct", "correct", "correct", "correct", "correct",…
## $ class <chr> "Intro Cogn. Neuro-Psychology", "Intro Cogn. Neuro-Psy…
## $ experiment_id <dbl> 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52…
## $ key_pressed <chr> "q", "q", "q", "q", "p", "p", "q", "p", "q", "q", "q",…
## $ p <chr> "circle", "circle", "circle", "circle", "circle", "cir…
## $ pause <dbl> 1896, 1289, 1705, 2115, 2446, 2289, 2057, 2513, 1865, …
## $ q <chr> "square", "square", "square", "square", "square", "squ…
## $ target_object <chr> "square", "square", "square", "square", "circle", "cir…
## $ target_position <chr> "right", "right", "right", "right", "left", "right", "…
## $ timeSpent <dbl> 7.565417, 7.565417, 7.565417, 7.565417, 7.565417, 7.56…
## $ trial_number <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,…
## $ trial_type <chr> "practice", "practice", "practice", "practice", "pract…
```
The most important columns in this data set for our purposes are:
* `submission_id`: an ID identifying each participant
* `RT`: the reaction time for each trial
* `condition`: whether the trial was a congruent or an incongruent trial
* `correctness`: whether the answer in the current trial was correct or incorrect
* `trial_type`: whether the data is from a practice or a main test trial
#### D.2\.3\.2 Cleaning the data
We look at outlier\-y behavior at the level of individual participants first, then at the level of individual trials.
##### D.2\.3\.2\.1 Individual\-level error rates \& reaction times
It is conceivable that some participants did not take the task seriously. They may have just fooled around. We will therefore inspect each individual’s response patterns and reaction times. If participants appear to have “misbehaved”, we discard all of their data. (**CAVEAT:** Notice the researcher degrees of freedom in the decision of what counts as “misbehavior”! It is therefore that choices like these are best committed to in advance, e.g., via pre\-registration!)
We can calculate the mean reaction times and the error rates for each participant.
```
d_individual_summary <- d %>%
filter(trial_type == "main") %>% # look at only data from main trials
group_by(submission_id) %>% # calculate the following for each individual
summarize(mean_RT = mean(RT),
error_rate = 1 - mean(ifelse(correctness == "correct", 1, 0)))
head(d_individual_summary)
```
```
## # A tibble: 6 × 3
## submission_id mean_RT error_rate
## <dbl> <dbl> <dbl>
## 1 7432 595. 0.0500
## 2 7433 458. 0.0400
## 3 7434 531. 0.0400
## 4 7435 433. 0.12
## 5 7436 748. 0.0600
## 6 7437 522. 0.12
```
Let’s plot this summary information:
```
d_individual_summary %>%
ggplot(aes(x = mean_RT, y = error_rate)) +
geom_point()
```
Here’s a crude way of branding outlier\-participants:
```
d_individual_summary <- d_individual_summary %>%
mutate(outlier = case_when(mean_RT < 350 ~ TRUE,
mean_RT > 750 ~ TRUE,
error_rate > 0.5 ~ TRUE,
TRUE ~ FALSE))
d_individual_summary %>%
ggplot(aes(x = mean_RT, y = error_rate)) +
geom_point() +
geom_point(data = filter(d_individual_summary, outlier == TRUE),
color = "firebrick", shape = "square", size = 5)
```
We then clean the data set in a first step by removing all participants identified as outlier\-y:
```
d <- full_join(d, d_individual_summary, by = "submission_id") # merge the tibbles
d <- filter(d, outlier == FALSE)
message("We excluded ", sum(d_individual_summary$outlier), " participants for suspicious mean RTs and higher error rates.")
```
```
## We excluded 5 participants for suspicious mean RTs and higher error rates.
```
##### D.2\.3\.2\.2 Trial\-level reaction times
It is also conceivable that individual trials resulted in early accidental key presses or were interrupted in some way or another. We therefore look at the overall distribution of RTs and determine what to exclude. (Again, it is important that decisions of what to exclude should ideally be publicly preregistered before data analysis.)
Let’s first plot the overall distribution of RTs.
```
d %>% ggplot(aes(x = RT)) +
geom_histogram() +
geom_jitter(aes(x = RT, y = 1), alpha = 0.3, height = 300)
```
Some very long RTs make this graph rather uninformative.
Let’s therefore exclude all trials that lasted longer than 1 second and also all trials with reaction times under 100 ms.
```
message(
"We exclude ",
nrow(filter(d, RT < 100)) + nrow(filter(d, RT > 1000)),
" trials based on too fast or too slow RTs."
)
# exclude these trials
d <- filter(d, RT > 100 & RT < 1000)
```
Here’s the distribution of RTs after cleaning:
```
d %>% ggplot(aes(x = RT)) +
geom_histogram() +
geom_jitter(aes(x = RT, y = 1), alpha = 0.3, height = 300)
```
Finally, we discard the training trials:
```
d <- filter(d, trial_type == "main")
```
#### D.2\.3\.3 Hypothesis\-driven summary statistics
##### D.2\.3\.3\.1 Hypothesis 1: Reaction times
We are mostly interested in the influence of congruency on the reaction times in the trials where participants gave a correct answer. But here we also look at, for comparison, the reaction times for incorrect trials.
Here is a summary of the means and standard deviations for each condition:
```
d_sum <- d %>%
group_by(correctness, condition) %>%
summarize(mean_RT = mean(RT),
sd_RT = sd(RT))
d_sum
```
```
## # A tibble: 4 × 4
## # Groups: correctness [2]
## correctness condition mean_RT sd_RT
## <chr> <chr> <dbl> <dbl>
## 1 correct congruent 453. 99.6
## 2 correct incongruent 477. 85.1
## 3 incorrect congruent 462 97.6
## 4 incorrect incongruent 393. 78.1
```
Numerically, the reaction times for the correct\-congruent trials are indeed faster than for the correct\-incongruent trials.
Here’s a plot of the reaction times split up by whether the answer was correct and whether the trial was congruent or incongruent.
```
d %>% ggplot(aes(x = RT)) +
geom_jitter(aes(y = 0.0005), alpha = 0.1, height = 0.0005) +
geom_density(fill = "gray", alpha = 0.5) +
geom_vline(data = d_sum,
mapping = aes(xintercept = mean_RT),
color = "firebrick") +
facet_grid(condition ~ correctness)
```
##### D.2\.3\.3\.2 Hypothesis 2: Accuracy
Our second hypothesis is about the proportion of correct answers, comparing the congruent against the incongruent trials.
Here is a summary statistic for the acurracy in both conditions:
```
d %>% group_by(condition) %>%
summarize(acurracy = mean(correctness == "correct"))
```
```
## # A tibble: 2 × 2
## condition acurracy
## <chr> <dbl>
## 1 congruent 0.961
## 2 incongruent 0.923
```
Again, numerically it seems that the hypothesis is borne out that accuracy is higher in the congruent trials.
#### D.2\.3\.1 Loading and inspecting the data
We load the data and show a summary of the variables stored in the tibble:
```
d <- aida::data_ST_raw
glimpse(d)
```
```
## Rows: 25,560
## Columns: 15
## $ submission_id <dbl> 7432, 7432, 7432, 7432, 7432, 7432, 7432, 7432, 7432, …
## $ RT <dbl> 1239, 938, 744, 528, 706, 547, 591, 652, 627, 485, 515…
## $ condition <chr> "incongruent", "incongruent", "incongruent", "incongru…
## $ correctness <chr> "correct", "correct", "correct", "correct", "correct",…
## $ class <chr> "Intro Cogn. Neuro-Psychology", "Intro Cogn. Neuro-Psy…
## $ experiment_id <dbl> 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52…
## $ key_pressed <chr> "q", "q", "q", "q", "p", "p", "q", "p", "q", "q", "q",…
## $ p <chr> "circle", "circle", "circle", "circle", "circle", "cir…
## $ pause <dbl> 1896, 1289, 1705, 2115, 2446, 2289, 2057, 2513, 1865, …
## $ q <chr> "square", "square", "square", "square", "square", "squ…
## $ target_object <chr> "square", "square", "square", "square", "circle", "cir…
## $ target_position <chr> "right", "right", "right", "right", "left", "right", "…
## $ timeSpent <dbl> 7.565417, 7.565417, 7.565417, 7.565417, 7.565417, 7.56…
## $ trial_number <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,…
## $ trial_type <chr> "practice", "practice", "practice", "practice", "pract…
```
The most important columns in this data set for our purposes are:
* `submission_id`: an ID identifying each participant
* `RT`: the reaction time for each trial
* `condition`: whether the trial was a congruent or an incongruent trial
* `correctness`: whether the answer in the current trial was correct or incorrect
* `trial_type`: whether the data is from a practice or a main test trial
#### D.2\.3\.2 Cleaning the data
We look at outlier\-y behavior at the level of individual participants first, then at the level of individual trials.
##### D.2\.3\.2\.1 Individual\-level error rates \& reaction times
It is conceivable that some participants did not take the task seriously. They may have just fooled around. We will therefore inspect each individual’s response patterns and reaction times. If participants appear to have “misbehaved”, we discard all of their data. (**CAVEAT:** Notice the researcher degrees of freedom in the decision of what counts as “misbehavior”! It is therefore that choices like these are best committed to in advance, e.g., via pre\-registration!)
We can calculate the mean reaction times and the error rates for each participant.
```
d_individual_summary <- d %>%
filter(trial_type == "main") %>% # look at only data from main trials
group_by(submission_id) %>% # calculate the following for each individual
summarize(mean_RT = mean(RT),
error_rate = 1 - mean(ifelse(correctness == "correct", 1, 0)))
head(d_individual_summary)
```
```
## # A tibble: 6 × 3
## submission_id mean_RT error_rate
## <dbl> <dbl> <dbl>
## 1 7432 595. 0.0500
## 2 7433 458. 0.0400
## 3 7434 531. 0.0400
## 4 7435 433. 0.12
## 5 7436 748. 0.0600
## 6 7437 522. 0.12
```
Let’s plot this summary information:
```
d_individual_summary %>%
ggplot(aes(x = mean_RT, y = error_rate)) +
geom_point()
```
Here’s a crude way of branding outlier\-participants:
```
d_individual_summary <- d_individual_summary %>%
mutate(outlier = case_when(mean_RT < 350 ~ TRUE,
mean_RT > 750 ~ TRUE,
error_rate > 0.5 ~ TRUE,
TRUE ~ FALSE))
d_individual_summary %>%
ggplot(aes(x = mean_RT, y = error_rate)) +
geom_point() +
geom_point(data = filter(d_individual_summary, outlier == TRUE),
color = "firebrick", shape = "square", size = 5)
```
We then clean the data set in a first step by removing all participants identified as outlier\-y:
```
d <- full_join(d, d_individual_summary, by = "submission_id") # merge the tibbles
d <- filter(d, outlier == FALSE)
message("We excluded ", sum(d_individual_summary$outlier), " participants for suspicious mean RTs and higher error rates.")
```
```
## We excluded 5 participants for suspicious mean RTs and higher error rates.
```
##### D.2\.3\.2\.2 Trial\-level reaction times
It is also conceivable that individual trials resulted in early accidental key presses or were interrupted in some way or another. We therefore look at the overall distribution of RTs and determine what to exclude. (Again, it is important that decisions of what to exclude should ideally be publicly preregistered before data analysis.)
Let’s first plot the overall distribution of RTs.
```
d %>% ggplot(aes(x = RT)) +
geom_histogram() +
geom_jitter(aes(x = RT, y = 1), alpha = 0.3, height = 300)
```
Some very long RTs make this graph rather uninformative.
Let’s therefore exclude all trials that lasted longer than 1 second and also all trials with reaction times under 100 ms.
```
message(
"We exclude ",
nrow(filter(d, RT < 100)) + nrow(filter(d, RT > 1000)),
" trials based on too fast or too slow RTs."
)
# exclude these trials
d <- filter(d, RT > 100 & RT < 1000)
```
Here’s the distribution of RTs after cleaning:
```
d %>% ggplot(aes(x = RT)) +
geom_histogram() +
geom_jitter(aes(x = RT, y = 1), alpha = 0.3, height = 300)
```
Finally, we discard the training trials:
```
d <- filter(d, trial_type == "main")
```
##### D.2\.3\.2\.1 Individual\-level error rates \& reaction times
It is conceivable that some participants did not take the task seriously. They may have just fooled around. We will therefore inspect each individual’s response patterns and reaction times. If participants appear to have “misbehaved”, we discard all of their data. (**CAVEAT:** Notice the researcher degrees of freedom in the decision of what counts as “misbehavior”! It is therefore that choices like these are best committed to in advance, e.g., via pre\-registration!)
We can calculate the mean reaction times and the error rates for each participant.
```
d_individual_summary <- d %>%
filter(trial_type == "main") %>% # look at only data from main trials
group_by(submission_id) %>% # calculate the following for each individual
summarize(mean_RT = mean(RT),
error_rate = 1 - mean(ifelse(correctness == "correct", 1, 0)))
head(d_individual_summary)
```
```
## # A tibble: 6 × 3
## submission_id mean_RT error_rate
## <dbl> <dbl> <dbl>
## 1 7432 595. 0.0500
## 2 7433 458. 0.0400
## 3 7434 531. 0.0400
## 4 7435 433. 0.12
## 5 7436 748. 0.0600
## 6 7437 522. 0.12
```
Let’s plot this summary information:
```
d_individual_summary %>%
ggplot(aes(x = mean_RT, y = error_rate)) +
geom_point()
```
Here’s a crude way of branding outlier\-participants:
```
d_individual_summary <- d_individual_summary %>%
mutate(outlier = case_when(mean_RT < 350 ~ TRUE,
mean_RT > 750 ~ TRUE,
error_rate > 0.5 ~ TRUE,
TRUE ~ FALSE))
d_individual_summary %>%
ggplot(aes(x = mean_RT, y = error_rate)) +
geom_point() +
geom_point(data = filter(d_individual_summary, outlier == TRUE),
color = "firebrick", shape = "square", size = 5)
```
We then clean the data set in a first step by removing all participants identified as outlier\-y:
```
d <- full_join(d, d_individual_summary, by = "submission_id") # merge the tibbles
d <- filter(d, outlier == FALSE)
message("We excluded ", sum(d_individual_summary$outlier), " participants for suspicious mean RTs and higher error rates.")
```
```
## We excluded 5 participants for suspicious mean RTs and higher error rates.
```
##### D.2\.3\.2\.2 Trial\-level reaction times
It is also conceivable that individual trials resulted in early accidental key presses or were interrupted in some way or another. We therefore look at the overall distribution of RTs and determine what to exclude. (Again, it is important that decisions of what to exclude should ideally be publicly preregistered before data analysis.)
Let’s first plot the overall distribution of RTs.
```
d %>% ggplot(aes(x = RT)) +
geom_histogram() +
geom_jitter(aes(x = RT, y = 1), alpha = 0.3, height = 300)
```
Some very long RTs make this graph rather uninformative.
Let’s therefore exclude all trials that lasted longer than 1 second and also all trials with reaction times under 100 ms.
```
message(
"We exclude ",
nrow(filter(d, RT < 100)) + nrow(filter(d, RT > 1000)),
" trials based on too fast or too slow RTs."
)
# exclude these trials
d <- filter(d, RT > 100 & RT < 1000)
```
Here’s the distribution of RTs after cleaning:
```
d %>% ggplot(aes(x = RT)) +
geom_histogram() +
geom_jitter(aes(x = RT, y = 1), alpha = 0.3, height = 300)
```
Finally, we discard the training trials:
```
d <- filter(d, trial_type == "main")
```
#### D.2\.3\.3 Hypothesis\-driven summary statistics
##### D.2\.3\.3\.1 Hypothesis 1: Reaction times
We are mostly interested in the influence of congruency on reaction times in the trials where participants gave a correct answer. For comparison, we also look at the reaction times of incorrect trials.
Here is a summary of the means and standard deviations for each condition:
```
d_sum <- d %>%
group_by(correctness, condition) %>%
summarize(mean_RT = mean(RT),
sd_RT = sd(RT))
d_sum
```
```
## # A tibble: 4 × 4
## # Groups: correctness [2]
## correctness condition mean_RT sd_RT
## <chr> <chr> <dbl> <dbl>
## 1 correct congruent 453. 99.6
## 2 correct incongruent 477. 85.1
## 3 incorrect congruent 462 97.6
## 4 incorrect incongruent 393. 78.1
```
Numerically, reaction times in the correct\-congruent trials are indeed shorter than in the correct\-incongruent trials.
Here’s a plot of the reaction times split up by whether the answer was correct and whether the trial was congruent or incongruent.
```
d %>% ggplot(aes(x = RT)) +
geom_jitter(aes(y = 0.0005), alpha = 0.1, height = 0.0005) +
geom_density(fill = "gray", alpha = 0.5) +
geom_vline(data = d_sum,
mapping = aes(xintercept = mean_RT),
color = "firebrick") +
facet_grid(condition ~ correctness)
```
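The size of the congruency effect on correct trials can also be boiled down to a single number, namely the difference between the two condition means. Here is a minimal sketch of one way to compute it from the cleaned data `d`:
```
# difference in mean RT between incongruent and congruent trials
# (correct responses only)
d %>%
  filter(correctness == "correct") %>%
  group_by(condition) %>%
  summarize(mean_RT = mean(RT)) %>%
  pivot_wider(names_from = condition, values_from = mean_RT) %>%
  mutate(congruency_effect = incongruent - congruent)
```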
##### D.2\.3\.3\.2 Hypothesis 2: Accuracy
Our second hypothesis is about the proportion of correct answers, comparing the congruent against the incongruent trials.
Here is a summary statistic for the accuracy in both conditions:
```
d %>% group_by(condition) %>%
  summarize(accuracy = mean(correctness == "correct"))
```
```
## # A tibble: 2 × 2
##   condition   accuracy
## <chr> <dbl>
## 1 congruent 0.961
## 2 incongruent 0.923
```
Again, the numbers point in the direction of the hypothesis: accuracy is higher in the congruent than in the incongruent trials.
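Since proportions can hide how many observations they are based on, it may also be useful to look at the raw counts behind these numbers. Here is a minimal sketch:
```
# raw counts of correct and incorrect responses per condition,
# together with the proportion they make up within each condition
d %>%
  dplyr::count(condition, correctness) %>%
  group_by(condition) %>%
  mutate(proportion = n / sum(n))
```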
D.3 King of France
------------------
### D.3\.1 Nature, origin and rationale of the data
A **presupposition** of a sentence is a piece of information that is necessary for the sentence to make sense, but which is not communicated explicitly. If I say “Jones chained my camel to a tree”, this sentence presupposes, somewhat incredibly, that I own a camel. If it is false that I own a camel, the sentence makes no sense. Yet, if I say it and you say: “I disagree”, you take issue with my claim about chaining, not about me owning a camel. In this sense, the presupposition is not part of the explicitly contributed content (it is “not at issue content”, as the linguists would say).
We here partially replicate a previous study by Abrusán and Szendröi ([2013](#ref-AbrusanSzendroi2013:Experimenting-w)) investigating how sentences with false presuppositions are perceived. The main question of interest for us is whether sentences with a false presupposition are rather regarded as true or rather as false. We therefore present participants with sentences (see below) and have them rate these as ‘true’ or ‘false’, a so\-called **truth\-value judgement task**, a common paradigm in experimental semantics and pragmatics. (The original study by Abrusán and Szendröi ([2013](#ref-AbrusanSzendroi2013:Experimenting-w)) also included a third option ‘cannot tell’, which we do not use since this data set is mainly used for toying around with binary choice data.)
Abrusán and Szendröi ([2013](#ref-AbrusanSzendroi2013:Experimenting-w)) presented their participants with 11 different types of sentences, of which we here only focus on five. Here are examples of the five conditions we test, using the corresponding condition numbers from the experiment by Abrusán and Szendröi ([2013](#ref-AbrusanSzendroi2013:Experimenting-w)).
**C0\.** The king of France is bald.
**C1\.** France has a king, and he is bald.
**C6\.** The King of France isn’t bald.
**C9\.** The King of France, he did not call Emmanuel Macron last night.
**C10\.** Emmanuel Macron, he did not call the King of France last night.
The presupposition in question is “France has a king”. C0 and C1 differ only with respect to whether this piece of information is presupposed (C0\) or explicitly asserted (C1\). The variants C0 and C6 differ only with respect to negation in the main (asserted) proposition. Finally, the contrast pair C9 and C10 is interesting because of a particular topic\-focus structure and the placement of negation. In C9, the topic is “the King of France”, which introduces the presupposition in question. In C10, the topic is “Emmanuel Macron”, but it introduces the presupposition under a negation.
Figure [D.1](app-93-data-sets-king-of-france.html#fig:App-93-04-Results-KoF-Original) shows the results reported by Abrusán and Szendröi ([2013](#ref-AbrusanSzendroi2013:Experimenting-w)).
Figure D.1: Results of Abrusán and Szendröi ([2013](#ref-AbrusanSzendroi2013:Experimenting-w)).
#### D.3\.1\.1 The experiment
##### D.3\.1\.1\.1 Participants
We obtained data from 97 participants via the online crowd\-sourcing platform [Prolific](https://prolific.co).[94](#fn94) All participants were native speakers of English.
##### D.3\.1\.1\.2 Material
The sentence material consisted of five vignettes. Here are the sentences that constitute “condition 1” of each of the five vignettes:
**V1\.** The King of France is bald.
**V2\.** The Emperor of Canada is fond of sushi.
**V3\.** The Pope’s wife is a lawyer.
**V4\.** The Belgian rainforest provides a habitat for many species.
**V5\.** The volcanoes of Germany dominate the landscape.
As every vignette occurred in each of the five conditions, there are a total of 25 critical sentences. Additionally, for each vignette, there is a “background check” sentence, which is intended to find out whether participants know whether the relevant presuppositions are true. The “background check” sentences are:
**BC1\.** France has a king.
**BC2\.** The Pope is currently not married.
**BC3\.** Canada is a democracy.
**BC4\.** Belgium has rainforests.
**BC5\.** Germany has volcanoes.
Finally, there are also 110 filler sentences, which do not have a presupposition, but also require common world knowledge for a correct answer. As each filler has an uncontroversially correct answer, these fillers also serve as a general attention check to probe into whether participants are reading the sentences carefully enough. Example filler sentences are:
**F1\.** William Shakespeare was a famous Italian painter in Rome.
**F2\.** There were two world wars in the 20th century.
##### D.3\.1\.1\.3 Procedure
Each experimental run started with five practice trials. These used five additional sentences, which were similar to the filler material, were the same for each participant, and were presented in random order.
The main part of the experiment presented each participant with five critical sentences, exactly one from each vignette and exactly one from each condition, allocated completely at random. Each participant also saw all of the five “background check” sentences. Each “background check” sentence was presented *after* the corresponding vignette’s critical sentence. All of these test trials were interspersed with 14 random filler sentences.
##### D.3\.1\.1\.4 Realization
The experiment was realized using [\_magpie](https://magpie-ea.github.io/magpie-site/index.html) and can be tried out [here](https://magpie-king-of-france.netlify.com).
#### D.3\.1\.2 Hypotheses
We will be interested in the following research questions:
* **H1**: The latent probability of “TRUE” judgements is higher in C0 (with presupposition) than in C1 (where the presupposition is part of the at\-issue / asserted content).
* **H2**: There is no difference in truth\-value judgements between C0 (the positive sentence) and C6 (the negative sentence).
* **H3**: The disposition towards “TRUE” judgements is lower for C9 (where the presupposition is topical) than for C10 (where the presupposition is not topical and occurs under negation).
### D.3\.2 Loading and preprocessing the data
First, load the data:
```
data_KoF_raw <- aida::data_KoF_raw
```
And then have a glimpse:
```
glimpse(data_KoF_raw)
```
```
## Rows: 2,813
## Columns: 16
## $ submission_id <dbl> 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, …
## $ RT <dbl> 8110, 35557, 3647, 16037, 11816, 6024, 4986, 13019, 538…
## $ age <dbl> 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57, 57,…
## $ comments <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
## $ item_version <chr> "none", "none", "none", "none", "none", "none", "none",…
## $ correct_answer <lgl> FALSE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE, FALSE, FAL…
## $ education <chr> "Graduated College", "Graduated College", "Graduated Co…
## $ gender <chr> "female", "female", "female", "female", "female", "fema…
## $ languages <chr> "English", "English", "English", "English", "English", …
## $ question <chr> "World War II was a global war that lasted from 1914 to…
## $ response <lgl> FALSE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE, FALSE, FAL…
## $ timeSpent <dbl> 39.48995, 39.48995, 39.48995, 39.48995, 39.48995, 39.48…
## $ trial_name <chr> "practice_trials", "practice_trials", "practice_trials"…
## $ trial_number <dbl> 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 1…
## $ trial_type <chr> "practice", "practice", "practice", "practice", "practi…
## $ vignette <chr> "undefined", "undefined", "undefined", "undefined", "un…
```
The most important variables in this data set are:
* `submission_id`: unique identifier for each participant
* `trial_type`: whether the trial was of the category `filler`, `main`, `practice` or `special`, where the latter encodes the “background checks”
* `item_version`: the condition to which the test sentence belongs (only given for trials of type `main` and `special`)
* `response`: the answer (“TRUE” or “FALSE”) on each trial
* `vignette`: the current item’s vignette number (applies only to trials of type `main` and `special`)
As the variable names used in the raw data are not ideal, we will pre\-process the raw data a bit for easier analysis.
```
data_KoF_processed <- data_KoF_raw %>%
# discard practice trials
filter(trial_type != "practice") %>%
mutate(
# add a 'condition' variable
condition = case_when(
trial_type == "special" ~ "background check",
trial_type == "main" ~ str_c("Condition ", item_version),
TRUE ~ "filler"
) %>%
factor(
ordered = T,
levels = c(str_c("Condition ", c(0, 1, 6, 9, 10)), "background check", "filler")
)
)
```
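As a quick sanity check of the newly created `condition` variable, we can count how many trials fall into each of its levels. Here is a minimal sketch:
```
# number of trials per level of the new 'condition' variable
data_KoF_processed %>%
  dplyr::count(condition)
```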
### D.3\.3 Cleaning the data
We clean the data in two consecutive steps:
1. Remove all data from any participant who got more than 50% of the answers to the filler material wrong.
2. Remove individual main trials if the corresponding “background check” question was answered wrongly.
#### D.3\.3\.1 Cleaning by\-participant
```
# look at error rates for filler sentences by subject
# mark every subject with < 0.5 proportion correct
subject_error_rate <- data_KoF_processed %>%
filter(trial_type == "filler") %>%
group_by(submission_id) %>%
summarise(
proportion_correct = mean(correct_answer == response),
outlier_subject = proportion_correct < 0.5
) %>%
arrange(proportion_correct)
```
Plot the results:
```
# plot by-subject error rates
subject_error_rate %>%
ggplot(aes(x = proportion_correct, color = outlier_subject, shape = outlier_subject)) +
geom_jitter(aes(y = ""), width = 0.001) +
xlab("Poportion of correct answers") + ylab("") +
ggtitle("Distribution of proportion of correct answers on filler trials") +
xlim(0, 1) +
scale_color_discrete(name = "Outlier") +
scale_shape_discrete(name = "Outlier")
```
Apply the cleaning step:
```
# add info about error rates and exclude outlier subject(s)
d_cleaned <-
full_join(data_KoF_processed, subject_error_rate, by = "submission_id") %>%
filter(outlier_subject == FALSE)
```
#### D.3\.3\.2 Cleaning by\-trial
```
# exclude every critical trial whose 'background' test question was answered wrongly
d_cleaned <-
d_cleaned %>%
# select only the 'background question' trials
filter(trial_type == "special") %>%
# is the background question answered correctly?
mutate(
background_correct = correct_answer == response
) %>%
# select only the relevant columns
select(submission_id, vignette, background_correct) %>%
# right join lines to original data set
right_join(d_cleaned, by = c("submission_id", "vignette")) %>%
# remove all special trials, as well as main trials with incorrect background check
filter(trial_type == "main" & background_correct == TRUE)
```
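To see how much data these two cleaning steps leave us with, one possibility is to count the remaining critical trials and participants. Here is a minimal sketch:
```
# how many critical trials and participants remain after cleaning?
d_cleaned %>%
  summarise(
    n_trials = n(),
    n_participants = n_distinct(submission_id)
  )
```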
### D.3\.4 Exploration: summary stats \& plots
Plot for ratings by condition:
```
d_cleaned %>%
# drop unused factor levels
droplevels() %>%
# get means and 95% bootstrapped CIs for each condition
group_by(condition) %>%
nest() %>%
summarise(
CIs = map(data, function(d) bootstrapped_CI(d$response == "TRUE"))
) %>%
unnest(CIs) %>%
# plot means and CIs
ggplot(aes(x = condition, y = mean, fill = condition)) +
geom_bar(stat = "identity") +
geom_errorbar(aes(ymin = lower, ymax = upper, width = 0.2)) +
ylim(0, 1) +
ylab("") + xlab("") + ggtitle("Proportion of 'TRUE' responses per condition") +
theme(legend.position = "none") +
scale_fill_manual(values = project_colors)
```
Plot for each condition \& vignette:
```
data_KoF_processed %>%
filter(trial_type == "main") %>%
droplevels() %>%
group_by(condition, vignette) %>%
nest() %>%
summarise(
CIs = map(data, function(d) bootstrapped_CI(d$response == "TRUE"))
) %>%
unnest(CIs) %>%
ggplot(aes(x = condition, y = mean, fill = vignette)) +
geom_bar(stat = "identity", position = "dodge2") +
geom_errorbar(
aes(ymin = lower, ymax = upper),
width = 0.3,
position = position_dodge(width = 0.9)
) +
ylim(0, 1) +
ylab("") + xlab("") + ggtitle("Proportion of 'TRUE' responses per condition & vignette")
```
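The plots can be complemented by the corresponding numbers. Here is a minimal sketch of the by-condition proportions of “TRUE” responses, i.e., the quantities that hypotheses H1, H2 and H3 are about:
```
# proportion of 'TRUE' responses per condition (cleaned data)
d_cleaned %>%
  droplevels() %>%
  group_by(condition) %>%
  summarise(proportion_true = mean(response == "TRUE"))
```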
D.4 Bio\-Logic Jazz\-Metal (and where to consume it)
----------------------------------------------------
### D.4\.1 Nature, origin and rationale of the data
This is a very short and non\-serious experiment that asks for just three binary decisions from each participant, namely their spontaneous preference for one of two presented options (biology vs. logic, jazz vs. metal, and mountains vs. beach). The data from this experiment will be analyzed and plotted. This is supposed to be a useful and hopefully entertaining self\-generated data set with which to practice making contingency tables and to apply binomial tests and fun stuff like that.
#### D.4\.1\.1 The experiment
##### D.4\.1\.1\.1 Participants
We obtained data from 102 participants, all of whom were students of a course based on this web\-book held in the winter term of 2019/2020 at the University of Osnabrück.
##### D.4\.1\.1\.2 Material
There were three critical trials (and nothing else). All trials had the same trailing question:
> If you have to choose between the following two options, which one do you prefer?
Each critical trial then presented two options as buttons, one of which had to be clicked.
1. Biology vs. Logic
2. Jazz vs. Metal
3. Mountains vs. Beach
##### D.4\.1\.1\.3 Procedure
Each participant saw all three critical trials (and no other trials) in random order.
##### D.4\.1\.1\.4 Realization
The experiment was realized using [\_magpie](https://magpie-ea.github.io/magpie-site/index.html) and can be tried out [here](https://magpie-bio-logical-jazz-metal.netlify.com).
#### D.4\.1\.2 Theoretical motivation \& hypotheses
This is a bogus experiment, and no sane person would advance a serious hypothesis about this. Except for the main author of this book, who conjectures that appreciators of Metal music like logic more than Jazz\-enthusiasts would (because Metal is cleaner and more mechanical, while Jazz is fuzzy and organic, obviously).[95](#fn95)
### D.4\.2 Loading and preprocessing the data
First, load the data:
```
data_BLJM_raw <- aida::data_BLJM_raw
```
Take a peek:
```
glimpse(data_BLJM_raw)
```
```
## Rows: 306
## Columns: 19
## $ submission_id <dbl> 379, 379, 379, 378, 378, 378, 377, 377, 377, 376, 376, 3…
## $ QUD <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ RT <dbl> 9230, 9330, 5248, 5570, 2896, 36236, 5906, 4767, 10427, …
## $ age <dbl> 30, 30, 30, 29, 29, 29, 20, 20, 20, 21, 21, 21, 23, 23, …
## $ comments <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ education <chr> "Graduated High School", "Graduated High School", "Gradu…
## $ endTime <dbl> 1.573751e+12, 1.573751e+12, 1.573751e+12, 1.573738e+12, …
## $ experiment_id <dbl> 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,…
## $ gender <chr> "male", "male", "male", "male", "male", "male", "female"…
## $ languages <chr> "German", "German", "German", "German", "German", "Germa…
## $ option1 <chr> "Mountains", "Biology", "Metal", "Metal", "Biology", "Mo…
## $ option2 <chr> "Beach", "Logic", "Jazz", "Jazz", "Logic", "Beach", "Bea…
## $ question <chr> "If you have to choose between the following two options…
## $ response <chr> "Beach", "Logic", "Metal", "Metal", "Logic", "Beach", "M…
## $ startDate <chr> "Thu Nov 14 2019 18:01:24 GMT+0100 (CET)", "Thu Nov 14 2…
## $ startTime <dbl> 1.573751e+12, 1.573751e+12, 1.573751e+12, 1.573738e+12, …
## $ timeSpent <dbl> 2.3601500, 2.3601500, 2.3601500, 2.1552667, 2.1552667, 2…
## $ trial_name <chr> "forced_choice", "forced_choice", "forced_choice", "forc…
## $ trial_number <dbl> 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1,…
```
The most important variables in this data set are:
* `submission_id`: unique identifier for each participant
* `option1` and `option2`: what the choice options were
* `response`: which of the two options was chosen
Notice that there is no convenient column indicating which of the three critical conditions we are dealing with, so we extract that information from the data given in columns `option1` and `option2`, while also discarding everything we will not need:[96](#fn96)
```
data_BLJM_processed <-
data_BLJM_raw %>%
mutate(
condition = str_c(str_sub(option2, 1, 1), str_sub(option1, 1, 1))
) %>%
select(submission_id, condition, response)
data_BLJM_processed
```
```
## # A tibble: 306 × 3
## submission_id condition response
## <dbl> <chr> <chr>
## 1 379 BM Beach
## 2 379 LB Logic
## 3 379 JM Metal
## 4 378 JM Metal
## 5 378 LB Logic
## 6 378 BM Beach
## 7 377 BM Mountains
## 8 377 LB Biology
## 9 377 JM Jazz
## 10 376 BM Beach
## # … with 296 more rows
```
### D.4\.3 Exploration: counts \& plots
We are interested in relevant counts of the original data, namely the number of times certain choices were made. First, let’s look at the overall choice rates in each condition:
```
data_BLJM_processed %>%
  # we use the function `count` from the `dplyr` package
dplyr::count(condition, response)
```
```
## # A tibble: 6 × 3
## condition response n
## <chr> <chr> <int>
## 1 BM Beach 44
## 2 BM Mountains 58
## 3 JM Jazz 64
## 4 JM Metal 38
## 5 LB Biology 58
## 6 LB Logic 44
```
Overall it seems that mountains are preferred over beaches, Jazz is preferred over Metal, and Biology is preferred over Logic.
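Since this data set is meant for practicing things like binomial tests, here is a minimal sketch of how one of these overall preferences could be probed, using the Jazz/Metal counts from the table above (the choice of test is just for illustration):
```
# two-sided binomial test of the null hypothesis that Jazz and Metal
# are chosen with equal probability (64 Jazz vs. 38 Metal choices)
binom.test(x = 64, n = 64 + 38, p = 0.5)
```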
The overall counts, however, do not tell us anything about any potentially interesting relationship between preferences. So, let’s have a closer look at the lecturer’s conjecture that a preference for logic tends to go with a stronger preference for metal than a preference for biology does. To check this, we need to look at different counts, namely the number of people who selected which music\-subject pair. We collect these counts in a variable called `BLJM_associated_counts`:
```
BLJM_associated_counts <- data_BLJM_processed %>%
select(submission_id, condition, response) %>%
pivot_wider(names_from = condition, values_from = response) %>%
select(-BM) %>%
dplyr::count(JM, LB)
BLJM_associated_counts
```
```
## # A tibble: 4 × 3
## JM LB n
## <chr> <chr> <int>
## 1 Jazz Biology 38
## 2 Jazz Logic 26
## 3 Metal Biology 20
## 4 Metal Logic 18
```
Notice that this representation is tidy, but not ideal for visual inspection. A more commonly seen format can be obtained by pivoting to a wider representation:
```
# visually attractive table representation
BLJM_associated_counts %>%
pivot_wider(names_from = LB, values_from = n)
```
```
## # A tibble: 2 × 3
## JM Biology Logic
## <chr> <int> <int>
## 1 Jazz 38 26
## 2 Metal 20 18
```
The tidy representation *is* ideal for plotting, though. Notice, however, that the code below plots proportions of choices, not raw counts:
```
BLJM_associated_counts %>%
ggplot(aes(x = LB, y = n/sum(n), color = JM, shape = JM, group = JM)) +
geom_point(size = 3) +
geom_line() +
labs(
title = "Proportion of choices of each music+subject pair",
x = "",
y = ""
)
```
The lecturer’s conjecture might be correct. This does look like there could be an interaction. While Jazz is preferred more generally, the preference for Jazz over Metal seems more pronounced for those participants who preferred Biology than for those who preferred Logic.
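One way to make this impression more precise, at least descriptively, is to compute the proportion of Metal choices separately for participants who chose Logic and for those who chose Biology. Here is a minimal sketch based on `BLJM_associated_counts`:
```
# proportion of Metal choices among Logic-choosers vs. Biology-choosers
BLJM_associated_counts %>%
  group_by(LB) %>%
  mutate(proportion = n / sum(n)) %>%
  filter(JM == "Metal")
```
With the counts from above, this works out to roughly 41% Metal choices among participants who preferred Logic, against roughly 34% among those who preferred Biology, which points in the direction of the lecturer’s conjecture.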
D.5 Avocado prices
------------------
### D.5\.1 Nature, origin and rationale of the data
This data set has been plucked from [Kaggle](https://www.kaggle.com). More information on the origin and composition of this data set can be found on [Kaggle’s website covering the avocado data](https://www.kaggle.com/neuromusic/avocado-prices). The data set includes information about the prices of (Hass) avocados and the amount sold (of different kinds) at different points in time. The data is originally from the Hass Avocado Board, where the data is described as follows:
> The \[data] represents weekly 2018 retail scan data for National retail volume (units) and price. Retail scan data comes directly from retailers’ cash registers based on actual retail sales of Hass avocados. Starting in 2013, the table below reflects an expanded, multi\-outlet retail data set. Multi\-outlet reporting includes an aggregation of the following channels: grocery, mass, club, drug, dollar and military. The Average Price (of avocados) in the table reflects a per unit (per avocado) cost, even when multiple units (avocados) are sold in bags. The Product Lookup codes (PLU’s) in the table are only for Hass avocados. Other varieties of avocados (e.g. greenskins) are not included in this table.
Columns of interest are:
* `Date`: date of the observation
* `AveragePrice`: average price of a single avocado
* `Total Volume`: total number of avocados sold
* `type`: whether the price/amount is for conventional or organic
* `4046`: total number of small avocados sold (PLU 4046\)
* `4225`: total number of medium avocados sold (PLU 4225\)
* `4770`: total number of large avocados sold (PLU 4770\)
### D.5\.2 Loading and preprocessing the data
We load the data into a variable named `avocado_data` but also immediately rename some of the columns to have more convenient handles:
```
avocado_data <- aida::data_avocado_raw %>%
# remove currently irrelevant columns
select(-X1, -contains("Bags"), -year, -region) %>%
# rename variables of interest for convenience
rename(
total_volume_sold = `Total Volume`,
average_price = `AveragePrice`,
small = '4046',
medium = '4225',
large = '4770',
)
```
We can then take a glimpse:
```
glimpse(avocado_data)
```
```
## Rows: 18,249
## Columns: 7
## $ Date <date> 2015-12-27, 2015-12-20, 2015-12-13, 2015-12-06, 201…
## $ average_price <dbl> 1.33, 1.35, 0.93, 1.08, 1.28, 1.26, 0.99, 0.98, 1.02…
## $ total_volume_sold <dbl> 64236.62, 54876.98, 118220.22, 78992.15, 51039.60, 5…
## $ small <dbl> 1036.74, 674.28, 794.70, 1132.00, 941.48, 1184.27, 1…
## $ medium <dbl> 54454.85, 44638.81, 109149.67, 71976.41, 43838.39, 4…
## $ large <dbl> 48.16, 58.33, 130.50, 72.58, 75.78, 43.61, 93.26, 80…
## $ type <chr> "conventional", "conventional", "conventional", "con…
```
The preprocessed version of the data is stored in `aida::data_avocado` for later reuse.
### D.5\.3 Summary statistics
We are interested in the following summary statistics for the variables `total_volume_sold` and `average_price` for the whole data and for each type of avocado separately:
* mean
* median
* variance
* the bootstrapped 95% confidence interval of the mean
To get these results we define a convenience function that calculates exactly these measures:
```
summary_stats_convenience_fct <- function(numeric_data_vector) {
bootstrap_results <- bootstrapped_CI(numeric_data_vector)
tibble(
CI_lower = bootstrap_results$lower,
mean = bootstrap_results$mean,
CI_upper = bootstrap_results$upper,
median = median(numeric_data_vector),
var = var(numeric_data_vector)
)
}
```
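As a quick check that the helper behaves as intended, it can be applied to a single numeric vector before we use it inside the pipeline below. Here is a minimal sketch (this assumes that the `bootstrapped_CI` helper from the book is available in the session):
```
# example call: summary statistics for the average price across all observations
summary_stats_convenience_fct(avocado_data$average_price)
```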
We then apply this function once for the whole data set and once for each type of avocado (conventional or organic). We do this using a nested tibble in order to record the joint output of the convenience function (so that we only need to calculate the bootstrapped 95% confidence interval twice).
```
# summary stats for the whole data taken together
avocado_sum_stats_total <- avocado_data %>%
select(type, average_price, total_volume_sold) %>%
pivot_longer(
cols = c(total_volume_sold, average_price),
names_to = 'variable',
values_to = 'value'
) %>%
group_by(variable) %>%
nest() %>%
summarise(
summary_stats = map(data, function(d) summary_stats_convenience_fct(d$value))
) %>%
unnest(summary_stats) %>%
mutate(type = "both_together") %>%
# reorder columns: moving `type` to second position
select(1, type, everything())
# summary stats for each type of avocado
avocado_sum_stats_by_type <- avocado_data %>%
select(type, average_price, total_volume_sold) %>%
pivot_longer(
cols = c(total_volume_sold, average_price),
names_to = 'variable',
values_to = 'value'
) %>%
group_by(type, variable) %>%
nest() %>%
summarise(
summary_stats = map(data, function(d) summary_stats_convenience_fct(d$value))
) %>%
unnest(summary_stats)
# joining the summary stats in a single tibble
avocado_sum_stats <-
full_join(avocado_sum_stats_total, avocado_sum_stats_by_type)
# inspect the results
avocado_sum_stats
```
```
## # A tibble: 6 × 7
## variable type CI_lower mean CI_upper median var
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 average_price both_together 1.40 1.41 1.41e0 1.37e0 1.62e- 1
## 2 total_volume_sold both_together 802101. 850644. 8.99e5 1.07e5 1.19e+13
## 3 average_price conventional 1.15 1.16 1.16e0 1.13e0 6.92e- 2
## 4 total_volume_sold conventional 1556392. 1653213. 1.75e6 4.08e5 2.25e+13
## 5 average_price organic 1.65 1.65 1.66e0 1.63e0 1.32e- 1
## 6 total_volume_sold organic 44920. 47811. 5.09e4 1.08e4 2.03e+10
```
### D.5\.4 Plots
Here are plots of the distributions of `average_price` for different types of avocados:
```
avocado_data %>%
ggplot(aes(x = average_price, fill = type)) +
geom_histogram(binwidth = 0.01) +
facet_wrap(type ~ ., ncol = 1) +
coord_flip() +
geom_point(
data = avocado_sum_stats_by_type %>% filter(variable == "average_price"),
aes(y = 0, x = mean)
) +
ylab('') +
xlab('Average price') +
theme(legend.position = "none")
```
Here is a scatter plot of the logarithm of `total_volume_sold` against `average_price`:
```
avocado_data %>%
ggplot(aes(x = log(total_volume_sold), y = average_price)) +
geom_point(color = "darkgray", alpha = 0.3) +
geom_smooth(color = "black", method = "lm") +
xlab('Logarithm of total volume sold') +
ylab('Average price') +
ggtitle("Avocado prices plotted against the (log) amount sold")
```
And another scatter plot, using a log\-scaled \\(x\\)\-axis and distinguishing different types of avocados:
```
# pipe data set into function `ggplot`
avocado_data %>%
# reverse factor level so that horizontal legend entries align with
# the majority of observations of each group in the plot
mutate(
type = fct_rev(type)
) %>%
# initialize the plot
ggplot(
# defined mapping
mapping = aes(
# which variable goes on the x-axis
x = total_volume_sold,
# which variable goes on the y-axis
y = average_price,
# which groups of variables to distinguish
group = type,
# color and fill to change by grouping variable
fill = type,
color = type
)
) +
# declare that we want a scatter plot
geom_point(
# set low opacity for each point
alpha = 0.1
) +
# add a linear model fit (for each group)
geom_smooth(
color = "black",
method = "lm"
) +
# change the default (normal) of x-axis to log-scale
scale_x_log10() +
# add dollar signs to y-axis labels
scale_y_continuous(labels = scales::dollar) +
# change axis labels and plot title & subtitle
labs(
x = 'Total volume sold (on a log scale)',
y = 'Average price',
title = "Avocado prices plotted against the amount sold per type",
subtitle = "With linear regression lines"
)
```
D.6 Annual average world surface temperature
--------------------------------------------
### D.6\.1 Nature, origin and rationale of the data
This data set has been downloaded from [Berkeley Earth](http://berkeleyearth.org/).[97](#fn97) More information on the origin and composition of this data set can be found [here](http://berkeleyearth.org/data-new/). Specifically, we use the “land only” time series data, based on the annual summary of monthly average temperatures. We have added the absolute average temperature to the data set used here. (Berkeley Earth only lists the “annual anomaly”, i.e., the deviation from a grand mean.)
Columns of interest are:
* `year`: year of the observation (1750\-2019\)
* `anomaly`: deviation from the grand mean of 1750\-1980, which equals 8\.61 degrees Celsius
* `uncertainty`: measure of uncertainty associated with the reported `anomaly`
* `avg_temp`: the annual average world surface temperature
### D.6\.2 Loading and preprocessing the data
We load the data into a variable named `data_temperature`:
```
data_temperature <- aida::data_WorldTemp
```
And inspect the first rows of data:
```
head(data_temperature)
```
```
## # A tibble: 6 × 4
## year anomaly uncertainty avg_temp
## <dbl> <dbl> <dbl> <dbl>
## 1 1750 -1.41 NA 7.20
## 2 1751 -1.52 NA 7.09
## 3 1753 -1.07 1.3 7.54
## 4 1754 -0.614 1.09 8.00
## 5 1755 -0.823 1.24 7.79
## 6 1756 -0.547 1.28 8.06
```
### D.6\.3 Hypothesis \& modeling approach
We care about whether the annual average temperature increased over time. We address this question with a simple linear regression model, in particular the relationship `avg_temp ~ year`. We are interested in whether the slope coefficient of that regression model is credibly/significantly bigger than zero.
Using a simple linear regression here is blatantly too simple a modeling approach, but it serves our purposes, and the drastic simplification should make you think about how and why exactly the linear regression model is conceptually inadequate for this data and this inference problem.
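As a minimal sketch of this idea using the frequentist tools from Chapter [12](Chap-04-01-simple-linear-regression.html#Chap-04-01-simple-linear-regression) (the object name `fit_temperature` is ours; the main text may instead use a Bayesian implementation of the same regression), one could fit the model and inspect the slope coefficient for `year` like so:
```
fit_temperature <- lm(avg_temp ~ year, data = data_temperature)
# the row for `year` contains the slope estimate, its standard error,
# and the test statistic for the null hypothesis of a zero slope
summary(fit_temperature)$coefficients
```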
### D.6\.4 Plotting
Here is a scatterplot of annual average temperature `avg_temp` against `year`. The straight line is the best linear predictor.
```
data_temperature %>%
ggplot(aes(x = year, y = avg_temp)) +
geom_point() +
geom_smooth(method = "lm") +
labs(
y = "temperature (degrees Celsius)",
title = "Annual average surface land temperature"
)
```
D.7 Murder data
---------------
### D.7\.1 Nature, origin and rationale of the data
The murder data set contains information about the relative number of murders in American cities.
It also contains further socio\-economic information, such as a city’s unemployment rate, and the percentage of inhabitants with a low income.
We use this data set just for illustration.
No further real\-world conclusions should be drawn from this, as the data should be treated as entirely fictitious.
```
murder_data <- aida::data_murder
```
We take a look at the data:
```
murder_data %>% head()
```
```
## # A tibble: 6 × 4
## murder_rate low_income unemployment population
## <dbl> <dbl> <dbl> <dbl>
## 1 11.2 16.5 6.2 587000
## 2 13.4 20.5 6.4 643000
## 3 40.7 26.3 9.3 635000
## 4 5.3 16.5 5.3 692000
## 5 24.8 19.2 7.3 1248000
## 6 12.7 16.5 5.9 643000
```
Each row in this data set shows data from a city. The information in the columns is:
* `murder_rate`: annual murder rate per million inhabitants
* `low_income`: percentage of inhabitants with a low income (however that is defined)
* `unemployment`: percentage of unemployed inhabitants
* `population`: number of inhabitants of a city
There is information for a total of 20 cities in this data set.
Here’s a convenient way of plotting every variable against every other one:
```
GGally::ggpairs(murder_data, title = "Murder rate data")
```
The diagonal of this graph shows the density curve of the data in each column. Scatter plots below the diagonal show pairs of values from two columns plotted against each other. The information above the diagonal gives the correlation score of each pair of variables.
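For instance, the correlation scores shown above the diagonal can also be computed directly (a quick sketch using base R’s `cor`, which reports Pearson correlations by default):
```
# pairwise Pearson correlations between all columns of the data set
murder_data %>%
  cor() %>%
  round(2)
```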
The “research question” of interest for this data set is which factors help predict a city’s `murder_rate`.
In other words, we want to know, for example, whether knowing a random city’s value for the variable `unemployment` will allow us to make better predictions about that city’s value for the variable `murder_rate`.
Chapter [12](Chap-04-01-simple-linear-regression.html#Chap-04-01-simple-linear-regression) uses this data set to specifically ask whether we can use information from variables like `unemployment` to predict `murder_rate` based on the assumption of a *linear relationship*.
It is important to stress here that asking for an epistemic / stochastic relationship of the form “Does \\(x\\) help to make better predictions about \\(y\\)?” does *not* relate to or presuppose a *causal relationship* between \\(x\\) and \\(y\\).
The variables \\(x\\) and \\(y\\) could both be effects of a common cause; knowing about \\(x\\) could then still carry information about \\(y\\), even though manipulating \\(x\\) by divine intervention would not change \\(y\\), and vice versa.
D.8 Politeness data
-------------------
### D.8\.1 Nature, origin and rationale of the data
The politeness data is borrowed from Winter and Grawunder ([2012](#ref-WinterGrawunder2012:The-Phonetic-Pr)).[98](#fn98)
The data set contains measurements of voice pitch obtained from a \\(2 \\times 2\\) factorial design, with factors `gender` and `context`.
The data is from Korean speakers.
Here is a glimpse of the data:
```
politeness_data <- aida::data_polite
glimpse(politeness_data)
```
```
## Rows: 83
## Columns: 5
## $ subject <chr> "F1", "F1", "F1", "F1", "F1", "F1", "F1", "F1", "F1", "F1", "…
## $ gender <chr> "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "F", "…
## $ sentence <chr> "S1", "S1", "S2", "S2", "S3", "S3", "S4", "S4", "S5", "S5", "…
## $ context <chr> "pol", "inf", "pol", "inf", "pol", "inf", "pol", "inf", "pol"…
## $ pitch <dbl> 213.3, 204.5, 285.1, 259.7, 203.9, 286.9, 250.8, 276.8, 231.9…
```
The variables contained here are:
* `subject`: an indicator for each experimental participant
* `gender`: an indicator of each participant’s gender (binary only)
* `sentence`: an indicator of the sentence spoken by the participant
* `context`: the main manipulation of whether the context was a “polite” or “informal” setting
* `pitch`: the measured voice pitch (presumably: average over the sentence spoken)
### D.8\.2 Hypotheses
The main research question of interest here is whether voice pitch is higher in “polite” contexts than in “informal”, and whether this effect is more or less present for male or female speakers.
### D.8\.3 Summary statistics
Here are the mean pitch values for the four relevant design cells:
```
## # A tibble: 4 × 3
## # Groups: gender [2]
## gender context mean_pitch
## <fct> <fct> <dbl>
## 1 M pol 133.
## 2 M inf 144.
## 3 F pol 233.
## 4 F inf 261.
```
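The code producing this table is not shown in the text; a minimal sketch that reproduces the cell means (up to row ordering and factor coding) could look like this:
```
politeness_data %>%
  group_by(gender, context) %>%
  summarise(mean_pitch = mean(pitch))
```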
### D.8\.4 Visualization
Here is a plot showing the distribution of pitch measures in each group (small semi\-transparent points), as well as the cell means (big solid points):
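The figure itself is not reproduced here; a sketch of code that could produce such a plot (all aesthetic choices in this sketch are our own) is:
```
politeness_data %>%
  ggplot(aes(x = context, y = pitch, color = gender)) +
  # raw observations as small semi-transparent points
  geom_point(alpha = 0.3, position = position_jitter(width = 0.1)) +
  # cell means as big solid points
  stat_summary(fun = mean, geom = "point", size = 4) +
  facet_wrap(~gender)
```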
E.1 Psychology’s replication crisis
-----------------------------------
What happens to a scientific discipline if it predominantly fails to replicate[99](#fn99) previous discoveries? This question frequently arose after a groundbreaking project revealed that psychology is facing a replication crisis. In 2011, the Open Science Collaboration ([2015](#ref-OpenScienceCollab2015)) launched a large\-scale project – the so\-called “Reproducibility Project” – in which they attempted 100 direct replications of experimental and correlational studies in psychology. The results are worrisome: 97% of the original studies reported statistically significant results, whereas the initiative could replicate only 36% of them.[100](#fn100) This low replicability rate, however, does not imply that about two\-thirds of the discoveries are wrong. It emphasizes that research outcomes should not be taken at face value but scrutinized by the scientific community. Most of all, the results show that scientists should take action to increase the replicability of their studies. This urgent need is further fueled by the finding that low replicability rates diminish the public’s trust (e.g., [Wingen, Berkessel, and Englich 2019](#ref-WingenBerkessel2019)) and, in the long run, might undermine the credibility of psychology as a science.
In order to know how to increase a study’s replicability, it is crucial to investigate what causes replications to fail. Essentially, failing to replicate the significant results of the original study has three roots: The original study yielded a false\-positive, the replication study yielded a false\-negative, or too divergent methodologies led to two different outcomes ([Open Science Collaboration 2015](#ref-OpenScienceCollab2015)). We focus here on false\-positives and diverging methodologies. We only briefly touch on false\-negatives in the replication study when we talk about low statistical power.
### E.1\.1 Publication bias, QRP’s, and false\-positives
Weighing evidence in favor of verifying preconceptions and beliefs rather than falsifying them is a cognitive bias (confirmation bias). This natural form of reasoning can be a considerable challenge in doing proper research, as all available information should be taken into account, not just the pieces consistent with prior beliefs. Confirmation bias also manifests itself in the tendency to see patterns and perceive meaning in the data when there is only noise (apophenia), and in the tendency to overestimate, after an event has occurred, how predictable it was, typically expressed as “I knew it all along!” (hindsight bias).
These biases further pave the way for a skewed incentive structure that prefers confirmation over inconclusiveness or contradiction. In psychological science, there is a vast prevalence of publications that report significant (\\(p \< 0\.05\\)) and novel findings in contrast to null\-results (e.g., [Sterling 1959](#ref-Sterling1959)) or replication studies (e.g., [Makel, Plucker, and Hegarty 2012](#ref-MakelPlucker2012)). This substantial publication bias towards positive and novel results may initially seem entirely plausible. Journals might want to publish flashy headlines that catch the reader’s attention rather than “wasting” resources for studies that remain inconclusive. Furthermore, scientific articles that report significant outcomes are more likely to be cited ([Duyx et al. 2017](#ref-DuyxUrlings2017)) and thus may increase the journal’s impact factor (JIF). Replication studies might not be incentivized because they are considered tedious and redundant. Why publish results that don’t make new contributions to science?[101](#fn101)
Publication bias operates at the expense of replicability and thus the reliability of science. The pressure to generate significant results can further fuel the researcher’s bias ([Fanelli 2010](#ref-Fanelli2010)). Strengthened cognitive biases towards the desired positive result could therefore lead researchers to draw false conclusions. To cope with this “Publish or Perish” mindset, researchers may increasingly engage in **questionable research practices** (QRP’s) as a way of somehow obtaining a \\(p\\)\-value less than the significance level \\(\\alpha\\). QRP’s fall into the grey area of research and might be the norm in psychological science. Commonly, researchers “\\(p\\)\-hack” their way to a significant \\(p\\)\-value by analyzing the data in multiple ways, exploiting the flexibility in data collection and data analysis (researcher degrees of freedom). This exploratory behavior is frequently followed by selective reporting of what “worked”, so\-called cherry\-picking. Such \\(p\\)\-hacking also takes the form of unreported omission of statistical outliers and conditions, post hoc decisions to analyze a subgroup, or post hoc changes to the statistical analyses. Furthermore, researchers round reported \\(p\\)\-values so that results appear to cross the significance threshold (e.g., .054 becomes .05\), they stop collecting data as soon as the desired \\(p\\)\-value of under .05 pops up, or they present exploratory hypotheses[102](#fn102) as being confirmatory (HARKing, Hypothesizing After the Results are Known).
Diederik Stapel, a former professor of social psychology at Tilburg University, writes in his book *Faking Science: A True Story of Academic Fraud* about his scientific misconduct. He shows how easy it is to not just fool the scientific community (in a discipline where transparency is not common practice) but also oneself:
> I did a lot of experiments, but not all of them worked. \[…] But when I really believed in something \[…] I found it hard to give up, and tried one more time. If it seemed logical, it must be true. \[…] You can always make another couple of little adjustments to improve the results. \[…] I ran some extra statistical analyses looking for a pattern that would tell me what had gone wrong. When I found something strange, I changed the experiment and ran it again, until it worked ([Stapel 2014, 100–101](#ref-Stapel2014)).
If the publication of a long\-standing study determines whether researchers get funding or a job, it is perfectly understandable why they consciously or subconsciously engage in such practices. However, exploiting researcher degrees of freedom by engaging in QRP’s poses a significant threat to the validity of scientific discoveries by blatantly inflating the probability of false\-positives. If the significance threshold is not corrected accordingly, many analyses are likely to be statistically significant just by chance, and reporting solely those that “worked” additionally paints a distorted picture of the confidence one can place in the finding.
**Example.** Let’s illustrate \\(p\\)\-hacking based on a [popular comic by xkcd](https://www.explainxkcd.com/wiki/index.php/882:_Significant). In the comic, two researchers investigate whether eating jelly beans causes acne. A \\(p\\)\-value larger than the conventional \\(\\alpha\\)\-threshold doesn’t allow them to reject the null hypothesis of no effect. Well, it must be one particular color that is associated with acne. The researchers now individually test the 20 different colors of jelly beans. Indeed, numerous tests later, they obtain the significant \\(p\\)\-value that they have probably been waiting for. The verdict: Green jelly beans are associated with acne! Of course, this finding leads to a big headline in the newspaper. The article reports that there is only a 5% chance that the finding is due to coincidence.
However, the probability that the finding is a fluke is about 13 times higher than anticipated and reported in the paper. Let’s check what happened here:
In the first experiment (without color distinctions), there was a \\(5\\%\\) chance of rejecting \\(H\_0\\), and consequently a \\(95\\%\\) chance of failing to reject \\(H\_0\\). Since the \\(\\alpha\\)\-level is the upper bound on a false\-positive outcome, the confidence in the finding reported in the newspaper would have been justified if the researchers had stuck with just one hypothesis test. However, by taking the 20 different colors into account, the probability of obtaining a non\-significant \\(p\\)\-value in each of the 20 tests dropped from \\(95\\%\\) to \\(0\.95^{20} \\approx 35\.85\\%\\), leaving room for a \\(64\.15\\%\\) chance that at least one test yielded a false\-positive. The probability of at least one false\-positive due to conducting multiple hypothesis tests on the same data set is called the *family\-wise error rate* (FWER). Formally, it can be calculated like so:
\\\[\\alpha\_{FWER} \= 1 \- (1 \- \\alpha)^n,\\]
where \\(\\alpha\\) denotes the significance level for each individual test, which is conventionally set to \\(\\alpha \= 0\.05\\), and \\(n\\) is the total number of hypothesis tests.
Conducting multiple tests on the same data set without correcting for the family\-wise error rate therefore makes it more likely that a study finds a statistically significant result merely by coincidence.
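To make this concrete, here is the jelly bean scenario expressed as a small R calculation of the formula above (variable names are ours):
```
alpha <- 0.05   # significance level of each individual test
n     <- 20     # number of jelly bean colors, i.e., number of hypothesis tests
# family-wise error rate: probability of at least one false-positive
1 - (1 - alpha)^n
```
This evaluates to roughly 0\.64, i.e., the \\(64\.15\\%\\) chance of at least one false\-positive discussed above.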
To investigate how prevalent QRP’s are in psychological science, Leslie John et al. ([John, Loewenstein, and Prelec 2012](#ref-JohnLoewenstein2012)) surveyed over 2000 psychologists regarding their engagement in QRP’s. They found that 66\.5% of the respondents admitted that they failed to report all dependent measures, 58% collected more data after seeing whether the results were significant, 50% selectively reported studies that “worked”, and 43\.4% excluded data after looking at the impact of doing so. Based on the self\-admission estimate, they derived a prevalence estimate of 100% for each mentioned QRP. These numbers once more reinforce the suspicion that QRP’s are the norm in psychology. Together with the fact that these practices can blatantly inflate the false\-positive rates, one might conclude that much of the psychological literature cannot be successfully replicated and thus *might* be wrong.
### E.1\.2 Low statistical power
Another factor that can account for unreliable discoveries in the scientific literature is the persistence of highly underpowered studies in psychology (e.g., [Cohen 1962](#ref-Cohen1962); [Sedlmeier and Gigerenzer 1989](#ref-SedlmeierGigerenzer1989); [Marjan, Dijk, and Wicherts 2012](#ref-BakkerVanDijk2012); [Szucs and Ioannidis 2017](#ref-SzucsIoannidis2017)). A study’s statistical power is the probability of correctly rejecting a false null hypothesis, i.e., the ideal in NHST. Defined as \\(1 − \\beta\\), power is directly related to the probability of encountering a false\-negative, meaning that low\-powered studies are less likely to reject \\(H\_0\\) when it is in fact false. Figure [E.1](app-94-replication-crisis.html#fig:ch-94-error-dists) shows the relationship between \\(\\alpha\\)\-errors and \\(\\beta\\)\-errors (slightly adapted from a previous figure in Chapter [16\.4](ch-03-04-hypothesis-significance-errors.html#ch-03-04-hypothesis-significance-errors)), as well as the power to correctly reject \\(H\_0\\).
Figure E.1: Relationship between power, \\(\\alpha\\) and \\(\\beta\\)\-errors.
It may be tempting to conclude that a statistically significant result of an underpowered study is “more convincing”. However, low statistical power also decreases the probability that a significant result reflects a true effect (that is, that the detected difference is really present in the population). This probability is referred to as the *Positive Predictive Value* (PPV). The PPV is defined as \\\[PPV \= \\frac{(1 \- \\beta) \\cdot R}{(1 \- \\beta) \\cdot R \+ \\alpha},\\] where \\(1 − \\beta\\) is the statistical power, \\(R\\) is the pre\-study odds (the odds of the prevalence of an effect before conducting the experiment), and \\(\\alpha\\) is the type I error. Choosing the conventional \\(\\alpha\\) of 5% and assuming \\(R\\) to be 25%, the PPV for a statistically significant result of a study with 80% power \- which is deemed acceptable \- is 0\.8\. If the power is reduced to 35%, the PPV is 0\.64\. A 64% chance that a discovered effect is true implies that there is a 36% chance that a false discovery was made. Therefore, low\-powered studies are more likely to obtain flawed and unreliable outcomes, which contribute to the poor replicability of discoveries in the scientific record.
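These numbers can be reproduced with a few lines of R (a quick sketch; the function name `PPV` and its argument names are ours, and `R` here denotes the pre\-study odds, not the programming language):
```
# positive predictive value as a function of power, pre-study odds R,
# and the type I error rate alpha
PPV <- function(power, R, alpha = 0.05) {
  (power * R) / (power * R + alpha)
}
PPV(power = 0.80, R = 0.25)   # 0.8
PPV(power = 0.35, R = 0.25)   # roughly 0.64
```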
Another consequence of underpowered studies is the overestimation of effect sizes[103](#fn103) and a higher probability of an effect size in the wrong direction. These errors are referred to as **Type M (Magnitude)** and **Type S (Sign) errors**, respectively ([Gelman and Carlin 2014](#ref-GelmanCarlin2014)). If, for example, the true effect size (which is unknown in reality) between group \\(A\\) and \\(B\\) is 20 ms, finding a significant effect size of 50 ms would overestimate the true effect size by a factor of 2\.5\. If we observe an effect size of \-50 ms, we would even wrongly assume that group \\(B\\) performs faster than group \\(A\\).
The statistical power, as well as Type S and Type M error rates can be easily estimated by simulation. Recall the example from Chapter [16\.6\.3](ch-03-05-hypothesis-testing-tests.html#ch-03-05-hypothesis-testing-t-test), where we investigated whether the distribution of IQ’s from a sample of CogSci students could have been generated by an average IQ of 100, i.e., \\(H\_0: \\mu\_{CogSci} \= 100 \\ (\\delta \= 0\)\\). This time, we’re doing a two\-tailed \\(t\\)\-test, where the alternative hypothesis states that there is a difference in means without assigning relevance to the direction of the difference, i.e., \\(H\_a: \\mu\_{CogSci} \\neq 100 \\ (\\delta \\neq 0\)\\). We plan on recruiting 25 CogScis and set \\(\\alpha \= 0\.05\\).
Before we start with the real experiment, we check its power, Type S, and Type M error rates by hypothetically running the same experiment 10000 times in the WebPPL code box below. From the previous literature, we estimate the true effect size to be 1 (CogScis have an average IQ of 101\) and the standard deviation to be 15\. Since we want to know how many times we correctly reject the null hypothesis of equal means, we set the estimated true effect size as ground truth (`delta` variable) and sample from \\(Normal(100 \+ \\delta, 15\)\\). Variable `t_crit` stores the demarcation point for statistical significance in a \\(t\\)\-distribution with `n` \- 1 degrees of freedom.
We address the following questions:
* If the true effect size is 1, what is the probability of correctly rejecting the null hypothesis of equal means (\= an effect size of 0\)?
* If the true effect size is 1, what is the probability that a significant result will reflect a negative effect size (that is, an average IQ of less than 100\)?
* If the true effect size is 1 and we obtain a statistically significant result, what is the ratio of the estimated effect size to the true effect size (exaggeration ratio)?
Play around with the parameter values to get a feeling for how power can be increased. Remember to change the `t_crit` variable when choosing a different sample size. The critical \\(t\\)\-value can be easily looked up in a \\(t\\)\-table or computed with the respective quantile function in R (e.g., `qt(c(0.025,0.975), 13)` for a two\-sided test with \\(\\alpha \= 0\.05\\) and \\(n \= 14\\)). For \\(n \\geq 30\\), the \\(t\\)\-distribution approximates the standard normal distribution.
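As a rough analytic cross\-check of the simulation below, one could compute the critical value and the power directly in R (a sketch using base R; note that `power.t.test` assumes an exact one\-sample \\(t\\)\-test, whereas the simulation uses a slightly simplified test statistic, so the numbers will only approximately agree):
```
# critical t-value for a two-sided test with alpha = 0.05 and n = 25
qt(0.975, df = 24)
# analytic power for delta = 1, sd = 15, n = 25
power.t.test(n = 25, delta = 1, sd = 15, sig.level = 0.05,
             type = "one.sample", alternative = "two.sided")$power
```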
```
var delta = 1; // true effect size between mu_CogSci and mu_0
var sigma = 15; // standard deviation
var n = 25; // sample size per experiment
var t_crit = 2.063899; // +- critical t-value for n-1 degrees of freedom
var n_sim = 10000; // number of simulations (1 simulation = 1 experiment)
///fold:
var se = sigma/Math.sqrt(n); // standard error
// Effect size estimates:
/* In each simulation, drep(n_sim) takes n samples from a normal distribution
centered around the true mean and returns a vector of the effect sizes */
var drep = function(n_sim) {
if(n_sim == 1) {
var sample = repeat(n, function(){gaussian({mu: 100 + delta, sigma: sigma})});
var effect_size = [_.mean(sample)-100];
return effect_size;
} else {
var sample = repeat(n, function(){gaussian({mu: 100 + delta, sigma: sigma})});
var effect_size = [_.mean(sample)-100];
return effect_size.concat(drep(n_sim-1));
}
}
// vector of all effect sizes
var ES = drep(n_sim);
// Power:
/* get_signif(n_sim) takes the number of simulations and returns a vector of only
significant effect sizes. It calculates the absolute observed t-value, i.e.,
|effect size / standard error| and compares it with the critical t-value. If the
absolute observed t-value is greater than or equal to the critical t-value, the
difference in means is statistically significant.
Note that we take the absolute t-value since we're conducting a two-sided t-test and
therefore also have to consider values that are in the lower tail of the sampling
distribution. */
var get_signif = function(n_sim) {
if(n_sim == 1) {
var t_obs = Math.abs(ES[0]/se);
if(t_obs >= t_crit) {
return [ES[0]];
} else {
return [];
}
} else {
var t_obs = Math.abs(ES[n_sim-1]/se);
if(t_obs >= t_crit) {
return [ES[n_sim-1]].concat(get_signif(n_sim-1));
} else {
return [].concat(get_signif(n_sim-1));
}
}
}
// vector of only significant effect size estimates
var signif_ES = get_signif(n_sim);
// proportion of times where the null hypothesis would have been correctly rejected
var power = signif_ES.length/n_sim;
// Type S error:
/* get_neg_ES(n_sim) takes the number of simulations and returns a vector of
significant effect sizes that are negative. */
var get_neg_ES = function(n_sim){
if(n_sim == 1){
if(signif_ES[n_sim-1] < 0){
return [signif_ES[n_sim-1]];
} else {
return [];
}
} else {
if(signif_ES[n_sim-1] < 0){
return [signif_ES[n_sim-1]].concat(get_neg_ES(n_sim-1));
} else {
return [].concat(get_neg_ES(n_sim-1));
}
}
}
// vector of only significant effect size estimates that are negative
var neg_ES = get_neg_ES(n_sim);
/* If at least one simulation yielded statistical significance, calculate the
proportion of significant+negative effect sizes to all significant effect sizes. */
var type_s = function(){
if(signif_ES.length == 0) {
return "No significant effect size";
} else {
return neg_ES.length/signif_ES.length;
}
}
// proportion of significant results with a negative effect size
var s = type_s();
// Type M error:
// take the absolute value of all significant effect sizes
var absolute_ES = _.map(signif_ES, Math.abs);
/* If at least one simulation yielded statistical significance, calculate the
ratio of the average absolute effect size to the true effect size. */
var type_m = function(){
if(signif_ES.length == 0) {
return "No significant effect size";
} else {
return _.mean(absolute_ES)/delta;
}
}
// exaggeration ratio
var m = type_m();
// Results:
// print results
display("Power: " + power +
"\nType II error: " + (1-power) +
"\nType S error: " + s +
"\nType M error: " + m
);
// print interpretation depending on results
if(power != 0) {
if(_.round(m,1) == 1) {
display(
"Interpretation:\n" +
// Power
"If the true effect size is " + delta + ", there is a " +
_.round((power*100),1) + "% chance of detecting a significant \ndifference. "+
// Type S error
"If a significant difference is detected, there is a " +
_.round((s*100),1) +
"% chance \nthat the effect size estimate is negative. " +
// Type M error
"Further, the absolute estimated effect size is expected to be about the " +
"same as the true effect size of " + delta + "."
);
} else {
display(
"Interpretation:\n" +
// Power
"If the true effect size is " + delta + ", there is a " +
_.round((power*100),1) + "% chance of detecting a significant \ndifference. "+
// Type S error
"If a significant difference is detected, there is a " +
_.round((s*100),1) +
"% chance \nthat the effect size estimate is negative. " +
// Typ M error
"Further, the absolute estimated effect size is expected to be " +
_.round(m,1) + " times too high."
);
}
} else {
display(
"Interpretation:\n" +
// Power
"If the true effect size is " + delta + ", there is no " +
"chance of detecting a significant \ndifference at all. " +
"Since Type S and Type M errors are contingent on a \nsignificant " +
"result, there is no chance of having them in this case."
);
}
///
```
As the power of a replication study is typically based on the reported effect size of the original study, an inflated effect size also renders the power of the replication study much lower than anticipated. Hence, an underpowered original study may additionally increase the replication’s probability of encountering a Type II error, which may lead replicators to misinterpret the statistical significance of the original study as being a false\-positive. Besides being self\-defeating for the authors of the original study, this may compromise the veracity of the cumulative knowledge base that direct replications aim to build.
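To see why this matters for planning a replication, consider a small sketch with made\-up numbers, following the earlier example: the original study reports an (inflated) effect of 50 ms, the true effect is only 20 ms, and we assume a two\-sample design with a standard deviation of 100 ms (the standard deviation is our own assumption, purely for illustration):
```
# per-group sample size that yields 80% power for the *reported* effect size
n_planned <- power.t.test(delta = 50, sd = 100, sig.level = 0.05, power = 0.8)$n
n_planned   # about 64 per group
# actual power of the replication if the *true* effect size is only 20 ms
power.t.test(n = n_planned, delta = 20, sd = 100, sig.level = 0.05)$power
```
The replication’s actual power is then far below the planned 80% (roughly 20%), even though the study was planned diligently based on the published estimate.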
### E.1\.3 Lack of transparency
When it comes to the reporting of methodologies, there seem to be disagreements within the scientific community. In his *new Etiquette for Replication*, Daniel Kahneman ([2014](#ref-Kahneman2014)) called for new standards for conducting direct replication studies. Concretely, replicators should be obliged to consult the authors of the original study – otherwise, the replication should not be valid. According to him, the described methodologies in psychology papers are too vague to permit direct replications. He argues that “\[…] behavior is easily affected by seemingly irrelevant factors” and that paraphrasing experimental instructions discards crucial information, as “\[…] their wording and even the font in which they are printed are known to be significant”. Kahneman’s proposed rules for the interaction between authors and replicators led to heated discussions within the discipline. Chris Chambers ([2017, 52–55](#ref-Chambers2017)) refers to several responses to Kahneman, among others, from psychologist Andrew Wilson. In his blog post, titled *Psychology’s real replication problem: our Methods sections*, he takes an unequivocal stand on rejecting rigid standards for replication studies:
> If you can’t stand the replication heat, get out of the empirical kitchen because publishing your work means you think it’s ready for prime time, and if other people can’t make it work based on your published methods then that’s your problem and not theirs ([Wilson 2014](#ref-Wilson2014)).
Of course, there are also voices between those extremes that, even if they disagree with Kahneman’s proposal, agree that there are shortcomings in reporting methodologies. So why are method sections not as informative as they should be? A reason might be that the trend towards disregarding direct replications – due to lacking incentives – decreases the importance of detailed descriptions about the experimental design or data analyses. Furthermore, editors may favor brief method descriptions due to a lack of space in the paper. To minimize a variation in methodologies that might account for different outcomes, it is essential that journal policies change accordingly.
In addition to detailed reporting of methodologies, further materials such as scripts and raw data are known to facilitate replication efforts. In an attempt to retrieve data from previous studies, Hardwicke and Ioannidis ([2018](#ref-HardwickeIoannidis2018)) found that almost 40% of the authors did not respond to their request in any form, and almost 30% were unwilling to share their data. Reluctance to share data for reanalysis has been linked to weaker evidence and more errors in reporting statistical results ([Wicherts, Bakker, and Molenaar 2011](#ref-WichertsBakker2011)). This finding further intensifies the need for assessing the veracity of reported results by reanalyzing the raw data, i.e., checking their computational reproducibility. However, computational reproduction attempts can hardly be conducted without transparency on the part of the original study. To end this vicious circle and make sharing common practice, journals could establish mandatory sharing policies or provide incentives for open practices.
### E.1\.1 Publication bias, QRP’s, and false\-positives
Weighing evidence in favor of verifying preconceptions and beliefs rather than falsifying them is a cognitive bias (confirmation bias). This natural form of reasoning can be a considerable challenge in doing proper research, as the full amount of information should be taken into account and not just those consistent with prior beliefs. Confirmation bias also manifests itself in a tendency to see patterns in the data and perceive meaning, when there is only noise (apophenia) and overestimating the prediction of an event after it occurred, typically expressed as “I knew it all along!” (hindsight bias).
These biases further pave the way for a skewed incentive structure that prefers confirmation over inconclusiveness or contradiction. In psychological science, there is a vast prevalence of publications that report significant (\\(p \< 0\.05\\)) and novel findings in contrast to null\-results (e.g., [Sterling 1959](#ref-Sterling1959)) or replication studies (e.g., [Makel, Plucker, and Hegarty 2012](#ref-MakelPlucker2012)). This substantial publication bias towards positive and novel results may initially seem entirely plausible. Journals might want to publish flashy headlines that catch the reader’s attention rather than “wasting” resources for studies that remain inconclusive. Furthermore, scientific articles that report significant outcomes are more likely to be cited ([Duyx et al. 2017](#ref-DuyxUrlings2017)) and thus may increase the journal’s impact factor (JIF). Replication studies might not be incentivized because they are considered tedious and redundant. Why publish results that don’t make new contributions to science?[101](#fn101)
Publication bias operates at the expense of replicability and thus the reliability of science. The pressure of generating significant results can further fuel the researcher’s bias ([Fanelli 2010](#ref-Fanelli2010)). Increasing cognitive biases towards the desired positive result could therefore lead researchers to draw false conclusions. To cope with this “Publish or Perish” mindset, researchers may increasingly engage in **questionable research practices** (QRP’s) as a way of somehow obtaining a \\(p\\)\-value less than the significance level \\(\\alpha\\). QRP’s fall into the grey area of research and might be the norm in psychological science. Commonly researchers “\\(p\\)\-hack” their way to a significant \\(p\\)\-value by analyzing the data multiple ways through exploiting the flexibility in data collection and data analysis (researcher degrees of freedom). This exploratory behavior is frequently followed by selective reporting of what “worked”, so\-called cherry\-picking. Such \\(p\\)\-hacking also takes on the form of unreported omission of statistical outliers and conditions, post hoc decisions to analyze a subgroup, or to change statistical analyses. Furthermore, researchers make rounding errors by reporting their results to cross the significance threshold (.049 becomes .04\), they randomly stop collecting data when the desired \\(p\\)\-value of under .05 pops up, or they present exploratory hypotheses[102](#fn102) as being confirmatory (HARKing, Hypothesizing After the Results are Known).
Diederik Stapel, a former professor of social psychology at Tilburg University, writes in his book *Faking Science: A True Story of Academic Fraud* about his scientific misconduct. He shows how easy it is to not just fool the scientific community (in a discipline where transparency is not common practice) but also oneself:
> I did a lot of experiments, but not all of them worked. \[…] But when I really believed in something \[…] I found it hard to give up, and tried one more time. If it seemed logical, it must be true. \[…] You can always make another couple of little adjustments to improve the results. \[…] I ran some extra statistical analyses looking for a pattern that would tell me what had gone wrong. When I found something strange, I changed the experiment and ran it again, until it worked ([Stapel 2014, 100–101](#ref-Stapel2014)).
If the publication of a long\-standing study determines whether researchers get funding or a job, it is perfectly understandable why they consciously or subconsciously engage in such practices. However, exploiting researcher degrees of freedom by engaging in QRP’s poses a significant threat to the validity of the scientific discovery by blatantly inflating the probability of false\-positives. By not correcting the significance threshold accordingly, many analyses are likely to be statistically significant just by chance, and reporting solely those that “worked” additionally paints a distorted picture on the confidence of the finding.
**Example.** Let’s illustrate \\(p\\)\-hacking based on a [popular
comic by xkcd](https://www.explainxkcd.com/wiki/index.php/882:_Significant). In the comic, two researchers investigate whether
eating jelly beans causes acne. A \\(p\\)\-value larger than the conventional
\\(\\alpha\\)\-threshold doesn’t allow them
to reject the null hypothesis of no effect. Well, it must be one
particular color that is associated with acne. The researchers now
individually test the 20 different colors of jelly beans. Indeed,
numerous tests later, they obtain the significant \\(p\\)\-value that they have probably been
waiting for. The verdict: Green jelly beans are associated with acne! Of
course, this finding leads to a big headline in the newspaper. The
article reports that there is only a 5% chance that the finding is due
to coincidence.
However, the probability that the finding is a fluke is about 13
times higher than anticipated and reported in the paper. Let’s check
what happened here:
In the first experiment (without color distinctions), there was a
\\(5\\%\\) chance of rejecting \\(H\_0\\), and consequently a \\(95\\%\\) chance of failing to reject \\(H\_0\\). Since the \\(\\alpha\\)\-level is the upper bound on a
false\-positive outcome, the confidence in the finding reported in the
newspaper would have been true if the researchers had kept it with just
one hypothesis test. However, by taking the 20 different colors into
account, the probability of obtaining a non\-significant \\(p\\)\-value in each of the 20 tests dropped
from \\(95\\%\\) to \\(0\.95^{20} \\approx 35\.85\\%\\), leaving room
for a \\(64\.15\\%\\) chance that at least
one test yielded a false\-positive. The probability of at least one
false\-positive due to conducting multiple hypothesis tests on the same
data set is called the *family\-wise error rate* (FWER). Formally,
it can be calculated like so:
\\\[\\alpha\_{FWER} \= 1 \- (1 \-
\\alpha)^n,\\]
where \\(\\alpha\\) denotes the
significance level for each individual test, which is conventionally set
to \\(\\alpha \= 0\.05\\), and \\(n\\) the total number of hypothesis
tests.
Conducting multiple tests on the same data set and not correcting the
family\-wise error rate accordingly, therefore makes it more likely that
a study finds a statistically significant result by coincidence.
To investigate how prevalent QRP’s are in psychological science, Leslie John et al. ([John, Loewenstein, and Prelec 2012](#ref-JohnLoewenstein2012)) surveyed over 2000 psychologists regarding their engagement in QRP’s. They found that 66\.5% of the respondents admitted that they failed to report all dependent measures, 58% collected more data after seeing whether the results were significant, 50% selectively reported studies that “worked”, and 43\.4% excluded data after looking at the impact of doing so. Based on the self\-admission estimate, they derived a prevalence estimate of 100% for each mentioned QRP. These numbers once more reinforce the suspicion that QRP’s are the norm in psychology. Together with the fact that these practices can blatantly inflate the false\-positive rates, one might conclude that much of the psychological literature cannot be successfully replicated and thus *might* be wrong.
### E.1\.2 Low statistical power
Another factor that can account for unreliable discoveries in the scientific literature is the persistence of highly underpowered studies in psychology (e.g., [Cohen 1962](#ref-Cohen1962); [Sedlmeier and Gigerenzer 1989](#ref-SedlmeierGigerenzer1989); [Marjan, Dijk, and Wicherts 2012](#ref-BakkerVanDijk2012); [Szucs and Ioannidis 2017](#ref-SzucsIoannidis2017)). A study’s statistical power is the probability of correctly rejecting a false null hypothesis, i.e., the ideal in NHST. Defined as \\(1 − \\beta\\), power is directly related to the probability of encountering a false\-negative, meaning that low\-powered studies are less likely to reject \\(H\_0\\) when it is in fact false. Figure [E.1](app-94-replication-crisis.html#fig:ch-94-error-dists) shows the relationship between \\(\\alpha\\)\-errors and \\(\\beta\\)\-errors (slightly adapted from a previous figure in Chapter [16\.4](ch-03-04-hypothesis-significance-errors.html#ch-03-04-hypothesis-significance-errors)), as well as the power to correctly rejecting \\(H\_0\\).
Figure E.1: Relationship between power, \\(\\alpha\\) and \\(\\beta\\)\-errors.
It may be tempting to conclude that a statistically significant result of an underpowered study is “more convincing”. However, low statistical power also decreases the probability that a significant result reflects a true effect (that is, that the detected difference is really present in the population). This probability is referred to as the *Positive Predictive Value* (PPV). The PPV is defined as \\\[PPV \= \\frac{(1 \- \\beta) \\cdot R}{(1 \- \\beta) \\cdot R \+ \\alpha},\\] where \\(1 − \\beta\\) is the statistical power, \\(R\\) is the pre\-study odds (the odds of the prevalence of an effect before conducting the experiment), and \\(\\alpha\\) is the type I error. Choosing the conventional \\(\\alpha\\) of 5% and assuming \\(R\\) to be 25%, the PPV for a statistically significant result of a study with 80% power \- which is deemed acceptable \- is 0\.8\. If the power is reduced to 35%, the PPV is 0\.64\. A 64% chance that a discovered effect is true implies that there is a 36% chance that a false discovery was made. Therefore, low\-powered studies are more likely to obtain flawed and unreliable outcomes, which contribute to the poor replicability of discoveries in the scientific record.
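The PPV numbers above can be reproduced with a small helper function in R (a minimal sketch; the function name `ppv` is our own):

```
# positive predictive value, following the formula above
ppv <- function(power, R, alpha = 0.05) {
  (power * R) / (power * R + alpha)
}
ppv(power = 0.80, R = 0.25)  # 0.8
ppv(power = 0.35, R = 0.25)  # approx. 0.64
```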
Another consequence of underpowered studies is the overestimation of effect sizes[103](#fn103) and a higher probability of an effect size in the wrong direction. These errors are referred to as **Type M (Magnitude)** and **Type S (Sign) errors**, respectively ([Gelman and Carlin 2014](#ref-GelmanCarlin2014)). If, for example, the true effect size (which is unknown in reality) between group \\(A\\) and \\(B\\) is 20 ms, finding a significant effect size of 50 ms would overestimate the true effect size by a factor of 2\.5\. If we observe an effect size of \-50 ms, we would even wrongly assume that group \\(B\\) performs faster than group \\(A\\).
Statistical power, as well as Type S and Type M error rates, can easily be estimated by simulation. Recall the example from Chapter [16\.6\.3](ch-03-05-hypothesis-testing-tests.html#ch-03-05-hypothesis-testing-t-test), where we investigated whether the distribution of IQs from a sample of CogSci students could have been generated by an average IQ of 100, i.e., \\(H\_0: \\mu\_{CogSci} \= 100 \\ (\\delta \= 0\)\\). This time, we’re doing a two\-tailed \\(t\\)\-test, where the alternative hypothesis states that there is a difference in means without assigning relevance to the direction of the difference, i.e., \\(H\_a: \\mu\_{CogSci} \\neq 100 \\ (\\delta \\neq 0\)\\). We plan on recruiting 25 CogScis and set \\(\\alpha \= 0\.05\\).
Before we start with the real experiment, we check its power, Type S, and Type M error rates by hypothetically running the same experiment 10000 times in the WebPPL code box below. From the previous literature, we estimate the true effect size to be 1 (CogScis have an average IQ of 101\) and the standard deviation to be 15\. Since we want to know how many times we correctly reject the null hypothesis of equal means, we set the estimated true effect size as ground truth (`delta` variable) and sample from \\(Normal(100 \+ \\delta, 15\)\\). Variable `t_crit` stores the demarcation point for statistical significance in a \\(t\\)\-distribution with `n` \- 1 degrees of freedom.
We address the following questions:
* If the true effect size is 1, what is the probability of correctly rejecting the null hypothesis of equal means (\= an effect size of 0\)?
* If the true effect size is 1, what is the probability that a significant result will reflect a negative effect size (that is, an average IQ of less than 100\)?
* If the true effect size is 1 and we obtain a statistically significant result, what is the ratio of the estimated effect size to the true effect size (exaggeration ratio)?
Play around with the parameter values to get a feeling for how power can be increased. Remember to change the `t_crit` variable when choosing a different sample size. The critical \\(t\\)\-value can easily be looked up in a \\(t\\)\-table or computed with the respective quantile function in R (e.g., `qt(c(0.025,0.975), 13)` for a two\-sided test with \\(\\alpha \= 0\.05\\) and \\(n \= 14\\)). For \\(n \\geq 30\\), the \\(t\\)\-distribution approximates the standard normal distribution.
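For the setup used below (\\(n \= 25\\), \\(\\alpha \= 0\.05\\)), the critical value stored in `t_crit` can be obtained like so (a quick check in R, separate from the simulation code):

```
# critical t-values for a two-sided test with alpha = 0.05 and n = 25
qt(c(0.025, 0.975), df = 24)  # approx. -2.063899 and 2.063899, the t_crit used below
```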
```
var delta = 1; // true effect size between mu_CogSci and mu_0
var sigma = 15; // standard deviation
var n = 25; // sample size per experiment
var t_crit = 2.063899; // +- critical t-value for n-1 degrees of freedom
var n_sim = 10000; // number of simulations (1 simulation = 1 experiment)
///fold:
var se = sigma/Math.sqrt(n); // standard error
// Effect size estimates:
/* In each simulation, drep(n_sim) takes n samples from a normal distribution
centered around the true mean and returns a vector of the effect sizes */
var drep = function(n_sim) {
if(n_sim == 1) {
var sample = repeat(n, function(){gaussian({mu: 100 + delta, sigma: sigma})});
var effect_size = [_.mean(sample)-100];
return effect_size;
} else {
var sample = repeat(n, function(){gaussian({mu: 100 + delta, sigma: sigma})});
var effect_size = [_.mean(sample)-100];
return effect_size.concat(drep(n_sim-1));
}
}
// vector of all effect sizes
var ES = drep(n_sim);
// Power:
/* get_signif(n_sim) takes the number of simulations and returns a vector of only
significant effect sizes. It calculates the absolute observed t-value, i.e.,
|effect size / standard error| and compares it with the critical t-value. If the
absolute observed t-value is greater than or equal to the critical t-value, the
difference in means is statistically significant.
Note that we take the absolute t-value since we're conducting a two-sided t-test and
therefore also have to consider values that are in the lower tail of the sampling
distribution. */
var get_signif = function(n_sim) {
if(n_sim == 1) {
var t_obs = Math.abs(ES[0]/se);
if(t_obs >= t_crit) {
return [ES[0]];
} else {
return [];
}
} else {
var t_obs = Math.abs(ES[n_sim-1]/se);
if(t_obs >= t_crit) {
return [ES[n_sim-1]].concat(get_signif(n_sim-1));
} else {
return [].concat(get_signif(n_sim-1));
}
}
}
// vector of only significant effect size estimates
var signif_ES = get_signif(n_sim);
// proportion of times where the null hypothesis would have been correctly rejected
var power = signif_ES.length/n_sim;
// Type S error:
/* get_neg_ES(n_sim) takes the number of simulations and returns a vector of
significant effect sizes that are negative. */
var get_neg_ES = function(n_sim){
if(n_sim == 1){
if(signif_ES[n_sim-1] < 0){
return [signif_ES[n_sim-1]];
} else {
return [];
}
} else {
if(signif_ES[n_sim-1] < 0){
return [signif_ES[n_sim-1]].concat(get_neg_ES(n_sim-1));
} else {
return [].concat(get_neg_ES(n_sim-1));
}
}
}
// vector of only significant effect size estimates that are negative
var neg_ES = get_neg_ES(n_sim);
/* If at least one simulation yielded statistical significance, calculate the
proportion of significant+negative effect sizes to all significant effect sizes. */
var type_s = function(){
if(signif_ES.length == 0) {
return "No significant effect size";
} else {
return neg_ES.length/signif_ES.length;
}
}
// proportion of significant results with a negative effect size
var s = type_s();
// Type M error:
// take the absolute value of all significant effect sizes
var absolute_ES = _.map(signif_ES, Math.abs);
/* If at least one simulation yielded statistical significance, calculate the
ratio of the average absolute effect size to the true effect size. */
var type_m = function(){
if(signif_ES.length == 0) {
return "No significant effect size";
} else {
return _.mean(absolute_ES)/delta;
}
}
// exaggeration ratio
var m = type_m();
// Results:
// print results
display("Power: " + power +
"\nType II error: " + (1-power) +
"\nType S error: " + s +
"\nType M error: " + m
);
// print interpretation depending on results
if(power != 0) {
if(_.round(m,1) == 1) {
display(
"Interpretation:\n" +
// Power
"If the true effect size is " + delta + ", there is a " +
_.round((power*100),1) + "% chance of detecting a significant \ndifference. "+
// Type S error
"If a significant difference is detected, there is a " +
_.round((s*100),1) +
"% chance \nthat the effect size estimate is negative. " +
// Type M error
"Further, the absolute estimated effect size is expected to be about the " +
"same as the true effect size of " + delta + "."
);
} else {
display(
"Interpretation:\n" +
// Power
"If the true effect size is " + delta + ", there is a " +
_.round((power*100),1) + "% chance of detecting a significant \ndifference. "+
// Type S error
"If a significant difference is detected, there is a " +
_.round((s*100),1) +
"% chance \nthat the effect size estimate is negative. " +
// Typ M error
"Further, the absolute estimated effect size is expected to be " +
_.round(m,1) + " times too high."
);
}
} else {
display(
"Interpretation:\n" +
// Power
"If the true effect size is " + delta + ", there is no " +
"chance of detecting a significant \ndifference at all. " +
"Since Type S and Type M errors are contingent on a \nsignificant " +
"result, there is no chance of having them in this case."
);
}
///
```
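The same power, Type S, and Type M estimates can also be approximated with a compact simulation in R. The sketch below uses `t.test` with the sample standard deviation (rather than fixing \\(\\sigma\\) as in the code above), so the numbers will differ slightly; all object names are our own:

```
set.seed(1234)
delta <- 1      # assumed true effect size
sigma <- 15     # assumed standard deviation
n     <- 25     # sample size per experiment
n_sim <- 10000  # number of simulated experiments

sims <- replicate(n_sim, {
  x <- rnorm(n, mean = 100 + delta, sd = sigma)
  c(estimate = mean(x) - 100,          # observed effect size
    p = t.test(x, mu = 100)$p.value)   # two-sided one-sample t-test
})
est <- sims["estimate", ]
sig <- sims["p", ] < 0.05

c(power  = mean(sig),                   # proportion of significant results
  type_s = mean(est[sig] < 0),          # sign errors among significant results
  type_m = mean(abs(est[sig])) / delta) # exaggeration ratio
```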
As the power of a replication study is typically calculated from the effect size reported in the original study, an inflated effect size also renders the replication’s actual power much lower than anticipated. Hence, an underpowered original study may additionally increase the replication’s probability of encountering a Type II error, which may lead replicators to misinterpret the statistically significant result of the original study as a false\-positive. Besides being self\-defeating for the authors of the original study, this may compromise the veracity of the cumulative knowledge base that direct replications aim to build.
### E.1\.3 Lack of transparency
When it comes to the reporting of methodologies, there seem to be disagreements within the scientific community. In his *new Etiquette for Replication*, Daniel Kahneman ([2014](#ref-Kahneman2014)) called for new standards for conducting direct replication studies. Concretely, replicators should be obliged to consult the authors of the original study – otherwise, the replication should not be considered valid. According to him, the methodologies described in psychology papers are too vague to permit direct replications. He argues that “\[…] behavior is easily affected by seemingly irrelevant factors” and that paraphrasing experimental instructions discards crucial information, as “\[…] their wording and even the font in which they are printed are known to be significant”. Kahneman’s proposed rules for the interaction between authors and replicators led to heated discussions within the discipline. Chris Chambers ([2017, 52–55](#ref-Chambers2017)) refers to several responses to Kahneman, among them one from psychologist Andrew Wilson. In his blog post, titled *Psychology’s real replication problem: our Methods sections*, he takes an unequivocal stand on rejecting rigid standards for replication studies:
> If you can’t stand the replication heat, get out of the empirical kitchen because publishing your work means you think it’s ready for prime time, and if other people can’t make it work based on your published methods then that’s your problem and not theirs ([Wilson 2014](#ref-Wilson2014)).
Of course, there are also voices between those extremes that, even if they disagree with Kahneman’s proposal, agree that there are shortcomings in the reporting of methodologies. So why are method sections not as informative as they should be? A reason might be that the trend towards disregarding direct replications – due to a lack of incentives – decreases the importance of detailed descriptions of the experimental design or data analyses. Furthermore, editors may favor brief method descriptions due to a lack of space in the paper. To minimize variation in methodologies that might account for different outcomes, it is essential that journal policies change accordingly.
In addition to detailed reporting of methodologies, further materials such as scripts and raw data are known to facilitate replication efforts. In an attempt to retrieve data from previous studies, Hardwicke and Ioannidis ([2018](#ref-HardwickeIoannidis2018)) found that almost 40% of the authors did not respond to their request in any form, and almost another 30% were unwilling to share their data. The reluctance to share data for reanalysis has been related to weaker evidence and more errors in reporting statistical results ([Wicherts, Bakker, and Molenaar 2011](#ref-WichertsBakker2011)). This finding further intensifies the need to assess the veracity of reported results by reanalyzing the raw data, i.e., checking their computational reproducibility. However, computational replication attempts can hardly be conducted without transparency on the part of the original study. To end this vicious circle and make sharing common practice, journals could establish mandatory sharing policies or provide incentives for open practices.
Information and overview
========================
This is the online course book for the **Introduction to Exploratory Data Analysis with R** component of [APS 135](https://www.sheffield.ac.uk/aps/currentug/level1/aps135), a module taught by the Department of Animal and Plant Sciences at the University of Sheffield. You can view this book in any modern desktop browser, as well as on your phone or tablet device. Dylan Childs is running the course this year. Please [email him](mailto:[email protected]?Subject=APS%20135%20general%20query) if you spot any problems with the course book.
Aims
----
1. You will be introduced to the R ecosystem. R is widely used by biologists and environmental scientists to manipulate and clean data, produce high quality figures, and carry out statistical analyses. We will teach you some basic R programming so that you are in a position to address these needs in future if you need to. You don’t have to become an expert programmer to have a successful career in science, but knowing a little bit of programming has almost become a prerequisite for doing biological research in the 21st century.
2\. You will learn how to use R to carry out data manipulation and visualisation. Designing good experiments, collecting data, and analysing them are hard, and these activities often take a great deal of time and money. If you want to effectively communicate your hard\-won results, it is difficult to beat a good figure or diagram; conversely, if you want to be ignored, put everything into a boring table. R is really good at producing figures, so even if you end up just using it as a platform for visualising data, your time hasn’t been wasted.
3\. This book provides a foundation for learning statistics later on. If you want to be a biologist, particularly one involved in research, there is really no way to avoid using statistics. You might be able to dodge it by becoming a theoretician, but if that really is your primary interest you should probably be studying for a mathematics degree. For the rest of us who collect and analyse data, knowing about statistics is essential: it allows us to distinguish between real patterns (the “signal”) and chance variation (the “noise”).
### Topics
The topics we will cover in this book are divided into three sections (‘blocks’):
The **Getting Started with R** block introduces the R language and the RStudio environment for working with R. Our aim is to run through much of what you need to know to start using R productively. This includes some basic terminology, how to use R packages, and how to access help. We are not trying to turn you into an expert programmer—though you may be surprised to discover that you do enjoy it. However, by the end of this block you will know enough about R to begin learning the practical material that follows.
The **Data Wrangling with R** block aims to show you how to manipulate your data with R. If you regularly work with data, a large amount of time will inevitably be spent getting data into the format you need. The informal name for this is “data wrangling”. This is a topic that is often not taught to undergraduates, which is a shame because mastering the art of data wrangling can save a lot of time in the long run. We’ll learn how to get data into and out of R, make subsets of important variables, create new variables, summarise your data, and so on.
The **Exploratory Data Analysis** block is all about using R to help you understand and describe your data. The first step in any analysis after you have managed to wrangle the data into shape almost always involves some kind of visualisation or numerical summary. In this block you will learn how to do this using one of the best plotting systems in R: **ggplot2**. We will review the different kinds of variables you might have to analyse, discuss the different ways you can describe them, both visually and with numbers, and then learn how to explore relationships between variables.
How to use the book
-------------------
This book covers all the material you need to get to grips with this year, some of which we will not have time to cover in the practicals. **No one is expecting you to memorise everything in the book**. It is designed to serve as a resource for you to refer to over the next 2\-3 years (and beyond) as needed. However, you should aim to familiarise yourself with the content so that you know where to look for information or examples when needed. Try to understand the important concepts and then worry about the details.
What should you be doing as you read about each topic? There is a lot of R code embedded in the book, most of which you can just copy and paste into R. It’s a good idea to do this as you work through a topic. The best way to learn something is to use it actively, not just read about it. Experimenting with different code snippets by changing them is also a very good way to learn what they do. You can’t really break R and working out why something does or does not work will help you learn how to use it.
#### Text, instructions, and explanations
Normal text, instructions, explanations etc. are written in the same type as this document. We will tend to use bold for emphasis and italics to highlight specific technical terms when they are first introduced (italics will also crop up with Latin names from time to time, but this is unlikely to produce too much confusion!).
At various points in the text you will come across text in different coloured boxes. These are designed to highlight stand\-alone exercises or little pieces of supplementary information that might otherwise break the flow. There are three different kinds of boxes:
This is an **action** box. We use these when we want to say something important. For example, we might be summarising a key learning outcome or giving you instructions to do something. Do not ignore these boxes.
This is a **warning** box. These contain a warning or a common “gotcha”. There are a number of common pitfalls that trip up new users of R. These boxes aim to highlight these and show you how to avoid them. Pay attention to these.
This is an **information** box. These aim to offer a not\-too\-technical discussion of why something works the way it does. You do not have to understand everything in these boxes to use R but the information will help you understand how it works.
#### R code and output in this book
We will try to illustrate as many ideas as we can using snippets of real R code. Stand alone snippets will be formatted like this:
```
tmp <- 1
print(tmp)
```
```
## [1] 1
```
At this point it does not matter what the above actually means. You just need to understand how the formatting of R code in this book works. The lines that start with `##` show us what R prints to the screen after it evaluates an instruction and does whatever was asked of it, that is, they show the output. The lines that **do not** start with `##` show us the instructions, that is, they show us the input. So remember, the absence of `##` shows us what we are asking R to do, otherwise we are looking at something R prints in response to these instructions.
`This typeface` is used to distinguish R code within a sentence of text: e.g. “We use the `mutate` function to change or add new variables.”
A sequence of selections from an RStudio menu is indicated as follows: e.g. **File ▶ New File ▶ R Script**
File names referred to in general text are given in upper case in the normal typeface: e.g. MYFILE.CSV.
Getting help
------------
You will learn various ways of finding help about R in this book. If you find yourself stuck at any point, these should be your first port of call. If you are still struggling, try the following, in this order:
1\. Google is your friend. One of the nice consequences of R’s growing popularity is that the web is now packed full of useful tutorials and tips, many of which are aimed at beginners. One of the objectives of this book is to turn you into a self\-sufficient useR. Learning how to solve your own R\-related problems is an essential pre\-requisite for this to happen.
2\. If an hour of Googling does not solve a problem, make a note of your question and ask a TA for help in a practical session. The TAs have been chosen because they have a lot of experience with R. They’re a friendly bunch too, and they like talking about R! If they can’t answer your question then the instructor (Dylan) will be able to. He gets bored when nobody asks him questions in practicals…
3. We encourage you to try options 1 and 2 first. Nonetheless, on occasion Google may turn out not to be your friend and a post to the Facebook page might not elicit a satisfactory response. In these instances you are welcome to [email Dylan](mailto:[email protected]?Subject=APS%20135%20Question) with your query. You are unlikely to receive an answer at the weekend though.
Chapter 3 The Two Domains of Big Data Analytics
===============================================
As discussed in the previous chapter, data analytics in the context of Big Data
can be broadly categorized into two domains of statistical challenges:
techniques/estimators to address *big P* problems and techniques/estimators to
address *big N* problems. While this book predominantly focuses on how to handle
Big Data for applied economics and business analytics settings in the context of
*big N* problems, it is useful to set the stage for the following chapters with
two practical examples concerning both *big P* and *big N* methods.
3\.1 A practical *big P* problem
--------------------------------
Due to the abundance of digital data on all kinds of human activities, both
empirical economists and business analysts are increasingly confronted with
high\-dimensional data (many signals, many variables). While having a lot of
variables to work with sounds kind of like a good thing, it introduces new
problems in coming up with useful predictive models. In the extreme case of
having more variables in the model than observations, traditional methods cannot
be used at all. In the less extreme case of just having dozens or hundreds of
variables in a model (and plenty of observations), we risk “falsely” discovering
seemingly influential variables and consequently coming up with a model with
potentially very misleading out\-of\-sample predictions. So how can we find a
reasonable model?[4](#fn4)
Let us look at a real\-life example. Suppose you work for Google’s e\-commerce
platform (<https://shop.googlemerchandisestore.com>), and
you are in charge of predicting purchases (i.e., the probability that a user
actually buys something from your store in a given session) based on user and
browser\-session characteristics.[5](#fn5)
The dependent variable `purchase` is an indicator equal to `1` if the
corresponding shop visit leads to a purchase and equal to `0` otherwise. All
other variables contain information about the user and the session (Where is the
user located? Which browser is (s)he using? etc.).
### 3\.1\.1 Simple logistic regression (naive approach)
As the dependent variable is binary, we will first estimate a simple logit model, in which
we use the origins of the store visitors (how did a visitor end up in the shop?)
as explanatory variables. Note that many of these variables are categorical, and
the model matrix thus contains a lot of “dummies” (indicator variables). The
plan in this (intentionally naive) first approach is to simply add a lot of
explanatory variables to the model, run logit, and then select the variables
with statistically significant coefficient estimates as the final predictive
model. The following code snippet covers the import of the data, the creation of
the model matrix (with all the dummy\-variables), and the logit estimation.
```
# import/inspect data
ga <- read.csv("data/ga.csv")
head(ga[, c("source", "browser", "city", "purchase")])
```
```
## source browser city purchase
## 1 google Chrome San Jose 1
## 2 (direct) Edge Charlotte 1
## 3 (direct) Safari San Francisco 1
## 4 (direct) Safari Los Angeles 1
## 5 (direct) Chrome Chicago 1
## 6 (direct) Chrome Sunnyvale 1
```
```
# create model matrix (dummy vars)
mm <- cbind(ga$purchase,
            model.matrix(purchase~source, data=ga)[,-1])
mm_df <- as.data.frame(mm)
# clean variable names
names(mm_df) <- c("purchase",
gsub("source", "", names(mm_df)[-1]))
# run logit
model1 <- glm(purchase ~ .,
data=mm_df, family=binomial)
```
Now we can perform the t\-tests and filter out the “relevant” variables.
```
model1_sum <- summary(model1)
# select "significant" variables for final model
pvalues <- model1_sum$coefficients[,"Pr(>|z|)"]
vars <- names(pvalues[which(pvalues<0.05)][-1])
vars
```
```
## [1] "bing"
## [2] "dfa"
## [3] "docs.google.com"
## [4] "facebook.com"
## [5] "google"
## [6] "google.com"
## [7] "m.facebook.com"
## [8] "Partners"
## [9] "quora.com"
## [10] "siliconvalley.about.com"
## [11] "sites.google.com"
## [12] "t.co"
## [13] "youtube.com"
```
Finally, we re\-estimate our “final” model.
```
# specify and estimate the final model
finalmodel <- glm(purchase ~.,
data = mm_df[, c("purchase", vars)],
family = binomial)
```
The first problem with this approach is that we should not place too much trust in the coefficient t\-tests on the basis of which we selected the covariates. The first model contains 62 explanatory variables (plus the intercept). With that many hypothesis tests, we are quite likely to reject the null hypothesis of no predictive effect for at least one variable even when there actually is no such effect. In addition, this approach
turns out to be unstable. There might be correlation between some of the
variables in the original set, and adding/removing even one variable might
substantially affect the predictive power of the model (and the apparent
relevance of other variables). We can see this already from the summary of our
final model estimate (generated in the next code chunk). One of the apparently
relevant predictors (`dfa`) is not at all significant anymore in this
specification. Thus, we might be tempted to further change the model, which in
turn would again change the apparent relevance of other covariates, and so on.
```
summary(finalmodel)$coef[,c("Estimate", "Pr(>|z|)")]
```
```
## Estimate Pr(>|z|)
## (Intercept) -1.3831 0.000e+00
## bing -1.4647 4.416e-03
## dfa -0.1865 1.271e-01
## docs.google.com -2.0181 4.714e-02
## facebook.com -1.1663 3.873e-04
## google -1.0149 6.321e-168
## google.com -2.9607 3.193e-05
## m.facebook.com -3.6920 2.331e-04
## Partners -4.3747 3.942e-14
## quora.com -3.1277 1.869e-03
## siliconvalley.about.com -2.2456 1.242e-04
## sites.google.com -0.5968 1.356e-03
## t.co -2.0509 4.316e-03
## youtube.com -6.9935 4.197e-23
```
An alternative approach would be to estimate models based on all possible
combinations of covariates and then use that sequence of models to select the
final model based on some out\-of\-sample prediction performance measure. Clearly
such an approach would take a long time to compute.
### 3\.1\.2 Regularization: the lasso estimator
Instead, the *lasso estimator* provides a convenient and
efficient way to get a sequence of candidate models. The key idea behind lasso
is to penalize model complexity (the cause of instability) during the estimation
procedure.[6](#fn6) In a second step, we can then select a
final model from the sequence of candidate models based on, for example,
“out\-of\-sample” prediction in a k\-fold cross
validation.
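To make the penalty idea more concrete, the lasso problem for a given penalty level \\(\\lambda\\) can roughly be written (glossing over the exact deviance and weights used by `gamlr`) as
\\\[\\hat{\\beta}\_{lasso} \= \\arg\\min\_{\\beta} \\left( \-\\ell(\\beta) \+ \\lambda \\sum\_{j} |\\beta\_j| \\right),\\]
where \\(\\ell(\\beta)\\) is the log\-likelihood of the logit model. The \\(\\ell\_1\\) penalty shrinks small coefficients all the way to zero, and varying \\(\\lambda\\) traces out the sequence of candidate models.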
The `gamlr` package ([Taddy 2017](#ref-gamlr)) provides both parts of this procedure (lasso for the
sequence of candidate models, and selection of the “best” model based on k\-fold
cross\-validation).
```
# load packages
library(gamlr)
# create the model matrix
mm <- model.matrix(purchase~source, data = ga)
```
In cases with both many observations and many candidate explanatory variables,
the model matrix might get very large. Even simply generating the model matrix
might be a computational burden, as we might run out of memory to hold the model
matrix object. If this large model matrix is sparse (i.e., has a lot of `0` entries), there is a much more memory\-efficient way to store it in an R object. R provides ways to represent such sparse matrices in a compressed way in specialized R objects (such as `CsparseMatrix`, provided in the `Matrix` package by Bates, Maechler, and Jagan ([2022](#ref-Matrix))). Instead of containing all \\(n\\times m\\) cells of the matrix, these
objects only explicitly store the cells with non\-zero values and the
corresponding indices. Below, we make use of the high\-level
`sparse.model.matrix` function to generate the model matrix and store it in a
sparse matrix object. To illustrate the point of a more
memory\-efficient representation, we show that the traditional matrix object is
about 7\.5 times larger than the sparse version.
```
# create the sparse model matrix
mm_sparse <- sparse.model.matrix(purchase~source, data = ga)
# compare the object's sizes
as.numeric(object.size(mm)/object.size(mm_sparse))
```
```
## [1] 7.525
```
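To see what such a compressed representation looks like under the hood, consider a tiny toy example (purely illustrative; the values are made up and not part of the shop data):

```
# a 3x3 matrix with only three non-zero cells, stored in compressed form
library(Matrix)
toy <- sparseMatrix(i = c(1, 3, 2),      # row indices of the non-zero cells
                    j = c(1, 1, 3),      # column indices of the non-zero cells
                    x = c(2.5, 1.0, 7),  # the non-zero values themselves
                    dims = c(3, 3))
str(toy)  # only the indices (@i, @p) and values (@x) are stored, not all 9 cells
```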
Finally, we run the lasso estimation with k\-fold cross\-validation.
```
# run k-fold cross-validation lasso
cvpurchase <- cv.gamlr(mm_sparse, ga$purchase, family="binomial")
```
We can then illustrate the performance of the selected final model – for
example, with an ROC curve. Note that both the `coef` method
and the `predict` method for `gamlr` objects automatically select the ‘best’
model.
```
# load packages
library(PRROC)
# use "best" model for prediction
# (model selection based on average out-of-sample deviance)
pred <- predict(cvpurchase$gamlr, mm_sparse, type="response")
# compute tpr, fpr; plot ROC
comparison <- roc.curve(scores.class0 = pred,
weights.class0=ga$purchase,
curve=TRUE)
plot(comparison)
```
Hence, econometric techniques such as lasso help deal with *big P* problems by
providing reasonable ways to select a good predictive model (in other words,
decide which of the many variables should be included).
3\.2 A practical *big N* problem
--------------------------------
Big N problems are situations in which we know what type of model we want to use
but the *number of observations* is too big to run the estimation (the computer
crashes or slows down significantly). The simplest statistical solution to such
a problem is usually to just estimate the model based on a smaller sample.
However, we might not want to do that (e.g., because we need a big N for reasons of statistical power). As an illustration of how an alternative statistical procedure can speed up the analysis of big N datasets, we look at a procedure to estimate linear
models for situations where the classical OLS estimator is computationally too
demanding when analyzing large datasets, the *Uluru* algorithm
([Dhillon et al. 2013](#ref-dhillon_2013)).
### 3\.2\.1 OLS as a point of reference
Recall the OLS estimator in matrix notation, given the linear model \\(\\mathbf{y}\=\\mathbf{X}\\beta \+ \\epsilon\\):
\\\[\\hat{\\beta}\_{OLS} \= (\\mathbf{X}^\\intercal\\mathbf{X})^{\-1}\\mathbf{X}^{\\intercal}\\mathbf{y}.\\]
In order to compute \\(\\hat{\\beta}\_{OLS}\\), we have to compute
\\((\\mathbf{X}^\\intercal\\mathbf{X})^{\-1}\\), which implies a computationally
expensive matrix inversion.[7](#fn7) If our dataset is large,
\\(\\mathbf{X}\\) is large, and the inversion can take up a lot of computation time.
Moreover, the inversion and matrix multiplication to get \\(\\hat{\\beta}\_{OLS}\\)
needs a lot of memory. In practice, it might well be that the estimation of a
linear model via OLS with the standard approach in R (`lm()`) brings a computer
to its knees, as there is not enough memory available.
To further illustrate the point, we implement the OLS estimator in R.
```
beta_ols <-
function(X, y) {
# compute cross products and inverse
XXi <- solve(crossprod(X,X))
Xy <- crossprod(X, y)
return( XXi %*% Xy )
}
```
Now, we will test our OLS estimator function with a few (pseudo\-)random numbers
in a Monte Carlo study. First, we set the sample size parameters `n` (the number
of observations in our pseudo\-sample) and `p` (the number of variables
describing each of these observations) and initialize the dataset `X`.
```
# set parameter values
n <- 10000000
p <- 4
# generate sample based on Monte Carlo
# generate a design matrix (~ our 'dataset')
# with 4 variables and 10,000,000 observations
X <- matrix(rnorm(n*p, mean = 10), ncol = p)
# add column for intercept
X <- cbind(rep(1, n), X)
```
Now we define what the real linear model that we have in mind looks like and
compute the output `y` of this model, given the input `X`.[8](#fn8)
```
# MC model
y <- 2 + 1.5*X[,2] + 4*X[,3] - 3.5*X[,4] + 0.5*X[,5] + rnorm(n)
```
Finally, we test our `beta_ols` function.
```
# apply the OLS estimator
beta_ols(X, y)
```
```
## [,1]
## [1,] 1.9974
## [2,] 1.5001
## [3,] 3.9996
## [4,] -3.4994
## [5,] 0.4999
```
### 3\.2\.2 The *Uluru* algorithm as an alternative to OLS
Following Dhillon et al. ([2013](#ref-dhillon_2013)), we implement a procedure to compute
\\(\\hat{\\beta}\_{Uluru}\\):
\\\[\\hat{\\beta}\_{Uluru}\=\\hat{\\beta}\_{FS} \+ \\hat{\\beta}\_{correct}\\], where
\\\[\\hat{\\beta}\_{FS} \=
(\\mathbf{X}\_{subs}^\\intercal\\mathbf{X}\_{subs})^{\-1}\\mathbf{X}\_{subs}^{\\intercal}\\mathbf{y}\_{subs}\\],
and \\\[\\hat{\\beta}\_{correct}\= \\frac{n\_{subs}}{n\_{rem}} \\cdot
(\\mathbf{X}\_{subs}^\\intercal\\mathbf{X}\_{subs})^{\-1}
\\mathbf{X}\_{rem}^{\\intercal}\\mathbf{R}\_{rem}\\], and \\\[\\mathbf{R}\_{rem} \=
\\mathbf{y}\_{rem} \- \\mathbf{X}\_{rem} \\cdot \\hat{\\beta}\_{FS}\\].
The key idea behind this is that the computational bottleneck of the OLS
estimator, the cross product and matrix inversion,
\\((\\mathbf{X}^\\intercal\\mathbf{X})^{\-1}\\), is only computed on a sub\-sample
(\\(X\_{subs}\\), etc.), not the entire dataset. However, the remainder of the
dataset is also taken into consideration (in order to correct a bias arising
from the sub\-sampling). Again, we implement the estimator in R to further
illustrate this point.
```
beta_uluru <-
function(X_subs, y_subs, X_rem, y_rem) {
# compute beta_fs
#(this is simply OLS applied to the subsample)
XXi_subs <- solve(crossprod(X_subs, X_subs))
Xy_subs <- crossprod(X_subs, y_subs)
b_fs <- XXi_subs %*% Xy_subs
# compute \mathbf{R}_{rem}
R_rem <- y_rem - X_rem %*% b_fs
# compute \hat{\beta}_{correct}
b_correct <-
(nrow(X_subs)/(nrow(X_rem))) *
XXi_subs %*% crossprod(X_rem, R_rem)
# beta uluru
return(b_fs + b_correct)
}
```
We then test it with the same input as above:
```
# set size of sub-sample
n_subs <- 1000
# select sub-sample and remainder
n_obs <- nrow(X)
X_subs <- X[1L:n_subs,]
y_subs <- y[1L:n_subs]
X_rem <- X[(n_subs+1L):n_obs,]
y_rem <- y[(n_subs+1L):n_obs]
# apply the uluru estimator
beta_uluru(X_subs, y_subs, X_rem, y_rem)
```
```
## [,1]
## [1,] 2.0048
## [2,] 1.4997
## [3,] 3.9995
## [4,] -3.4993
## [5,] 0.4996
```
This looks quite good already. Let’s have a closer look with a little Monte
Carlo study. The aim of the simulation study is to visualize the difference
between the classical OLS approach and the *Uluru* algorithm with regard to bias
and time complexity if we increase the sub\-sample size in *Uluru*. For
simplicity, we only look at the first estimated coefficient \\(\\beta\_{1}\\).
```
# define sub-samples
n_subs_sizes <- seq(from = 1000, to = 500000, by=10000)
n_runs <- length(n_subs_sizes)
# compute uluru result, stop time
mc_results <- rep(NA, n_runs)
mc_times <- rep(NA, n_runs)
for (i in 1:n_runs) {
# set size of sub-sample
n_subs <- n_subs_sizes[i]
# select sub-sample and remainder
n_obs <- nrow(X)
X_subs <- X[1L:n_subs,]
y_subs <- y[1L:n_subs]
X_rem <- X[(n_subs+1L):n_obs,]
y_rem <- y[(n_subs+1L):n_obs]
mc_results[i] <- beta_uluru(X_subs,
y_subs,
X_rem,
y_rem)[2] # (1 is the intercept)
mc_times[i] <- system.time(beta_uluru(X_subs,
y_subs,
X_rem,
y_rem))[3]
}
# compute OLS results and OLS time
ols_time <- system.time(beta_ols(X, y))
ols_res <- beta_ols(X, y)[2]
```
Let’s visualize the comparison with OLS.
```
# load packages
library(ggplot2)
# prepare data to plot
plotdata <- data.frame(beta1 = mc_results,
time_elapsed = mc_times,
subs_size = n_subs_sizes)
```
First, let’s look at the time used to estimate the linear model.
```
ggplot(plotdata, aes(x = subs_size, y = time_elapsed)) +
geom_point(color="darkgreen") +
geom_hline(yintercept = ols_time[3],
color = "red",
linewidth = 1) +
theme_minimal() +
ylab("Time elapsed") +
xlab("Subsample size")
```
The horizontal red line indicates the computation time for estimation via OLS;
the green points indicate the computation time for the estimation via the
*Uluru* algorithm. Note that even for large sub\-samples, the computation time
is substantially lower than for OLS. Finally, let’s have a look at how close the results are to OLS.
```
ggplot(plotdata, aes(x = subs_size, y = beta1)) +
geom_hline(yintercept = ols_res,
color = "red",
linewidth = 1) +
geom_hline(yintercept = 1.5,
color = "green",
linewidth = 1) +
geom_point(color="darkgreen") +
theme_minimal() +
ylab("Estimated coefficient") +
xlab("Subsample size")
```
The horizontal red line indicates the size of the coefficient estimated via OLS. The horizontal green line indicates the size of the actual
coefficient. The green points indicate the size of the same coefficient
estimated by the *Uluru* algorithm for different sub\-sample sizes. Note that
even relatively small sub\-samples already deliver estimates very close to the
OLS estimates.
Taken together, the example illustrates that alternative statistical methods,
optimized for large amounts of data, can deliver results very close to
traditional approaches. Yet, they can deliver these results much more
efficiently.
3\.1 A practical *big P* problem
--------------------------------
Due to the abundance of digital data on all kinds of human activities, both
empirical economists and business analysts are increasingly confronted with
high\-dimensional data (many signals, many variables). While having a lot of
variables to work with sounds kind of like a good thing, it introduces new
problems in coming up with useful predictive models. In the extreme case of
having more variables in the model than observations, traditional methods cannot
be used at all. In the less extreme case of just having dozens or hundreds of
variables in a model (and plenty of observations), we risk “falsely” discovering
seemingly influential variables and consequently coming up with a model with
potentially very misleading out\-of\-sample predictions. So how can we find a
reasonable model?[4](#fn4)
Let us look at a real\-life example. Suppose you work for Google’s e\-commerce
platform (<https://shop.googlemerchandisestore.com>), and
you are in charge of predicting purchases (i.e., the probability that a user
actually buys something from your store in a given session) based on user and
browser\-session characteristics.[5](#fn5)
The dependent variable `purchase` is an indicator equal to `1` if the
corresponding shop visit leads to a purchase and equal to `0` otherwise. All
other variables contain information about the user and the session (Where is the
user located? Which browser is (s)he using? etc.).
### 3\.1\.1 Simple logistic regression (naive approach)
As the dependent variable is binary, we will first estimate a simple logit model, in which
we use the origins of the store visitors (how did a visitor end up in the shop?)
as explanatory variables. Note that many of these variables are categorical, and
the model matrix thus contains a lot of “dummies” (indicator variables). The
plan in this (intentionally naive) first approach is to simply add a lot of
explanatory variables to the model, run logit, and then select the variables
with statistically significant coefficient estimates as the final predictive
model. The following code snippet covers the import of the data, the creation of
the model matrix (with all the dummy\-variables), and the logit estimation.
```
# import/inspect data
ga <- read.csv("data/ga.csv")
head(ga[, c("source", "browser", "city", "purchase")])
```
```
## source browser city purchase
## 1 google Chrome San Jose 1
## 2 (direct) Edge Charlotte 1
## 3 (direct) Safari San Francisco 1
## 4 (direct) Safari Los Angeles 1
## 5 (direct) Chrome Chicago 1
## 6 (direct) Chrome Sunnyvale 1
```
```
# create model matrix (dummy vars)
mm <- cbind(ga$purchase,
model.matrix(purchase~source, data=ga,)[,-1])
mm_df <- as.data.frame(mm)
# clean variable names
names(mm_df) <- c("purchase",
gsub("source", "", names(mm_df)[-1]))
# run logit
model1 <- glm(purchase ~ .,
data=mm_df, family=binomial)
```
Now we can perform the t\-tests and filter out the “relevant” variables.
```
model1_sum <- summary(model1)
# select "significant" variables for final model
pvalues <- model1_sum$coefficients[,"Pr(>|z|)"]
vars <- names(pvalues[which(pvalues<0.05)][-1])
vars
```
```
## [1] "bing"
## [2] "dfa"
## [3] "docs.google.com"
## [4] "facebook.com"
## [5] "google"
## [6] "google.com"
## [7] "m.facebook.com"
## [8] "Partners"
## [9] "quora.com"
## [10] "siliconvalley.about.com"
## [11] "sites.google.com"
## [12] "t.co"
## [13] "youtube.com"
```
Finally, we re\-estimate our “final” model.
```
# specify and estimate the final model
finalmodel <- glm(purchase ~.,
data = mm_df[, c("purchase", vars)],
family = binomial)
```
The first problem with this approach is that we should not trust the coefficient
t\-tests based on which we have selected the covariates too much. The first model
contains 62 explanatory variables (plus the intercept). With that many
hypothesis tests, we are quite likely to reject the NULL of no predictive effect
although there is actually no predictive effect. In addition, this approach
turns out to be unstable. There might be correlation between some of the
variables in the original set, and adding/removing even one variable might
substantially affect the predictive power of the model (and the apparent
relevance of other variables). We can see this already from the summary of our
final model estimate (generated in the next code chunk). One of the apparently
relevant predictors (`dfa`) is not at all significant anymore in this
specification. Thus, we might be tempted to further change the model, which in
turn would again change the apparent relevance of other covariates, and so on.
```
summary(finalmodel)$coef[,c("Estimate", "Pr(>|z|)")]
```
```
## Estimate Pr(>|z|)
## (Intercept) -1.3831 0.000e+00
## bing -1.4647 4.416e-03
## dfa -0.1865 1.271e-01
## docs.google.com -2.0181 4.714e-02
## facebook.com -1.1663 3.873e-04
## google -1.0149 6.321e-168
## google.com -2.9607 3.193e-05
## m.facebook.com -3.6920 2.331e-04
## Partners -4.3747 3.942e-14
## quora.com -3.1277 1.869e-03
## siliconvalley.about.com -2.2456 1.242e-04
## sites.google.com -0.5968 1.356e-03
## t.co -2.0509 4.316e-03
## youtube.com -6.9935 4.197e-23
```
An alternative approach would be to estimate models based on all possible
combinations of covariates and then use that sequence of models to select the
final model based on some out\-of\-sample prediction performance measure. Clearly
such an approach would take a long time to compute.
### 3\.1\.2 Regularization: the lasso estimator
Instead, the *lasso estimator* provides a convenient and
efficient way to get a sequence of candidate models. The key idea behind lasso
is to penalize model complexity (the cause of instability) during the estimation
procedure.[6](#fn6) In a second step, we can then select a
final model from the sequence of candidate models based on, for example,
“out\-of\-sample” prediction in a k\-fold cross
validation.
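In its standard textbook form, the lasso augments the (negative log\-)likelihood objective with an L1 penalty on the coefficients, where the penalty weight \\(\\lambda \\geq 0\\) governs how strongly model complexity is punished (the intercept is typically left unpenalized, and the exact penalty used by `gamlr` depends on its settings; the following is just the generic formulation):
\\\[\\hat{\\beta}\_{lasso} \= \\arg\\min\_{\\beta} \\left\\{ \-\\ell(\\beta) \+ \\lambda \\sum\_{j\=1}^{p} \|\\beta\_{j}\| \\right\\}\\]
For \\(\\lambda \= 0\\) we are back at the unpenalized logit estimate; as \\(\\lambda\\) grows, more and more coefficients are shrunk exactly to zero, which is what generates the sequence of candidate models mentioned above.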
The `gamlr` package ([Taddy 2017](#ref-gamlr)) provides both parts of this procedure (lasso for the
sequence of candidate models, and selection of the “best” model based on k\-fold
cross\-validation).
```
# load packages
library(gamlr)
# create the model matrix
mm <- model.matrix(purchase~source, data = ga)
```
In cases with both many observations and many candidate explanatory variables,
the model matrix might get very large. Even simply generating the model matrix
might be a computational burden, as we might run out of memory to hold the model
matrix object. If this large model matrix is sparse (i.e., has a lot of `0`
entries), there is a much more memory\-efficient way to store it in an R object.
R provides ways to represent such sparse matrices in a compressed way in
specialized R objects (such as `CsparseMatrix`, provided in the `Matrix`
package by Bates, Maechler, and Jagan ([2022](#ref-Matrix))). Instead of containing all \\(n\\times m\\) cells of the matrix, these
objects only explicitly store the cells with non\-zero values and the
corresponding indices. Below, we make use of the high\-level
`sparse.model.matrix` function to generate the model matrix and store it in a
sparse matrix object. To illustrate the point of a more
memory\-efficient representation, we show that the traditional matrix object is
about 7\.5 times larger than the sparse version.
```
# create the sparse model matrix
mm_sparse <- sparse.model.matrix(purchase~source, data = ga)
# compare the object's sizes
as.numeric(object.size(mm)/object.size(mm_sparse))
```
```
## [1] 7.525
```
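To make the compressed representation a bit more tangible, here is a minimal, self\-contained sketch (independent of the `ga` data) showing that a sparse matrix object only records the non\-zero values together with their index information:
```
# a small matrix with mostly zero entries
library(Matrix)
m <- matrix(c(0, 0, 3,
              0, 0, 0,
              7, 0, 0), nrow = 3, byrow = TRUE)
# convert to a compressed, column-oriented sparse representation
m_sparse <- Matrix(m, sparse = TRUE)
# inspect the internal structure: only the two non-zero values
# (and their row/column index information) are stored explicitly
str(m_sparse)
```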
Finally, we run the lasso estimation with k\-fold cross\-validation.
```
# run k-fold cross-validation lasso
cvpurchase <- cv.gamlr(mm_sparse, ga$purchase, family="binomial")
```
We can then illustrate the performance of the selected final model – for
example, with an ROC curve. Note that both the `coef` method
and the `predict` method for `gamlr` objects automatically select the ‘best’
model.
```
# load packages
library(PRROC)
# use "best" model for prediction
# (model selection based on average out-of-sample deviance)
pred <- predict(cvpurchase$gamlr, mm_sparse, type="response")
# compute tpr, fpr; plot ROC
comparison <- roc.curve(scores.class0 = pred,
weights.class0=ga$purchase,
curve=TRUE)
plot(comparison)
```
Hence, econometric techniques such as lasso help deal with *big P* problems by
providing reasonable ways to select a good predictive model (in other words, to
decide which of the many variables should be included).
3\.2 A practical *big N* problem
--------------------------------
Big N problems are situations in which we know what type of model we want to use
but the *number of observations* is too big to run the estimation (the computer
crashes or slows down significantly). The simplest statistical solution to such
a problem is usually to just estimate the model based on a smaller sample.
However, we might not want to do that for other reasons (e.g., because we need the large N for statistical power). As an illustration of how an alternative statistical procedure can speed up the analysis of big N datasets, we look at the *Uluru* algorithm
([Dhillon et al. 2013](#ref-dhillon_2013)), a procedure to estimate linear models in situations where the classical
OLS estimator is computationally too demanding.
### 3\.2\.1 OLS as a point of reference
Recall the OLS estimator in matrix notation, given the linear model \\(\\mathbf{y}\=\\mathbf{X}\\beta \+ \\epsilon\\):
\\\[\\hat{\\beta}\_{OLS} \= (\\mathbf{X}^\\intercal\\mathbf{X})^{\-1}\\mathbf{X}^{\\intercal}\\mathbf{y}.\\]
In order to compute \\(\\hat{\\beta}\_{OLS}\\), we have to compute
\\((\\mathbf{X}^\\intercal\\mathbf{X})^{\-1}\\), which implies a computationally
expensive matrix inversion.[7](#fn7) If our dataset is large,
\\(\\mathbf{X}\\) is large, and the inversion can take up a lot of computation time.
Moreover, the inversion and matrix multiplication to get \\(\\hat{\\beta}\_{OLS}\\)
need a lot of memory. In practice, it might well be that the estimation of a
linear model via OLS with the standard approach in R (`lm()`) brings a computer
to its knees, as there is not enough memory available.
To further illustrate the point, we implement the OLS estimator in R.
```
beta_ols <-
  function(X, y) {
    # compute cross products and inverse
    XXi <- solve(crossprod(X,X))
    Xy <- crossprod(X, y)
    return( XXi %*% Xy )
  }
```
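As a side note, explicitly forming the inverse is not strictly necessary to obtain the OLS coefficients; solving the normal equations \\((\\mathbf{X}^\\intercal\\mathbf{X})\\beta \= \\mathbf{X}^\\intercal\\mathbf{y}\\) directly is typically more numerically stable. A minimal alternative sketch (not used in what follows):
```
beta_ols_solve <-
  function(X, y) {
    # solve (X'X) b = X'y directly, without explicitly forming the inverse
    solve(crossprod(X, X), crossprod(X, y))
  }
```
The heavy part – computing the cross product of a large \\(\\mathbf{X}\\) – remains, which is exactly the part the *Uluru* algorithm below restricts to a sub\-sample.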
Now, we will test our OLS estimator function with a few (pseudo\-)random numbers
in a Monte Carlo study. First, we set the sample size parameters `n` (the number
of observations in our pseudo\-sample) and `p` (the number of variables
describing each of these observations) and initialize the dataset `X`.
```
# set parameter values
n <- 10000000
p <- 4
# generate sample based on Monte Carlo
# generate a design matrix (~ our 'dataset')
# with 4 variables and 10,000,000 observations
X <- matrix(rnorm(n*p, mean = 10), ncol = p)
# add column for intercept
X <- cbind(rep(1, n), X)
```
Now we define what the real linear model that we have in mind looks like and
compute the output `y` of this model, given the input `X`.[8](#fn8)
```
# MC model
y <- 2 + 1.5*X[,2] + 4*X[,3] - 3.5*X[,4] + 0.5*X[,5] + rnorm(n)
```
Finally, we test our `beta_ols` function.
```
# apply the OLS estimator
beta_ols(X, y)
```
```
## [,1]
## [1,] 1.9974
## [2,] 1.5001
## [3,] 3.9996
## [4,] -3.4994
## [5,] 0.4999
```
### 3\.2\.2 The *Uluru* algorithm as an alternative to OLS
Following Dhillon et al. ([2013](#ref-dhillon_2013)), we implement a procedure to compute
\\(\\hat{\\beta}\_{Uluru}\\):
\\\[\\hat{\\beta}\_{Uluru}\=\\hat{\\beta}\_{FS} \+ \\hat{\\beta}\_{correct},\\]
where
\\\[\\hat{\\beta}\_{FS} \=
(\\mathbf{X}\_{subs}^\\intercal\\mathbf{X}\_{subs})^{\-1}\\mathbf{X}\_{subs}^{\\intercal}\\mathbf{y}\_{subs},\\]
\\\[\\hat{\\beta}\_{correct}\= \\frac{n\_{subs}}{n\_{rem}} \\cdot
(\\mathbf{X}\_{subs}^\\intercal\\mathbf{X}\_{subs})^{\-1}
\\mathbf{X}\_{rem}^{\\intercal}\\mathbf{R}\_{rem},\\]
and
\\\[\\mathbf{R}\_{rem} \=
\\mathbf{y}\_{rem} \- \\mathbf{X}\_{rem} \\cdot \\hat{\\beta}\_{FS}.\\]
The key idea behind this is that the computational bottleneck of the OLS
estimator, the cross product and matrix inversion,
\\((\\mathbf{X}^\\intercal\\mathbf{X})^{\-1}\\), is only computed on a sub\-sample
(\\(X\_{subs}\\), etc.), not the entire dataset. However, the remainder of the
dataset is also taken into consideration (in order to correct a bias arising
from the sub\-sampling). Again, we implement the estimator in R to further
illustrate this point.
```
beta_uluru <-
  function(X_subs, y_subs, X_rem, y_rem) {
    # compute beta_fs
    # (this is simply OLS applied to the subsample)
    XXi_subs <- solve(crossprod(X_subs, X_subs))
    Xy_subs <- crossprod(X_subs, y_subs)
    b_fs <- XXi_subs %*% Xy_subs
    # compute \mathbf{R}_{rem}
    R_rem <- y_rem - X_rem %*% b_fs
    # compute \hat{\beta}_{correct}
    b_correct <-
      (nrow(X_subs)/(nrow(X_rem))) *
      XXi_subs %*% crossprod(X_rem, R_rem)
    # beta uluru
    return(b_fs + b_correct)
  }
```
We then test it with the same input as above:
```
# set size of sub-sample
n_subs <- 1000
# select sub-sample and remainder
n_obs <- nrow(X)
X_subs <- X[1L:n_subs,]
y_subs <- y[1L:n_subs]
X_rem <- X[(n_subs+1L):n_obs,]
y_rem <- y[(n_subs+1L):n_obs]
# apply the uluru estimator
beta_uluru(X_subs, y_subs, X_rem, y_rem)
```
```
## [,1]
## [1,] 2.0048
## [2,] 1.4997
## [3,] 3.9995
## [4,] -3.4993
## [5,] 0.4996
```
This looks quite good already. Let’s have a closer look with a little Monte
Carlo study. The aim of the simulation study is to visualize the difference
between the classical OLS approach and the *Uluru* algorithm with regard to bias
and time complexity if we increase the sub\-sample size in *Uluru*. For
simplicity, we only look at the first estimated coefficient \\(\\beta\_{1}\\).
```
# define sub-samples
n_subs_sizes <- seq(from = 1000, to = 500000, by=10000)
n_runs <- length(n_subs_sizes)
# compute uluru result, stop time
mc_results <- rep(NA, n_runs)
mc_times <- rep(NA, n_runs)
for (i in 1:n_runs) {
  # set size of sub-sample
  n_subs <- n_subs_sizes[i]
  # select sub-sample and remainder
  n_obs <- nrow(X)
  X_subs <- X[1L:n_subs,]
  y_subs <- y[1L:n_subs]
  X_rem <- X[(n_subs+1L):n_obs,]
  y_rem <- y[(n_subs+1L):n_obs]
  mc_results[i] <- beta_uluru(X_subs,
                              y_subs,
                              X_rem,
                              y_rem)[2] # (1 is the intercept)
  mc_times[i] <- system.time(beta_uluru(X_subs,
                                        y_subs,
                                        X_rem,
                                        y_rem))[3]
}
# compute OLS results and OLS time
ols_time <- system.time(beta_ols(X, y))
ols_res <- beta_ols(X, y)[2]
```
Let’s visualize the comparison with OLS.
```
# load packages
library(ggplot2)
# prepare data to plot
plotdata <- data.frame(beta1 = mc_results,
time_elapsed = mc_times,
subs_size = n_subs_sizes)
```
First, let’s look at the time used to estimate the linear model.
```
ggplot(plotdata, aes(x = subs_size, y = time_elapsed)) +
geom_point(color="darkgreen") +
geom_hline(yintercept = ols_time[3],
color = "red",
linewidth = 1) +
theme_minimal() +
ylab("Time elapsed") +
xlab("Subsample size")
```
The horizontal red line indicates the computation time for estimation via OLS;
the green points indicate the computation time for the estimation via the
*Uluru* algorithm. Note that even for large sub\-samples, the computation time
is substantially lower than for OLS. Finally, let’s have a look at how close the results are to OLS.
```
ggplot(plotdata, aes(x = subs_size, y = beta1)) +
geom_hline(yintercept = ols_res,
color = "red",
linewidth = 1) +
geom_hline(yintercept = 1.5,
color = "green",
linewidth = 1) +
geom_point(color="darkgreen") +
theme_minimal() +
ylab("Estimated coefficient") +
xlab("Subsample size")
```
The horizontal red line indicates the size of the coefficient estimated via
OLS. The horizontal green line indicates the size of the actual
coefficient. The green points indicate the size of the same coefficient
estimated by the *Uluru* algorithm for different sub\-sample sizes. Note that
even relatively small sub\-samples already deliver estimates very close to the
OLS estimates.
Taken together, the example illustrates that alternative statistical methods,
optimized for large amounts of data, can deliver results very close to
traditional approaches. Yet, they can deliver these results much more
efficiently.
Chapter 4 Software: Programming with (Big) Data
===============================================
The programming language and computing environment *R* ([R Core Team 2021](#ref-rfoundation2021)) is particularly made for writing code in a data analytics context. However, the language was developed at a time when data analytics was primarily focused on moderately sized datasets that can easily be loaded/imported and worked with on a common PC. Depending on the field or industry you work in, this is not the case anymore. In this chapter, we will explore some of R’s (potential) weaknesses as well as learn how to avoid them and how to exploit some of R’s strengths when it comes to working with large datasets. The first part of this chapter is primarily focused on understanding code profiling and improving code with the aim of making computationally intensive data analytics scripts in R run faster. This chapter presupposes basic knowledge of R data structures and data types as well as experience with basic programming concepts such as loops.[10](#fn10) While very useful in writing analytics scripts, we will not look into topics like coding workflows, version control, and code sharing (e.g., by means of Git and GitHub[11](#fn11)). The assumption is that you bring some experience in writing analytics scripts already.
While R is a very useful tool for many aspects of Big Data Analytics that we will cover in the following chapters, R alone is not enough for a basic Big Data Analytics toolbox. The second part of this chapter introduces the reader to the *Structured Query Language (SQL)*, a programming language designed for managing data in relational databases. Although the type of databases where SQL is traditionally encountered would not necessarily be considered part of Big Data Analytics today, some versions of SQL are now used with systems particularly designed for Big Data Analytics (such as Amazon Athena and Google BigQuery). Hence, with a good knowledge of R in combination with basic SQL skills, you will be able to productively engage with a large array of practical Big Data Analytics problems.
4\.1 Domains of programming with (big) data
-------------------------------------------
Programming tasks in the context of data analytics typically fall into one of the following broad categories:
* Procedures to import/export data.
* Procedures to clean and filter data.
* Implementing functions for statistical analysis.
When writing a program to process large amounts of data in any of these areas, it is helpful to take into consideration the following design choices:
1. Which basic (already implemented) R functions are more or less suitable as building blocks for the program?[12](#fn12)
2. How can we exploit/avoid some of R’s lower\-level characteristics in order to write more efficient code?
3. Is there a need to interface with a lower\-level programming language in order to speed up the code? (advanced topic)
Finally, there is an additional important point to be made regarding the writing of code for *statistical analysis*: Independent of *how* we write a statistical procedure in R (or in any other language, for that matter), keep in mind that there might be an *alternative statistical procedure/algorithm* that is faster but delivers approximately the same result (as long as we use a sufficiently large sample).
4\.2 Measuring R performance
----------------------------
When writing a data analysis script in R to process large amounts of data, it generally makes sense to first test each crucial part of the script with a small sub\-sample. In order to quickly recognize potential bottlenecks, there are a couple of R packages that help you keep track of exactly how long each component of your script needs to process as well as how much memory it uses. The table below lists some of the packages and functions that you should keep in mind when *“profiling”* and testing your code.
| package | function | purpose |
| --- | --- | --- |
| `utils` | `object.size()` | Provides an estimate of the memory that is being used to store an R object. |
| `pryr` | `object_size()` | Works similarly to `object.size()`, but counts more accurately and includes the size of environments. |
| `pryr` | `mem_used()` | Returns the total amount of memory (in megabytes) currently used by R. |
| `pryr` | `mem_change()` | Shows the change in memory (in megabytes) before and after running code. |
| `base` | `system.time()` | Returns CPU (and other) times that an R expression used. |
| `microbenchmark` | `microbenchmark()` | Highly accurate timing of R expression evaluation. |
| `bench` | `mark()` | Benchmark a series of functions. |
| `profvis` | `profvis()` | Profiles an R expression and visualizes the profiling data (usage of memory, time elapsed, etc.). |
Most of these functions are used in an interactive way in the R console. They serve either of two purposes that are central to profiling and improving your code’s performance. First, in order to assess the performance of your R code you probably want to know how long it takes to run your entire script or a specific part of your script. The `system.time()` ([R Core Team 2021](#ref-rfoundation2021)) function provides an easy way to check this. This function is loaded by default with R; there is no need to install an additional package. Simply wrap it around the line(s) of code that you want to assess.
```
# how much time does it take to run this loop?
system.time(for (i in 1:100) {i + 5})
```
```
## user system elapsed
## 0.001 0.000 0.002
```
Note that each time you run this line of code, the returned amount of time varies slightly. This has to do with the fact that the actual time needed to run a line of code can depend on various other processes happening at the same time on your computer.
The `microbenchmark` ([Mersmann 2021](#ref-microbenchmark)) and `bench` ([Hester and Vaughan 2021](#ref-bench)) packages provide additional functions to measure execution time in more sophisticated ways. In particular, they account for the fact that the processing time for the same code might vary and automatically run the code several times in order to return statistics about the processing time. In addition, `microbenchmark()` provides highly detailed and highly accurate timing of R expression evaluation. The function is particularly useful to accurately find even minor room for improvement when testing a data analysis script on a smaller sub\-sample (which might scale when working on a large dataset). For example, suppose you need to run a for\-loop over millions of iterations, and there are different ways to implement the body of the loop (which does not take too much time to process in one iteration). Note that the function actually evaluates the R expression in question many times and returns a statistical summary of the timings.
```
# load package
library(microbenchmark)
# how much time does it take to run this loop (exactly)?
microbenchmark(for (i in 1:100) {i + 5})
```
```
## Unit: milliseconds
## expr min lq mean
## for (i in 1:100) { i + 5 } 1.126 1.209 1.284
## median uq max neval
## 1.254 1.282 4.768 100
```
Second, a key aspect to improving the performance of data analysis scripts in R is to detect inefficient memory allocation as well as avoiding an R\-object that is either growing too much or too large to handle in memory. To this end, you might want to monitor how much memory R occupies at different points in your script as well as how much memory is taken up by individual R objects. For example, `object.size()` returns the size of an R object, that is, the amount of memory it takes up in the R environment in bytes (`pryr::object_size()` counts slightly more accurately).
```
hello <- "Hello, World!"
object.size(hello)
```
```
## 120 bytes
```
This is useful for implementing your script with a generally less memory\-intensive approach. For example, for a specific task it might not matter whether a particular variable is stored as a `character` vector or a `factor`. But storing it as `character` turns out to be more memory intensive (why?).
```
# initialize a large string vector containing letters
large_string <- rep(LETTERS[1:20], 1000^2)
head(large_string)
```
```
## [1] "A" "B" "C" "D" "E" "F"
```
```
# store the same information as a factor in a new variable
large_factor <- as.factor(large_string)
# is one bigger than the other?
object.size(large_string) - object.size(large_factor)
```
```
## 79999456 bytes
```
`pryr::mem_change()` ([Wickham 2021](#ref-pryr)) is useful to track how different parts of your script affect the overall memory occupied by R.
```
# load package
library(pryr)
# initialize a vector with 1000 (pseudo)-random numbers
mem_change(
thousand_numbers <- runif(1000)
)
```
```
## 7.98 kB
```
```
# initialize a vector with 1M (pseudo)-random numbers
mem_change(
a_million_numbers <- runif(1000^2)
)
```
```
## 8 MB
```
`bench::mark()` allows you to easily compare the performance of several different implementations of a code chunk both regarding timing and memory usage. The following code example illustrates this in a comparison of two approaches to computing the product of each element in a vector `x` with a factor `z`.
```
# load packages
library(bench)
# initialize variables
x <- 1:10000
z <- 1.5
# approach I: loop
multiplication <-
  function(x,z) {
    result <- c()
    for (i in 1:length(x)) {result <- c(result, x[i]*z)}
    return(result)
  }
result <- multiplication(x,z)
head(result)
```
```
## [1] 1.5 3.0 4.5 6.0 7.5 9.0
```
```
# approach II: "R-style"
result2 <- x * z
head(result2)
```
```
## [1] 1.5 3.0 4.5 6.0 7.5 9.0
```
```
# comparison
benchmarking <-
mark(
result <- multiplication(x,z),
result2 <- x * z,
min_iterations = 50
)
benchmarking[, 4:9]
```
```
## # A tibble: 2 × 3
## `itr/sec` mem_alloc `gc/sec`
## <dbl> <bch:byt> <dbl>
## 1 12.2 382MB 15.5
## 2 76419. 78.2KB 7.64
```
In addition, the `bench` package ([Hester and Vaughan 2021](#ref-bench)) provides a simple way to visualize these outputs:
```
plot(benchmarking, type = "boxplot")
```
Finally, to analyze the performance of your entire script/program, the `profvis` package ([Chang, Luraschi, and Mastny 2020](#ref-profvis)) provides visual summaries to quickly detect the most prominent bottlenecks. You can either call this via the `profvis()` function with the code section to be profiled as argument, or via the RStudio user interface by clicking on the Code Tools menu in the editor window and selecting “Profile selected lines”.
```
# load package
library(profvis)
# analyze performance of several lines of code
profvis({
  x <- 1:10000
  z <- 1.5
  # approach I: loop
  multiplication <-
    function(x,z) {
      result <- c()
      for (i in 1:length(x)) {result <- c(result, x[i]*z)}
      return(result)
    }
  result <- multiplication(x,z)
  # approach II: "R-style"
  result2 <- x * z
  head(result2)
})
```
4\.3 Writing efficient R code
-----------------------------
This subsection touches upon several prominent aspects of writing efficient/fast R code.[13](#fn13)
### 4\.3\.1 Memory allocation and growing objects
R tends to “grow” already\-initialized objects in memory when they are modified. At the initiation of the object, a small amount of memory is occupied at some location in memory. In simple terms, once the object grows, it might not have enough space where it is currently located. Hence, it needs to be “moved” to another location in memory with more space available. This moving, or “re\-allocation” of memory, needs time and slows down the overall process.
This potential problem is most practically illustrated with a `for`\-loop in which each iteration’s result is stored as an element of a vector (the object in question). To avoid growing this object, you need to instruct R to pre\-allocate the memory necessary to contain the final result. If we don’t do this, each iteration of the loop causes R to re\-allocate memory because the number of elements in the vector/list is changing. In simple terms, this means that R needs to execute more steps in each iteration.
In the following example, we compare the performance of two functions, one taking this principle into account, the other not. The functions take a numeric vector as input and return the square root of each element of the numeric vector.
```
# naïve implementation
sqrt_vector <-
  function(x) {
    output <- c()
    for (i in 1:length(x)) {
      output <- c(output, x[i]^(1/2))
    }
    return(output)
  }
# implementation with pre-allocation of memory
sqrt_vector_faster <-
  function(x) {
    output <- rep(NA, length(x))
    for (i in 1:length(x)) {
      output[i] <- x[i]^(1/2)
    }
    return(output)
  }
```
As a proof of concept we use `system.time()` to measure the difference in speed for various input sizes.[14](#fn14)
```
# the different sizes of the vectors we will put into the two functions
input_sizes <- seq(from = 100, to = 10000, by = 100)
# create the input vectors
inputs <- sapply(input_sizes, rnorm)
# compute outputs for each of the functions
output_slower <-
  sapply(inputs,
         function(x){ system.time(sqrt_vector(x))["elapsed"] })
output_faster <-
  sapply(inputs,
         function(x){ system.time(sqrt_vector_faster(x))["elapsed"] })
```
The following plot shows the difference in the performance of the two functions.
```
# load packages
library(ggplot2)
# initialize data frame for plot
plotdata <- data.frame(time_elapsed = c(output_slower, output_faster),
                       input_size = c(input_sizes, input_sizes),
                       Implementation = c(rep("sqrt_vector",
                                              length(output_slower)),
                                          rep("sqrt_vector_faster",
                                              length(output_faster))))
# plot
ggplot(plotdata, aes(x=input_size, y= time_elapsed)) +
geom_point(aes(colour=Implementation)) +
theme_minimal(base_size = 18) +
theme(legend.position = "bottom") +
ylab("Time elapsed (in seconds)") +
xlab("No. of elements processed")
```
Clearly, the version with pre\-allocation of memory (avoiding growing an object) is much faster overall. In addition, we see that the problem with the growing object in the naïve implementation tends to get worse with each iteration. The take\-away message for the practitioner: If possible, always initialize the “container” object (list, matrix, etc.) for iteration results as an empty object of the final size/dimensions.
The attentive reader and experienced R coder will have noticed by this point that both of the functions implemented above are not really smart practice to solve the problem at hand. If you consider yourself part of this group, the next subsection will make you more comfortable.
### 4\.3\.2 Vectorization in basic R functions
We can further improve the performance of this function by exploiting a particular characteristic of R: in R, ‘everything is a vector’, and many of the most basic R functions (such as math operators) are *vectorized*. In simple terms, this means that an operation is implemented to directly work on vectors in such a way that it can take advantage of the similarity of each of the vector’s elements. That is, R only has to figure out once how to apply a given function to a vector element in order to apply it to all elements of the vector. In a simple loop, however, R has to go through the same ‘preparatory’ steps again and again in each iteration.
Following up on the problem from the previous subsection, we implement an additional function called `sqrt_vector_fastest` that exploits the fact that math operators in R are vectorized functions. We then re\-run the same speed test as above with this function.
```
# implementation with vectorization
sqrt_vector_fastest <-
  function(x) {
    output <- x^(1/2)
    return(output)
  }
# speed test
output_fastest <-
  sapply(inputs,
         function(x){ system.time(sqrt_vector_fastest(x))["elapsed"] })
```
Let’s have a look at whether this improves the function’s performance further.
```
# load packages
library(ggplot2)
# initialize data frame for plot
plotdata <- data.frame(time_elapsed = c(output_faster, output_fastest),
                       input_size = c(input_sizes, input_sizes),
                       Implementation = c(rep("sqrt_vector_faster",
                                              length(output_faster)),
                                          rep("sqrt_vector_fastest",
                                              length(output_fastest))))
# plot
ggplot(plotdata, aes(x=time_elapsed, y=Implementation)) +
geom_boxplot(aes(colour=Implementation),
show.legend = FALSE) +
theme_minimal(base_size = 18) +
xlab("Time elapsed (in seconds)")
```
Clearly, the vectorized implementation is even faster. The take\-away message: Make use of vectorized basic R functions where possible. At this point you might wonder: Why not always use vectorization instead of loops when working with R? This question (and closely related ones) has been fiercely debated in the R online community over the last few years, and the debate has featured several (in my view) slightly misleading arguments. A simple answer to this question is: It is in fact not that simple to use *actual* vectorization for every kind of problem in R. There are a number of functions often recommended as easy ways to achieve “vectorization” in R; however, they do not implement vectorization in its original technical sense (the type just demonstrated here with the R math operators). Since this point is very prominent in debates about how to improve R code, the next subsection attempts to summarize the most important aspects to keep in mind.
### 4\.3\.3 `apply`\-type functions and vectorization
There are basically two ways to make use of some form of “vectorization” instead of writing loops.
One approach is to use an `apply`\-type function instead of loops. Note, though, that the `apply`\-type functions primarily make the writing of code more efficient. They still run a loop under the hood. Nevertheless, some `apply`\-type functions might still outperform explicit loops, as they might be better implemented.[15](#fn15)
Consider, for example, `lapply()`, a function that takes a vector (atomic or list) as input and applies a function `FUN` to each of its elements. It is a straightforward alternative to `for`\-loops in many situations (and it automatically takes care of the “growing objects” problem discussed above). The following example shows how we can get the same result by either writing a loop or using `lapply()`. The aim of the code example is to import the [Health News in Twitter Dataset](https://archive.ics.uci.edu/ml/datasets/Health+News+in+Twitter) by Karami et al. ([2017](#ref-karami_etal2017)). The raw data consists of several text files that need to be imported to R consecutively.
The text\-files are located in `data/twitter_texts/`. For either approach of importing all of these files, we first need a list of the paths to all of the files. We can get this with `list.files()`. Also, for either approach we will make use of the `fread` function in the `data.table` package ([Dowle and Srinivasan 2022](#ref-data.table)).
```
# load packages
library(data.table)
# get a list of all file-paths
textfiles <- list.files("data/twitter_texts", full.names = TRUE)
```
Now we can read in all the text files with a `for`\-loop as follows.
```
# prepare loop
all_texts <- list()
n_files <- length(textfiles)
length(all_texts) <- n_files
# read all files listed in textfiles
for (i in 1:n_files) {
  all_texts[[i]] <- fread(textfiles[i])
}
```
The imported files are now stored as `data.table`\-objects in the list `all_texts`. With the following line of code we combine all of them in one `data.table`.
```
# combine all in one data.table
twitter_text <- rbindlist(all_texts)
# check result
dim(twitter_text)
```
```
## [1] 42422 3
```
Alternatively, we can make use of `lapply` as follows in order to achieve exactly the same.
```
# use lapply instead of loop
all_texts <- lapply(textfiles, fread)
# combine all in one data.table
twitter_text <- rbindlist(all_texts)
# check result
dim(twitter_text)
```
```
## [1] 42422 3
```
Finally, we can make use of `Vectorize()` in order to “vectorize” our own import function (written for this example). Again, this does not make use of vectorization in its original technical sense.
```
# initialize the import function
import_file <-
  function(x) {
    parsed_x <- fread(x)
    return(parsed_x)
  }
# 'vectorize' it
import_files <- Vectorize(import_file, SIMPLIFY = FALSE)
# Apply the vectorized function
all_texts <- import_files(textfiles)
twitter_text <- rbindlist(all_texts)
# check the result
dim(twitter_text)
```
```
## [1] 42422 3
```
The take\-away message: Instead of writing simple loops, use `apply`\-type functions to save time writing code (and make the code easier to read) and automatically avoid memory\-allocation problems.
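A related, minor point: if you know the type and length of each element’s result in advance, `vapply()` can be used instead of `sapply()`/`lapply()`. It behaves similarly but checks every result against a template, which makes scripts more robust. A minimal sketch (unrelated to the Twitter example above):
```
# vapply requires a template describing each element's result
# (here: exactly one numeric value per element)
squares <- vapply(1:5, function(i) i^2, FUN.VALUE = numeric(1))
squares
```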
### 4\.3\.4 Avoiding unnecessary copying
The “growing objects” problem discussed above is only one aspect that can lead to inefficient use of memory when working with R. Another potential problem of using up more memory than necessary during an execution of an R\-script is how R handles objects/variables and their names.
Consider the following line of code:
```
a <- runif(10000)
```
What is usually said to describe what is happening here is something along the lines of “we initialize a variable called `a` and assign a numeric vector with 10,000 random numbers to it”. What in fact happens is that the name `a` is assigned to the numeric vector (which in turn exists at a specific memory address). Thus, values do not have names but *names have values*. This has important consequences for memory allocation and performance. For example, because `a` is in fact just a name attached to a value, the following does not involve any copying of values. It simply “binds” another name, `b`, to the same value to which `a` is already bound.
```
b <- a
```
We can prove this in two ways. First, if what I just stated was not true, the line above would actually lead to more memory being occupied by the current R session. However, this is not the case:
```
object_size(a)
```
```
## 80.05 kB
```
```
mem_change(c <- a)
```
```
## -588 kB
```
Second, by means of the `lobstr`\-package ([Wickham 2022a](#ref-lobstr)), we can see that the values to which `a` and `b` are bound are stored at the same memory address. Hence, they are the same values.
```
# load packages
library(lobstr)
# check memory addresses of objects
obj_addr(a)
```
```
## [1] "0x55d688cfeec0"
```
```
obj_addr(b)
```
```
## [1] "0x55d688cfeec0"
```
Now you probably wonder what happens to `b` if we modify `a`. After all, if the values to which `b` is bound changed whenever we write code concerning `a`, we might end up with very surprising output. The answer is, and this is key (!): once we modify `a`, the values need to be *copied* in order to ensure the integrity of `b`. Only at this point will our program require more memory.
```
# check the first element's value
a[1]
```
```
## [1] 0.5262
```
```
b[1]
```
```
## [1] 0.5262
```
```
# modify a, check memory change
mem_change(a[1] <- 0)
```
```
## 79 kB
```
```
# check memory addresses
obj_addr(a)
```
```
## [1] "0x55d671554530"
```
```
obj_addr(b)
```
```
## [1] "0x55d688cfeec0"
```
Note that the entire vector needed to be copied for this. There is, of course, a lesson from all this regarding writing efficient code. Knowing how actual copying of values occurs helps avoid unnecessary copying. The larger an object, the more time it will take to copy it in memory. Objects with a single binding get modified in place (no copying):
```
mem_change(d <- runif(10000))
```
```
## 80.3 kB
```
```
mem_change(d[1] <- 0)
```
```
## 584 B
```
### 4\.3\.5 Releasing memory
Closely related to the issue of copy\-upon\-modify is the issue of “releasing” memory via “garbage collection”. If your program uses up a lot of (too much) memory (typical for working with large datasets), all processes on your computer might substantially slow down (we will look more closely into why this is the case in the next chapter). Hence, you might want to remove/delete an object once you do not need it anymore. This can be done with the `rm()` function.
```
mem_change(large_vector <- runif(10^8))
```
```
## 800 MB
```
```
mem_change(rm(large_vector))
```
```
## -800 MB
```
`rm()` removes objects that are currently accessible in the global R environment. However, some objects/values might technically not be visible/accessible anymore (for example, objects that have been created in a function which has since returned the function output). To also release memory occupied by these objects, you can call `gc()` (the garbage collector). While R will automatically collect the garbage once it is close to running out of memory, explicitly calling `gc` can still improve the performance of your script when working with large datasets. This is in particular the case when R is not the only data\-intensive process running on your computer. For example, when running an R script involving the repeated querying of data from a local SQL database and the subsequent memory\-intensive processing of this data in R, you can avoid using up too much memory by running `rm` and `gc` explicitly.[16](#fn16)
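A minimal sketch of this pattern (the object name and the processing step are just placeholders for illustration):
```
# create a large intermediate object, use it, then free the memory
large_intermediate <- runif(10^7)
# ... memory-intensive processing would happen here ...
rm(large_intermediate)
# explicitly trigger garbage collection to release the freed memory
gc()
```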
### 4\.3\.6 Beyond R
So far, we have explored idiosyncrasies of R we should be aware of when writing programs to handle and analyze large datasets. While this has shown that R has many advantages for working with data, it also revealed some aspects of R that might result in low performance compared to other programming languages. A simple generic explanation for this is that R is an interpreted language, meaning that when we execute R code, it is processed (statement by statement) by an ‘interpreter’ that translates the code into machine code (without the user giving any specific instructions). In contrast, when writing code in a ‘compiled language’, we first have to explicitly compile the code (into machine code) and then run the compiled program. Running code that is already compiled is typically much faster than running R code that has to be interpreted before it can actually be processed by the CPU.
For advanced programmers, R offers various options to directly make use of compiled programs (for example, written in C, C\+\+, or FORTRAN). In fact, several of the core R functions installed with the basic R distribution are implemented in one of these lower\-level programming languages, and the R function we call simply interacts with these functions.
We can actually investigate this by looking at the source code of an R function. If you simply type the name of a function (such as our `import_file()`) to the console, R prints the function’s source code to the console.
```
import_file
```
```
## function(x) {
## parsed_x <- fread(x)
## return(parsed_x)
## }
## <bytecode: 0x55d689024050>
```
However, if we do the same for function `sum`, we don’t see any actual source code.
```
sum
```
```
## function (..., na.rm = FALSE) .Primitive("sum")
```
Instead `.Primitive()` indicates that `sum()` is actually referring to an internal function (in this case implemented in C).
While the use of functions implemented in a lower\-level language is a common technique to improve the speed of ‘R’ functions, it is particularly prominent in the context of functions/packages made to deal with large amounts of data (such as the `data.table` package).
4\.4 SQL basics
---------------
Structured Query Language (SQL) has become a bread\-and\-butter tool for data analysts and data scientists due to its broad application in systems used to store large amounts of data. While traditionally only encountered in the context of structured data stored in relational database management systems, some versions of it are now also used to query data from data warehouse systems (e.g. Amazon Redshift) and even to query massive amounts (terabytes or even petabytes) of data stored in data lakes (e.g., Amazon Athena). In all of these applications, SQL’s purpose (from the data analytics perspective) is to provide a convenient and efficient way to query data from mass storage for analysis. Instead of importing a CSV file into R and then filtering it in order to get to the analytic dataset, we use SQL to express how the analytic dataset should look (which variables and rows should be included).
The latter point is very important to keep in mind when already having experience with a language like R and learning SQL for the first time. In R we write code to instruct the computer what to do with the data. For example, we tell it to import a csv file called `economics.csv` as a `data.table`; then we instruct it to remove observations that are older than a certain date according to the `date` column; then we instruct it to compute the average of the `unemploy` column values for each year based on the `date` column and then return the result as a separate data frame:
```
# import data
econ <- read.csv("data/economics.csv")
# filter
econ2 <- econ["1968-01-01"<=econ$date,]
# compute yearly averages (basic R approach)
econ2$year <- lubridate::year(econ2$date)
years <- unique(econ2$year)
averages <-
  sapply(years, FUN = function(x){
    mean(econ2[econ2$year==x,"unemploy"])
  })
output <- data.frame(year=years, average_unemploy=averages)
# inspect the first few lines of the result
head(output)
```
```
## year average_unemploy
## 1 1968 2797
## 2 1969 2830
## 3 1970 4127
## 4 1971 5022
## 5 1972 4876
## 6 1973 4359
```
In contrast, when using SQL we write code that describes what the final result is supposed to look like. The SQL engine processing the code then takes care of the rest and returns the result in the most efficient way.[17](#fn17)
```
SELECT
strftime('%Y', `date`) AS year,
AVG(unemploy) AS average_unemploy
FROM econ
WHERE "1968-01-01"<=`date`
GROUP BY year LIMIT 6;
```
```
## year average_unemploy
## 1 1968 2797
## 2 1969 2830
## 3 1970 4127
## 4 1971 5022
## 5 1972 4876
## 6 1973 4359
```
For the moment, we will only focus on the code and ignore the underlying hardware and database concepts (those will be discussed in more detail in Chapter 5\).
### 4\.4\.1 First steps in SQL(ite)
In order to get familiar with coding in SQL, we work with a free and easy\-to\-use version of SQL called *SQLite*. [SQLite](https://sqlite.org/index.html) is a free full\-featured SQL database engine widely used across platforms. It usually comes pre\-installed with macOS and many Linux distributions, is easy to install on other platforms, and has (from the user’s perspective) all the core features of more sophisticated SQL versions. Unlike the more sophisticated SQL systems, SQLite does not rely explicitly on a client/server model. That is, there is no need to set up your database on a server and then query it from a client interface. In fact, setting it up is straightforward. In the terminal, we can directly call SQLite as a command\-line tool (on most modern computers, the command is now `sqlite3`, SQLite version 3\).
In this first code example, we set up an SQLite database using the command line. In the file structure of the book repository, we first switch to the data directory.
```
cd data
```
With one simple command, we start up SQLite, create a new database called `mydb.sqlite`, and connect to the newly created database.[18](#fn18)
```
sqlite3 mydb.sqlite
```
This created a new file `mydb.sqlite` in our `data` directory, which contains the newly created database. Also, we are now running `sqlite` in the terminal (indicated by the `sqlite>` prompt). This means we can now type SQL code to the terminal to run queries and other SQL commands.
At this point, the newly created database does not contain any data. There are no tables in it. We can see this by running the `.tables` command.
```
.tables
```
As expected, nothing is returned. Now, let’s create our first table and import the `economics.csv` dataset into it. In SQLite, it makes sense to first set up an empty table in which all column data types are defined before importing data from a CSV\-file to it. If a CSV is directly imported to a new table (without type definitions), all columns will be set to `TEXT` (similar to `character` in R) by default. Setting the right data type for each variable follows essentially the same logic as setting the data types of a data frame’s columns in R (with the difference that in SQL this also affects how the data is stored on disk).[19](#fn19)
In a first step, we thus create a new table called `econ`.
```
-- Create the new table
CREATE TABLE econ(
"date" DATE,
"pce" REAL,
"pop" REAL,
"psavert" REAL,
"uempmed" REAL,
"unemploy" INTEGER
);
```
Then, we can import the data from the csv file, by first switching to CSV mode via the command `.mode csv` and then importing the data to `econ` with `.import`. The `.import` command expects as a first argument the path to the CSV file on disk and as a second argument the name of the table to import the data to.
```
-- prepare import
.mode csv
-- import data from csv
.import --skip 1 economics.csv econ
```
Now we can have a look at the new database table in SQLite. `.tables` shows that we now have one table called `econ` in our database, and `.schema` displays the structure of the new `econ` table.
```
.tables
```
```
# econ
```
```
.schema econ
```
```
# CREATE TABLE econ(
# "date" DATE,
# "pce" REAL,
# "pop" REAL,
# "psavert" REAL,
# "uempmed" REAL,
# "unemploy" INTEGER
# );
```
With this, we can start querying data with SQLite. In order to make the query results easier to read, we first set two options regarding how query results are displayed on the terminal. `.header on` enables the display of the column names in the returned query results. And `.mode columns` arranges the query results in columns.
```
.header on
```
```
.mode columns
```
In our first query, we select all (`*`) variable values of the observation of January 1968\.
```
select * from econ where date = '1968-01-01';
```
```
## date pce pop psavert uempmed unemploy
## 1 1968-01-01 531.5 199808 11.7 5.1 2878
```
#### 4\.4\.1\.1 Simple queries
Now let’s select all dates and unemployment values of observations with more than 15 million unemployed, ordered by date.
```
select date,
unemploy from econ
where unemploy > 15000
order by date;
```
```
## date unemploy
## 1 2009-09-01 15009
## 2 2009-10-01 15352
## 3 2009-11-01 15219
## 4 2009-12-01 15098
## 5 2010-01-01 15046
## 6 2010-02-01 15113
## 7 2010-03-01 15202
## 8 2010-04-01 15325
## 9 2010-11-01 15081
```
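Queries of this kind can easily be varied. As one more illustrative example of the same pattern (output not shown here), the following query would return the three observations with the highest unemployment figures:
```
select date, unemploy
from econ
order by unemploy desc
limit 3;
```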
### 4\.4\.2 Joins
So far, we have only considered queries involving one table of data. However, SQL provides a very efficient way to join data from various tables. Again, the way of writing SQL code is the same: You describe what the final table should look like and from where the data is to be selected.
Let’s extend the previous example by importing an additional table to our `mydb.sqlite`. The additional data is stored in the file `inflation.csv` in the book’s data folder and contains information on the US annual inflation rate measured in percent.[20](#fn20)
```
-- Create the new table
CREATE TABLE inflation(
"date" DATE,
"inflation_percent" REAL
);
-- prepare import
.mode csv
-- import data from csv
.import --skip 1 inflation.csv inflation
-- switch back to column mode
.mode columns
```
Note that the data stored in `econ` contains monthly observations, while `inflation` contains annual observations. We can thus only meaningfully combine the two datasets at the level of years. Again using the combination of datasets in R as a reference point, here is what we would like to achieve expressed in R. The aim is to get a table that serves as basis for a [Phillips curve](https://en.wikipedia.org/wiki/Phillips_curve) plot, with annual observations and the variables `year`, `average_unemp_percent`, and `inflation_percent`.
```
# import data
econ <- read.csv("data/economics.csv")
inflation <- read.csv("data/inflation.csv")
# prepare variable to match observations
econ$year <- lubridate::year(econ$date)
inflation$year <- lubridate::year(inflation$date)
# create final output
years <- unique(econ$year)
averages <- sapply(years, FUN = function(x) {
mean(econ[econ$year==x,"unemploy"]/econ[econ$year==x,"pop"])*100
} )
unemp <- data.frame(year=years,
average_unemp_percent=averages)
# combine the two datasets via the year column
# (by default, merge() performs an inner join: only years present in both are kept)
output <- merge(unemp, inflation[, c("year", "inflation_percent")], by="year")
# inspect output
head(output)
```
```
## year average_unemp_percent inflation_percent
## 1 1967 1.512 2.773
## 2 1968 1.394 4.272
## 3 1969 1.396 5.462
## 4 1970 2.013 5.838
## 5 1971 2.419 4.293
## 6 1972 2.324 3.272
```
Now let’s look at how the same table can be created in SQLite (the table output below only shows the first 6 rows of the resulting table).
```
SELECT
strftime('%Y', econ.date) AS year,
AVG(unemploy/pop)*100 AS average_unemp_percent,
inflation_percent
FROM econ INNER JOIN inflation ON year = strftime('%Y', inflation.date)
GROUP BY year
```
```
## year average_unemp_percent inflation_percent
## 1 1967 1.512 2.773
## 2 1968 1.394 4.272
## 3 1969 1.396 5.462
## 4 1970 2.013 5.838
## 5 1971 2.419 4.293
## 6 1972 2.324 3.272
```
When done working with the database, we can exit SQLite by typing `.quit` into the terminal and hitting enter.
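For completeness, this is the corresponding command at the `sqlite>` prompt:
```
.quit
```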
4\.5 With a little help from my friends: GPT and R/SQL coding
-------------------------------------------------------------
Whether you are already an experienced programmer in R and SQL or whether you are rather new to coding, recent developments in Large Language Models (LLMs) might provide an interesting way of making your coding workflow more efficient. At the time of writing this book, OpenAI’s ChatGPT was still in its testing phase but had already created a big hype in various topic domains. In very simple terms, ChatGPT and its predecessors GPT\-2 and GPT\-3 are pre\-trained large\-scale machine learning models that have been trained on millions of websites’ text content (including code from open repositories such as GitHub). Applying these models for predictions is different from other machine learning settings. Instead of feeding new datasets into the trained model, you interact with the model via a prompt (like a chat function). That is, among other things, you can pose a question to the model in plain English and get an often very reasonable answer, or you can instruct it via the prompt to generate some type of text output for you (given your instructions, and potentially additional input). As the model is trained on natural language texts as well as (documented) computer code, you can ask it to write code for you, for example in SQL or R.
While there are many tools building on LLMs such as GPT\-3 already out there, and even more still being developed, I want to explicitly point you to two of those: [`gptstudio`](https://github.com/MichelNivard/GPTstudio), an add\-in for RStudio providing an easy\-to\-use interface to some of OpenAI’s APIs, and [GitHub Copilot](https://github.com/features/copilot). The latter is a professionally developed tool to support your software development workflow by, for example, auto\-completing the code you are writing. To use GitHub Copilot, you need a paid subscription. With a subscription, the tool can then be installed as an extension to different code editors (for example, Visual Studio Code). However, at the time of writing this book, no GitHub Copilot extension for RStudio was available. `gptstudio` is a much simpler but free alternative to GitHub Copilot, and it is explicitly made for RStudio.[21](#fn21) You will, however, need an OpenAI account and a corresponding OpenAI API key (to get these, simply follow the instructions here: <https://github.com/MichelNivard/GPTstudio>) in order to use the gptstudio add\-in. You will be charged for the queries that `gptstudio` sends to the OpenAI API; however, there are no fixed costs associated with this setup.
Just to give you an idea of how you could use `gptstudio` for your coding workflow, consider the following example. After installing the add\-in and creating your OpenAI account and API key, you can initiate the chat function of the add\-in as follows.
```
# replace "YOUR-API-KEY" with
# your actual key
Sys.setenv(OPENAI_API_KEY = "YOUR-API-KEY")
# open chat window
gptstudio:::chat_gpt_addin()
```
This will cause RStudio to launch a Viewer window. You can pose questions or write instructions to OpenAI’s GPT model in the prompt field and send the corresponding query by clicking the “Chat” button. In the example below, I simply ask the model to generate an SQL query for me. In fact, I ask it to construct a query that we have already built and evaluated in the SQL examples above. I want the model to specifically reproduce the following query:
```
select date,
unemploy from econ
where unemploy > 15000
order by date;
```
Figure [4\.1](software-programming-with-big-data.html#fig:gptinput) shows a screenshot of my instruction to the model, and Figure [4\.2](software-programming-with-big-data.html#fig:gptoutput) presents the response from the model.
Figure 4\.1: GPTStudio: instructing OpenAI’s GPT\-3 model (text\-davinci\-003\) to write an SQL query.
Figure 4\.2: GPTStudio: an SQL query written by OpenAI’s GPT\-3 model (text\-davinci\-003\).
Two things are worth noting here: first, the query is syntactically correct and would essentially work; second, when comparing the query or the query’s results with our previous manually written query, we notice that the AI’s query is not semantically correct. Our database’s unemployment variable is called `unemploy`, and it is measured in thousands. The GPT model, of course, had no way of obtaining this information from our instructions. As a result, it simply used variable names and values for the filtering that seemed most reasonable given our input. The take\-away message here is to give the model very clear instructions when creating code in this manner, especially regarding the broader context (here, the database and schema you are working with). To check the model’s code for syntax errors, simply test whether the code runs through or not. However, model\-generated code can easily introduce semantic errors, which can be very problematic.
4\.6 Wrapping up
----------------
* Find bottlenecks in your code before exposing it to the full dataset. To do so, use tools like `bench::mark()` and `profvis::profvis()` to see how long certain parts of your code take to run and how much memory they occupy.
* Be aware of R’s strengths and weaknesses when writing code for Big Data Analytics. Pre\-allocate memory for objects in which you collect the results of loops, make use of R’s vectorization, and avoid unnecessary copying.
* Get familiar with SQL and the underlying concept of only loading those observations and variables into R that are really needed for your task. SQLite in combination with R is an excellent lightweight solution to do this.
4\.1 Domains of programming with (big) data
-------------------------------------------
Programming tasks in the context of data analytics typically fall into one of the following broad categories:
* Procedures to import/export data.
* Procedures to clean and filter data.
* Implementing functions for statistical analysis.
When writing a program to process large amounts of data in any of these areas, it is helpful to take into consideration the following design choices:
1. Which basic (already implemented) R functions are more or less suitable as building blocks for the program?[12](#fn12)
2. How can we exploit/avoid some of R’s lower\-level characteristics in order to write more efficient code?
3. Is there a need to interface with a lower\-level programming language in order to speed up the code? (advanced topic)
Finally, there is an additional important point to be made regarding the writing of code for *statistical analysis*: Independent of *how* we write a statistical procedure in R (or in any other language, for that matter), keep in mind that there might be an *alternative statistical procedure/algorithm* that is faster but delivers approximately the same result (as long as we use a sufficiently large sample).
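To give a simple illustrative sketch of this idea: the mean of a sufficiently large random subsample closely approximates the mean computed on the full vector, while processing only a fraction of the data.
```
# the mean of a large random subsample approximates
# the mean of the full (much larger) vector
x_full <- runif(10^7)
x_sample <- sample(x_full, 10^5)
mean(x_full)
mean(x_sample)
```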
4\.2 Measuring R performance
----------------------------
When writing a data analysis script in R to process large amounts of data, it generally makes sense to first test each crucial part of the script with a small sub\-sample. In order to quickly recognize potential bottlenecks, there are a couple of R packages that help you keep track of exactly how long each component of your script needs to process as well as how much memory it uses. The table below lists some of the packages and functions that you should keep in mind when *“profiling”* and testing your code.
| package | function | purpose |
| --- | --- | --- |
| `utils` | `object.size()` | Provides an estimate of the memory that is being used to store an R object. |
| `pryr` | `object_size()` | Works similarly to `object.size()`, but counts more accurately and includes the size of environments. |
| `pryr` | `mem_used()` | Returns the total amount of memory (in megabytes) currently used by R. |
| `pryr` | `mem_change()` | Shows the change in memory (in megabytes) before and after running code. |
| `base` | `system.time()` | Returns CPU (and other) times that an R expression used. |
| `microbenchmark` | `microbenchmark()` | Highly accurate timing of R expression evaluation. |
| `bench` | `mark()` | Benchmark a series of functions. |
| `profvis` | `profvis()` | Profiles an R expression and visualizes the profiling data (usage of memory, time elapsed, etc.). |
Most of these functions are used in an interactive way in the R console. They serve either of two purposes that are central to profiling and improving your code’s performance. First, in order to assess the performance of your R code you probably want to know how long it takes to run your entire script or a specific part of your script. The `system.time()` ([R Core Team 2021](#ref-rfoundation2021)) function provides an easy way to check this. This function is loaded by default with R; there is no need to install an additional package. Simply wrap it around the line(s) of code that you want to assess.
```
# how much time does it take to run this loop?
system.time(for (i in 1:100) {i + 5})
```
```
## user system elapsed
## 0.001 0.000 0.002
```
Note that each time you run this line of code, the returned amount of time varies slightly. This has to do with the fact that the actual time needed to run a line of code can depend on various other processes happening at the same time on your computer.
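A quick way to see this variation is to repeat the measurement a few times (the exact values will, of course, differ on your machine):
```
# repeat the timing a few times to see the variation
replicate(3, system.time(for (i in 1:100) {i + 5})["elapsed"])
```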
The `microbenchmark` ([Mersmann 2021](#ref-microbenchmark)) and `bench` ([Hester and Vaughan 2021](#ref-bench)) packages provide additional functions to measure execution time in more sophisticated ways. In particular, they account for the fact that the processing time for the same code might vary and automatically run the code several times in order to return statistics about the processing time. In addition, `microbenchmark()` provides highly detailed and highly accurate timing of R expression evaluation. The function is particularly useful to accurately find even minor room for improvement when testing a data analysis script on a smaller sub\-sample (improvements which might scale when working on a large dataset). For example, suppose you need to run a for\-loop over millions of iterations and there are different ways to implement the body of the loop (none of which takes much time to process in a single iteration); `microbenchmark()` helps you reliably identify which implementation is fastest. Note that the function actually evaluates the R expression in question many times and returns a statistical summary of the timings.
```
# load package
library(microbenchmark)
# how much time does it take to run this loop (exactly)?
microbenchmark(for (i in 1:100) {i + 5})
```
```
## Unit: milliseconds
## expr min lq mean
## for (i in 1:100) { i + 5 } 1.126 1.209 1.284
## median uq max neval
## 1.254 1.282 4.768 100
```
Second, a key aspect of improving the performance of data analysis scripts in R is to detect inefficient memory allocation and to avoid R objects that grow too much or become too large to handle in memory. To this end, you might want to monitor how much memory R occupies at different points in your script, as well as how much memory is taken up by individual R objects. For example, `object.size()` returns the size of an R object, that is, the amount of memory it takes up in the R environment in bytes (`pryr::object_size()` counts slightly more accurately).
```
hello <- "Hello, World!"
object.size(hello)
```
```
## 120 bytes
```
This is useful for implementing your script with a generally less memory\-intensive approach. For example, for a specific task it might not matter whether a particular variable is stored as a `character` vector or a `factor`. But storing it as `character` turns out to be more memory intensive (why?).
```
# initialize a large string vector containing letters
large_string <- rep(LETTERS[1:20], 1000^2)
head(large_string)
```
```
## [1] "A" "B" "C" "D" "E" "F"
```
```
# store the same information as a factor in a new variable
large_factor <- as.factor(large_string)
# is one bigger than the other?
object.size(large_string) - object.size(large_factor)
```
```
## 79999456 bytes
```
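One way to explore this question is to inspect how the factor is represented internally (a brief sketch based on the objects created above):
```
# a factor stores the observations as integer codes
# plus a short table of the unique levels
typeof(large_factor)
head(as.integer(large_factor))
levels(large_factor)
```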
`pryr::mem_change()` ([Wickham 2021](#ref-pryr)) is useful to track how different parts of your script affect the overall memory occupied by R.
```
# load package
library(pryr)
# initialize a vector with 1000 (pseudo)-random numbers
mem_change(
thousand_numbers <- runif(1000)
)
```
```
## 7.98 kB
```
```
# initialize a vector with 1M (pseudo)-random numbers
mem_change(
a_million_numbers <- runif(1000^2)
)
```
```
## 8 MB
```
`bench::mark()` allows you to easily compare the performance of several different implementations of a code chunk, both regarding timing and memory usage. The following code example illustrates this in a comparison of two approaches to computing the product of each element of a vector `x` with a scalar `z`.
```
# load packages
library(bench)
# initialize variables
x <- 1:10000
z <- 1.5
# approach I: loop
multiplication <-
function(x,z) {
result <- c()
for (i in 1:length(x)) {result <- c(result, x[i]*z)}
return(result)
}
result <- multiplication(x,z)
head(result)
```
```
## [1] 1.5 3.0 4.5 6.0 7.5 9.0
```
```
# approach II: "R-style"
result2 <- x * z
head(result2)
```
```
## [1] 1.5 3.0 4.5 6.0 7.5 9.0
```
```
# comparison
benchmarking <-
mark(
result <- multiplication(x,z),
result2 <- x * z,
min_iterations = 50
)
benchmarking[, 4:9]
```
```
## # A tibble: 2 × 3
## `itr/sec` mem_alloc `gc/sec`
## <dbl> <bch:byt> <dbl>
## 1 12.2 382MB 15.5
## 2 76419. 78.2KB 7.64
```
In addition, the `bench` package ([Hester and Vaughan 2021](#ref-bench)) provides a simple way to visualize these outputs:
```
plot(benchmarking, type = "boxplot")
```
Finally, to analyze the performance of your entire script/program, the `profvis` package ([Chang, Luraschi, and Mastny 2020](#ref-profvis)) provides visual summaries to quickly detect the most prominent bottlenecks. You can either call this via the `profvis()` function with the code section to be profiled as argument, or via the RStudio user interface by clicking on the Code Tools menu in the editor window and selecting “Profile selected lines”.
```
# load package
library(profvis)
# analyze performance of several lines of code
profvis({
x <- 1:10000
z <- 1.5
# approach I: loop
multiplication <-
function(x,z) {
result <- c()
for (i in 1:length(x)) {result <- c(result, x[i]*z)}
return(result)
}
result <- multiplication(x,z)
# approach II: "R-style"
result2 <- x * z
head(result2)
})
```
4\.3 Writing efficient R code
-----------------------------
This subsection touches upon several prominent aspects of writing efficient/fast R code.[13](#fn13)
### 4\.3\.1 Memory allocation and growing objects
R tends to “grow” already\-initialized objects in memory when they are modified. At the initiation of the object, a small amount of memory is occupied at some location in memory. In simple terms, once the object grows, it might not have enough space where it is currently located. Hence, it needs to be “moved” to another location in memory with more space available. This moving, or “re\-allocation” of memory, needs time and slows down the overall process.
This potential bottleneck is most practically illustrated with a `for`\-loop in which each iteration’s result is stored as an element of a vector (the object in question). To avoid growing this object, you need to instruct R to pre\-allocate the memory necessary to contain the final result. If we don’t do this, each iteration of the loop causes R to re\-allocate memory because the number of elements in the vector/list is changing. In simple terms, this means that R needs to execute more steps in each iteration.
In the following example, we compare the performance of two functions, one taking this principle into account, the other not. The functions take a numeric vector as input and return the square root of each element of the numeric vector.
```
# naïve implementation
sqrt_vector <-
function(x) {
output <- c()
for (i in 1:length(x)) {
output <- c(output, x[i]^(1/2))
}
return(output)
}
# implementation with pre-allocation of memory
sqrt_vector_faster <-
function(x) {
output <- rep(NA, length(x))
for (i in 1:length(x)) {
output[i] <- x[i]^(1/2)
}
return(output)
}
```
As a proof of concept we use `system.time()` to measure the difference in speed for various input sizes.[14](#fn14)
```
# the different sizes of the vectors we will put into the two functions
input_sizes <- seq(from = 100, to = 10000, by = 100)
# create the input vectors
inputs <- sapply(input_sizes, rnorm)
# compute outputs for each of the functions
output_slower <-
sapply(inputs,
function(x){ system.time(sqrt_vector(x))["elapsed"]
}
)
output_faster <-
sapply(inputs,
function(x){ system.time(sqrt_vector_faster(x))["elapsed"]
}
)
```
The following plot shows the difference in the performance of the two functions.
```
# load packages
library(ggplot2)
# initialize data frame for plot
plotdata <- data.frame(time_elapsed = c(output_slower, output_faster),
input_size = c(input_sizes, input_sizes),
Implementation= c(rep("sqrt_vector",
length(output_slower)),
rep("sqrt_vector_faster",
length(output_faster))))
# plot
ggplot(plotdata, aes(x=input_size, y= time_elapsed)) +
geom_point(aes(colour=Implementation)) +
theme_minimal(base_size = 18) +
theme(legend.position = "bottom") +
ylab("Time elapsed (in seconds)") +
xlab("No. of elements processed")
```
Clearly, the version with pre\-allocation of memory (avoiding growing an object) is much faster overall. In addition, we see that the problem with the growing object in the naïve implementation tends to get worse with each iteration. The take\-away message for the practitioner: If possible, always initialize the “container” object (list, matrix, etc.) for iteration results as an empty object of the final size/dimensions.
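A few common base R idioms for pre\-allocating such container objects of the final size (a minimal sketch; the object names are arbitrary):
```
# pre-allocate containers of the final size
n <- 1000
numeric_results <- numeric(n)                         # numeric vector of length n
result_matrix <- matrix(NA_real_, nrow = n, ncol = 3) # matrix with final dimensions
result_list <- vector("list", n)                      # list with n (empty) elements
```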
The attentive reader and experienced R coder will have noticed by this point that neither of the two functions implemented above is a particularly smart way to solve the problem at hand. If you consider yourself part of this group, the next subsection will make you more comfortable.
### 4\.3\.2 Vectorization in basic R functions
We can further improve the performance of this function by exploiting a particular characteristic of R: in R, ‘everything is a vector’, and many of the most basic R functions (such as math operators) are *vectorized*. In simple terms, this means that an operation is implemented to directly work on vectors in such a way that it can take advantage of the similarity of each of the vector’s elements. That is, R only has to figure out once how to apply a given function to a vector element in order to apply it to all elements of the vector. In a simple loop, however, R has to go through the same ‘preparatory’ steps again and again in each iteration.
Following up on the problem from the previous subsection, we implement an additional function called `sqrt_vector_fastest` that exploits the fact that math operators in R are vectorized functions. We then re\-run the same speed test as above with this function.
```
# implementation with vectorization
sqrt_vector_fastest <-
function(x) {
output <- x^(1/2)
return(output)
}
# speed test
output_fastest <-
sapply(inputs,
function(x){ system.time(sqrt_vector_fastest(x))["elapsed"]
}
)
```
Let’s have a look at whether this improves the function’s performance further.
```
# load packages
library(ggplot2)
# initialize data frame for plot
plotdata <- data.frame(time_elapsed = c(output_faster, output_fastest),
input_size = c(input_sizes, input_sizes),
Implementation= c(rep("sqrt_vector_faster",
length(output_faster)),
rep("sqrt_vector_fastest",
length(output_fastest))))
# plot
ggplot(plotdata, aes(x=time_elapsed, y=Implementation)) +
geom_boxplot(aes(colour=Implementation),
show.legend = FALSE) +
theme_minimal(base_size = 18) +
xlab("Time elapsed (in seconds)")
```
Clearly, the vectorized implementation is even faster. The take\-away message: Make use of vectorized basic R functions where possible. At this point you might wonder: Why not always use vectorization over loops when working with R? This question (and closely related questions) has been fiercely debated in the R online community over the last few years. The debate has also contained several (in my view) slightly misleading arguments. A simple answer to this question is: It is in fact not that simple to use *actual* vectorization for every kind of problem in R. There are a number of functions often mentioned as ways to achieve “vectorization” easily in R; however, they do not implement vectorization in its original technical sense (the type just demonstrated here with the R math operators). Since this point is very prominent in debates about how to improve R code, the next subsection attempts to summarize the most important aspects to keep in mind.
### 4\.3\.3 `apply`\-type functions and vectorization
There are basically two ways to make use of some form of “vectorization” instead of writing loops.
One approach is to use an `apply`\-type function instead of loops. Note, though, that the `apply`\-type functions primarily make the writing of code more efficient. They still run a loop under the hood. Nevertheless, some `apply`\-type functions might still outperform explicit loops, as they might be better implemented.[15](#fn15)
Consider, for example, `lapply()`, a function that takes a vector (atomic or list) as input and applies a function `FUN` to each of its elements. It is a straightforward alternative to `for`\-loops in many situations (and it automatically takes care of the “growing objects” problem discussed above). The following example shows how we can get the same result by either writing a loop or using `lapply()`. The aim of the code example is to import the [Health News in Twitter Dataset](https://archive.ics.uci.edu/ml/datasets/Health+News+in+Twitter) by Karami et al. ([2017](#ref-karami_etal2017)). The raw data consists of several text files that need to be imported to R consecutively.
The text\-files are located in `data/twitter_texts/`. For either approach of importing all of these files, we first need a list of the paths to all of the files. We can get this with `list.files()`. Also, for either approach we will make use of the `fread` function in the `data.table` package ([Dowle and Srinivasan 2022](#ref-data.table)).
```
# load packages
library(data.table)
# get a list of all file-paths
textfiles <- list.files("data/twitter_texts", full.names = TRUE)
```
Now we can read in all the text files with a `for`\-loop as follows.
```
# prepare loop
all_texts <- list()
n_files <- length(textfiles)
length(all_texts) <- n_files
# read all files listed in textfiles
for (i in 1:n_files) {
all_texts[[i]] <- fread(textfiles[i])
}
```
The imported files are now stored as `data.table`\-objects in the list `all_texts`. With the following line of code we combine all of them in one `data.table`.
```
# combine all in one data.table
twitter_text <- rbindlist(all_texts)
# check result
dim(twitter_text)
```
```
## [1] 42422 3
```
Alternatively, we can make use of `lapply` as follows in order to achieve exactly the same.
```
# use lapply instead of loop
all_texts <- lapply(textfiles, fread)
# combine all in one data.table
twitter_text <- rbindlist(all_texts)
# check result
dim(twitter_text)
```
```
## [1] 42422 3
```
Finally, we can make use of `Vectorize()` in order to “vectorize” our own import function (written for this example). Again, this does not make use of vectorization in its original technical sense.
```
# initialize the import function
import_file <-
function(x) {
parsed_x <- fread(x)
return(parsed_x)
}
# 'vectorize' it
import_files <- Vectorize(import_file, SIMPLIFY = FALSE)
# Apply the vectorized function
all_texts <- import_files(textfiles)
twitter_text <- rbindlist(all_texts)
# check the result
dim(twitter_text)
```
```
## [1] 42422 3
```
The take\-away message: Instead of writing simple loops, use `apply`\-type functions to save time writing code (and make the code easier to read) and automatically avoid memory\-allocation problems.
### 4\.3\.4 Avoiding unnecessary copying
The “growing objects” problem discussed above is only one aspect that can lead to inefficient use of memory when working with R. Another potential problem of using up more memory than necessary during an execution of an R\-script is how R handles objects/variables and their names.
Consider the following line of code:
```
a <- runif(10000)
```
What is usually said to describe what is happening here is something along the lines of “we initialize a variable called `a` and assign a numeric vector with 10,000 random numbers to it”. What in fact happens is that the name `a` is assigned to the numeric vector (which in turn exists at a specific memory address). Thus, values do not have names but *names have values*. This has important consequences for memory allocation and performance. For example, because `a` is in fact just a name attached to a value, the following does not involve any copying of values. It simply “binds” another name, `b`, to the same value to which `a` is already bound.
```
b <- a
```
We can prove this in two ways. First, if what I just stated was not true, the line above would actually lead to more memory being occupied by the current R session. However, this is not the case:
```
object_size(a)
```
```
## 80.05 kB
```
```
mem_change(c <- a)
```
```
## -588 kB
```
Second, by means of the `lobstr`\-package ([Wickham 2022a](#ref-lobstr)), we can see that the values to which `a` and `b` are bound are stored at the same memory address. Hence, they are the same values.
```
# load packages
library(lobstr)
# check memory addresses of objects
obj_addr(a)
```
```
## [1] "0x55d688cfeec0"
```
```
obj_addr(b)
```
```
## [1] "0x55d688cfeec0"
```
Now you probably wonder what happens to `b` if we modify `a`. After all, if the values to which `b` is bound changed whenever we write code concerning `a`, we might end up with very surprising output. The answer is, and this is key (!), once we modify `a`, the values need to be *copied* in order to ensure the integrity of `b`. Only at this point does our program require more memory.
```
# check the first element's value
a[1]
```
```
## [1] 0.5262
```
```
b[1]
```
```
## [1] 0.5262
```
```
# modify a, check memory change
mem_change(a[1] <- 0)
```
```
## 79 kB
```
```
# check memory addresses
obj_addr(a)
```
```
## [1] "0x55d671554530"
```
```
obj_addr(b)
```
```
## [1] "0x55d688cfeec0"
```
Note that the entire vector needed to be copied for this. There is, of course, a lesson from all this regarding writing efficient code. Knowing how actual copying of values occurs helps avoid unnecessary copying. The larger an object, the more time it will take to copy it in memory. Objects with a single binding get modified in place (no copying):
```
mem_change(d <- runif(10000))
```
```
## 80.3 kB
```
```
mem_change(d[1] <- 0)
```
```
## 584 B
```
### 4\.3\.5 Releasing memory
Closely related to the issue of copy\-upon\-modify is the issue of “releasing” memory via “garbage collection”. If your program uses up a lot of (too much) memory (typical for working with large datasets), all processes on your computer might substantially slow down (we will look more closely into why this is the case in the next chapter). Hence, you might want to remove/delete an object once you do not need it anymore. This can be done with the `rm()` function.
```
mem_change(large_vector <- runif(10^8))
```
```
## 800 MB
```
```
mem_change(rm(large_vector))
```
```
## -800 MB
```
`rm()` removes objects that are currently accessible in the global R environment. However, some objects/values might technically not be visible/accessible anymore (for example, objects that have been created in a function which has since returned the function output). To also release memory occupied by these objects, you can call `gc()` (the garbage collector). While R will automatically collect the garbage once it is close to running out of memory, explicitly calling `gc` can still improve the performance of your script when working with large datasets. This is in particular the case when R is not the only data\-intensive process running on your computer. For example, when running an R script involving the repeated querying of data from a local SQL database and the subsequent memory\-intensive processing of this data in R, you can avoid using up too much memory by running `rm` and `gc` explicitly.[16](#fn16)
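As a minimal sketch of this pattern (the function and object names are just for illustration), memory used inside a function call that has already returned can be released explicitly:
```
# a function that creates a large temporary object internally
summarize_chunk <- function(n) {
  tmp <- runif(n) # only exists for the duration of the call
  mean(tmp)
}
result <- summarize_chunk(10^7)
# remove what is no longer needed and explicitly trigger garbage collection
rm(result)
gc()
```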
### 4\.3\.6 Beyond R
So far, we have explored idiosyncrasies of R we should be aware of when writing programs to handle and analyze large datasets. While this has shown that R has many advantages for working with data, it also revealed some aspects of R that might result in low performance compared to other programming languages. A simple generic explanation for this is that R is an interpreted language, meaning that when we execute R code, it is processed (statement by statement) by an ‘interpreter’ that translates the code into machine code (without the user giving any specific instructions). In contrast, when writing code in a ‘compiled language’, we first have to explicitly compile the code (into machine code) and then run the compiled program. Running code that is already compiled is typically much faster than running R code that has to be interpreted before it can actually be processed by the CPU.
For advanced programmers, R offers various options to directly make use of compiled programs (for example, written in C, C\+\+, or FORTRAN). In fact, several of the core R functions installed with the basic R distribution are implemented in one of these lower\-level programming languages, and the R function we call simply interacts with these functions.
We can actually investigate this by looking at the source code of an R function. If you simply type the name of a function (such as our `import_file()`) to the console, R prints the function’s source code to the console.
```
import_file
```
```
## function(x) {
## parsed_x <- fread(x)
## return(parsed_x)
## }
## <bytecode: 0x55d689024050>
```
However, if we do the same for the function `sum`, we don’t see any actual source code.
```
sum
```
```
## function (..., na.rm = FALSE) .Primitive("sum")
```
Instead, `.Primitive()` indicates that `sum()` refers to an internal function (in this case implemented in C).
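As a small illustrative check, base R’s `is.primitive()` tells us whether a given function is such an internally implemented primitive:
```
is.primitive(sum)         # TRUE: implemented internally (in C)
is.primitive(import_file) # FALSE: a regular R function defined above
```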
While the use of functions implemented in a lower\-level language is a common technique to improve the speed of R functions, it is particularly prominent in the context of functions/packages made to deal with large amounts of data (such as the `data.table` package).
4\.4 SQL basics
---------------
Structured Query Language (SQL) has become a bread\-and\-butter tool for data analysts and data scientists due to its broad application in systems used to store large amounts of data. While traditionally only encountered in the context of structured data stored in relational database management systems, some versions of it are now also used to query data from data warehouse systems (e.g. Amazon Redshift) and even to query massive amounts (terabytes or even petabytes) of data stored in data lakes (e.g., Amazon Athena). In all of these applications, SQL’s purpose (from the data analytics perspective) is to provide a convenient and efficient way to query data from mass storage for analysis. Instead of importing a CSV file into R and then filtering it in order to get to the analytic dataset, we use SQL to express how the analytic dataset should look (which variables and rows should be included).
The latter point is very important to keep in mind when already having experience with a language like R and learning SQL for the first time. In R we write code to instruct the computer what to do with the data. For example, we tell it to import a csv file called `economics.csv` as a `data.table`; then we instruct it to remove observations that are older than a certain date according to the `date` column; then we instruct it to compute the average of the `unemploy` column values for each year based on the `date` column and then return the result as a separate data frame:
```
# import data
econ <- read.csv("data/economics.csv")
# filter
econ2 <- econ["1968-01-01"<=econ$date,]
# compute yearly averages (basic R approach)
econ2$year <- lubridate::year(econ2$date)
years <- unique(econ2$year)
averages <-
sapply(years, FUN = function(x){
mean(econ2[econ2$year==x,"unemploy"])
})
output <- data.frame(year=years, average_unemploy=averages)
# inspect the first few lines of the result
head(output)
```
```
## year average_unemploy
## 1 1968 2797
## 2 1969 2830
## 3 1970 4127
## 4 1971 5022
## 5 1972 4876
## 6 1973 4359
```
In contrast, when using SQL we write code that describes what the final result is supposed to look like. The SQL engine processing the code then takes care of the rest and returns the result in the most efficient way.[17](#fn17)
```
SELECT
strftime('%Y', `date`) AS year,
AVG(unemploy) AS average_unemploy
FROM econ
WHERE "1968-01-01"<=`date`
GROUP BY year LIMIT 6;
```
```
## year average_unemploy
## 1 1968 2797
## 2 1969 2830
## 3 1970 4127
## 4 1971 5022
## 5 1972 4876
## 6 1973 4359
```
For the moment, we will only focus on the code and ignore the underlying hardware and database concepts (those will be discussed in more detail in Chapter 5\).
### 4\.4\.1 First steps in SQL(ite)
In order to get familiar with coding in SQL, we work with a free and easy\-to\-use version of SQL called *SQLite*. [SQLite](https://sqlite.org/index.html) is a free full\-featured SQL database engine widely used across platforms. It usually comes pre\-installed with Windows and Mac/OSX distributions and has (from the user’s perspective) all the core features of more sophisticated SQL versions. Unlike the more sophisticated SQL systems, SQLite does not rely explicitly on a client/server model. That is, there is no need to set up your database on a server and then query it from a client interface. In fact, setting it up is straightforward. In the terminal, we can directly call SQLite as a command\-line tool (on most modern computers, the command is now `sqlite3`, SQLite version 3\).
In this first code example, we set up an SQLite database using the command line. In the file structure of the book repository, we first switch to the data directory.
```
cd data
```
With one simple command, we start up SQLite, create a new database called `mydb.sqlite`, and connect to the newly created database.[18](#fn18)
```
sqlite3 mydb.sqlite
```
This created a new file `mydb.sqlite` in our `data` directory, which contains the newly created database. Also, we are now running `sqlite` in the terminal (indicated by the `sqlite>` prompt. This means we can now type SQL code to the terminal to run queries and other SQL commands.
At this point, the newly created database does not contain any data. There are no tables in it. We can see this by running the `.tables` command.
```
.tables
```
As expected, nothing is returned. Now, let’s create our first table and import the `economics.csv` dataset into it. In SQLite, it makes sense to first set up an empty table in which all column data types are defined before importing data from a CSV\-file to it. If a CSV is directly imported to a new table (without type definitions), all columns will be set to `TEXT` (similar to `character` in R) by default. Setting the right data type for each variable follows essentially the same logic as setting the data types of a data frame’s columns in R (with the difference that in SQL this also affects how the data is stored on disk).[19](#fn19)
In a first step, we thus create a new table called `econ`.
```
-- Create the new table
CREATE TABLE econ(
"date" DATE,
"pce" REAL,
"pop" REAL,
"psavert" REAL,
"uempmed" REAL,
"unemploy" INTEGER
);
```
Then, we can import the data from the csv file, by first switching to CSV mode via the command `.mode csv` and then importing the data to `econ` with `.import`. The `.import` command expects as a first argument the path to the CSV file on disk and as a second argument the name of the table to import the data to.
```
-- prepare import
.mode csv
-- import data from csv
.import --skip 1 economics.csv econ
```
Now we can have a look at the new database table in SQLite. `.tables` shows that we now have one table called `econ` in our database, and `.schema` displays the structure of the new `econ` table.
```
.tables
```
```
# econ
```
```
.schema econ
```
```
# CREATE TABLE econ(
# "date" DATE,
# "pce" REAL,
# "pop" REAL,
# "psavert" REAL,
# "uempmed" REAL,
# "unemploy" INTEGER
# );
```
With this, we can start querying data with SQLite. In order to make the query results easier to read, we first set two options regarding how query results are displayed on the terminal. `.header on` enables the display of the column names in the returned query results. And `.mode columns` arranges the query results in columns.
```
.header on
```
```
.mode columns
```
In our first query, we select all (`*`) variable values of the observation of January 1968\.
```
select * from econ where date = '1968-01-01';
```
```
## date pce pop psavert uempmed unemploy
## 1 1968-01-01 531.5 199808 11.7 5.1 2878
```
#### 4\.4\.1\.1 Simple queries
Now let’s select all dates and unemployment values of observations with more than 15 million unemployed, ordered by date.
```
select date,
unemploy from econ
where unemploy > 15000
order by date;
```
```
## date unemploy
## 1 2009-09-01 15009
## 2 2009-10-01 15352
## 3 2009-11-01 15219
## 4 2009-12-01 15098
## 5 2010-01-01 15046
## 6 2010-02-01 15113
## 7 2010-03-01 15202
## 8 2010-04-01 15325
## 9 2010-11-01 15081
```
### 4\.4\.2 Joins
So far, we have only considered queries involving one table of data. However, SQL provides a very efficient way to join data from various tables. Again, the way of writing SQL code is the same: You describe what the final table should look like and from where the data is to be selected.
Let’s extend the previous example by importing an additional table to our `mydb.sqlite`. The additional data is stored in the file `inflation.csv` in the book’s data folder and contains information on the US annual inflation rate measured in percent.[20](#fn20)
```
-- Create the new table
CREATE TABLE inflation(
"date" DATE,
"inflation_percent" REAL
);
-- prepare import
.mode csv
-- import data from csv
.import --skip 1 inflation.csv inflation
-- switch back to column mode
.mode columns
```
Note that `econ` contains monthly observations, while `inflation` contains annual observations. We can thus only meaningfully combine the two datasets at the level of years. Taking the combination of datasets in R as a reference point again, here is what we would like to achieve, expressed in R code. The aim is to get a table that serves as the basis for a [Phillips curve](https://en.wikipedia.org/wiki/Phillips_curve) plot, with annual observations and the variables `year`, `average_unemp_percent`, and `inflation_percent`.
```
# import data
econ <- read.csv("data/economics.csv")
inflation <- read.csv("data/inflation.csv")
# prepare variable to match observations
econ$year <- lubridate::year(econ$date)
inflation$year <- lubridate::year(inflation$date)
# create final output
years <- unique(econ$year)
averages <- sapply(years, FUN = function(x) {
mean(econ[econ$year==x,"unemploy"]/econ[econ$year==x,"pop"])*100
} )
unemp <- data.frame(year=years,
average_unemp_percent=averages)
# combine via the year column
# (inner join: keep only years present in both tables)
output <- merge(unemp, inflation[, c("year", "inflation_percent")], by="year")
# inspect output
head(output)
```
```
## year average_unemp_percent inflation_percent
## 1 1967 1.512 2.773
## 2 1968 1.394 4.272
## 3 1969 1.396 5.462
## 4 1970 2.013 5.838
## 5 1971 2.419 4.293
## 6 1972 2.324 3.272
```
Now let’s look at how the same table can be created in SQLite (the table output below only shows the first 6 rows of the resulting table).
```
SELECT
strftime('%Y', econ.date) AS year,
AVG(unemploy/pop)*100 AS average_unemp_percent,
inflation_percent
FROM econ INNER JOIN inflation ON year = strftime('%Y', inflation.date)
GROUP BY year;
```
```
## year average_unemp_percent inflation_percent
## 1 1967 1.512 2.773
## 2 1968 1.394 4.272
## 3 1969 1.396 5.462
## 4 1970 2.013 5.838
## 5 1971 2.419 4.293
## 6 1972 2.324 3.272
```
When done working with the database, we can exit SQLite by typing `.quit` into the terminal and hit enter.
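Incidentally, the same database file can also be queried directly from R. The following is a minimal sketch, assuming the `DBI` and `RSQLite` packages are installed and that `mydb.sqlite` still sits in the `data` directory:

```
# connect to the SQLite database file created above
library(DBI)
library(RSQLite)
con <- dbConnect(RSQLite::SQLite(), "data/mydb.sqlite")
# list the tables (should show econ and inflation)
dbListTables(con)
# send an SQL query and fetch the result as a data frame
res <- dbGetQuery(con,
                  "SELECT date, unemploy FROM econ
                   WHERE unemploy > 15000
                   ORDER BY date;")
head(res)
# close the connection
dbDisconnect(con)
```

This simply shows that the command\-line tool and R can work with the very same database file.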
4\.5 With a little help from my friends: GPT and R/SQL coding
-------------------------------------------------------------
Whether you are already an experienced programmer in R and SQL or rather new to coding, recent developments in Large Language Models (LLMs) might provide an interesting way of making your coding workflow more efficient. At the time of writing this book, OpenAI’s ChatGPT was still in its testing phase but had already created considerable hype across various domains. In very simple terms, ChatGPT and its predecessors GPT\-2 and GPT\-3 are pre\-trained, large\-scale machine learning models that have been trained on millions of websites’ text content (including code from open repositories such as GitHub). Applying these models for predictions differs from other machine learning settings: instead of feeding new datasets into the trained model, you interact with the model via a prompt (like a chat function). That is, you can pose a question to the model in plain English and often get a very reasonable answer, or you can instruct it via the prompt to generate some type of text output for you (given your instructions and, potentially, additional input). As the model is trained on natural language texts as well as (documented) computer code, you can ask it to write code for you, for example in SQL or R.
While there are many tools that build on LLMs such as GPT\-3 already out there and even more still being developed, I want to explicitly point you to two of those: [`gptstudio`](https://github.com/MichelNivard/GPTstudio), an add\-in for Rstudio, providing an easy\-to\-use interface with some of OpenAI’s APIs, and [GitHub Copilot](https://github.com/features/copilot). The latter is a professionally developed tool to support your software development workflow by, for example, auto\-completing the code you are writing. To use GitHub Copilot you need a paid subscription. With a subscription the tool can then be installed as an extension to different code editors (for example Visual Studio Code). However, at the time of writing this book no GitHub Copilot extension for RStudio was available. `gptstudio` is a much simpler but free alternative to GitHub Copilot and it is explicitly made for RStudio.[21](#fn21) You will, however, need an OpenAI account and a corresponding OpenAI API key (to get these simply follow the instructions here: <https://github.com/MichelNivard/GPTstudio>) in order to use the gptstudio\-add\-in. You will be charged for the queries that `gptstudio` sends to the OpenAI\-API; however there are no fixed costs associated with this setup.
Just to give you an idea of how you could use `gptstudio` for your coding workflow, consider the following example. After installing the add\-in and creating your OpenAI account and API key, you can initiate the chat function of the add\-in as follows.
```
# replace "YOUR-API-KEY" with
# your actual key
Sys.setenv(OPENAI_API_KEY = "YOUR-API-KEY")
# open chat window
gptstudio:::chat_gpt_addin()
```
This will cause RStudio to launch a Viewer window. You can pose questions or write instructions to OpenAI’s GPT model in the prompt field and send the corresponding query by clicking the “Chat” button. In the example below, I simply ask the model to generate an SQL query for me. In fact, I ask it to reconstruct a query that we built and evaluated in the SQL examples above. Specifically, I want the model to reproduce the following query:
```
select date,
unemploy from econ
where unemploy > 15000
order by date;
```
Figure [4\.1](software-programming-with-big-data.html#fig:gptinput) shows a screenshot of my instruction to the model, and Figure [4\.2](software-programming-with-big-data.html#fig:gptoutput) presents the response from the model.
Figure 4\.1: GPTStudio: instructing OpenAI’s GPT\-3 model (text\-davinci\-003\) to write an SQL query.
Figure 4\.2: GPTStudio: an SQL query written by OpenAI’s GPT\-3 model (text\-davinci\-003\).
Two things are worth noting here: first, the query is syntactically correct and would essentially work; second, when comparing the query (or its results) with our manually written query, we notice that the AI’s query is not semantically correct. Our database’s unemployment variable is called `unemploy`, and it is measured in thousands. The GPT model, of course, had no way of obtaining this information from our instructions. As a result, it simply used variable names and filter values that seemed most reasonable given our input. The take\-away message is to give the model very clear instructions when generating code in this manner, especially regarding the broader context (here, the database and schema you are working with). Checking the model’s code for syntax errors is easy: simply test whether the code runs. Semantic errors introduced by model\-generated code, however, can be much harder to detect and thus much more problematic.
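For instance, one way to provide that context is to include the schema and measurement units directly in the instruction. The snippet below is merely a sketch of how such a context\-rich prompt could be phrased in R before pasting it into the chat window (the exact wording is, of course, up to you):

```
# build a schema-aware instruction for the model
prompt <- paste(
  "Write an SQLite query against the table",
  "econ(date DATE, pce REAL, pop REAL, psavert REAL, uempmed REAL, unemploy INTEGER),",
  "where unemploy is measured in thousands.",
  "Return date and unemploy for all months with more than 15 million unemployed,",
  "ordered by date."
)
cat(prompt)
```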
4\.6 Wrapping up
----------------
* Find bottlenecks in your code before exposing it to the full dataset. To do so, use tools like `bench::mark()` and `profvis::profvis()` to see how long certain parts of your code need to process and how much memory they occupy.
* Be aware of R’s strengths and weaknesses when writing code for Big Data Analytics. Pre\-allocate memory for objects in which you collect the results of loops, make use of R’s vectorization, and avoid unnecessary copying.
* Get familiar with SQL and the underlying concept of only loading those observations and variables into R that are really needed for your task. SQLite in combination with R is an excellent lightweight solution to do this.
Chapter 5 Hardware: Computing Resources
=======================================
In order to better understand how we can use the available computing resources most efficiently in an analytics task, we first need to get an idea of what we mean by capacity and *big* regarding the most important hardware components. We then look at each of these components (and additional specialized components) through the lens of Big Data. That is, for each component, we look at how it can become a crucial bottleneck when processing large amounts of data and what we can do about it in R. First we focus on mass storage and memory, then on the CPU, and finally on new alternatives to the CPU.
5\.1 Mass storage
-----------------
In a simple computing environment, the mass storage device (hard disk) is where the data to be analyzed is stored. So, in what units do we measure the size of datasets and consequently the mass storage capacity of a computer? The smallest unit of information in computing/digital data is called a *bit* (from *bi*nary dig*it*; abbrev. ‘b’) and can take one of two (symbolic) values, either a `0` or a `1` (“off” or “on”). Consider, for example, the decimal number `139`. Written in the binary system, `139` corresponds to the binary number `10001011`. In order to store this number on a hard disk, we require a capacity of 8 bits, or one *byte* (1 byte \= 8 bits; abbrev. ‘B’). Historically, one byte encoded a single character of text (e.g., in the ASCII character encoding system). When thinking of a given dataset in its raw/binary representation, we can simply think of it as a row of `0`s and `1`s.
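We can inspect this binary representation directly in R with the built\-in `intToBits()` function:

```
# binary representation of the decimal number 139
bits <- intToBits(139L)      # 32 bits, least significant bit first
as.integer(rev(bits[1:8]))   # lowest byte, most significant bit first
# returns: 1 0 0 0 1 0 1 1
```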
Bigger units for storage capacity usually build on bytes, for example:
* \\(1 \\text{ kilobyte (KB)} \= 1000^{1} \\approx 2^{10} \\text{ bytes}\\)
* \\(1 \\text{ megabyte (MB)} \= 1000^{2} \\approx 2^{20} \\text{ bytes}\\)
* \\(1 \\text{ gigabyte (GB)} \= 1000^{3} \\approx 2^{30} \\text{ bytes}\\)
Currently, a common laptop or desktop computer has several hundred GBs of mass storage capacity. The problems related to a lack of mass storage capacity in Big Data analytics are likely the easiest to understand. Suppose you collect large amounts of data from an online source such as Twitter. At some point, R will throw an error and stop the data collection procedure, as the operating system will not allow R to use up more disk space. The simplest solution to this problem is to clean up your hard disk: empty the trash, archive files in the cloud or onto an external drive and delete them from the main disk, etc. In addition, there are some easy\-to\-learn tricks you can use from within R to save some disk space.
### 5\.1\.1 Avoiding redundancies
Different formats for structuring data stored on disk use up more or less space. A simple example is the comparison of JSON (JavaScript Object Notation) and CSV (Comma Separated Values), both data structures that are widely used to store data for analytics purposes. JSON is much more flexible in that it allows the definition of arbitrarily complex hierarchical data structures (and even allows for hints at data types). However, this flexibility comes with some overhead in the usage of special characters to define the structure. Consider the following JSON excerpt of an economic time series fetched from the Federal Reserve’s [FRED API](https://fred.stlouisfed.org/docs/api/fred/series_observations.html#example_json).
```
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"observation_start": "1776-07-04",
"observation_end": "9999-12-31",
"units": "lin",
"output_type": 1,
"file_type": "json",
"order_by": "observation_date",
"sort_order": "asc",
"count": 84,
"offset": 0,
"limit": 100000,
"observations": [
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"date": "1929-01-01",
"value": "1065.9"
},
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"date": "1930-01-01",
"value": "975.5"
},
...,
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"date": "2012-01-01",
"value": "15693.1"
}
]
}
```
The JSON format is very practical here in separating metadata (such as what time frame is covered by this dataset, etc.) in the first few lines on top from the actual data in `"observations"` further down. However, note that due to this structure, the key names like `"date"` and `"value"` occur for each observation in that time series. In addition, `"realtime_start"` and `"realtime_end"` occur both in the metadata section and again in each observation. Each of those occurrences costs some bytes of storage space on your hard disk but does not add any information once you have parsed and imported the time series into R. The same information could also be stored more efficiently on your hard disk by simply storing the metadata in a separate text file and the actual observations in a CSV file (in a table\-like structure):
```
"date","value"
"1929-01-01", "1065.9"
"1930-01-01", "975.5"
...,
"2012-01-01", 15693.1"
```
In fact, in this particular example, storing the data in JSON format would take up more than twice as much hard\-disk space as the CSV representation. Of course, this is not to say that one should generally store data in CSV files. In many situations, you might really have to rely on JSON’s flexibility to represent more complex structures. However, in practice it is very much worthwhile to think about whether you can improve storage efficiency by simply storing raw data in a different format.
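If you want to check such size differences for your own data, a quick comparison can be done directly in R. The following sketch assumes the `jsonlite` package is installed and uses a tiny stand\-in dataset (the file names are arbitrary):

```
# load packages
library(jsonlite)
# a small stand-in dataset with the same structure as the observations above
obs <- data.frame(date  = c("1929-01-01", "1930-01-01", "2012-01-01"),
                  value = c(1065.9, 975.5, 15693.1))
# write the same data to disk as CSV and as JSON
write.csv(obs, "obs.csv", row.names = FALSE)
write_json(obs, "obs.json")
# compare file sizes (in bytes)
file.size("obs.csv")
file.size("obs.json")
```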
Another related point to storing data in CSV files is to remove redundancies by splitting the data into several tables/CSV files, whereby each table contains the variables exclusively describing the type of observation in it. For example, when analyzing customer data for marketing purposes, the dataset stored in one CSV file might be at the level of individual purchases. That is, each row contains information on what has been purchased on which day by which customer as well as additional variables describing the customer (such as customer ID, name, address, etc.). Instead of keeping all of this data in one file, we could split it into two files, where one only contains the order IDs and corresponding customer IDs as well as attributes of individual orders (but not additional attributes of the customers themselves), and the other contains the customer IDs and all customer attributes. Thereby, we avoid redundancies in the form of repeatedly storing the same values of customer attributes (like name and address) for each order.[22](#fn22)
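As a simple illustration of this idea (with made\-up values), the flat purchases table can be split into an orders table and a customers table in a few lines of R:

```
# flat table: customer attributes repeated for every order
purchases <- data.frame(
  order_id    = 1:4,
  customer_id = c(1, 1, 2, 2),
  name        = c("Ann", "Ann", "Ben", "Ben"),
  address     = c("Main St 1", "Main St 1", "Oak St 9", "Oak St 9"),
  amount      = c(20.5, 12.0, 7.5, 31.0)
)
# orders: one row per purchase, without customer attributes
orders <- purchases[, c("order_id", "customer_id", "amount")]
# customers: one row per customer, attributes stored only once
customers <- unique(purchases[, c("customer_id", "name", "address")])
```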
### 5\.1\.2 Data compression
Data compression essentially follows from the same basic idea of avoiding redundancies in data storage as the simple approaches discussed above. However, it happens on a much more fundamental level. Data compression algorithms encode the information contained in the original representation of the data with fewer bits. In the case of lossless compression, this results in a new data file containing the exact same information but taking up less space on disk. In simple terms, compression replaces repeatedly occurring sequences with shorter expressions and keeps track of replacements in a table. Based on the table, the file can then be de\-compressed to recreate the original representation of the data. For example, consider the following character string.
```
"xxxxxyyyyyzzzz"
```
The same data could be represented with fewer bits as:
```
"5x6y4z"
```
which needs fewer than half the number of bits to be stored (but contains the same information).
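This idea of replacing runs of identical values with counts is known as run\-length encoding; base R’s `rle()` function implements it for vectors:

```
# run-length encoding of the character sequence from above
x <- strsplit("xxxxxyyyyyzzzz", "")[[1]]
rle(x)
# lengths: 5 5 4; values: "x" "y" "z"
```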
There are several easy ways to use your mass storage capacity more efficiently with data compression in R. Most conveniently, some functions to import/export data in R directly allow for reading and writing of compressed formats. For example, the `fread()`/`fwrite()` functions provided in the `data.table` package will automatically use the GZIP (de\-)compression utility when writing to (reading from) a CSV file with a `.gz` file extension in the file name.
```
# load packages
library(data.table)
# load example data from basic R installation
data("LifeCycleSavings")
# write data to normal csv file and check size
fwrite(LifeCycleSavings, file="lcs.csv")
file.size("lcs.csv")
```
```
## [1] 1441
```
```
# write data to a GZIPped (compressed) csv file and check size
fwrite(LifeCycleSavings, file="lcs.csv.gz")
file.size("lcs.csv.gz")
```
```
## [1] 744
```
```
# read/import the compressed data
lcs <- data.table::fread("lcs.csv.gz")
```
Alternatively, you can also use other types of data compression as follows.
```
# common ZIP compression (independent of data.table package)
write.csv(LifeCycleSavings, file="lcs.csv")
file.size("lcs.csv")
```
```
## [1] 1984
```
```
zip(zipfile = "lcs.csv.zip", files = "lcs.csv")
file.size("lcs.csv.zip")
```
```
## [1] 1205
```
```
# unzip/decompress and read/import data
lcs_path <- unzip("lcs.csv.zip")
lcs <- read.csv(lcs_path)
```
Note that data compression is subject to a time–memory trade\-off. Compression and de\-compression are computationally intensive and need time. When using compression to make more efficient use of the available mass storage capacity, think about how frequently you expect the data to be loaded into R as part of the data analysis tasks ahead and for how long you will need to keep the data stored on your hard disk. Importing GBs of compressed data can be uncomfortably slower than importing from an uncompressed file.
So far, we have only focused on data size in the context of mass storage capacity. But what happens once you load a large dataset into R (e.g., by means of `read.csv()`)? A program called a “parser” is executed that reads the raw data from the hard disk and creates a representation of that data in the R environment, that is, in random access memory (RAM). All common computers have more GBs of mass storage available than GBs of RAM. Hence, new issues of hardware capacity loom at the stage of data import, which brings us to the next subsection.
5\.2 Random access memory (RAM)
-------------------------------
Currently, a common laptop or desktop computer has 8–32 GB of RAM capacity. These are more\-or\-less the numbers you should keep in the back of your mind for the examples/discussions that follow. That is, we will consider a dataset as “big” if it takes up several GBs in RAM (and therefore might overwhelm a machine with 8GB RAM capacity).
There are several types of problems that you might run into in practice when attempting to import and analyze a dataset of the size close to or larger than your computer’s RAM capacity. Importing the data might take much longer than expected, your computer might freeze during import (or later during the analysis), R/Rstudio might crash, or you might get an error message hinting at a lack of RAM. How can you anticipate such problems, and what can you do about them?
Many of the techniques and packages discussed in the following chapters are in one way or another solutions to these kinds of problems. However, there are a few relatively simple things to keep in mind before we go into the details.
1. The same data stored on the mass storage device (e.g., in a CSV file) might take up more or less space in RAM. This is due to the fact that the data is (technically speaking) structured differently in a CSV or JSON file than in, for example, a data table or a matrix in R. For example, it is reasonable to anticipate that the example JSON file with the economic time series data will take up less space as a time series object in R (in RAM) than it does on the hard disk (for one thing simply due to the fact that we will not keep the redundancies mentioned before).
2. The import might work well, but some parts of the data analysis script might require much more memory to run through even without loading additional data from disk. A classic example of this is regression analysis performed with, for example, `lm()` in R. As part of the OLS estimation procedure, `lm` will need to create the model matrix (usually denoted \\(X\\)). Depending on the model you want to estimate, the model matrix might actually be larger than the data frame containing the dataset. In fact, this can happen quite easily if you specify a fixed effects model in which you want to account for the fixed effects via dummy variables (for example, for each country except for one).[23](#fn23) Again, the result can be one of several: an error message hinting at a lack of memory, a crash, or the computer slowing down significantly. Anticipating these types of problems is very tricky since memory problems are often caused at a lower level of a function from the package that provides you with the data analytics routine you intend to use. Accordingly, error messages can be rather cryptic.
3. Keep in mind that you have some leeway regarding how much space imported data takes up in R by choosing appropriate data structures and data types. For example, you can store categorical variables as factors instead of character vectors (in current versions of R, `read.csv()` imports them as character vectors by default), and for some operations it makes sense to work with matrices instead of data frames (see the short illustration after this list).
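As a quick illustration of point 3 (using made\-up data; the exact sizes will vary), compare the memory footprint of a low\-cardinality categorical variable stored as a character vector versus as a factor:

```
# one million observations of a three-level categorical variable
x_chr <- sample(c("Germany", "France", "Switzerland"), 1e6, replace = TRUE)
x_fct <- as.factor(x_chr)
# compare the sizes of the two representations in RAM
format(object.size(x_chr), units = "MB")
format(object.size(x_fct), units = "MB")
```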
Finally, recall the lessons regarding memory usage from the section “Writing efficient R code” in Chapter 1\.
5\.3 Combining RAM and hard disk: Virtual memory
------------------------------------------------
What if all the RAM in our computer is not enough to store all the data we want to analyze?
Modern operating systems (OSs) have a way of dealing with such a situation. Once all RAM is used up by the currently running programs, the OS moves parts of the data held in RAM to a dedicated area of the hard disk, which then works as *virtual memory*. Figure 5\.1 illustrates this point.
Figure 5\.1: Virtual memory. Overall memory is mapped to RAM and parts of the hard disk.
For example, when we implement an R\-script that imports one file after another into the R environment, ignoring the RAM capacity of our computer, the OS will start *paging* data to the virtual memory. This happens ‘under the hood’ without explicit instructions by the user. We will quite likely notice that the computer slows down a lot when this happens.
While this default usage of virtual memory by the OS is helpful for running several applications at the same time, each taking up a moderate amount of memory, it is not a really useful tool for processing large amounts of data in one application (R). However, the underlying idea of using both RAM and mass storage simultaneously in order to cope with a lack of memory is very useful in the context of Big Data Analytics.
Several R packages have been developed that exploit the idea behind virtual memory explicitly for analyzing large amounts of data. The basic idea behind these packages is to map a dataset to the hard disk when loading it into R. The actual data values are stored in chunks on the hard disk, while the structure/metadata of the dataset is loaded into R.
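To give a first impression of this idea, here is a minimal sketch using the `bigmemory` package (purely illustrative; it assumes the package is installed, and the file names are arbitrary): the matrix values live in a file on disk, while R only holds a light descriptor object.

```
# load packages
library(bigmemory)
# create a file-backed matrix: values are stored on disk, not in RAM
X <- filebacked.big.matrix(nrow = 1e6, ncol = 5,
                           type = "double",
                           backingfile = "X.bin",
                           descriptorfile = "X.desc")
# reads and writes go through the file-backed object
X[1, ] <- rnorm(5)
dim(X)
```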
5\.4 CPU and parallelization
----------------------------
The actual processing of the data is done in the computer’s central processing unit (CPU). Consequently, the performance of the CPU has a substantial effect on how fast a data analytics task runs. A CPU’s performance is usually denoted by its *clock rate* measured in gigahertz (GHz). In simple terms, a CPU with a clock rate of 4\.8 GHz can execute 4\.8 billion basic operations per second. Holding all other aspects constant, you can thus expect an analytics task to run faster if it runs on a computer with a higher CPU clock rate. As an alternative to scaling up the CPU, we can exploit the fact that modern CPUs have several *cores*. In the normal usage of a PC, the operating system makes use of these cores to run several applications smoothly *in parallel* (e.g., you listen to music on Spotify while browsing the web and running some analytics script in RStudio in the background).
Modern computing environments such as R allow us to explicitly run parts of the same analytics task in parallel, that is, on several CPU cores at the same time. Following the same logic, we can also connect several computers (each with several CPU cores) in a cluster computer and run the program in parallel on all of these computing nodes. Both of these approaches are generally referred to as *parallelization*, and both are supported in several R packages.
An R program run in parallel typically involves the following steps.
* First, several instances of R are running at the same time (across one machine with multiple CPU cores or across a cluster computer). One of the instances (i.e., the *master* instance) breaks the computation into batches and sends those to the other instances.
* Second, each of the instances processes its batch and sends the results back to the master instance.
* Finally, the master instance combines the partial results into the final result and returns it to the user.
To illustrate this point, consider the following econometric problem: you have a customer [dataset](https://www.kaggle.com/jackdaoud/marketing-data?select=marketing_data.csv) with detailed data on customer characteristics, past customer behavior, and information on online marketing campaigns. Your task is to figure out which customers are more likely to react positively to the most recent online marketing campaign. The aim is to optimize personalized marketing campaigns in the future based on insights gained from this exercise. In a first step you take a computationally intensive “brute force” approach: you run all possible regressions with the dependent variable `Response` (equal to 1 if the customer took the offer in the campaign and 0 otherwise). With the 20 candidate independent variables used below, this means running \\(2^{20}\-1 \= 1,048,575\\) logit regressions, one per non\-empty subset of covariates (this is without considering linear combinations of covariates etc.). Finally, you want to select the model with the best fit according to deviance.
A simple sequential implementation to solve this problem could look like this (for the sake of time, we cap the number of regression models to N\=10\).
```
# you can download the dataset from
# https://www.kaggle.com/jackdaoud/marketing-data?
# select=marketing_data.csv
# PREPARATION -----------------------------
# packages
library(stringr)
# import data
marketing <- read.csv("data/marketing_data.csv")
# clean/prepare data
marketing$Income <- as.numeric(gsub("[[:punct:]]",
"",
marketing$Income))
marketing$days_customer <-
as.Date(Sys.Date())-
as.Date(marketing$Dt_Customer, "%m/%d/%y")
marketing$Dt_Customer <- NULL
# all sets of independent vars
indep <- names(marketing)[ c(2:19, 27,28)]
combinations_list <- lapply(1:length(indep),
function(x) combn(indep, x,
simplify = FALSE))
combinations_list <- unlist(combinations_list,
recursive = FALSE)
models <- lapply(combinations_list,
function(x) paste("Response ~",
paste(x, collapse="+")))
# COMPUTE REGRESSIONS --------------------------
N <- 10 # N <- length(models) for all
pseudo_Rsq <- list()
length(pseudo_Rsq) <- N
for (i in 1:N) {
# fit the logit model via maximum likelihood
fit <- glm(models[[i]],
data=marketing,
family = binomial())
# compute the proportion of deviance explained by
# the independent vars (~R^2)
pseudo_Rsq[[i]] <- 1-(fit$deviance/fit$null.deviance)
}
# SELECT THE WINNER ---------------
models[[which.max(pseudo_Rsq)]]
```
```
## [1] "Response ~ MntWines"
```
Alternatively, a sequential implementation could be based on an apply\-type function like `lapply()`. As several of the approaches to parallelize computation with R build either on loops or an apply\-type syntax, let us also briefly introduce the sequential lapply\-implementation of the task above as a point of reference.
```
# COMPUTE REGRESSIONS --------------------------
N <- 10 # N <- length(models) for all
run_reg <-
function(model, data, family){
# fit the logit model via maximum likelihood
fit <- glm(model, data=data, family = family)
# compute and return the proportion of deviance explained by
# the independent vars (~R^2)
return(1-(fit$deviance/fit$null.deviance))
}
pseudo_Rsq_list <- lapply(models[1:N], run_reg, data = marketing, family = binomial())
pseudo_Rsq <- unlist(pseudo_Rsq_list)
# SELECT THE WINNER ---------------
models[[which.max(pseudo_Rsq)]]
```
```
## [1] "Response ~ MntWines"
```
### 5\.4\.1 Naive multi\-session approach
There is actually a simple way of doing this “manually” on a multi\-core PC, which intuitively illustrates the point of parallelization: you write an R script that loads the dataset, runs the first \\(n\\) of the total of \\(N\\) regressions, and stores the results in a local text file. Next, you run the script in your current RStudio session, open an additional RStudio session and run the script there with the next \\(n\\) regressions, and so on until all cores are occupied with one RStudio session each. At the end you collect all of the results from the separate text files and combine them to get the final result. Depending on the problem at hand, this could indeed speed up the overall task, and it is, technically speaking, a form of “multi\-session” approach. However, as you have surely noticed, it is not a very practical approach.
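Purely for illustration, such a batch script could look roughly like the hypothetical `run_batch.R` below; note that `models`, `marketing`, and `run_reg` (defined in the code above) would have to be re\-created or loaded within each session.

```
# run_batch.R -- run one batch of the regressions and store the results on disk
# usage from a terminal: Rscript run_batch.R <first_index> <last_index>
args  <- as.integer(commandArgs(trailingOnly = TRUE))
first <- args[1]
last  <- args[2]
# compute the pseudo R-squared for the models in this batch
res <- sapply(models[first:last], run_reg, data = marketing, family = binomial())
# store the partial results; the CSV files are combined manually at the end
write.csv(data.frame(model_index = first:last, pseudo_Rsq = res),
          file = sprintf("results_%d_%d.csv", first, last),
          row.names = FALSE)
```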
### 5\.4\.2 Multi\-session approach with futures
There is a straightforward way to implement this very basic (naive) idea of running parts of the task in separate R sessions. The `future` package (see Bengtsson ([2021](#ref-bengtsson_2021)) for details) provides a lightweight interface (API) to work with futures[24](#fn24). An additional set of packages (such as `future.apply`) builds on the `future` package and provides high\-level functionality to run your code in parallel without having to change your usual (sequential) R code much. In order to demonstrate the simplicity of this approach, let us re\-write the sequential `lapply()` implementation from above for parallelization through the `future` package. All we need to do is load the `future` and `future.apply` packages ([Bengtsson 2021](#ref-bengtsson_2021)) and then simply replace `lapply(...)` with `future_lapply(...)`.
```
# SET UP ------------------
# load packages
library(future)
library(future.apply)
# instruct the package to resolve
# futures in parallel (via a SOCK cluster)
plan(multisession)
# COMPUTE REGRESSIONS --------------------------
N <- 10 # N <- length(models) for all
pseudo_Rsq_list <- future_lapply(models[1:N],
run_reg,
data=marketing,
family=binomial() )
pseudo_Rsq <- unlist(pseudo_Rsq_list)
# SELECT THE WINNER ---------------
models[[which.max(pseudo_Rsq)]]
```
```
## [1] "Response ~ MntWines"
```
### 5\.4\.3 Multi\-core and multi\-node approach
There are several additional approaches to parallelization in R. With the help of some specialized packages, we can instruct R to automatically distribute the workload to different cores (or to different computing nodes in a cluster computer), control and monitor progress on all cores, and then automatically collect and combine the results from all cores. The `future` package and the packages building on it already provide several approaches to writing such scripts.[25](#fn25) Below, we look at two additional ways of implementing parallelization in R that are based on frameworks other than `future`.
#### 5\.4\.3\.1 Parallel for\-loops using socket
Probably the most intuitive approach to parallelizing a task in R is the `foreach` package ([Microsoft and Weston 2022](#ref-foreach)). It allows you to write a `foreach` statement that is very similar to the for\-loop syntax in R. Hence, you can straightforwardly “translate” an already implemented sequential approach with a common for\-loop to a parallel implementation.
```
# COMPUTE REGRESSIONS IN PARALLEL (MULTI-CORE) --------------------------
# packages for parallel processing
library(parallel)
library(doSNOW)
# get the number of cores available
ncores <- parallel::detectCores()
# set cores for parallel processing
ctemp <- makeCluster(ncores)
registerDoSNOW(ctemp)
# prepare loop
N <- 10000 # N <- length(models) for all
# run loop in parallel
pseudo_Rsq <-
foreach ( i = 1:N, .combine = c) %dopar% {
# fit the logit model via maximum likelihood
fit <- glm(models[[i]],
data=marketing,
family = binomial())
# compute the proportion of deviance explained by
# the independent vars (~R^2)
return(1-(fit$deviance/fit$null.deviance))
}
# SELECT THE WINNER ---------------
models[[which.max(pseudo_Rsq)]]
```
```
## [1] "Response ~ Year_Birth+Teenhome+Recency+MntWines+days_customer"
```
With relatively few cases, this approach is not very fast due to the overhead of “distributing” variables/objects from the master process to all cores/workers. In simple terms, the socket approach means that the cores do not share the same variables/the same environment, which creates overhead. However, this approach is usually very stable and runs on all platforms.
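To make this distribution step explicit, here is a small sketch that uses the socket functions of the `parallel` package directly; it assumes that `models`, `marketing`, and `run_reg` from the examples above exist in the current session.

```
# load packages
library(parallel)
# set up a socket (PSOCK) cluster with two workers (i.e., separate R sessions)
cl <- makeCluster(2)
# copy the required objects from the master session to each worker
clusterExport(cl, c("models", "marketing", "run_reg"))
# run a handful of the regressions in parallel on the workers
res <- parLapply(cl, 1:10,
                 function(i) run_reg(models[[i]],
                                     data = marketing,
                                     family = binomial()))
# shut down the workers
stopCluster(cl)
```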
#### 5\.4\.3\.2 Parallel lapply using forking
Finally, let us look at an implementation based on forking (here, implemented in the `parallel` package; [R Core Team 2021](#ref-rfoundation2021)). In the fork approach, each core works with the same objects/variables in a shared environment, which makes this approach very fast. However, depending on what exactly is being computed, sharing an environment can cause problems.[26](#fn26) If you are not sure whether your setup might run into issues with forking, it is better to rely on a non\-fork approach.[27](#fn27)
```
# COMPUTE REGRESSIONS IN PARALLEL (MULTI-CORE) ---------------
# prepare parallel lapply (based on forking,
# here clearly faster than foreach)
N <- 10000 # N <- length(models) for all
# run parallel lapply
pseudo_Rsq <- mclapply(1:N,
mc.cores = ncores,
FUN = function(i){
# fit the logit model
fit <- glm(models[[i]],
data=marketing,
family = binomial())
# compute the proportion of deviance
# explained by the independent vars (~R^2)
return(1-(fit$deviance/fit$null.deviance))
})
# SELECT THE WINNER, SHOW FINAL OUTPUT ---------------
best_model <- models[[which.max(pseudo_Rsq)]]
best_model
```
```
## [1] "Response ~ Year_Birth+Teenhome+Recency+MntWines+days_customer"
```
5\.5 GPUs for scientific computing
----------------------------------
The success of the computer games industry in the late 1990s/early 2000s led to an interesting positive externality for scientific computing. The ever more demanding graphics of modern computer games and the huge economic success of the computer games industry set incentives for hardware producers to invest in research and development of more powerful ‘graphics cards’, extending a normal PC/computing environment with additional computing power solely dedicated to graphics. At the heart of these graphic cards are so\-called GPUs (graphics processing units), microprocessors specifically optimized for graphics processing. Figure [5\.2](hardware-computing-resources.html#fig:rtx) depicts a modern graphics card similar to those commonly built into today’s ‘gaming’ PCs.
Figure 5\.2: Illustration of a Nvidia GEFORCE RTX 2080 graphics card with a modern GPU (illustration by MarcusBurns1977 under CC BY 3\.0 license).
Why did the hardware industry not simply invest in the development of more powerful CPUs to deal with the more demanding PC games? The main reason is that the architecture of CPUs is designed not only for efficiency but also flexibility. That is, a CPU needs to perform well in all kinds of computations, some parallel, some sequential, etc. Computing graphics is a comparatively narrow domain of computation, and designing a processing unit architecture that is custom\-made to excel just at this one task is thus much more cost efficient. Interestingly, this graphics\-specific architecture (specialized in highly parallel numerical \[floating point] workloads) turns out to also be very useful in some core scientific computing tasks – in particular, matrix multiplications (see Fatahalian, Sugerman, and Hanrahan ([2004](#ref-fatahalian_etal2004)) for a detailed discussion of why that is the case). A key aspect of GPUs is that they are composed of several multiprocessor units, of which each in turn has several cores. GPUs can thus perform computations with hundreds or even thousands of threads in parallel. The figure below illustrates this point by showing the typical architecture of an NVIDIA GPU.
Figure 5\.3: Illustration of a graphics processing unit’s components/architecture. The GPU consists of several Texture Processing Clusters (TPC), which in turn consist of several Streaming Multiprocessors (SM; the primary unit of parallelism in the GPU) that contain ten Streaming Processors (SP; cores, responsible for executing a single thread), shared memory (can be accessed by multiple SPs simultaneously), instruction cache (I\-Cache; responsible for storing and managing the instructions needed to execute a program), constant cache (C\-Cache; store constant data that is needed during program execution), and a multi\-threaded issue component (MT issue; responsible for scheduling and managing the execution of multiple threads simultaneously).
While initially, programming GPUs for scientific computing required a very good understanding of the hardware, graphics card producers have realized that there is an additional market for their products (in particular with the recent rise of deep learning) and now provide several high\-level APIs to use GPUs for tasks other than graphics processing. Over the last few years, more high\-level software has been developed that makes it much easier to use GPUs in parallel computing tasks. The following subsections show some examples of such software in the R environment.[28](#fn28)
### 5\.5\.1 GPUs in R
The `gpuR` package ([Determan 2019](#ref-gpuR)) provides basic R functions to compute with GPUs from within the R environment.[29](#fn29) In the following example we compare the performance of a CPU with a GPU for a matrix multiplication exercise. For a large \\(N\\times P\\) matrix \\(X\\), we want to compute \\(X^tX\\).
In a first step, we load the `gpuR` package.[30](#fn30) Note the output to the console. It shows the type of GPU identified by `gpuR`. This is the platform on which `gpuR` will compute the GPU examples. In order to compare the performances, we also load the `bench` package.
```
# load package
library(bench)
library(gpuR)
```
```
## Number of platforms: 1
## - platform: NVIDIA Corporation: OpenCL 3.0 CUDA 12.2.138
## - context device index: 0
## - NVIDIA GeForce GTX 1650
## checked all devices
## completed initialization
```
Note how loading the `gpuR` package triggers a check of GPU devices and outputs information on the detected GPUs as well as the lower\-level software platform to run GPU computations. Next, we initialize a large matrix filled with pseudo\-random numbers, representing a dataset with \\(N\\) observations and \\(P\\) variables.
```
# initialize dataset with pseudo-random numbers
N <- 10000 # number of observations
P <- 100 # number of variables
X <- matrix(rnorm(N * P, 0, 1), nrow = N, ncol =P)
```
For the GPU examples to work, we need one more preparatory step. GPUs have their own memory, which they can access faster than they can access RAM. However, this GPU memory is typically much smaller than the memory the CPU has access to. Hence, there is a potential trade\-off between computational efficiency and the amount of data you can work with.[31](#fn31) Here, we transfer the matrix to GPU memory with `vclMatrix()`.[32](#fn32)
```
# prepare GPU-specific objects/settings
# transfer matrix to GPU (matrix stored in GPU memory)
vclX <- vclMatrix(X, type = "float")
```
Now we run the two examples, first, based on standard R, using the CPU, and then, computing on the GPU and using GPU memory. In order to make the comparison fair, we force `bench::mark()` to run at least 200 iterations per variant.
```
# compare the two approaches
gpu_cpu <- bench::mark(
# compute with CPU
cpu <-t(X) %*% X,
# GPU version, in GPU memory
# (vclMatrix formation is a memory transfer)
gpu <- t(vclX) %*% vclX,
check = FALSE, memory = FALSE, min_iterations = 200)
```
The performance comparison is visualized with boxplots.
```
plot(gpu_cpu, type = "boxplot")
```
The theoretically expected pattern becomes clearly visible. When using the GPU \+ GPU memory, the matrix operation is substantially faster than the common CPU computation. However, in this simple example of only one matrix operation, the real strength of GPU computation vs. CPU computation does not really become visible. In Chapter 13, we will look at a computationally much more intensive application of GPUs in the domain of deep learning (which relies heavily on matrix multiplications).
5\.6 The road ahead: Hardware made for machine learning
-------------------------------------------------------
Due to the high demand for more computational power in the domain of training complex neural network models (for example, in computer vision), Google has recently developed a new hardware platform specifically designed to work with complex neural networks using TensorFlow: Tensor Processing Units (TPUs). TPUs were designed from the ground up to improve performance in dense vector and matrix computations with the aim of substantially increasing the speed of training deep learning models implemented with TensorFlow ([Abadi et al. 2015](#ref-tensorflow2015-whitepaper)).
Figure 5\.4: Illustration of a tensor processing unit (TPU).
While initially only used internally by Google, the Google Cloud platform now offers cloud TPUs to the general public.
5\.7 Wrapping up
----------------
* Be aware of and avoid redundancies in data storage. Consider, for example, storing data in CSV\-files instead of JSON\-files (if there is no need to store hierarchical structures).
* File compression is a more general strategy to avoid redundancies and save mass storage space. It can help you store even large datasets on disk. However, reading and saving compressed files takes longer, as additional processing is necessary. As a rule of thumb, store the raw datasets (which don’t have to be accessed that often) in a compressed format.
* Standard R for data analytics expects datasets to be imported and available as R objects in the R environment (i.e., in RAM). Hence, importing large datasets into R with the conventional approaches involves parsing and loading the entire dataset into RAM, which might fail if the dataset is larger than the available RAM.
* Even if a dataset is not too large to fit into RAM, running data analysis scripts on it might then lead to R reaching RAM limits due to the creation of additional R objects in RAM needed for the computation. For example, when running regressions in the conventional way in R, R will generate, among other objects, an object containing the model matrix. However, at this point your original dataset object will still also reside in RAM. Not uncommonly, R would then crash or slow down substantially.
* The reason R might slow down substantially when working with large datasets in RAM is that your computer’s operating system (OS) has a default approach of handling situations with a lack of available RAM: it triggers *paging* between the RAM and a dedicated part of the hard disk called *virtual memory*. In simple terms, your computer starts using parts of the hard disk as an extension of RAM. However, reading/writing from/to the hard disk is much slower than from/to RAM, so your entire data analytics script (and any other programs running at the same time) will slow down substantially.
* Based on the points above, when working locally with a large dataset, recognize why your computer is slowing down or why R is crashing. Consider whether the dataset could theoretically fit into memory. Clarify whether analyzing the already imported data triggers the OS’s virtual memory mechanism.
* Taken together, your program might run slower than expected due to a lack of RAM (and thus the paging) and/or due to a very high computational burden on the CPU – for example, bootstrapping the standard errors of regression coefficients.
* By default essentially all basic R functions use one CPU thread/core for computation. If RAM is not an issue, setting up repetitive tasks to run in parallel (i.e., using more than one CPU thread/core at a time) can substantially speed up your program. Easy\-to\-use solutions to do this are `foreach` for a parallel version of `for`\-loops and `mclapply` for a parallel version of `lapply`.
* Finally, if your analytics script builds extensively on matrix multiplication, consider implementing it for processing on your GPU via the `gpuR` package. Note, though, that this approach presupposes that you have installed and set up your GPU with the right drivers to use it not only for graphics but also for scientific computation.
5\.8 Still have insufficient computing resources?
-------------------------------------------------
When working with very large datasets (i.e., terabytes of data), processing the data on one common computer might not work due to a lack of memory or would be way too slow due to a lack of computing power (CPU cores). The architecture or basic hardware setup of a common computer is subject to a limited amount of RAM and a limited number of CPUs/CPU cores. Hence, simply scaling up might not be sufficient. Instead, we need to scale out. In simple terms, this means connecting several computers (each with its own RAM, CPU, and mass storage) in a network, distributing the dataset across all computers (“nodes”) in this network, and working on the data simultaneously across all nodes. In the next chapter, we look into how such “distributed systems” basically work, what software frameworks are commonly used to work on distributed systems, and how we can interact with this software (and the distributed system) via R and SQL.
5\.1 Mass storage
-----------------
In a simple computing environment, the mass storage device (hard disk) is where the data to be analyzed is stored. So, in what units do we measure the size of datasets and consequently the mass storage capacity of a computer? The smallest unit of information in computing/digital data is called a *bit* (from *bi*nary dig*it*; abbrev. ‘b’) and can take one of two (symbolic) values, either a `0` or a `1` (“off” or “on”). Consider, for example, the decimal number `139`. Written in the binary system, `139` corresponds to the binary number `10001011`. In order to store this number on a hard disk, we require a capacity of 8 bits, or one *byte* (1 byte \= 8 bits; abbrev. ‘B’). Historically, one byte encoded a single character of text (e.g., in the ASCII character encoding system). When thinking of a given dataset in its raw/binary representation, we can simply think of it as a row of `0`s and `1`s.
Bigger units for storage capacity usually build on bytes, for example:
* \\(1 \\text{ kilobyte (KB)} \= 1000^{1} \\approx 2^{10} \\text{ bytes}\\)
* \\(1 \\text{ megabyte (MB)} \= 1000^{2} \\approx 2^{20} \\text{ bytes}\\)
* \\(1 \\text{ gigabyte (GB)} \= 1000^{3} \\approx 2^{30} \\text{ bytes}\\)
Currently, a common laptop or desktop computer has several hundred GBs of mass storage capacity. The problems related to a lack of mass storage capacity in Big Data analytics are likely the easiest to understand. Suppose you collect large amounts of data from an online source such as the Twitter. At some point, R will throw an error and stop the data collection procedure as the operating system will not allow R to use up more disk space. The simplest solution to this problem is to clean up your hard disk: empty the trash, archive files in the cloud or onto an external drive and delete them on the main disk, etc. In addition, there are some easy\-to\-learn tricks to use from within R to save some disk space.
### 5\.1\.1 Avoiding redundancies
Different formats for structuring data stored on disk use up more or less space. A simple example is the comparison of JSON (JavaScript Object Notation) and CSV (Comma Separated Values), both data structures that are widely used to store data for analytics purposes. JSON is much more flexible in that it allows the definition of arbitrarily complex hierarchical data structures (and even allows for hints at data types). However, this flexibility comes with some overhead in the usage of special characters to define the structure. Consider the following JSON excerpt of an economic time series fetched from the Federal Reserve’s [FRED API](https://fred.stlouisfed.org/docs/api/fred/series_observations.html#example_json).
```
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"observation_start": "1776-07-04",
"observation_end": "9999-12-31",
"units": "lin",
"output_type": 1,
"file_type": "json",
"order_by": "observation_date",
"sort_order": "asc",
"count": 84,
"offset": 0,
"limit": 100000,
"observations": [
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"date": "1929-01-01",
"value": "1065.9"
},
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"date": "1930-01-01",
"value": "975.5"
},
...,
{
"realtime_start": "2013-08-14",
"realtime_end": "2013-08-14",
"date": "2012-01-01",
"value": "15693.1"
}
]
}
```
The JSON format is very practical here in separating metadata (such as what time frame is covered by this dataset, etc.) in the first few lines on top from the actual data in `"observations"` further down. However, note that due to this structure, key names like `"date"` and `"value"` occur for each observation in that time series. In addition, `"realtime_start"` and `"realtime_end"` occur both in the metadata section and again in each observation. Each of those occurrences costs some bytes of storage space on your hard disk but does not add any information once you have parsed and imported the time series into R. The same information could also be stored in a more efficient way on your hard disk by simply storing the metadata in a separate text file and the actual observations in a CSV file (in a table\-like structure):
```
"date","value"
"1929-01-01", "1065.9"
"1930-01-01", "975.5"
...,
"2012-01-01", 15693.1"
```
In fact, in this particular example, storing the data in JSON format takes up more than twice as much hard\-disk space as storing it in CSV format. Of course, this is not to say that one should generally store data in CSV files. In many situations, you might really have to rely on JSON’s flexibility to represent more complex structures. However, in practice it is very much worthwhile to think about whether you can improve storage efficiency by simply storing raw data in a different format.
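As a rough illustration of this overhead (a hedged sketch using the `jsonlite` package and simulated data, so the exact numbers will differ from the FRED example above), we can write the same small table to disk in both formats and compare file sizes:
```
# compare the on-disk size of the same data stored as JSON vs. CSV
library(jsonlite)
# simulate a small yearly time series (84 observations, as in the FRED example)
df <- data.frame(
 date = as.character(seq(as.Date("1929-01-01"), by = "year", length.out = 84)),
 value = round(runif(84, 900, 16000), 1)
)
write_json(df, "series.json") # key names are repeated for every observation
write.csv(df, "series.csv", row.names = FALSE) # keys stored once in the header
# ratio of file sizes (JSON relative to CSV)
file.size("series.json") / file.size("series.csv")
```
The ratio is typically well above 1, reflecting the overhead of repeating the key names for every observation in the JSON file.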
A related way to save storage space when working with CSV files is to remove redundancies by splitting the data into several tables/CSV files, whereby each table contains only the variables describing the type of observation in it. For example, when analyzing customer data for marketing purposes, the dataset stored in one CSV file might be at the level of individual purchases. That is, each row contains information on what has been purchased on which day by which customer as well as additional variables describing the customer (such as customer ID, name, address, etc.). Instead of keeping all of this data in one file, we could split it into two files, where one only contains the order IDs and corresponding customer IDs as well as attributes of individual orders (but not additional attributes of the customers themselves), and the other contains the customer IDs and all customer attributes. Thereby, we avoid redundancies in the form of repeatedly storing the same values of customer attributes (like name and address) for each order.[22](#fn22)
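The following minimal sketch illustrates this idea with a small, hypothetical purchases table (all column names and values are made up for illustration):
```
# one flat table: customer attributes are repeated for every order
purchases <- data.frame(
 order_id = 1:4,
 customer_id = c(1, 1, 2, 2),
 amount = c(40.5, 12.0, 99.9, 7.5),
 name = c("A. Smith", "A. Smith", "B. Doe", "B. Doe"),
 address = c("1 Main St", "1 Main St", "2 Side St", "2 Side St")
)
# split into an orders table and a customers table (linked via customer_id)
orders <- purchases[, c("order_id", "customer_id", "amount")]
customers <- unique(purchases[, c("customer_id", "name", "address")])
# store the two parts separately; the join key avoids storing
# names/addresses once per order
write.csv(orders, "orders.csv", row.names = FALSE)
write.csv(customers, "customers.csv", row.names = FALSE)
```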
### 5\.1\.2 Data compression
Data compression essentially follows from the same basic idea of avoiding redundancies in data storage as the simple approaches discussed above. However, it happens on a much more fundamental level. Data compression algorithms encode the information contained in the original representation of the data with fewer bits. In the case of lossless compression, this results in a new data file containing the exact same information but taking up less space on disk. In simple terms, compression replaces repeatedly occurring sequences with shorter expressions and keeps track of replacements in a table. Based on the table, the file can then be de\-compressed to recreate the original representation of the data. For example, consider the following character string.
```
"xxxxxyyyyyzzzz"
```
The same data could be represented with fewer bits as:
```
"5x6y4z"
```
which needs fewer than half the number of bits to be stored (but contains the same information).
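Incidentally, base R ships with a function that implements exactly this run\-length idea, `rle()`. A quick sketch:
```
# run-length encoding of the example string with base R
chars <- strsplit("xxxxxyyyyyzzzz", "")[[1]]
rle(chars)
```
The result reports lengths `5 5 4` for the values `"x" "y" "z"`, i.e., the same information as the compressed representation above; `inverse.rle()` recovers the original sequence.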
There are several easy ways to use your mass storage capacity more efficiently with data compression in R. Most conveniently, some functions to import/export data in R directly allow for reading and writing of compressed formats. For example, the `fread()`/`fwrite()` functions provided in the `data.table` package will automatically use the GZIP (de\-)compression utility when writing to (reading from) a CSV file with a `.gz` file extension in the file name.
```
# load packages
library(data.table)
# load example data from basic R installation
data("LifeCycleSavings")
# write data to normal csv file and check size
fwrite(LifeCycleSavings, file="lcs.csv")
file.size("lcs.csv")
```
```
## [1] 1441
```
```
# write data to a GZIPped (compressed) csv file and check size
fwrite(LifeCycleSavings, file="lcs.csv.gz")
file.size("lcs.csv.gz")
```
```
## [1] 744
```
```
# read/import the compressed data
lcs <- data.table::fread("lcs.csv.gz")
```
Alternatively, you can also use other types of data compression as follows.
```
# common ZIP compression (independent of data.table package)
write.csv(LifeCycleSavings, file="lcs.csv")
file.size("lcs.csv")
```
```
## [1] 1984
```
```
zip(zipfile = "lcs.csv.zip", files = "lcs.csv")
file.size("lcs.csv.zip")
```
```
## [1] 1205
```
```
# unzip/decompress and read/import data
lcs_path <- unzip("lcs.csv.zip")
lcs <- read.csv(lcs_path)
```
Note that data compression is subject to a time–memory trade\-off. Compression and de\-compression are computationally intensive and need time. When using compression to make more efficient use of the available mass storage capacity, think about how frequently you expect the data to be loaded into R as part of the data analysis tasks ahead and for how long you will need to keep the data stored on your hard disk. Importing GBs of compressed data can be uncomfortably slower than importing from an uncompressed file.
So far, we have only focused on data size in the context of mass storage capacity. But what happens once you load a large dataset into R (e.g., by means of `read.csv()`)? A program called a “parser” is executed that reads the raw data from the hard disk and creates a representation of that data in the R environment, that is, in random access memory (RAM). All common computers have more GBs of mass storage available than GBs of RAM. Hence, new issues of hardware capacity loom at the stage of data import, which brings us to the next subsection.
5\.2 Random access memory (RAM)
-------------------------------
Currently, a common laptop or desktop computer has 8–32 GB of RAM capacity. These are more\-or\-less the numbers you should keep in the back of your mind for the examples/discussions that follow. That is, we will consider a dataset as “big” if it takes up several GBs in RAM (and therefore might overwhelm a machine with 8GB RAM capacity).
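A useful back\-of\-the\-envelope calculation in this context: numeric values (doubles) in R take up 8 bytes each, so you can roughly gauge whether a dataset will fit into RAM before importing it. A minimal sketch (ignoring R’s per\-object overhead and non\-numeric columns):
```
# rough RAM requirement of a purely numeric dataset
n_rows <- 1e7 # 10 million observations
n_cols <- 50 # 50 numeric variables
n_rows * n_cols * 8 / 1000^3 # approx. 4 GB
```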
There are several types of problems that you might run into in practice when attempting to import and analyze a dataset of the size close to or larger than your computer’s RAM capacity. Importing the data might take much longer than expected, your computer might freeze during import (or later during the analysis), R/Rstudio might crash, or you might get an error message hinting at a lack of RAM. How can you anticipate such problems, and what can you do about them?
Many of the techniques and packages discussed in the following chapters are in one way or another solutions to these kinds of problems. However, there are a few relatively simple things to keep in mind before we go into the details.
1. The same data stored on the mass storage device (e.g., in a CSV file) might take up more or less space in RAM. This is due to the fact that the data is (technically speaking) structured differently in a CSV or JSON file than in, for example, a data table or a matrix in R. For example, it is reasonable to anticipate that the example JSON file with the economic time series data will take up less space as a time series object in R (in RAM) than it does on the hard disk (for one thing simply due to the fact that we will not keep the redundancies mentioned before).
2. The import might work well, but some parts of the data analysis script might require much more memory to run through even without loading additional data from disk. A classic example of this is regression analysis performed with, for example, `lm()` in R. As part of the OLS estimation procedure, `lm` will need to create the model matrix (usually denoted \\(X\\)). Depending on the model you want to estimate, the model matrix might actually be larger than the data frame containing the dataset. In fact, this can happen quite easily if you specify a fixed effects model in which you want to account for the fixed effects via dummy variables (for example, for each country except for one).[23](#fn23) Again, the result can be one of several: an error message hinting at a lack of memory, a crash, or the computer slowing down significantly. Anticipating these types of problems is very tricky since memory problems are often caused at a lower level of a function from the package that provides you with the data analytics routine you intend to use. Accordingly, error messages can be rather cryptic.
3. Keep in mind that you have some leeway in how much space imported data takes up in R by considering data structures and data types. For example, you can import categorical variables as factors instead of character vectors (the default in `read.csv`), and for some operations it makes sense to work with matrices instead of data frames; a short sketch follows right after this list.
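To illustrate the point about data types, the following sketch compares the RAM usage of a character vector and its factor representation (the vector is simulated; exact sizes depend on your R version and platform):
```
# compare RAM usage of a character vs. a factor representation
x_chr <- rep(c("Switzerland", "Germany", "Austria"), times = 1e6)
x_fac <- as.factor(x_chr)
format(object.size(x_chr), units = "MB")
format(object.size(x_fac), units = "MB")
```
Because the factor stores each unique label only once plus an integer index per observation, it typically needs considerably less RAM here than the corresponding character vector.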
Finally, recall the lessons regarding memory usage from the section “Writing efficient R code” in Chapter 1\.
5\.3 Combining RAM and hard disk: Virtual memory
------------------------------------------------
What if all the RAM in our computer is not enough to store all the data we want to analyze?
Modern operating systems (OSs) have a way of dealing with such a situation. Once all RAM is used up by the currently running programs, the OS moves parts of the data held in RAM to a dedicated part of the hard disk, which then works as *virtual memory*. Figure 5\.1 illustrates this point.
Figure 5\.1: Virtual memory. Overall memory is mapped to RAM and parts of the hard disk.
For example, when we implement an R\-script that imports one file after another into the R environment, ignoring the RAM capacity of our computer, the OS will start *paging* data to the virtual memory. This happens ‘under the hood’ without explicit instructions by the user. We will quite likely notice that the computer slows down a lot when this happens.
While this default usage of virtual memory by the OS is helpful for running several applications at the same time, each taking up a moderate amount of memory, it is not a really useful tool for processing large amounts of data in one application (R). However, the underlying idea of using both RAM and mass storage simultaneously in order to cope with a lack of memory is very useful in the context of Big Data Analytics.
Several R packages have been developed that exploit the idea behind virtual memory explicitly for analyzing large amounts of data. The basic idea behind these packages is to map a dataset to the hard disk when loading it into R. The actual data values are stored in chunks on the hard disk, while the structure/metadata of the dataset is loaded into R.
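As a hedged illustration of this idea (the `ff` package is one of several options, and the file name/path here is just an assumption), a large CSV file can be imported as a file\-backed object whose values remain on disk:
```
# file-backed import with the ff package: the data values stay in chunks
# on the hard disk, while only the structure/metadata is held in RAM
# install.packages("ff") # if not yet installed
library(ff)
flights_ff <- read.csv.ffdf(file = "data/flights.csv", header = TRUE)
class(flights_ff) # "ffdf"
dim(flights_ff) # dimensions are known without loading all values into RAM
```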
5\.4 CPU and parallelization
----------------------------
The actual processing of the data is done in the computer’s central processing unit (CPU). Consequently, the performance of the CPU has a substantial effect on how fast a data analytics task runs. A CPU’s performance is usually denoted by its *clock rate* measured in gigahertz (GHz). In simple terms, a CPU with a clock rate of 4\.8 GHz can execute 4\.8 billion basic operations per second. Holding all other aspects constant, you can thus expect an analytics task to run faster if it runs on a computer with a higher CPU clock rate. As an alternative to scaling up the CPU, we can exploit the fact that modern CPUs have several *cores*. In the normal usage of a PC, the operating system makes use of these cores to run several applications smoothly *in parallel* (e.g., you listen to music on Spotify while browsing the web and running some analytics script in RStudio in the background).
Modern computing environments such as R allow us to explicitly run parts of the same analytics task in parallel, that is, on several CPU cores at the same time. Following the same logic, we can also connect several computers (each with several CPU cores) in a cluster computer and run the program in parallel on all of these computing nodes. Both of these approaches are generally referred to as *parallelization*, and both are supported in several R packages.
An R program run in parallel typically involves the following steps.
* First, several instances of R are running at the same time (across one machine with multiple CPU cores or across a cluster computer). One of the instances (i.e., the *master* instance) breaks the computation into batches and sends those to the other instances.
* Second, each of the instances processes its batch and sends the results back to the master instance.
* Finally, the master instance combines the partial results into the final result and returns it to the user.
To illustrate this point, consider the following econometric problem: you have a customer [dataset](https://www.kaggle.com/jackdaoud/marketing-data?select=marketing_data.csv) with detailed data on customer characteristics, past customer behavior, and information on online marketing campaigns. Your task is to figure out which customers are more likely to react positively to the most recent online marketing campaign. The aim is to optimize personalized marketing campaigns in the future based on insights gained from this exercise. In a first step you take a computationally intensive “brute force” approach: you run all possible regressions with the dependent variable `Response` (equal to 1 if the customer took the offer in the campaign and 0 otherwise). In total you have 20 candidate independent variables; thus you need to run \\(2^{20}-1 \= 1,048,575\\) logit regressions (this is without considering linear combinations of covariates etc.). Finally, you want to select the model with the best fit according to deviance.
A simple sequential implementation to solve this problem could look like this (for the sake of time, we cap the number of regression models to N\=10\).
```
# you can download the dataset from
# https://www.kaggle.com/jackdaoud/marketing-data?
# select=marketing_data.csv
# PREPARATION -----------------------------
# packages
library(stringr)
# import data
marketing <- read.csv("data/marketing_data.csv")
# clean/prepare data
marketing$Income <- as.numeric(gsub("[[:punct:]]",
"",
marketing$Income))
marketing$days_customer <-
as.Date(Sys.Date())-
as.Date(marketing$Dt_Customer, "%m/%d/%y")
marketing$Dt_Customer <- NULL
# all sets of independent vars
indep <- names(marketing)[ c(2:19, 27,28)]
combinations_list <- lapply(1:length(indep),
function(x) combn(indep, x,
simplify = FALSE))
combinations_list <- unlist(combinations_list,
recursive = FALSE)
models <- lapply(combinations_list,
function(x) paste("Response ~",
paste(x, collapse="+")))
# COMPUTE REGRESSIONS --------------------------
N <- 10 # N <- length(models) for all
pseudo_Rsq <- list()
length(pseudo_Rsq) <- N
for (i in 1:N) {
# fit the logit model via maximum likelihood
fit <- glm(models[[i]],
data=marketing,
family = binomial())
# compute the proportion of deviance explained by
# the independent vars (~R^2)
pseudo_Rsq[[i]] <- 1-(fit$deviance/fit$null.deviance)
}
# SELECT THE WINNER ---------------
models[[which.max(pseudo_Rsq)]]
```
```
## [1] "Response ~ MntWines"
```
Alternatively, a sequential implementation could be based on an apply\-type function like `lapply()`. As several of the approaches to parallelize computation with R build either on loops or an apply\-type syntax, let us also briefly introduce the sequential lapply\-implementation of the task above as a point of reference.
```
# COMPUTE REGRESSIONS --------------------------
N <- 10 # N <- length(models) for all
run_reg <-
function(model, data, family){
# fit the logit model via maximum likelihood
fit <- glm(model, data=data, family = family)
# compute and return the proportion of deviance explained by
# the independent vars (~R^2)
return(1-(fit$deviance/fit$null.deviance))
}
pseudo_Rsq_list <-lapply(models[1:N], run_reg, data=marketing, family=binomial() )
pseudo_Rsq <- unlist(pseudo_Rsq_list)
# SELECT THE WINNER ---------------
models[[which.max(pseudo_Rsq)]]
```
```
## [1] "Response ~ MntWines"
```
### 5\.4\.1 Naive multi\-session approach
There is actually a simple way of doing this “manually” on a multi\-core PC, which intuitively illustrates the point of parallelization (although it would not be a very practical approach): you write an R script that loads the dataset, runs the first \\(n\\) of the total of \\(N\\) regressions, and stores the result in a local text file. Next, you run the script in your current RStudio session, open an additional RStudio session, and run the script with the next \\(n\\) regressions, and so on until all cores are occupied with one RStudio session. At the end you collect all of the results from the separate text files and combine them to get the final result. Depending on the problem at hand, this could indeed speed up the overall task, and it is technically speaking a form of “multi\-session” approach. However, as you have surely noticed, this is unlikely to be a very practical approach.
### 5\.4\.2 Multi\-session approach with futures
There is a straightforward way to implement the very basic (naive) idea of running parts of the task in separate R sessions. The `future` package (see Bengtsson ([2021](#ref-bengtsson_2021)) for details) provides a lightweight interface (API) to use futures[24](#fn24). An additional set of packages (such as `future.apply`) that build on the `future` package provides high\-level functionality to run your code in parallel without having to change your (sequential, usual) R code much. In order to demonstrate the simplicity of this approach, let us re\-write the sequential implementation through `lapply()` from above for parallelization through the `future` package. All we need to do is to load the `future` and `future.apply` packages ([Bengtsson 2021](#ref-bengtsson_2021)) and then simply replace `lapply(...)` with `future_lapply(...)`.
```
# SET UP ------------------
# load packages
library(future)
library(future.apply)
# instruct the package to resolve
# futures in parallel (via a SOCK cluster)
plan(multisession)
# COMPUTE REGRESSIONS --------------------------
N <- 10 # N <- length(models) for all
pseudo_Rsq_list <- future_lapply(models[1:N],
run_reg,
data=marketing,
family=binomial() )
pseudo_Rsq <- unlist(pseudo_Rsq_list)
# SELECT THE WINNER ---------------
models[[which.max(pseudo_Rsq)]]
```
```
## [1] "Response ~ MntWines"
```
### 5\.4\.3 Multi\-core and multi\-node approach
There are several additional approaches to parallelization in R. With the help of some specialized packages, we can instruct R to automatically distribute the workload to different cores (or different computing nodes in a cluster computer), control and monitor the progress in all cores, and then automatically collect and combine the results from all cores. The `future`\-package and the packages building on it provide in themselves different approaches to writing such scripts.[25](#fn25) Below, we look at two additional ways of implementing parallelization with R that are based on other underlying frameworks than `future`.
#### 5\.4\.3\.1 Parallel for\-loops using socket
Probably the most intuitive approach to parallelizing a task in R is the `foreach` package ([Microsoft and Weston 2022](#ref-foreach)). It allows you to write a `foreach` statement that is very similar to the for\-loop syntax in R. Hence, you can straightforwardly “translate” an already implemented sequential approach with a common for\-loop to a parallel implementation.
```
# COMPUTE REGRESSIONS IN PARALLEL (MULTI-CORE) --------------------------
# packages for parallel processing
library(parallel)
library(doSNOW)
# get the number of cores available
ncores <- parallel::detectCores()
# set cores for parallel processing
ctemp <- makeCluster(ncores)
registerDoSNOW(ctemp)
# prepare loop
N <- 10000 # N <- length(models) for all
# run loop in parallel
pseudo_Rsq <-
foreach ( i = 1:N, .combine = c) %dopar% {
# fit the logit model via maximum likelihood
fit <- glm(models[[i]],
data=marketing,
family = binomial())
# compute the proportion of deviance explained by
# the independent vars (~R^2)
return(1-(fit$deviance/fit$null.deviance))
}
# SELECT THE WINNER ---------------
models[[which.max(pseudo_Rsq)]]
```
```
## [1] "Response ~ Year_Birth+Teenhome+Recency+MntWines+days_customer"
```
With relatively few cases, this approach is not very fast due to the overhead of “distributing” variables/objects from the master process to all cores/workers. In simple terms, the socket approach means that the cores do not share the same variables/the same environment, which creates overhead. However, this approach is usually very stable and runs on all platforms.
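One practical detail not shown above: the socket cluster started with `makeCluster()` keeps the worker R sessions alive. Once the parallel computation is done, it is good practice to shut them down again, for example as follows.
```
# release the worker processes registered above
stopCluster(ctemp)
```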
#### 5\.4\.3\.2 Parallel lapply using forking
Finally, let us look at an implementation based on forking (here, implemented in the `parallel` package; [R Core Team 2021](#ref-rfoundation2021)). In the fork approach, each core works with the same objects/variables in a shared environment, which makes this approach very fast. However, depending on what exactly is being computed, sharing an environment can cause problems.[26](#fn26) If you are not sure whether your setup might run into issues with forking, it would be better to rely on a non\-fork approach.[27](#fn27)
```
# COMPUTE REGRESSIONS IN PARALLEL (MULTI-CORE) ---------------
# prepare parallel lapply (based on forking,
# here clearly faster than foreach)
N <- 10000 # N <- length(models) for all
# run parallel lapply
pseudo_Rsq <- mclapply(1:N,
mc.cores = ncores,
FUN = function(i){
# fit the logit model
fit <- glm(models[[i]],
data=marketing,
family = binomial())
# compute the proportion of deviance
# explained by the independent vars (~R^2)
return(1-(fit$deviance/fit$null.deviance))
})
# SELECT THE WINNER, SHOW FINAL OUTPUT ---------------
best_model <- models[[which.max(pseudo_Rsq)]]
best_model
```
```
## [1] "Response ~ Year_Birth+Teenhome+Recency+MntWines+days_customer"
```
5\.5 GPUs for scientific computing
----------------------------------
The success of the computer games industry in the late 1990s/early 2000s led to an interesting positive externality for scientific computing. The ever more demanding graphics of modern computer games and the huge economic success of the computer games industry set incentives for hardware producers to invest in research and development of more powerful ‘graphics cards’, extending a normal PC/computing environment with additional computing power solely dedicated to graphics. At the heart of these graphics cards are so\-called GPUs (graphics processing units), microprocessors specifically optimized for graphics processing. Figure [5\.2](hardware-computing-resources.html#fig:rtx) depicts a modern graphics card similar to those commonly built into today’s ‘gaming’ PCs.
Figure 5\.2: Illustration of a Nvidia GEFORCE RTX 2080 graphics card with a modern GPU (illustration by MarcusBurns1977 under CC BY 3\.0 license).
Why did the hardware industry not simply invest in the development of more powerful CPUs to deal with the more demanding PC games? The main reason is that the architecture of CPUs is designed not only for efficiency but also flexibility. That is, a CPU needs to perform well in all kinds of computations, some parallel, some sequential, etc. Computing graphics is a comparatively narrow domain of computation, and designing a processing unit architecture that is custom\-made to excel just at this one task is thus much more cost efficient. Interestingly, this graphics\-specific architecture (specialized in highly parallel numerical \[floating point] workloads) turns out to also be very useful in some core scientific computing tasks – in particular, matrix multiplications (see Fatahalian, Sugerman, and Hanrahan ([2004](#ref-fatahalian_etal2004)) for a detailed discussion of why that is the case). A key aspect of GPUs is that they are composed of several multiprocessor units, of which each in turn has several cores. GPUs can thus perform computations with hundreds or even thousands of threads in parallel. The figure below illustrates this point by showing the typical architecture of an NVIDIA GPU.
Figure 5\.3: Illustration of a graphics processing unit’s components/architecture. The GPU consists of several Texture Processing Clusters (TPC), which in turn consist of several Streaming Multiprocessors (SM; the primary unit of parallelism in the GPU) that contain ten Streaming Processors (SP; cores, responsible for executing a single thread), shared memory (can be accessed by multiple SPs simultaneously), instruction cache (I\-Cache; responsible for storing and managing the instructions needed to execute a program), constant cache (C\-Cache; store constant data that is needed during program execution), and a multi\-threaded issue component (MT issue; responsible for scheduling and managing the execution of multiple threads simultaneously).
While initially, programming GPUs for scientific computing required a very good understanding of the hardware, graphics card producers have realized that there is an additional market for their products (in particular with the recent rise of deep learning) and now provide several high\-level APIs to use GPUs for tasks other than graphics processing. Over the last few years, more high\-level software has been developed that makes it much easier to use GPUs in parallel computing tasks. The following subsections show some examples of such software in the R environment.[28](#fn28)
### 5\.5\.1 GPUs in R
The `gpuR` package ([Determan 2019](#ref-gpuR)) provides basic R functions to compute with GPUs from within the R environment.[29](#fn29) In the following example we compare the performance of a CPU with a GPU for a matrix multiplication exercise. For a large \\(N\\times P\\) matrix \\(X\\), we want to compute \\(X^tX\\).
In a first step, we load the `gpuR` package.[30](#fn30) Note the output to the console. It shows the type of GPU identified by `gpuR`. This is the platform on which `gpuR` will compute the GPU examples. In order to compare the performances, we also load the `bench` package.
```
# load package
library(bench)
library(gpuR)
```
```
## Number of platforms: 1
## - platform: NVIDIA Corporation: OpenCL 3.0 CUDA 12.2.138
## - context device index: 0
## - NVIDIA GeForce GTX 1650
## checked all devices
## completed initialization
```
Note how loading the `gpuR` package triggers a check of GPU devices and outputs information on the detected GPUs as well as the lower\-level software platform to run GPU computations. Next, we initialize a large matrix filled with pseudo\-random numbers, representing a dataset with \\(N\\) observations and \\(P\\) variables.
```
# initialize dataset with pseudo-random numbers
N <- 10000 # number of observations
P <- 100 # number of variables
X <- matrix(rnorm(N * P, 0, 1), nrow = N, ncol =P)
```
For the GPU examples to work, we need one more preparatory step. GPUs have their own memory, which they can access faster than they can access RAM. However, this GPU memory is typically not very large compared to the memory CPUs have access to. Hence, there is a potential trade\-off between the speed gains from keeping the data in GPU memory and the larger amounts of data that fit into RAM.[31](#fn31) Here, we transfer the matrix to GPU memory with `vclMatrix()`.[32](#fn32)
```
# prepare GPU-specific objects/settings
# transfer matrix to GPU (matrix stored in GPU memory)
vclX <- vclMatrix(X, type = "float")
```
Now we run the two examples: first with standard R, computing on the CPU, and then computing on the GPU and using GPU memory. In order to make the comparison fair, we force `bench::mark()` to run at least 200 iterations per variant.
```
# compare the two approaches
gpu_cpu <- bench::mark(
# compute with CPU
cpu <- t(X) %*% X,
# GPU version, in GPU memory
# (vclMatrix formation is a memory transfer)
gpu <- t(vclX) %*% vclX,
check = FALSE, memory = FALSE, min_iterations = 200)
```
The performance comparison is visualized with boxplots.
```
plot(gpu_cpu, type = "boxplot")
```
The theoretically expected pattern becomes clearly visible. When using the GPU \+ GPU memory, the matrix operation is substantially faster than the common CPU computation. However, in this simple example of only one matrix operation, the real strength of GPU computation vs. CPU computation does not really become visible. In Chapter 13, we will look at a computationally much more intensive application of GPUs in the domain of deep learning (which relies heavily on matrix multiplications).
5\.6 The road ahead: Hardware made for machine learning
-------------------------------------------------------
Due to the high demand for more computational power in the domain of training complex neural network models (for example, in computer vision), Google has recently developed a new hardware platform specifically designed to work with complex neural networks using TensorFlow: Tensor Processing Units (TPUs). TPUs were designed from the ground up to improve performance in dense vector and matrix computations with the aim of substantially increasing the speed of training deep learning models implemented with TensorFlow ([Abadi et al. 2015](#ref-tensorflow2015-whitepaper)).
Figure 5\.4: Illustration of a tensor processing unit (TPU).
While initially only used internally by Google, the Google Cloud platform now offers cloud TPUs to the general public.
5\.7 Wrapping up
----------------
* Be aware of and avoid redundancies in data storage. Consider, for example, storing data in CSV\-files instead of JSON\-files (if there is no need to store hierarchical structures).
* File compression is a more general strategy to avoid redundancies and save mass storage space. It can help you store even large datasets on disk. However, reading and saving compressed files takes longer, as additional processing is necessary. As a rule of thumb, store the raw datasets (which don’t have to be accessed that often) in a compressed format.
* Standard R for data analytics expects the datasets to be imported and available as R objects in the R environment (i.e., in RAM). Hence, importing large datasets to R with the conventional approaches amounts to parsing and loading the entire dataset into RAM, which might fail if your dataset is larger than the available RAM.
* Even if a dataset is not too large to fit into RAM, running data analysis scripts on it might then lead to R reaching RAM limits due to the creation of additional R objects in RAM needed for the computation. For example, when running regressions in the conventional way in R, R will generate, among other objects, an object containing the model matrix. However, at this point your original dataset object will still also reside in RAM. Not uncommonly, R would then crash or slow down substantially.
* The reason R might slow down substantially when working with large datasets in RAM is that your computer’s operating system (OS) has a default approach of handling situations with a lack of available RAM: it triggers *paging* between the RAM and a dedicated part of the hard disk called *virtual memory*. In simple terms, your computer starts using parts of the hard disk as an extension of RAM. However, reading/writing from/to the hard disk is much slower than from/to RAM, so your entire data analytics script (and any other programs running at the same time) will slow down substantially.
* Based on the points above, when working locally with a large dataset, recognize why your computer is slowing down or why R is crashing. Consider whether the dataset could theoretically fit into memory. Clarify whether analyzing the already imported data triggers the OS’s virtual memory mechanism.
* Taken together, your program might run slower than expected due to a lack of RAM (and thus the paging) and/or due to a very high computational burden on the CPU – for example, bootstrapping the standard errors of regression coefficients.
* By default essentially all basic R functions use one CPU thread/core for computation. If RAM is not an issue, setting up repetitive tasks to run in parallel (i.e., using more than one CPU thread/core at a time) can substantially speed up your program. Easy\-to\-use solutions to do this are `foreach` for a parallel version of `for`\-loops and `mclapply` for a parallel version of `lapply`.
* Finally, if your analytics script builds extensively on matrix multiplication, consider implementing it for processing on your GPU via the `gpuR` package. Note, though, that this approach presupposes that you have installed and set up your GPU with the right drivers to use it not only for graphics but also for scientific computation.
5\.8 Still have insufficient computing resources?
-------------------------------------------------
When working with very large datasets (i.e., terabytes of data), processing the data on one common computer might not work due to a lack of memory or would be way too slow due to a lack of computing power (CPU cores). The architecture or basic hardware setup of a common computer is subject to a limited amount of RAM and a limited number of CPUs/CPU cores. Hence, simply scaling up might not be sufficient. Instead, we need to scale out. In simple terms, this means connecting several computers (each with its own RAM, CPU, and mass storage) in a network, distributing the dataset across all computers (“nodes”) in this network, and working on the data simultaneously across all nodes. In the next chapter, we look into how such “distributed systems” basically work, what software frameworks are commonly used to work on distributed systems, and how we can interact with this software (and the distributed system) via R and SQL.
| Big Data |
umatter.github.io | https://umatter.github.io/BigData/distributed-systems.html |
Chapter 6 Distributed Systems
=============================
When we connect several computers in a network to jointly process large amounts of data, such a computing system is commonly referred to as a “distributed system”. From a technical standpoint, the key difference between a distributed system and the more familiar parallel system (e.g., our desktop computer with its multi\-core CPU) is that in distributed systems the different components do not share the same memory (and storage). Figure [6\.1](distributed-systems.html#fig:distributedsystems) illustrates this point.
Figure 6\.1: Panel A illustrates a distributed system, in contrast to the illustration of a parallel system in Panel B.
In a distributed system, the dataset is literally split up into pieces that then reside separately on different nodes. This requires an additional layer of software (that coordinates the distribution/loading of data as well as the simultaneous processing) and different approaches (different programming models) to defining computing/data analytics tasks. Below, we will look at each of these aspects in turn.
6\.1 MapReduce
--------------
A broadly used programming model for processing Big Data on distributed systems is called MapReduce. It essentially consists of two procedures and is conceptually very close to the “split\-apply\-combine” strategy in data analysis. First, the Map function sorts/filters the data (on each node/computer). Then, a Reduce function aggregates the sorted/filtered data. Thereby, all of these processes are orchestrated to run across many nodes of a cluster computer. Finally, the master node collects the results and returns them to the user.
Let us illustrate the basic idea behind MapReduce with a simple example. Suppose you are working on a text mining task in which all the raw text in thousands of digitized books (stored as text files) need to be processed. In a first step, you want to compute word frequencies (count the number of occurrences of specific words in all books combined).
For simplicity, let us focus only on the following very simple and frequently cited MapReduce word count example[33](#fn33):
Text in book 1:
*Apple Orange Mango*
*Orange Grapes Plum*
Text in book 2:
*Apple Plum Mango*
*Apple Apple Plum*
The MapReduce procedure is then as follows:
* First, the data is loaded from the original text files.
* Each line of text is then passed to individual mapper instances, which separately split the lines of text into key–value pairs. In the example above, the first key\-value pair of the first document/line would then be *Apple,1*.
* Then the system sorts and shuffles all key–value pairs across all instances; next, the reducer aggregates the sorted/shuffled key–value pairs (here: counts the number of word occurrences). In the example above, this means all values with key *Apple* are summed up, resulting in *Apple,4*.
* Finally, the master instance collects all the results and returns the final output.
The result would be as follows:
*Apple,4*
*Grapes,1*
*Mango,2*
*Orange,2*
*Plum,3*
From this simple example, a key aspect of MapReduce should become clear: for the key tasks of mapping and reducing, the data processing on one node/instance can happen completely independently of the processing on the other instances. Note that this is not as easily achievable for every data analytics task as it is for computing word frequencies.
**Aside: MapReduce concept illustrated in R**
In order to better understand the basic concept behind the MapReduce framework on a distributed system, let’s look at how we can combine the functions `Map()` and `Reduce()` in R to implement the basic MapReduce example shown above (this is just to illustrate the underlying idea, *not* to suggest that MapReduce actually is simply an application of the classical `map` and `reduce`/fold functions in functional programming).[34](#fn34) The overall aim of the program is to count the number of times each word is repeated in a given text. The input to the program is thus a text, and the output is a list of key–value pairs with the unique words occurring in the text as keys and their respective number of occurrences as values.
In the code example, we will use the following text as input.
```
# initialize the input text (for simplicity as one text string)
input_text <-
"Apple Orange Mango
Orange Grapes Plum
Apple Plum Mango
Apple Apple Plum"
```
*Mapper*
The Mapper first splits the text into lines and then splits the lines into key–value pairs, assigning to each key the value `1`. For the first step we use `strsplit()`, which takes a character string as input and splits it into a list of sub\-strings according to the matches of a sub\-string (here `"\n"`, indicating the end of a line).
```
# Mapper splits input into lines
lines <- as.list(strsplit(input_text, "\n")[[1]])
lines[1:2]
```
```
## [[1]]
## [1] "Apple Orange Mango"
##
## [[2]]
## [1] "Orange Grapes Plum"
```
In a second step, we apply our own function (`map_fun()`) to each line of text via `Map()`. `map_fun()` splits each line into words (keys) and assigns a value of `1` to each key.
```
# Mapper splits lines into key–value pairs
map_fun <-
function(x){
# remove special characters
x_clean <- gsub("[[:punct:]]", "", x)
# split line into words
keys <- unlist(strsplit(x_clean, " "))
# initialize key–value pairs
key_values <- rep(1, length(keys))
names(key_values) <- keys
return(key_values)
}
kv_pairs <- Map(map_fun, lines)
# look at the result
kv_pairs[1:2]
```
```
## [[1]]
## Apple Orange Mango
## 1 1 1
##
## [[2]]
## Orange Grapes Plum
## 1 1 1
```
*Reducer*
The Reducer first sorts and shuffles the input from the Mapper and then reduces the key–value pairs by summing up the values for each key.
```
# order and shuffle
kv_pairs <- unlist(kv_pairs)
keys <- unique(names(kv_pairs))
keys <- keys[order(keys)]
shuffled <- lapply(keys,
function(x) kv_pairs[x == names(kv_pairs)])
shuffled[1:2]
```
```
## [[1]]
## Apple Apple Apple Apple
## 1 1 1 1
##
## [[2]]
## Grapes
## 1
```
Now we can sum up the keys to get the word count for the entire input.
```
sums <- lapply(shuffled, Reduce, f=sum)
names(sums) <- keys
sums[1:2]
```
```
## $Apple
## [1] 4
##
## $Grapes
## [1] 1
```
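To mirror the final step of MapReduce, in which the master instance collects the reducers’ results, we can flatten the list of per\-key sums into a single named vector. This last step is only an addition for illustration and not part of the original example.

```
# collect all reduced key–value pairs into the final output
final_output <- unlist(sums)
final_output
```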
6\.2 Apache Hadoop
------------------
Hadoop MapReduce is the most widely known and used implementation of the MapReduce framework. A decade ago, Big Data Analytics with really large datasets often involved directly interacting with/working in Hadoop to run MapReduce jobs. However, over the last few years various higher\-level interfaces have been developed that make the usage of MapReduce/Hadoop by data analysts much more easily accessible. The purpose of this section is thus to give a lightweight introduction to the underlying basics that power some of the code examples and tutorials discussed in the data analytics chapters toward the end of this book.
### 6\.2\.1 Hadoop word count example
To get an idea of what running a Hadoop job looks like, we run the same simple word count example introduced above on a local Hadoop installation. The example presupposes a local installation of Hadoop version 2\.10\.1 (see Appendix C for details) and can easily be run on a completely normal desktop/laptop computer running Ubuntu Linux. As a side remark, this actually illustrates an important aspect of developing MapReduce scripts in Hadoop (and many of the software packages building on it): the code can easily be developed and tested locally on a small machine and only later transferred to the actual Hadoop cluster to be run on the full dataset.
The basic Hadoop installation comes with a few templates for very typical map/reduce programs.[35](#fn35) Below we replicate the same word\-count example as shown in simple R code above.
In a first step, we create an input directory where we store the input file(s) to feed to Hadoop.
```
# create directory for input files (typically text files)
mkdir ~/input
```
Then we add a text file containing the same text as in the example above.
```
echo "Apple Orange Mango
Orange Grapes Plum
Apple Plum Mango
Apple Apple Plum" >> ~/input/text.txt
```
Now we can run the MapReduce/Hadoop word count as follows, storing the results in a new directory called `wc_example`. We rely on the word\-count example already implemented and provided with the Hadoop installation (located in `/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.1.jar`).
```
# run mapreduce word count
/usr/local/hadoop/bin/hadoop jar \
/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.1.jar \
wordcount \
~/input ~/wc_example
```
What this line says is: Run the Hadoop program called `wordcount` implemented in the jar\-file `hadoop-mapreduce-examples-2.10.1.jar`; use the files in directory `~/input` containing the raw text as input, and store the final output in directory `~/wc_example`.
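Before printing the results, it can be helpful to list what the job has written to the output directory. For a successful run, Hadoop typically creates a `_SUCCESS` marker file and one or more `part-r-*` files containing the actual key–value output (the exact file names can vary across Hadoop versions and configurations; this check is an addition to the original example). The counts themselves can then be printed by concatenating the output files, as shown in the next command.

```
# inspect the files written by the job
ls ~/wc_example
```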
```
cat ~/wc_example/*
```
```
## Apple 4
## Grapes 1
## Mango 2
## Orange 2
## Plum 3
```
What looks rather simple in this example can get very complex once you want to write an entire data analysis script with all kinds of analysis for Hadoop. Also, Hadoop was designed for batch processing and does not offer a simple interface for interactive sessions. All of this makes it rather impractical for a typical analytics workflow as we know it from working with R. This is where [Apache Spark](https://spark.apache.org/) ([Zaharia et al. 2016](#ref-Spark)) comes to the rescue.
6\.3 Apache Spark
-----------------
Spark ([Zaharia et al. 2016](#ref-Spark)) is a data analytics engine specifically designed for processing large amounts of data on cluster computers. It partially builds on the broader Apache Hadoop framework for handling storage and resource management, but it is often faster than Hadoop MapReduce by an order of magnitude. In addition, it offers many more easy\-to\-use high\-level interfaces for typical analytics tasks than Hadoop. In contrast to Hadoop, Spark
is specifically made for interactively developing and running data analytics scripts and is therefore more easily accessible to people with an applied econometrics background but no substantial knowledge of MapReduce and/or cluster computing. In particular, it comes with several high\-level operators that make it rather easy to implement analytics tasks. As we will see in later chapters, it is very easy to use interactively from within R (and other languages like Python, SQL, and Scala). This makes the platform much more accessible and worthwhile for empirical economic research, even for relatively simple econometric analyses.
The following figure illustrates the basic components of Spark. The core provides the main functionality, including memory management, task scheduling, and the implementation of Spark’s capabilities to handle and manipulate data distributed across many nodes in parallel. Several built\-in libraries extend this core implementation, covering specific domains of practical data analytics tasks (querying structured data via SQL, processing streams of data, machine learning, and network/graph analysis). The machine learning and graph analysis libraries, in particular, provide various common functions/algorithms frequently used in data analytics/applied econometrics, such as generalized linear regression, summary statistics, and principal component analysis.
At the heart of Big Data Analytics with Spark is the fundamental data structure called ‘resilient distributed dataset’ (RDD). When loading/importing data into Spark, the data is automatically distributed across the cluster in RDDs (i.e., as distributed collections of elements), and manipulations are then executed in parallel on these RDDs. However, the entire Spark framework also works locally on a simple laptop or desktop computer. This is a great advantage when learning Spark and when testing/debugging an analytics script on a small sample of the real dataset.
6\.4 Spark with R
-----------------
There are two prominent packages for using Spark in connection with R: `SparkR` ([Venkataraman et al. 2021](#ref-SparkR)) and RStudio’s `sparklyr` ([Luraschi et al. 2022](#ref-sparklyr)). The former is in some ways closer to Spark’s Python API; the latter is closer to the `dplyr`\-type of data handling (and is compatible with the `tidyverse` ([Wickham et al. 2019](#ref-tidyverse))).[36](#fn36) For the very simple introductory examples below, either package could have been used equally well. For the general introduction we focus on `SparkR` and later have a look at a simple regression example based on `sparklyr`.
To install and use Spark from the R shell, only a few preparatory steps are needed. The following examples are based on installing/running Spark on a Linux machine with the `SparkR` package. `SparkR` depends on Java (version 8\). Thus, we should first make sure the right Java version is installed. If several Java versions are installed, we might have to select version 8 manually via the following terminal command (Linux):
```
# might have to switch to java version 8 first
sudo update-alternatives --config java
```
With the right version of Java running, we can install `SparkR` from GitHub (this requires the `devtools` package ([Wickham et al. 2022](#ref-devtools))) via `devtools::install_github("cran/SparkR")`. After installing `SparkR`, the call `SparkR::install.spark()` will download and install Apache Spark to a local directory.[37](#fn37) Now we can start an interactive SparkR session from the terminal with
```
$ SPARK-HOME/bin/sparkR
```
where `SPARK-HOME` is a placeholder for the path to your local Spark installation (printed to the console after running `SparkR::install.spark()`). Or simply run SparkR from within RStudio by loading `SparkR` and initiating Spark with `sparkR.session()`.
```
# to install use
# devtools::install_github("cran/SparkR")
# load packages
library(SparkR)
# start session
sparkR.session()
```
By default this starts a local stand\-alone session (no connection to a cluster computer needed). While the examples below are all intended to run on a local machine, it is straightforward to connect to a remote Spark cluster and run the same examples there.[38](#fn38)
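As a rough sketch (the master URL below is just a placeholder and not taken from the text), connecting to a standalone Spark cluster instead of the local mode essentially only requires pointing `sparkR.session()` to the cluster’s master node:

```
# sketch: connect to a remote standalone Spark cluster
# (replace the placeholder host/port with your cluster's master URL)
sparkR.session(master = "spark://<master-node-ip>:7077",
               appName = "sparkr_example")
```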
### 6\.4\.1 Data import and summary statistics
First, we want to have a brief look at how to perform the first few steps of a typical econometric analysis: import data and compute summary statistics. We will analyze the already familiar `flights.csv` dataset. The basic Spark installation provides direct support to import common data formats such as CSV and JSON via the `read.df()` function (for many additional formats, specific Spark libraries are available). To import `flights.csv`, we set the `source` argument to `"csv"`.
```
# Import data and create a SparkDataFrame
# (a distributed collection of data, RDD)
flights <- read.df("data/flights.csv", source = "csv", header="true")
# inspect the object
class(flights)
```
```
## [1] "SparkDataFrame"
## attr(,"package")
## [1] "SparkR"
```
```
dim(flights)
```
```
## [1] 336776 19
```
By default, all variables have been imported as type `character`. For several variables this is, of course, not the optimal data type to compute summary statistics. We thus first have to convert some columns to other data types with the `cast` function.
```
flights$dep_delay <- cast(flights$dep_delay, "double")
flights$dep_time <- cast(flights$dep_time, "double")
flights$arr_time <- cast(flights$arr_time, "double")
flights$arr_delay <- cast(flights$arr_delay, "double")
flights$air_time <- cast(flights$air_time, "double")
flights$distance <- cast(flights$distance, "double")
```
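Two small variations on this step (both only sketches, not part of the original example): the type conversions can be written as a loop over column names via `withColumn()`, or the column types can be guessed directly at import time by passing `inferSchema = "true"` to `read.df()` (at the cost of an additional pass over the data).

```
# sketch: cast several columns in a loop instead of one by one
numeric_cols <- c("dep_delay", "dep_time", "arr_time",
                  "arr_delay", "air_time", "distance")
for (colname in numeric_cols) {
  flights <- withColumn(flights, colname, cast(flights[[colname]], "double"))
}

# sketch: alternatively, let Spark infer the column types at import
# flights <- read.df("data/flights.csv", source = "csv",
#                    header = "true", inferSchema = "true")
```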
Suppose we only want to compute average arrival delays per carrier for flights with a distance of over 1000 miles. Variable selection and filtering of observations are implemented in `select()` and `filter()` (as in the `dplyr` package).
```
# filter
long_flights <- select(flights, "carrier", "year", "arr_delay", "distance")
long_flights <- filter(long_flights, long_flights$distance >= 1000)
head(long_flights)
```
```
## carrier year arr_delay distance
## 1 UA 2013 11 1400
## 2 UA 2013 20 1416
## 3 AA 2013 33 1089
## 4 B6 2013 -18 1576
## 5 B6 2013 19 1065
## 6 B6 2013 -2 1028
```
Now we summarize the arrival delays for the subset of long flights by carrier. This is the ‘split\-apply\-combine’ approach applied in `SparkR`.
```
# aggregation: mean delay per carrier
long_flights_delays<- summarize(groupBy(long_flights, long_flights$carrier),
avg_delay = mean(long_flights$arr_delay))
head(long_flights_delays)
```
```
## carrier avg_delay
## 1 UA 3.2622
## 2 AA 0.4958
## 3 EV 15.6876
## 4 B6 9.0364
## 5 DL -0.2394
## 6 OO -2.0000
```
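To get a quick overview of which carriers accumulate the largest delays, the aggregated (and still distributed) result can be sorted before collecting it. This is just a sketch and not part of the original example; `arrange()` and `desc()` are the `SparkR` functions for sorting a SparkDataFrame.

```
# sketch: sort carriers by average delay (largest first)
delays_sorted <- arrange(long_flights_delays,
                         desc(long_flights_delays$avg_delay))
head(delays_sorted)
```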
Finally, we want to convert the result back into a usual `data.frame` (loaded in our current R session) in order to further process the summary statistics (output to LaTeX table, plot, etc.). As in the previous aggregation exercises with the `ff` package, the computed summary statistics (in the form of a table/df) are obviously much smaller than the raw data. However, converting a `SparkDataFrame` back into a native R object generally means that all the data stored in the RDDs constituting the `SparkDataFrame` object is loaded into local RAM. Hence, when working with actual Big Data on a Spark cluster, this type of operation can quickly overflow local RAM.
```
# Convert result back into native R object
delays <- collect(long_flights_delays)
class(delays)
```
```
## [1] "data.frame"
```
```
delays
```
```
## carrier avg_delay
## 1 UA 3.2622
## 2 AA 0.4958
## 3 EV 15.6876
## 4 B6 9.0364
## 5 DL -0.2394
## 6 OO -2.0000
## 7 F9 21.9207
## 8 US 0.5567
## 9 MQ 8.2331
## 10 HA -6.9152
## 11 AS -9.9309
## 12 VX 1.7645
## 13 WN 9.0842
## 14 9E 6.6730
```
6\.5 Spark with SQL
-------------------
Instead of interacting with Spark via R, you can do the same via SQL. This can be very convenient at the stage of data exploration and data preparation. Also note that this is a very good example of how knowing some SQL can be very useful when working with Big Data even if you are not interacting with an actual relational database.[39](#fn39)
To directly interact with Spark via SQL, open a terminal window, switch to the `SPARK-HOME` directory,
```
cd SPARK-HOME
```
and enter the following command:
```
$ bin/spark-sql
```
where `SPARK-HOME` is again the placeholder for the path to your local Spark installation (printed to the console after running `SparkR::install.spark()`). This will start up Spark and connect to it via Spark’s SQL interface. You will notice that the prompt in the terminal changes (similar to when you start `sqlite`).
Let’s run some example queries. The Spark installation comes with several data and script examples. The example datasets are located at `SPARK-HOME/examples/src/main/resources`. For example, the file `employees.json` contains the following records in JSON format:
```
{"name":"Michael", "salary":3000}
{"name":"Andy", "salary":4500}
{"name":"Justin", "salary":3500}
{"name":"Berta", "salary":4000}
```
We can query this data directly via SQL commands by referring to the location of the original JSON file.
**Select all observations**
```
SELECT *
FROM json.`examples/src/main/resources/employees.json`
;
```
```
Michael 3000
Andy 4500
Justin 3500
Berta 4000
Time taken: 0.099 seconds, Fetched 4 row(s)
```
**Filter observations**
```
SELECT *
FROM json.`examples/src/main/resources/employees.json`
WHERE salary <4000
;
```
```
Michael 3000
Justin 3500
Time taken: 0.125 seconds, Fetched 2 row(s)
```
**Compute the average salary**
```
SELECT AVG(salary) AS mean_salary
FROM json.`examples/src/main/resources/employees.json`;
```
```
3750.0
Time taken: 0.142 seconds, Fetched 1 row(s)
```
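Standard SQL clauses such as `GROUP BY` or `ORDER BY` work in the same way. For instance, the following additional query (not part of the original set of examples) returns the employees sorted by salary in descending order:

```
SELECT name, salary
FROM json.`examples/src/main/resources/employees.json`
ORDER BY salary DESC
;
```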
6\.6 Spark with R \+ SQL
------------------------
Most conveniently, you can combine the SQL query features of Spark and SQL with running R on Spark. First, initiate the Spark session in RStudio and import the data as a Spark data frame.
```
# to install use
# devtools::install_github("cran/SparkR")
# load packages
library(SparkR)
# start session
sparkR.session()
```
```
## Java ref type org.apache.spark.sql.SparkSession id 1
```
```
# read data
flights <- read.df("data/flights.csv", source = "csv", header="true")
```
Now we can make the Spark data frame accessible for SQL queries by registering it as a temporary table/view with `createOrReplaceTempView()` and then run SQL queries on it from within the R session via the `sql()`\-function. `sql()` will return the results as a Spark data frame (this means the result is also located on the cluster and hardly affects the master node’s memory).
```
# register the data frame as a table
createOrReplaceTempView(flights, "flights" )
# now run SQL queries on it
query <-
"SELECT DISTINCT carrier,
year,
arr_delay,
distance
FROM flights
WHERE 1000 <= distance"
long_flights2 <- sql(query)
head(long_flights2)
```
```
## carrier year arr_delay distance
## 1 DL 2013 -30 1089
## 2 UA 2013 -11 1605
## 3 DL 2013 -42 1598
## 4 UA 2013 -5 1585
## 5 AA 2013 6 1389
## 6 UA 2013 -23 1620
```
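The same combination of SQL and R is convenient for aggregation steps whose (small) results we then want to pull into the local R session. The following sketch is not part of the original example; since the columns of the registered `flights` table were imported as `character`, the delay column is explicitly cast to `DOUBLE` inside the query.

```
# sketch: aggregate via SQL, then collect the small result locally
query_agg <-
   "SELECT carrier,
           AVG(CAST(arr_delay AS DOUBLE)) AS avg_delay
    FROM flights
    WHERE 1000 <= distance
    GROUP BY carrier"
delays_sql <- sql(query_agg)
delays_local <- collect(delays_sql)
class(delays_local)
```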
6\.7 Wrapping up
----------------
* At the core of a horizontal scaling strategy (scaling out) are so\-called *distributed systems* – several computers connected in a network to jointly process large amounts of data.
* In contrast to standard parallel\-computing, the different computing nodes in a distributed system do not share the same physical memory. Each of the nodes/computers in the system has its own CPU, hard disk, and RAM. This architecture requires a different computing paradigm to run the same data analytics job across all nodes (in parallel).
* A commonly used paradigm to do this is MapReduce, which is implemented in software called Apache Hadoop.
* The core idea of MapReduce is to split a problem/computing task on a large dataset into several components, each of which focuses on a smaller subset of the dataset. The task components are then distributed across the cluster, so that each component is handled by one computer in the network. Finally, each node returns its result to the master node (the computer coordinating all activities in the cluster), where the partial results are combined into the overall result.
* A typical example of a MapReduce job is the computation of term frequencies in a large body of text. Here, each node computes the number of occurrences of specific words in a subset of the overall body of text; the individual results are then summed up per unique word.
* Apache Hadoop is a collection of open\-source software tools to work with massive amounts of data on a distributed system (a network of computers). Part of Hadoop is the Hadoop MapReduce implementation to run MapReduce jobs on a Hadoop cluster.
* Apache Spark is an analytics engine for large\-scale data processing on local machines or clusters. It improves upon several shortcomings of the previous Hadoop/MapReduce framework, in particular with regard to iterative tasks (such as in machine learning).
Chapter 7 Cloud Computing
=========================
In this chapter, we first look at what cloud computing
basically is and what platforms provide cloud computing services. We then focus on *scaling up* in the cloud. For the sake of simplicity, we will primarily focus on how to use cloud instances provided by one of the providers, Amazon Web Services (AWS). However, once you are familiar with setting things up on AWS, using Google Cloud, Azure, etc. will be easy as well. Most of the core services are provided by all providers, and once you understand the basics, the different dashboards will look quite familiar. In a second step, we look at a prominent approach to *scaling out* by setting up a Spark cluster in the cloud.
7\.1 Cloud computing basics and platforms
-----------------------------------------
So far we have focused on the available computing resources on our local machines (desktop/laptop) and how to use them optimally when dealing with large amounts of data and/or computationally demanding tasks. A key aspect of this has been to understand why our local machine is struggling with a computing task when there is a large amount of data to be processed and then identifying potential avenues to use the available resources more efficiently, for example, by using one of the following approaches:
* Computationally intensive tasks (but not pushing RAM to the limit): parallelization, using several CPU cores (nodes) in parallel.
* Memory\-intensive tasks (data still fits into RAM): efficient memory allocation.
* Memory\-intensive tasks (data does not fit into RAM): efficient use of virtual memory (use parts of mass storage device as virtual memory).
* Storage: efficient storage (avoid redundancies).
In practice, datasets might be too large for our local machine even if we take all of the techniques listed above into account. That is, a parallelized task might still take ages to complete because our local machine has too few cores available, a task involving virtual memory would use up way too much space on our hard disk, etc.
In such situations, we have to think about horizontal and vertical scaling beyond our local machine. That is, we outsource tasks to a bigger machine (or a cluster of machines) to which our local computer is connected (typically, over the internet). While only one or two decades ago most organizations had their own large centrally hosted machines (database servers, cluster computers) for such tasks, today they often rely on third\-party solutions *‘in the cloud’*. That is, specialized companies provide computing resources (usually, virtual servers) that can be easily accessed via a broadband internet connection and rented on an hourly basis (or even by the minute or second). Given the obvious economies of scale in this line of business, a few large players have emerged who effectively dominate most of the global market:
* [Amazon Web Services (AWS)](https://aws.amazon.com/)
* [Microsoft Azure](https://azure.microsoft.com/en-us/)
* [Google Cloud Platform (GCP)](https://cloud.google.com/)
* [IBM Cloud](https://www.ibm.com/cloud/)
* [Alibaba Cloud](https://www.alibabacloud.com/)
* [Tencent Cloud](https://intl.cloud.tencent.com/)
In the following subsections and chapters, we will primarily rely on services provided by AWS and GCP. In order to try out the code examples and tutorials, make sure to have an AWS account as well as a Google account (which can then easily be linked to GCP). For the AWS account, go to `https://aws.amazon.com/` and create an account. You will have to enter credit card details for either cloud platform when setting up/linking accounts. Importantly, you will only be charged for the time you actually use an AWS service. Moreover, several of AWS’s cloud products offer a free tier that allows you to test and try out the service at no cost. The following examples rely whenever possible on free\-tier instances; if not, it is explicitly indicated that running the example in the cloud will generate some costs on your account. For the GCP account, have your Google login credentials ready, and visit `https://cloud.google.com/` to register your Google account with GCP. Again, credit card details are needed to set up an account, but many of the services can be used for free to a certain extent (to learn and try out code).
7\.2 Transitioning to the cloud
-------------------------------
When logged in to AWS and GCP, you will notice the breadth of services offered by these platforms. There are more than 10 main categories of services, with many subcategories and products in each. It is easy to get lost from just browsing through them. Rest assured that for the purpose of data analytics/applied econometrics, many of these services are irrelevant. Our motivation to use the cloud is to extend our computational resources to use our analytics scripts on large datasets, not to develop and deploy web applications or business analytics dashboards. With this perspective, a small selection of services will make the cloud easily accessible for daily analytics workflows.
When we use services from AWS or GCP to *scale up* (vertical scaling) the available resources, the transition from our local implementation of a data analytics task to the cloud implementation is often rather simple. Once we have set up a cloud instance and figured out how to communicate with it, we can typically run the exact same R script locally and in the cloud. This is usually the case for parallelized tasks (simply run the same script on a machine with more cores), in\-memory tasks (rent a machine with more RAM but still use `data.table()`, etc.), and highly parallelized tasks to be run on GPUs. The transition from a local implementation to horizontal scaling (*scaling out*) in the cloud requires slightly more preparatory steps. However, in this domain we will directly build on the same (or very similar) software tools that we have used locally in previous chapters. For example, instead of connecting R to a local SQLite database, we set up a MySQL database on AWS RDS and then connect our local R session to this database in the cloud in essentially the same way.
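As a rough sketch of what this can look like in practice (the package choice, hostname, and credentials below are placeholders; they are not taken from the text, and later chapters may use a different setup), connecting a local R session to a MySQL database on AWS RDS differs from the local SQLite case mainly in the connection details:

```
# sketch: connect a local R session to a MySQL database on AWS RDS
library(DBI)
library(RMySQL)
con <- dbConnect(RMySQL::MySQL(),
                 host = "<your-rds-endpoint>.rds.amazonaws.com",  # placeholder
                 port = 3306,
                 user = "<username>",       # placeholder
                 password = "<password>",   # placeholder
                 dbname = "<database-name>" # placeholder
                 )
# from here on, queries work as with a local database, e.g.:
# dbGetQuery(con, "SELECT COUNT(*) FROM some_table")
dbDisconnect(con)
```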
7\.3 Scaling up in the cloud: Virtual servers
---------------------------------------------
In the following pages we look at a very common scheme to deal with a lack of local computing resources: flexibly renting a type of virtual server referred to as an “Elastic Compute Cloud (EC2\)” instance. Specifically, we will look at how to scale up with AWS EC2 and R/RStudio Server. One of the easiest ways to set up an AWS EC2 instance for R/RStudio Server is to use [Louis Aslett’s Amazon Machine Image (AMI)](https://www.louisaslett.com/RStudio_AMI/). This way you do not need to install R/RStudio Server yourself. Simply follow these five steps:
* Depending on the region in which you want to create your EC2 instance, click on the corresponding AMI link in <https://www.louisaslett.com/RStudio_AMI/>. For example, if you want to create the instance in Frankfurt, click on [ami\-076abd591c4335092](https://console.aws.amazon.com/ec2/home?region=eu-central-1#launchAmi=ami-076abd591c4335092). You will be automatically directed to the AWS page where you can select the type of EC2 instance you want to create. By default, the free tier T2\.micro instance is selected (I recommend using this type of instance if you simply want to try out the examples below).
* After selecting the instance type, click on “Review and Launch”. On the opened page, select “Edit security groups”. There should be one entry with `SSH` selected in the drop\-down menu. Click on this drop\-down menu and select `HTTP` (instead of `SSH`). Click again on “Review and Launch” to confirm the change.
* Then, click “Launch” to initialize the instance. From the pop\-up concerning the key pair, select “Proceed without a key pair” from the drop\-down menu, and check the box below (“I acknowledge …”). Click “Launch” to confirm. A page opens; click on “View instances” to see all of your instances and their status. Wait until “Status check” shows “2/2 checks passed” (you might have to refresh the instance overview or browser window).
* Click on the instance ID of your newly launched instance and copy the public IPv4 address, open a new browser window/tab, type in `http://`, paste the IP address, and hit enter (the address in your browser bar will be something like `http://3.66.120.150`; `http`, not `https`!) .
* You should see the login\-interface to RStudio on your cloud instance. The username is `rstudio`, and the password is the instance ID of your newly launched instance (it might take a while to load R/Rstudio). Once RStudio is loaded, you are ready to go.
*NOTE*: the instructions above help you set up your own EC2 instance with R/RStudio to run some example scripts and try out R on EC2\. For more serious/professional (long\-term) usage of an EC2 instance, I strongly recommend setting it up manually and improving the security settings accordingly! The above setup will, in principle, leave your instance accessible to anyone on the web (something you might want to avoid).
### 7\.3\.1 Parallelization with an EC2 instance
This short tutorial illustrates how to scale the computation up by running it on an AWS EC2 instance. In doing so, we build on the techniques discussed in the previous chapter. Note that our EC2 instance is a Linux machine. When running R on a Linux machine, installing R packages often involves an additional step: most packages need to be compiled from source before they can be installed. The command to install packages is exactly the same (`install.packages()`); normally you only notice a slight difference in the output shown on the R console during installation (and the installation process takes a little longer than what you are used to). In some cases you might also have to install additional system dependencies directly in Linux. Apart from that, using R via RStudio Server in the cloud looks/feels very similar if not identical to using R/RStudio locally.
**Preparatory steps**
If your EC2 instance with RStudio Server is not running yet, do the following. In the AWS console, navigate to EC2, select your EC2 instance (with RStudio Server installed), and click on “Instance state/Start instance”. You will have to wait until you see “2/2 checks passed”. Then, open a new browser window, enter the address of your EC2/RStudio Server instance (see above, e.g., `http://3.66.120.150`), and log in to RStudio. First, we need to install the `parallel` ([R Core Team 2021](#ref-rfoundation2021)) and `doSNOW` ([Microsoft Corporation and Weston 2022](#ref-doSNOW)) packages. In addition we will rely on the `stringr` package ([Wickham 2022b](#ref-stringr)).
```
# install packages for parallelization
install.packages("parallel", "doSNOW", "stringr")
```
Once the installations have finished, you can load the packages and verify the number of cores available on your EC2 instance as follows. If you have chosen the free tier T2\.micro instance type when setting up your EC2 instance, you will see that only one core is available. Do not worry. It is good practice to test your parallelization script with a few iterations on a small machine before bringing out the big guns. The specialized packages we use for parallelization here do not care whether you have one core or 32; the same code runs on either machine (obviously not very fast with only one core).
```
# load packages
library(parallel)
library(doSNOW)
# verify no. of cores available
n_cores <- detectCores()
n_cores
```
Finally, we have to upload the data that we want to process as part of the parallelization task. To this end, in RStudio Server, navigate to the file explorer in the lower right\-hand corner. The graphical user interfaces of a local RStudio installation and RStudio Server are almost identical. However, in the file explorer pane of RStudio Server you will find an “Upload” button to transfer files from your local machine to the EC2 instance. In this demonstration, we will work with the previously introduced `marketing_data.csv` dataset. You can thus click on “Upload” and upload it to the current target directory (the home directory of RStudio Server). As soon as the file is uploaded, you can work with it as usual (as on the local RStudio installation). To keep things as in the local examples, use the file explorer to create a new `data` folder, and move `marketing_data.csv` into this new folder (alternatively, you can do this from the R console, as sketched below). Figure [7\.1](cloud-computing.html#fig:ec2rstudioserver) shows a screenshot of the corresponding pane.
Figure 7\.1: File explorer and Upload button on RStudio Server.
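If you prefer not to click through the file explorer, the same step can be done from the R console. This is a minimal sketch, assuming `marketing_data.csv` was uploaded to the home directory as described above:

```
# create the data folder and move the uploaded file there
# (assumes marketing_data.csv sits in the home directory)
dir.create("data", showWarnings = FALSE)
file.rename(from = "marketing_data.csv", to = "data/marketing_data.csv")
```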
In order to test if all is set up properly to run in parallel on our EC2 instance, open a new R script in RStudio Server and copy/paste the preparatory steps and the simple parallelization example from Section 4\.5 into the R script.
```
# PREPARATION -----------------------------
# packages
library(stringr)
# import data
marketing <- read.csv("data/marketing_data.csv")
# clean/prepare data
marketing$Income <- as.numeric(gsub("[[:punct:]]", "", marketing$Income))
marketing$days_customer <- as.Date(Sys.Date())-
as.Date(marketing$Dt_Customer, "%m/%d/%y")
marketing$Dt_Customer <- NULL
# all sets of independent vars
indep <- names(marketing)[ c(2:19, 27,28)]
combinations_list <- lapply(1:length(indep),
function(x) combn(indep, x, simplify = FALSE))
combinations_list <- unlist(combinations_list, recursive = FALSE)
models <- lapply(combinations_list,
function(x) paste("Response ~", paste(x, collapse="+")))
```
**Test parallelized code**
Now, we can start testing the code on EC2 without registering the one core for cluster processing. This way, `%dopar%` will automatically fall back to running the code sequentially. Make sure to set `N` to 10 (or another small number) for this test.
```
# set cores for parallel processing
# ctemp <- makeCluster(n_cores)
# registerDoSNOW(ctemp)
# prepare loop
N <- 10 # just for illustration, the actual code is N <- length(models)
# run loop in parallel
pseudo_Rsq <-
foreach ( i = 1:N, .combine = c) %dopar% {
# fit the logit model via maximum likelihood
fit <- glm(models[[i]], data=marketing, family = binomial())
# compute the proportion of deviance explained
#by the independent vars (~R^2)
return(1-(fit$deviance/fit$null.deviance))
}
```
Once the test has run through successfully, we are ready to scale up and run the actual workload in parallel in the cloud.
**Scale up and run in parallel**
First, switch back to the AWS EC2 console and stop the instance by ticking the check box in the corresponding row and clicking on “Instance state/Stop instance”. Once the instance state is “Stopped”, click on “Actions/Instance settings/Change instance type”. You will be presented with a drop\-down menu from which you can select the new instance type and confirm. The example below is based on selecting the `t2.2xlarge` type (with 8 vCPU cores and 32GB of RAM). Now you can start the instance again, log in to RStudio Server (as above), and run the script again – but this time with the following lines not commented out (in order to make use of all eight cores):
```
# set cores for parallel processing
ctemp <- makeCluster(n_cores)
registerDoSNOW(ctemp)
```
In order to monitor the usage of computing resources on your instance, switch to the Terminal tab, type in `htop`, and hit enter. This will open the interactive process viewer [htop](https://htop.dev/). Figure [7\.2](cloud-computing.html#fig:ec2rstudioserverhtop) shows the output of htop for the preparatory phase of the parallel task implemented above. The output confirms the resources provided by a `t2.2xlarge` EC2 instance (8 vCPU cores and 32GB of RAM). When using the default free tier T2\.micro instance, you will notice in the htop output that only one core is available.
Figure 7\.2: Monitor resources and processes with htop.
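To quantify the gain from the additional cores, you can wrap the loop in `system.time()` and compare the runtime on the T2\.micro and the `t2.2xlarge` instance. The following is a minimal sketch; it assumes the objects `models` and `marketing` from the preparation script above are in memory and the cluster `ctemp` has been registered.

```
# time a moderately sized test run (sketch; assumes `models`, `marketing`,
# and the registered cluster `ctemp` from the code above)
N <- 1000
runtime <- system.time(
  pseudo_Rsq <- foreach(i = 1:N, .combine = c) %dopar% {
    fit <- glm(models[[i]], data = marketing, family = binomial())
    1 - (fit$deviance / fit$null.deviance)
  }
)
runtime
# release the worker processes when done
stopCluster(ctemp)
```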
7\.4 Scaling up with GPUs
-------------------------
As discussed in Chapter 4, GPUs can help speed up highly parallelizable tasks such as matrix multiplications. While using a local GPU/graphics card for statistical analysis has become easier thanks to more accessible software layers around GPUs, it still requires solid knowledge of how to install specific GPU drivers and change basic system settings. Many users specializing in the data analytics side rather than the computer science/hardware side of Big Data Analytics might not be comfortable making such installations/changes on their desktop computers, or might not have the right type of GPU/graphics card in their device. In addition, for many users it might not make sense to have a powerful GPU in their local machine if they only occasionally use it for certain machine learning or parallel computing tasks. In recent years, many cloud computing platforms have started providing virtual machines with access to GPUs, in many cases with additional layers of software and/or pre\-installed drivers, allowing users to directly run their code on GPUs in the cloud. Below, we briefly look at two of the most easy\-to\-use options to run code on GPUs in the cloud: using Google Colab notebooks with GPUs, and setting up RStudio on virtual machines in a special EC2 tier with GPU access on AWS.
### 7\.4\.1 GPUs on Google Colab
Google Colab provides a very easy way to run R code on GPUs from Google Cloud. All you need is a Google account. Open a new browser window, go to <https://colab.to/r>, and log in with your Google account if prompted to do so. Colab will open a [Jupyter notebook](https://en.wikipedia.org/wiki/Project_Jupyter) with an R runtime. Click on “Runtime/Change runtime type”, and in the drop\-down menu under ‘Hardware accelerator’, select the option ‘GPU’.
Figure 7\.3: Colab notebook with R runtime and GPUs.
Then, you can install the packages for which you wish to use GPU acceleration (e.g., `gpuR`, `keras`, and `tensorflow`), and the code relying on GPU processing will be run on GPUs (or even [TPUs](https://en.wikipedia.org/wiki/Tensor_Processing_Unit)). At the following link you can find a Colab notebook set up for running a [simple image classification tutorial](https://tensorflow.rstudio.com/tutorials/beginners/basic-ml/tutorial_basic_classification/) with keras on TPUs: [bit.ly/bda\_colab](https://bit.ly/bda_colab).
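To verify that the Colab runtime actually sees a GPU, you can list the physical devices known to TensorFlow. This is a quick, hedged check, assuming the `tensorflow` R package has been installed and configured in the notebook:

```
# check whether TensorFlow sees a GPU in the Colab runtime
# (assumes the tensorflow R package is installed and configured)
library(tensorflow)
tf$config$list_physical_devices("GPU")
```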
### 7\.4\.2 RStudio and EC2 with GPUs on AWS
To start a ready\-made EC2 instance with GPUs and RStudio installed, open a browser window and navigate to this service provided by Inmatura on the AWS Marketplace: [https://aws.amazon.com/marketplace/pp/prodview\-p4gqghzifhmmo](https://aws.amazon.com/marketplace/pp/prodview-p4gqghzifhmmo). Click on “Subscribe”.
Figure 7\.4: JupyterHub AMI provided by Inmatura on the AWS Marketplace to run RStudio Server with GPUs on AWS EC2\.
After the subscription request is processed, click on “Continue to Configuration” and “Continue to Launch”. To make use of a GPU, select, for example, `g2.2xlarge` type under “EC2 Instance Type”. If necessary, create a new key pair under Key Pair Settings; otherwise keep all the default settings as they are. Then, at the bottom, click on *Launch*. This will launch a new EC2 instance with a GPU and with RStudio server (as part of JupyterHub) installed.[40](#fn40)
Figure 7\.5: Launch JupyterHub with RStudio Server and GPUs on AWS EC2\.
Once you have successfully launched your EC2 instance, JupyterHub is configured to start automatically on port 80\. You can access it at `http://<instance-ip>`, where `<instance-ip>` is the public IP address of the newly launched instance (you will find this on the EC2 dashboard). The default username is ‘jupyterhub\-admin’, and the default password is identical to your EC2 instance ID. If you need to look this up, you can find it on your EC2 dashboard; it will look similar to ‘i\-0b3445939c7492’.[41](#fn41)
7\.5 Scaling out: MapReduce in the cloud
----------------------------------------
Many cloud computing providers offer specialized services for MapReduce tasks in the cloud. Here we look at a comparatively easy\-to\-use solution provided by AWS, called Elastic MapReduce (AWS EMR). It allows you to set up a Hadoop cluster in the cloud within minutes and requires essentially no additional configuration if the cluster is being used for the kind of data analytics tasks discussed in this book.
Setting up a default AWS EMR cluster via the AWS console is straightforward. Simply go to `https://console.aws.amazon.com/elasticmapreduce/`, click on “Create cluster”, and adjust the default selection of settings if necessary. Alternatively, we can set up an EMR cluster via the AWS command\-line interface (CLI). In the following tutorials, we will work with AWS EMR via R/RStudio (specifically, via the package `sparklyr`). By default, RStudio is not part of the EMR cluster set\-up. However, AWS EMR offers a very flexible way to install/configure additional software on virtual EMR clusters via so\-called “bootstrap” scripts. These scripts can be shared on AWS S3 and used by others, which is what we do in the following cluster set\-up via the CLI.[42](#fn42)
In order to run the cluster set\-up via the AWS CLI shown below, you need an SSH key to later connect to the EMR cluster. If you do not have such an SSH key for AWS yet, follow these instructions to generate one: <https://docs.aws.amazon.com/cloudhsm/classic/userguide/generate_ssh_key.html>. In the example below, the key generated in this way is stored in a file called `sparklyr.pem`.[43](#fn43)
The following command (`aws emr create-cluster`) initializes our EMR cluster with a specific set of options (all of these options can also be modified via the AWS console in the browser). `--applications Name=Hadoop Name=Spark Name=Hive Name=Pig Name=Tez Name=Ganglia` specifies which basic applications (essential to running different types of MapReduce tasks) should be installed on the cluster. Unless you really know what you are doing, do not change these settings. `--name "EMR 6.1 RStudio + sparklyr"` simply specifies what the newly initialized cluster should be called (this name will then appear on your list of clusters in the AWS console). More relevant for what follows is the line specifying what type of virtual servers (EC2 instances) should be used as part of the cluster: `--instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.2xlarge` specifies that the one master node (the machine distributing tasks and coordinating the MapReduce procedure) is an instance of type `m3.2xlarge`; `InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.2xlarge` specifies that there are two worker (core) nodes in this cluster, also of type `m3.2xlarge`.[44](#fn44) `--bootstrap-action Path=s3://aws-bigdata-blog/artifacts/aws-blog-emr-rstudio-sparklyr/rstudio_sparklyr_emr6.sh,Name="Install RStudio"` tells the set\-up application to run the corresponding bootstrap script on the cluster in order to install the additional software (here RStudio).
Finally, there are two important aspects to note: First, in order to initialize the cluster in this way, you need to have an SSH key pair (for your EC2 instances) set up, which you then instruct the cluster to use with `KeyName=`. That is, `KeyName="sparklyr"` means that the user already has created an SSH key pair called `sparklyr` and that this is the key pair that will be used with the cluster nodes for SSH connections. Second, the `--region` argument defines in which AWS region the cluster should be created. Importantly, in this particular case, the bootstrap script used to install RStudio on the cluster is stored in the `us-east-1` region; hence we also need to set up the cluster in this region: `--region us-east-1` (otherwise the set\-up will fail as the set\-up application will not find the bootstrap script and will terminate with an error!).
```
aws emr create-cluster \
--release-label emr-6.1.0 \
--applications Name=Hadoop Name=Spark Name=Hive Name=Pig \
Name=Tez Name=Ganglia \
--name "EMR 6.1 RStudio + sparklyr" \
--service-role EMR_DefaultRole \
--instance-groups InstanceGroupType=MASTER,InstanceCount=1,\
InstanceType=m3.2xlarge InstanceGroupType=CORE,\
InstanceCount=2,InstanceType=m3.2xlarge \
--bootstrap-action \
Path='s3://aws-bigdata-blog/artifacts/aws-blog-emr-rstudio-sparklyr/rstudio_sparklyr_emr6.sh',\
Name="Install RStudio" \
--ec2-attributes InstanceProfile=EMR_EC2_DefaultRole,\
KeyName="sparklyr" \
--configurations '[{"Classification":"spark",
"Properties":{"maximizeResourceAllocation":"true"}}]' \
--region us-east-1
```
Setting up this cluster with all the additional software and configurations from the bootstrap script will take around 40 minutes. You can always follow the progress in the AWS console. Once the cluster is ready, you will see something like this:
Figure 7\.6: AWS EMR console indicating the successful set up of the EMR cluster.
In order to access RStudio on the EMR cluster’s master node via a secure SSH connection, follow these steps:
* First, follow the prerequisites to connect to EMR via SSH: [https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr\-connect\-ssh\-prereqs.html](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-connect-ssh-prereqs.html).
* Then initialize the SSH tunnel to the EMR cluster as instructed here: [https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr\-ssh\-tunnel.html](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-ssh-tunnel.html).
* Protect your key\-file (`sparklyr.pem`) by navigating in the terminal to the location of the key\-file on your computer and running `chmod 600 sparklyr.pem` before connecting. Also make sure your IP address is still the one you have entered in the previous step (you can check your current IP address by visiting <https://whatismyipaddress.com/>).
* In a browser tab, navigate to the AWS EMR console, click on the newly created cluster, and copy the “Master public DNS”. In the terminal, connect to the EMR cluster via SSH by running `ssh -i sparklyr.pem -ND 8157 hadoop@master-node-dns` (if you have protected the key\-file as superuser, i.e., `sudo chmod`, you will need to use `sudo ssh` here; make sure to replace `master-node-dns` with the actual DNS copied from the AWS EMR console). The terminal will be busy, but you won’t see any output (if all goes well).
* In your Firefox browser, install the [FoxyProxy add\-on](https://addons.mozilla.org/en-US/firefox/addon/foxyproxy-standard/). Follow these instructions to set up the proxy via FoxyProxy: [https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr\-connect\-master\-node\-proxy.html](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-connect-master-node-proxy.html).
* Select the newly created Socks5 proxy in FoxyProxy.
* Go to <http://localhost:8787/> and log in with username `hadoop` and password `hadoop`.
Now you can run `sparklyr` on the AWS EMR cluster. Once you have connected and logged in to RStudio on the EMR cluster’s master node, you can connect the RStudio session to the Spark cluster as follows:
```
# load packages
library(sparklyr)
# connect rstudio session to cluster
sc <- spark_connect(master = "yarn")
```
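As a quick sanity check of the connection, you can copy a small built\-in dataset to the cluster and run a simple aggregation on it. This is a minimal sketch; any small data frame will do:

```
# copy a small test dataset to the Spark cluster
library(dplyr)
mtcars_tbl <- copy_to(sc, mtcars, "mtcars_spark", overwrite = TRUE)
# run a simple aggregation on the cluster and fetch the result into R
mtcars_tbl %>%
  group_by(cyl) %>%
  summarise(avg_hp = mean(hp, na.rm = TRUE)) %>%
  collect()
# close the Spark connection when done
spark_disconnect(sc)
```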
After using the EMR Spark cluster, make sure to terminate the cluster in the AWS EMR console to avoid additional charges. This automatically terminates all the EC2 machines linked to the cluster.
7\.6 Wrapping up
----------------
* Cloud computing refers to the on\-demand availability of computing resources. While many of today’s cloud computing services go beyond the scope of the common data analytics tasks discussed in this book, a handful of specific services can be very efficient in providing you with the right solution if local computing resources are not sufficient, as summarized in the following bullet points.
* *EC2 (Elastic Compute Cloud)*: scale your analysis up with a virtual server/virtual machine in the cloud. For example, rent an EC2 instance for a couple of minutes in order to run a massively parallel task on 36 cores.
* *GPUs in the cloud*: Google Colab offers an easy\-to\-use interface to run your machine\-learning code on GPUs, for example, in the context of training neural nets.
* *AWS RDS* offers a straightforward way to set up an SQL database in the cloud without any need for database server installation and maintenance.
* *AWS EMR* allows you to flexibly set up and run your Spark/`sparklyr` or Hadoop code on a cluster of EC2 machines in the cloud.
7\.2 Transitioning to the cloud
-------------------------------
When logged in to AWS and GCP, you will notice the breadth of services offered by these platforms. There are more than 10 main categories of services, with many subcategories and products in each. It is easy to get lost from just browsing through them. Rest assured that for the purpose of data analytics/applied econometrics, many of these services are irrelevant. Our motivation to use the cloud is to extend our computational resources to use our analytics scripts on large datasets, not to develop and deploy web applications or business analytics dashboards. With this perspective, a small selection of services will make the cloud easily accessible for daily analytics workflows.
When we use services from AWS or GCP to *scale up* (vertical scaling) the available resources, the transition from our local implementation of a data analytics task to the cloud implementation is often rather simple. Once we have set up a cloud instance and figured out how to communicate with it, we typically can run the exact same R script locally and in the cloud. This is usually the case for parallelized tasks (simply run the same script on a machine with more cores), in\-memory tasks (rent a machine with more RAM but still use `data.table()`, etc.), and highly parallelized tasks to be run on GPUs. The transition from a local implementation to horizontal scaling (*scaling out*) in the cloud will require slightly more preparatory steps. However, in this domain we will directly build on the same (or very similar) software tools that we have used locally in previous chapters. For example, instead of connecting R to a local SQLite database, we set up a MySQL database on AWS RDS and then connect in essentially the same way our local R session with this database in the cloud.
7\.3 Scaling up in the cloud: Virtual servers
---------------------------------------------
In the following pages we look at a very common scheme to deal with a lack of local computing resources: flexibly renting a type of virtual server often referred to as “Elastic Cloud Computing (EC2\)” instance. Specifically, we will look at how to scale up with AWS EC2 and R/RStudio Server. One of the easiest ways to set up an AWS EC2 instance for R/RStudio Server is to use [Louis Aslett’s Amazon Machine Image (AMI)](https://www.louisaslett.com/RStudio_AMI/). This way you do not need to install R/Rstudio Server yourself. Simply follow these five steps:
* Depending on the region in which you want to create your EC2 instance, click on the corresponding AMI link in <https://www.louisaslett.com/RStudio_AMI/>. For example, if you want to create the instance in Frankfurt, click on [ami\-076abd591c4335092](https://console.aws.amazon.com/ec2/home?region=eu-central-1#launchAmi=ami-076abd591c4335092). You will be automatically directed to the AWS page where you can select the type of EC2 instance you want to create. By default, the free tier T2\.micro instance is selected (I recommend using this type of instance if you simply want to try out the examples below).
* After selecting the instance type, click on “Review and Launch”. On the opened page, select “Edit security groups”. There should be one entry with `SSH` selected in the drop\-down menu. Click on this drop\-down menu and select `HTTP` (instead of `SSH`). Click again on “Review and Launch” to confirm the change.
* Then, click “Launch” to initialize the instance. From the pop\-up concerning the key pair, select “Proceed without a key pair” from the drop\-down menu, and check the box below (“I acknowledge …”). Click “Launch” to confirm. A page opens. Click on “View” instances to see all of your instances and their status. Wait until “Status check” is “2/2 checks passed” (you might want to refresh the instance overview or browser window).
* Click on the instance ID of your newly launched instance and copy the public IPv4 address, open a new browser window/tab, type in `http://`, paste the IP address, and hit enter (the address in your browser bar will be something like `http://3.66.120.150`; `http`, not `https`!) .
* You should see the login\-interface to RStudio on your cloud instance. The username is `rstudio`, and the password is the instance ID of your newly launched instance (it might take a while to load R/Rstudio). Once RStudio is loaded, you are ready to go.
*NOTE*: the instructions above help you set up your own EC2 instance with R/RStudio to run some example scripts and tryout R on EC2\. For more serious/professional (long\-term) usage of an EC2 instance, I strongly recommend setting it up manually and improving the security settings accordingly! The above setup will theoretically result in your instance being accessible for anyone in the Web (something you might want to avoid).
### 7\.3\.1 Parallelization with an EC2 instance
This short tutorial illustrates how to scale the computation up by running it on an AWS EC2 instance. Thereby, we build on the techniques discussed in the previous chapter. Note that our EC2 instance is a Linux machine. When running R on a Linux machine, there is sometimes an additional step to install R packages (at least for most of the packages): R packages need to be compiled before they can be installed. The command to install packages is exactly the same (`install.packages()`), and normally you only notice a slight difference in the output shown on the R console during installation (and the installation process takes a little longer than what you are used to). In some cases you might also have to install additional dependencies directly in Linux. Apart from that, using R via RStudio Server in the cloud looks/feels very similar if not identical to when using R/RStudio locally.
**Preparatory steps**
If your EC2 instance with RStudio Server is not running yet, do the following. In the AWS console, navigate to EC2, select your EC2 instance (with RStudio Server installed), and click on “Instance state/Start instance”. You will have to wait until you see “2/2 checks passed”. Then, open a new browser window, enter the address of your EC2/RStudio Server instance (see above, e.g., `http://3.66.120.150`), and log in to RStudio. First, we need to install the `parallel` ([R Core Team 2021](#ref-rfoundation2021)) and `doSNOW` ([Microsoft Corporation and Weston 2022](#ref-doSNOW)) packages. In addition we will rely on the `stringr` package ([Wickham 2022b](#ref-stringr)).
```
# install packages for parallelization
install.packages("parallel", "doSNOW", "stringr")
```
Once the installations have finished, you can load the packages and verify the number of cores available on your EC2 instance as follows. If you have chosen the free tier T2\.micro instance type when setting up your EC2 instance, you will see that you only have one core available. Do not worry. It is reasonable practice to test your parallelization script with a few iterations on a small machine before bringing out the big guns. The specialized packages we use for parallelization here do not mind if you have one or 32 cores; the same code runs on either machine (obviously not very fast with only one core).
```
# load packages
library(parallel)
library(doSNOW)
# verify no. of cores available
n_cores <- detectCores()
n_cores
```
Finally, we have to upload the data that we want to process as part of the parallelization task. To this end, in RStudio Server, navigate to the file explorer in the lower right\-hand corner. The graphical user interfaces of a local RStudio installation and RStudio Server are almost identical. However, you will find in the file explorer pane an “Upload” button to transfer files from your local machine to the EC2 instance. In this demonstration, we will work with the previously introduced `marketing_data.csv` dataset. You can thus click on “Upload” and upload it to the current target directory (the home directory of RStudio Server). As soon as the file is uploaded, you can work with it as usual (as on the local RStudio installation). To keep things as in the local examples, use the file explorer to create a new `data` folder, and move `marketing_data.csv` in this new folder. The screenshot in Figure [7\.1](cloud-computing.html#fig:ec2rstudioserver) shows a screenshot of the corresponding section.
Figure 7\.1: File explorer and Upload button on RStudio Server.
In order to test if all is set up properly to run in parallel on our EC2 instance, open a new R script in RStudio Server and copy/paste the preparatory steps and the simple parallelization example from Section 4\.5 into the R script.
```
# PREPARATION -----------------------------
# packages
library(stringr)
# import data
marketing <- read.csv("data/marketing_data.csv")
# clean/prepare data
marketing$Income <- as.numeric(gsub("[[:punct:]]", "", marketing$Income))
marketing$days_customer <- as.Date(Sys.Date())-
as.Date(marketing$Dt_Customer, "%m/%d/%y")
marketing$Dt_Customer <- NULL
# all sets of independent vars
indep <- names(marketing)[ c(2:19, 27,28)]
combinations_list <- lapply(1:length(indep),
function(x) combn(indep, x, simplify = FALSE))
combinations_list <- unlist(combinations_list, recursive = FALSE)
models <- lapply(combinations_list,
function(x) paste("Response ~", paste(x, collapse="+")))
```
**Test parallelized code**
Now, we can start testing the code on EC2 without registering the one core for cluster processing. This way, `%dopart%` will automatically resort to running the code sequentially. Make sure to set `N` to 10 (or another small number) for this test.
```
# set cores for parallel processing
# ctemp <- makeCluster(ncores)
# registerDoSNOW(ctemp)
# prepare loop
N <- 10 # just for illustration, the actual code is N <- length(models)
# run loop in parallel
pseudo_Rsq <-
foreach ( i = 1:N, .combine = c) %dopar% {
# fit the logit model via maximum likelihood
fit <- glm(models[[i]], data=marketing, family = binomial())
# compute the proportion of deviance explained
#by the independent vars (~R^2)
return(1-(fit$deviance/fit$null.deviance))
}
```
Once the test has run through successfully, we are ready to scale up and run the actual workload in parallel in the cloud.
**Scale up and run in parallel**
First, switch back to the AWS EC2 console and stop the instance by selecting the tick\-mark in the corresponding row, and click on “Instance state/stop instance”. Once the Instance state is “Stopped”, click on “Actions/Instance settings/change instance type”. You will be presented with a drop\-down menu from which you can select the new instance type and confirm. The example below is based on selecting the `t2.2xlarge` (with 8 vCPU cores and 32MB of RAM). Now you can start the instance again, log in to RStudio Server (as above), and run the script again – but this time with the following lines not commented out (in order to make use of all eight cores):
```
# set cores for parallel processing
ctemp <- makeCluster(ncores)
registerDoSNOW(ctemp)
```
In order to monitor the usage of computing resources on your instance, switch to the Terminal tab, type in `htop`, and hit enter. This will open the interactive process viewer called [htop](https://htop.dev/). Figure [7\.2](cloud-computing.html#fig:ec2rstudioserverhtop) shows the output of htop for the preparatory phase of the parallel task implemented above. The output confirms the available resources provided by a `t2.2xlarge` EC2 instance (with 8 vCPU cores and 32MB of RAM). When using the default free tier T2\.micro instance, you will notice in the htop output that only one core is available.
Figure 7\.2: Monitor resources and processes with htop.
### 7\.3\.1 Parallelization with an EC2 instance
This short tutorial illustrates how to scale the computation up by running it on an AWS EC2 instance. Thereby, we build on the techniques discussed in the previous chapter. Note that our EC2 instance is a Linux machine. When running R on a Linux machine, there is sometimes an additional step to install R packages (at least for most of the packages): R packages need to be compiled before they can be installed. The command to install packages is exactly the same (`install.packages()`), and normally you only notice a slight difference in the output shown on the R console during installation (and the installation process takes a little longer than what you are used to). In some cases you might also have to install additional dependencies directly in Linux. Apart from that, using R via RStudio Server in the cloud looks/feels very similar if not identical to when using R/RStudio locally.
**Preparatory steps**
If your EC2 instance with RStudio Server is not running yet, do the following. In the AWS console, navigate to EC2, select your EC2 instance (with RStudio Server installed), and click on “Instance state/Start instance”. You will have to wait until you see “2/2 checks passed”. Then, open a new browser window, enter the address of your EC2/RStudio Server instance (see above, e.g., `http://3.66.120.150`), and log in to RStudio. First, we need to install the `parallel` ([R Core Team 2021](#ref-rfoundation2021)) and `doSNOW` ([Microsoft Corporation and Weston 2022](#ref-doSNOW)) packages. In addition we will rely on the `stringr` package ([Wickham 2022b](#ref-stringr)).
```
# install packages for parallelization
install.packages("parallel", "doSNOW", "stringr")
```
Once the installations have finished, you can load the packages and verify the number of cores available on your EC2 instance as follows. If you have chosen the free tier T2\.micro instance type when setting up your EC2 instance, you will see that you only have one core available. Do not worry. It is reasonable practice to test your parallelization script with a few iterations on a small machine before bringing out the big guns. The specialized packages we use for parallelization here do not mind if you have one or 32 cores; the same code runs on either machine (obviously not very fast with only one core).
```
# load packages
library(parallel)
library(doSNOW)
# verify no. of cores available
n_cores <- detectCores()
n_cores
```
Finally, we have to upload the data that we want to process as part of the parallelization task. To this end, in RStudio Server, navigate to the file explorer in the lower right\-hand corner. The graphical user interfaces of a local RStudio installation and RStudio Server are almost identical. However, you will find in the file explorer pane an “Upload” button to transfer files from your local machine to the EC2 instance. In this demonstration, we will work with the previously introduced `marketing_data.csv` dataset. You can thus click on “Upload” and upload it to the current target directory (the home directory of RStudio Server). As soon as the file is uploaded, you can work with it as usual (as on the local RStudio installation). To keep things as in the local examples, use the file explorer to create a new `data` folder, and move `marketing_data.csv` in this new folder. The screenshot in Figure [7\.1](cloud-computing.html#fig:ec2rstudioserver) shows a screenshot of the corresponding section.
Figure 7\.1: File explorer and Upload button on RStudio Server.
In order to test if all is set up properly to run in parallel on our EC2 instance, open a new R script in RStudio Server and copy/paste the preparatory steps and the simple parallelization example from Section 4\.5 into the R script.
```
# PREPARATION -----------------------------
# packages
library(stringr)
# import data
marketing <- read.csv("data/marketing_data.csv")
# clean/prepare data
marketing$Income <- as.numeric(gsub("[[:punct:]]", "", marketing$Income))
marketing$days_customer <- as.Date(Sys.Date())-
as.Date(marketing$Dt_Customer, "%m/%d/%y")
marketing$Dt_Customer <- NULL
# all sets of independent vars
indep <- names(marketing)[ c(2:19, 27,28)]
combinations_list <- lapply(1:length(indep),
function(x) combn(indep, x, simplify = FALSE))
combinations_list <- unlist(combinations_list, recursive = FALSE)
models <- lapply(combinations_list,
function(x) paste("Response ~", paste(x, collapse="+")))
```
**Test parallelized code**
Now, we can start testing the code on EC2 without registering the one core for cluster processing. This way, `%dopart%` will automatically resort to running the code sequentially. Make sure to set `N` to 10 (or another small number) for this test.
```
# set cores for parallel processing
# ctemp <- makeCluster(ncores)
# registerDoSNOW(ctemp)
# prepare loop
N <- 10 # just for illustration, the actual code is N <- length(models)
# run loop in parallel
pseudo_Rsq <-
foreach ( i = 1:N, .combine = c) %dopar% {
# fit the logit model via maximum likelihood
fit <- glm(models[[i]], data=marketing, family = binomial())
# compute the proportion of deviance explained
#by the independent vars (~R^2)
return(1-(fit$deviance/fit$null.deviance))
}
```
Once the test has run through successfully, we are ready to scale up and run the actual workload in parallel in the cloud.
**Scale up and run in parallel**
First, switch back to the AWS EC2 console and stop the instance by selecting the tick\-mark in the corresponding row, and click on “Instance state/stop instance”. Once the Instance state is “Stopped”, click on “Actions/Instance settings/change instance type”. You will be presented with a drop\-down menu from which you can select the new instance type and confirm. The example below is based on selecting the `t2.2xlarge` (with 8 vCPU cores and 32MB of RAM). Now you can start the instance again, log in to RStudio Server (as above), and run the script again – but this time with the following lines not commented out (in order to make use of all eight cores):
```
# set cores for parallel processing
ctemp <- makeCluster(ncores)
registerDoSNOW(ctemp)
```
In order to monitor the usage of computing resources on your instance, switch to the Terminal tab, type in `htop`, and hit enter. This will open the interactive process viewer called [htop](https://htop.dev/). Figure [7\.2](cloud-computing.html#fig:ec2rstudioserverhtop) shows the output of htop for the preparatory phase of the parallel task implemented above. The output confirms the available resources provided by a `t2.2xlarge` EC2 instance (with 8 vCPU cores and 32MB of RAM). When using the default free tier T2\.micro instance, you will notice in the htop output that only one core is available.
Figure 7\.2: Monitor resources and processes with htop.
7\.4 Scaling up with GPUs
-------------------------
As discussed in Chapter 4, GPUs can help speed up highly parallelizable tasks such as matrix multiplications. While using a local GPU
/graphics card for statistical analysis has become easier due to more easily accessible software layers around the GPUs
, it still needs solid knowledge regarding the installation of specific GPU drivers and changing of basic system settings. Many users specializing in the data analytics side rather than the computer science/hardware side of Big Data Analytics might not be comfortable with making such installations/changes on their desktop computers or might not have the right type of GPU/graphics card in their device for such changes. In addition, for many users it might not make sense to have a powerful GPU
in their local machine, if they only occasionally use it for certain machine learning or parallel computing tasks. In recent years, many cloud computing platforms have started providing virtual machines with access to GPUs
, in many cases with additional layers of software and/or pre\-installed drivers, allowing users to directly run their code on GPUs
in the cloud. Below, we briefly look at two of the most easy\-to\-use options to run code on GPUs
in the cloud: using Google Colab notebooks with GPUs
and setting up RStudio on virtual machines in a special EC2 tier with GPU
access on AWS.
### 7\.4\.1 GPUs on Google Colab
Google Colab provides a very easy way to run R code on GPUs from Google Cloud. All you need is a Google account. Open a new browser window, go to <https://colab.to/r>, and log in with your Google account if prompted to do so. Colab will open a [Jupyter notebook](https://en.wikipedia.org/wiki/Project_Jupyter) with an R runtime. Click on “Runtime/Change runtime type”, and in the drop\-down menu under ‘Hardware accelerator’, select the option ‘GPU’.
Figure 7\.3: Colab notebook with R runtime and GPUs.
Then, you can install the packages for which you wish to use GPU acceleration (e.g., `gpuR`, `keras`, and `tensorflow`), and the code relying on GPU processing will be run on GPUs (or even [TPUs](https://en.wikipedia.org/wiki/Tensor_Processing_Unit)). At the following link you can find a Colab notebook set up for running a [simple image classification tutorial](https://tensorflow.rstudio.com/tutorials/beginners/basic-ml/tutorial_basic_classification/) with keras on TPUs: [bit.ly/bda\_colab](https://bit.ly/bda_colab).
### 7\.4\.2 RStudio and EC2 with GPUs on AWS
To start a ready\-made EC2 instance with GPUs and RStudio installed, open a browser window and navigate to this service provided by Inmatura on the AWS Marketplace: [https://aws.amazon.com/marketplace/pp/prodview\-p4gqghzifhmmo](https://aws.amazon.com/marketplace/pp/prodview-p4gqghzifhmmo). Click on “Subscribe”.
Figure 7\.4: JupyterHub AMI provided by Inmatura on the AWS Marketplace to run RStudio Server with GPUs on AWS EC2\.
After the subscription request is processed, click on “Continue to Configuration” and “Continue to Launch”. To make use of a GPU, select, for example, `g2.2xlarge` type under “EC2 Instance Type”. If necessary, create a new key pair under Key Pair Settings; otherwise keep all the default settings as they are. Then, at the bottom, click on *Launch*. This will launch a new EC2 instance with a GPU and with RStudio server (as part of JupyterHub) installed.[40](#fn40)
Figure 7\.5: Launch JupyterHub with RStudio Server and GPUs on AWS EC2\.
Once you have successfully launched your EC2 instance, JupyterHub is programmed to automatically initiate on port 80\. You can access it using the following link: <http://>, where the `<instance-ip>` is the public IP address of the newly launched instance (you will find this on the EC2 dashboard). The default username is set as ‘jupyterhub\-admin’, and the default password is identical to your EC2 instance ID. If you need to verify this, you can find it in your EC2 dashboard. For example, it could appear similar to ‘i\-0b3445939c7492’.[41](#fn41)
### 7\.4\.1 GPUs on Google Colab
Google Colab provides a very easy way to run R code on GPUs from Google Cloud. All you need is a Google account. Open a new browser window, go to <https://colab.to/r>, and log in with your Google account if prompted to do so. Colab will open a [Jupyter notebook](https://en.wikipedia.org/wiki/Project_Jupyter) with an R runtime. Click on “Runtime/Change runtime type”, and in the drop\-down menu under ‘Hardware accelerator’, select the option ‘GPU’.
Figure 7\.3: Colab notebook with R runtime and GPUs.
Then, you can install the packages for which you wish to use GPU acceleration (e.g., `gpuR`, `keras`, and `tensorflow`), and the code relying on GPU processing will be run on GPUs (or even [TPUs](https://en.wikipedia.org/wiki/Tensor_Processing_Unit)). At the following link you can find a Colab notebook set up for running a [simple image classification tutorial](https://tensorflow.rstudio.com/tutorials/beginners/basic-ml/tutorial_basic_classification/) with keras on TPUs: [bit.ly/bda\_colab](https://bit.ly/bda_colab).
### 7\.4\.2 RStudio and EC2 with GPUs on AWS
To start a ready\-made EC2 instance with GPUs and RStudio installed, open a browser window and navigate to this service provided by Inmatura on the AWS Marketplace: [https://aws.amazon.com/marketplace/pp/prodview\-p4gqghzifhmmo](https://aws.amazon.com/marketplace/pp/prodview-p4gqghzifhmmo). Click on “Subscribe”.
Figure 7\.4: JupyterHub AMI provided by Inmatura on the AWS Marketplace to run RStudio Server with GPUs on AWS EC2\.
After the subscription request is processed, click on “Continue to Configuration” and “Continue to Launch”. To make use of a GPU, select, for example, `g2.2xlarge` type under “EC2 Instance Type”. If necessary, create a new key pair under Key Pair Settings; otherwise keep all the default settings as they are. Then, at the bottom, click on *Launch*. This will launch a new EC2 instance with a GPU and with RStudio server (as part of JupyterHub) installed.[40](#fn40)
Figure 7\.5: Launch JupyterHub with RStudio Server and GPUs on AWS EC2\.
Once you have successfully launched your EC2 instance, JupyterHub is programmed to automatically initiate on port 80\. You can access it using the following link: <http://>, where the `<instance-ip>` is the public IP address of the newly launched instance (you will find this on the EC2 dashboard). The default username is set as ‘jupyterhub\-admin’, and the default password is identical to your EC2 instance ID. If you need to verify this, you can find it in your EC2 dashboard. For example, it could appear similar to ‘i\-0b3445939c7492’.[41](#fn41)
7\.5 Scaling out: MapReduce in the cloud
----------------------------------------
Many cloud computing providers offer specialized services for MapReduce tasks in the cloud. Here we look at a comparatively easy\-to\-use solution provided by AWS, called Elastic MapReduce (AWS EMR). It allows you to set up a Hadoop cluster in the cloud within minutes and requires essentially no additional configuration if the cluster is being used for the kind of data analytics tasks discussed in this book.
Setting up a default AWS EMR cluster via the AWS console is straightforward. Simply go to `https://console.aws.amazon.com/elasticmapreduce/`, click on “Create cluster”, and adjust the default selection of settings if necessary. Alternatively, we can set up an EMR cluster via the AWS command\-line interface (CLI). In the following tutorials, we will work with AWS EMR via R/Rstudio (specifically, via the package `sparklyr`). By default, RStudio is not part of the EMR cluster set\-up. However, AWS EMR offers a very flexible way to install/configure additional software on virtual EMR clusters via so\-called “bootstrap” scripts. These scripts can be shared on AWS S3 and used by others, which is what we do in the following cluster set\-up via the AWS command\-line interface (CLI).[42](#fn42)
In order to run the cluster set up via AWS CLI, shown below, you need an SSH key to later connect to the EMR cluster. If you do not have such an SSH key for AWS yet, follow these instructions to generate one: <https://docs.aws.amazon.com/cloudhsm/classic/userguide/generate_ssh_key.html>. In the example below, the key generated in this way is stored in a file called `sparklyr.pem`.[43](#fn43)
The following command (`aws emr create-cluster`) initializes our EMR cluster with a specific set of options (all of these options can also be modified via the AWS console in the browser). `--applications Name=Hadoop Name=Spark Name=Hive Name=Pig Name=Tez Name=Ganglia` specifies which type of basic applications (that are essential to running different types of MapReduce tasks) should be installed on the cluster. Unless you really know what you are doing, do not change these settings. `--name "EMR 6.1 RStudio + sparklyr` simply specifies what the newly initialized cluster should be called (this name will then appear on your list of clusters in the AWS console). More relevant for what follows is the line specifying what type of virtual servers (EC2 instances) should be used as part of the cluster: `--instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.2xlarge` specifies that the one master node (the machine distributing tasks and coordinating the MapReduce procedure) is an instance of type `m3.2xlarge`; `InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.2xlarge` specifies that there are two slave nodes in this cluster, also of type `m1.medium`.[44](#fn44) `--bootstrap-action Path=s3://aws-bigdata-blog/artifacts/aws-blog-emr-rstudio-sparklyr/rstudio _sparklyr_emr6.sh,Name="Install RStudio"` tells the set\-up application to run the corresponding bootstrap script on the cluster in order to install the additional software (here RStudio).
Finally, there are two important aspects to note: First, in order to initialize the cluster in this way, you need to have an SSH key pair (for your EC2 instances) set up, which you then instruct the cluster to use with `KeyName=`. That is, `KeyName="sparklyr"` means that the user already has created an SSH key pair called `sparklyr` and that this is the key pair that will be used with the cluster nodes for SSH connections. Second, the `--region` argument defines in which AWS region the cluster should be created. Importantly, in this particular case, the bootstrap script used to install RStudio on the cluster is stored in the `us-east-1` region; hence we also need to set up the cluster in this region: `--region us-east-1` (otherwise the set\-up will fail as the set\-up application will not find the bootstrap script and will terminate with an error!).
```
aws emr create-cluster \
--release-label emr-6.1.0 \
--applications Name=Hadoop Name=Spark Name=Hive Name=Pig \
Name=Tez Name=Ganglia \
--name "EMR 6.1 RStudio + sparklyr" \
--service-role EMR_DefaultRole \
--instance-groups InstanceGroupType=MASTER,InstanceCount=1,\
InstanceType=m3.2xlarge,InstanceGroupType=CORE,\
InstanceCount=2,InstanceType=m3.2xlarge \
--bootstrap-action \
Path='s3://aws-bigdata-blog/artifacts/
aws-blog-emr-rstudio-sparklyr/rstudio_sparklyr_emr6.sh',\
Name="Install RStudio" --ec2-attributes InstanceProfile=EMR_EC2_DefaultRole,\
KeyName="sparklyr"
--configurations '[{"Classification":"spark",
"Properties":{"maximizeResourceAllocation":"true"}}]' \
--region us-east-1
```
Setting up this cluster with all the additional software and configurations from the bootstrap script will take around 40 minutes. You can always follow the progress in the AWS console. Once the cluster is ready, you will see something like this:
Figure 7\.6: AWS EMR console indicating the successful set up of the EMR cluster.
In order to access RStudio on the EMR cluster’s master node via a secure SSH connection, follow these steps:
* First, follow the prerequisites to connect to EMR via SSH: [https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr\-connect\-ssh\-prereqs.html](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-connect-ssh-prereqs.html).
* Then initialize the SSH tunnel to the EMR cluster as instructed here: [https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr\-ssh\-tunnel.html](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-ssh-tunnel.html).
* Protect your key\-file (`sparklyr.pem`) by navigating to the location of the key\-file on your computer in the terminal and run `chmod 600 sparklyr.pem` before connecting. Also make sure your IP address is still the one you have entered in the previous step (you can check your current IP address by visiting <https://whatismyipaddress.com/>).
* In a browser tab, navigate to the AWS EMR console, click on the newly created cluster, and copy the “Master public DNS”. In the terminal, connect to the EMR cluster via SSH by running `ssh -i sparklyr.pem -ND 8157 hadoop@master-node-dns` (if you have protected the key\-file as superuser, i.e., `sudo chmod`, you will need to use `sudo ssh` here; make sure to replace `master-node-dns` with the actual DNS copied from the AWS EMR console). The terminal will be busy, but you won’t see any output (if all goes well).
* In your Firefox browser, install the [FoxyProxy add\-on](https://addons.mozilla.org/en-US/firefox/addon/foxyproxy-standard/). Follow these instructions to set up the proxy via FoxyProxy: [https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr\-connect\-master\-node\-proxy.html](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-connect-master-node-proxy.html).
* Select the newly created Socks5 proxy in FoxyProxy.
* Go to <http://localhost:8787/> and enter with username `hadoop` and password `hadoop`.
Now you can run `sparklyr` on the AWS EMR cluster. After finishing working with the cluster, make sure to terminate it via the EMR console. This will shut down all EC2 instances that are part of the cluster (and hence AWS will stop charging you for this). Once you have connected and logged into RStudio on the EMR cluster’s master node, you can connect the Rstudio session to the Spark cluster as follows:
```
# load packages
library(sparklyr)
# connect rstudio session to cluster
sc <- spark_connect(master = "yarn")
```
After using the EMR Spark cluster, make sure to terminate the cluster in the AWS EMR console to avoid additional charges. This automatically terminates all the EC2 machines linked to the cluster.
7\.6 Wrapping up
----------------
* Cloud computing refers to the on\-demand availability of computing resources. While many of today’s cloud computing services go beyond the scope of the common data analytics tasks discussed in this book, a handful of specific services can be very efficient in providing you with the right solution if local computing resources are not sufficient, as summarized in the following bullet points.
* *EC2 (Elastic Compute Cloud)*: scale your analysis up with a virtual server/virtual machine in the cloud. For example, rent an EC2 instance for a couple of minutes in order to run a massively parallel task on 36 cores.
* *GPUs in the cloud*: Google Colab offers an easy\-to\-use interface to run your machine\-learning code on GPUs, for example, in the context of training neural nets.
* *AWS RDS* offers a straightforward way to set up an SQL database in the cloud without any need for database server installation and maintenance.
* *AWS EMR* allows you to flexibly set up and run your Spark/`sparklyr` or Hadoop code on a cluster of EC2 machines in the cloud.
Chapter 8 Data Collection and Data Storage
==========================================
The first steps of a data analytics project typically deal with the question of how to collect, organize, and store the raw data for further processing. In this chapter, we cover several approaches to practically implementing these steps in the context of observational data, as they commonly occur in applied econometrics and business analytics. The focus lies on several important aspects of implementing these steps locally; the chapter then introduces several useful cloud tools to store and query large amounts of data for analytics purposes.
8\.1 Gathering and compilation of raw data
------------------------------------------
The NYC Taxi \& Limousine Commission (TLC) provides detailed data on all trip records, including pick\-up and drop\-off times/locations. When combining all available trip records from 2009 to 2018, we get a rather large dataset of over 200GB. The code examples below illustrate how to collect and compile the entire dataset. In order to avoid long computing times, the code examples shown below are based on a small sub\-set of the actual raw data (however, all examples involving virtual memory are in theory scalable to the extent of the entire raw dataset).
The raw data consists of several monthly Parquet files and can be downloaded via the [TLC’s website](https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page). The following short R script automates the downloading of all available trip\-record files. *NOTE*: Downloading all files can take several hours and will occupy over 200GB!
```
# Fetch all TLC trip records Data source:
# https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page
# Input: Monthly Parquet files from urls
# SET UP -----------------
# packages
library(R.utils) # to create directories from within R
# fix vars
BASE_URL <- "https://d37ci6vzurychx.cloudfront.net/trip-data/"
FILE <- "yellow_tripdata_2018-01.parquet"
URL <- paste0(BASE_URL, FILE)
OUTPUT_PATH <- "data/tlc_trips/"
START_DATE <- as.Date("2009-01-01")
END_DATE <- as.Date("2018-06-01")
# BUILD URLS -----------
# parse base url
base_url <- gsub("2018-01.parquet", "", URL)
# build urls
dates <- seq(from = START_DATE, to = END_DATE, by = "month")
year_months <- gsub("-01$", "", as.character(dates))
data_urls <- paste0(base_url, year_months, ".parquet")
data_paths <- paste0(OUTPUT_PATH, year_months, ".parquet")
# FETCH ALL FILES ----------------
mkdirs(OUTPUT_PATH)
# download all Parquet files in the date range
for (i in 1:length(data_urls)) {
# download to disk
download.file(data_urls[i], data_paths[i])
}
```
8\.2 Stack/combine raw source files
-----------------------------------
In the next step, we parse and combine the downloaded data. Depending on how you want to further work with the gathered data, one or another storage format might be more convenient. For the sake of illustration (and because the following examples build on the downloaded data), we store the downloaded data in one CSV file. To this end, we make use of the `arrow` package ([Richardson et al. 2022](#ref-richardson_etal2022)), an R interface to the Apache Arrow C\+\+ library (a platform to work with large\-scale columnar data). The aim of the exercise is to combine the downloaded Parquet files into one compressed CSV file, which will be more easily accessible for some of the libraries used in further examples.
We start by installing the `arrow` package in the following way.
```
# install arrow
Sys.setenv(LIBARROW_MINIMAL = "false") # to enable working with compressed files
install.packages("arrow") # might take a while
```
The setting `LIBARROW_MINIMAL = "false"` ensures that the installation of `arrow` is not restricted to the very basic functionality of the package. Specifically, for our context it will be important that the `arrow` installation allows for the reading of compressed files.
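If you are unsure whether your `arrow` installation was built with the required features, you can inspect its capabilities after loading the package (a quick check, not required for the examples to follow):

```
# inspect the capabilities of the local arrow installation
library(arrow)
arrow_info() # lists optional features, e.g., available compression codecs
```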
```
# SET UP ---------------------------
# load packages
library(arrow)
library(data.table)
library(purrr)
# fix vars
INPUT_PATH <- "data/tlc_trips/"
OUTPUT_FILE <- "data/tlc_trips.parquet"
OUTPUT_FILE_CSV <- "data/tlc_trips.csv"
# list of paths to downloaded Parquet files
all_files <- list.files(INPUT_PATH, full.names = TRUE)
# LOAD, COMBINE, STORE ----------------------
# read Parquet files
all_data <- lapply(all_files, read_parquet, as_data_frame = FALSE)
# combine all arrow tables into one
combined_data <- lift_dl(concat_tables)(all_data)
# write combined dataset to csv file
write_csv_arrow(combined_data,
file = OUTPUT_FILE_CSV,
include_header = TRUE)
```
Note that in the code example above we use `purrr::lift_dl()` to facilitate the code. The `arrow` function `concat_tables()` combines several table objects into one table.
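As an aside, a memory\-friendlier alternative (a sketch under the assumption that all Parquet files reside in `data/tlc_trips/`, not the author's original pipeline) is to let `arrow` treat the folder of Parquet files as one dataset and stream it back out as CSV. Note that `write_dataset()` writes a directory of CSV part files rather than a single file.

```
# alternative sketch: open the folder of Parquet files as one arrow Dataset
library(arrow)
ds <- open_dataset("data/tlc_trips/", format = "parquet")
# stream the dataset out as CSV part files (written to a directory)
write_dataset(ds, "data/tlc_trips_csv/", format = "csv")
```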
**Aside: CSV import and memory allocation, read.csv vs. fread**
The time needed for the simple step of importing rather large CSV files can vary substantially in R, depending on the function/package used. The reason is that there are different ways to allocate RAM when reading data from a CSV file. Depending on the amount of data to be read in, one or another approach might be faster. We first investigate the RAM allocation in R with `mem_change()` and `mem_used()`.
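As a brief illustration of `mem_change()`: it reports the net change in memory use caused by evaluating an expression, which makes it easy to attribute memory consumption to individual steps (a minimal example with a 10\-million\-element vector):

```
# load packages
library(pryr)
# allocating a numeric vector of 10 million elements uses roughly 80 MB (8 bytes each)
mem_change(x <- numeric(1e7))
# removing the object frees (roughly) the same amount again
mem_change(rm(x))
```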
```
# SET UP -----------------
# fix variables
DATA_PATH <- "data/flights.csv"
# load packages
library(pryr)
# check how much memory is used by R (overall)
mem_used()
```
```
## 1.73 GB
```
```
# DATA IMPORT ----------------
# check the change in memory due to each step
# and stop the time needed for the import
system.time(flights <- read.csv(DATA_PATH))
```
```
## user system elapsed
## 1.496 0.117 1.646
```
```
mem_used()
```
```
## 1.76 GB
```
```
# DATA PREPARATION --------
flights <- flights[,-1:-3]
# check how much memory is used by R now
mem_used()
```
```
## 1.76 GB
```
The last result is rather interesting. The object `flights` must have been larger right after importing it than at the end of the script. We have thrown out several variables, after all. Why does R still use that much memory? R does not by default ‘clean up’ memory unless it is really necessary (meaning no more memory is available). In this case, R still has much more memory available from the operating system; thus there is no need to ‘collect the garbage’ yet. However, we can force R to collect the garbage on the spot with `gc()`. This can be helpful to better keep track of the memory needed by an analytics script.
```
gc()
```
```
##              used  (Mb) gc trigger   (Mb)  max used   (Mb)
## Ncells    7039856   376   11826824  631.7  11826824  631.7
## Vcells  170456635  1300  399556013 3048.4 399271901 3046.3
```
Now, let’s see how we can improve the performance of this script with regard to memory allocation. Most memory is allocated when importing the file. Obviously, any improvement of the script must still result in importing all the data. However, there are different ways to read data into RAM. `read.csv()` reads all lines of a CSV file consecutively. In contrast, `data.table::fread()` first ‘maps’ the data file into memory and only then actually reads it in line by line. This involves an additional initial step, but the larger the file, the less relevant this first step is relative to the total time needed to read all the data into memory. By switching on the `verbose` option, we can actually see what `fread` is doing.
```
# load packages
library(data.table)
# DATA IMPORT ----------------
system.time(flights <- fread(DATA_PATH, verbose = TRUE))
```
```
## user system elapsed
## 0.357 0.000 0.067
```
The output displayed on the console shows what is involved in steps `[1]` to `[12]` of the parsing/import procedure. Note in particular the following line under step `[7]` in the procedure:
```
Estimated number of rows: 30960501 / 92.03 = 336403
Initial alloc = 370043 rows (336403 + 9%) using
bytes/max(mean-2*sd,min) clamped between [1.1*estn, 2.0*estn]
```
This is the result of the above\-mentioned preparatory step in the form of sampling. The `fread` CSV parser first estimates how large the dataset likely is and then creates an additional allocation (in this case of `370043 rows`). Only after this are the rows actually imported into RAM. The summary of the time allocated for the different steps shown at the bottom of the output nicely illustrates that the preparatory steps of memory mapping and allocation are rather fast compared with the time needed to actually read the data into RAM. Given the size of the dataset, `fread`’s approach to memory allocation results in a much faster import of the dataset than `read.csv`’s approach.
8\.3 Efficient local data storage
---------------------------------
In this section, we are concerned with a) how we can store large datasets permanently on a mass storage device in an efficient way (here, efficient can be understood as ‘not taking up too much space’) and b) how we can load (parts of) this dataset in an efficient way (here, efficient can be understood as ‘fast’) for analysis.
We look at this problem in two situations:
* The data needs to be stored locally (e.g., on the hard disk of our laptop).
* The data can be stored on a server ‘in the cloud’.
Various tools have been developed over the last few years to improve the efficiency of storing and accessing large amounts of data, many of which go beyond the scope implied by this book’s perspective on applied data analytics. Here, we focus on the basic concept of *SQL/Relational Database Management Systems (RDBMSs)*, as well as a few alternatives that can be summarized under the term *NoSQL (‘non\-SQL’, sometimes ‘Not only SQL’)* database systems. Conveniently (and contrary to what the latter name would suggest), most of these tools can be worked with by using basic SQL queries to load/query data.
The relational database system follows the relational data model, in which the data is organized in several tables that are connected via some unique data record identifiers (keys). Such systems, for example, SQLite introduced in Chapter 3, have been used for a long time in all kinds of business and analytics contexts. They are well\-tried and stable and have a large and diverse user base. There are many technicalities involved in how they work under the hood, but for our purposes three characteristics are most relevant:
1. All common RDBMSs, like SQLite and MySQL, are *row\-based* databases. That is, data is thought of as observations/data records stored in rows of a table. One record consists of one row.
2. They are typically made for storing clean data in a *clearly defined set of tables*, with clearly defined properties. The organizing of data in various tables has (at least for our perspective here) the aim of avoiding redundancies and thereby using the available storage space more efficiently.
3. Rows are *indexed* according to the unique identifiers of tables (or one or several other variables in a table). This allows for fast querying of specific records and efficient merging/joining of tables.
While these particular features work very well also with large amounts of data, particularly for exploration and data preparation (joining tables), in the age of Big Data they might be more relevant for operational databases (in the back\-end of web applications, or simply the operational database of a business) than for the specific purpose of data analytics.
On the one hand, the data basis of an analytics project might be simpler in terms of the number of tables involved. On the other hand, Big Data, as we have seen, might come in less structured and/or more complex forms than traditional table\-like/row\-based data. *NoSQL* databases have been developed for the purposes of storing more complex/less structured data, which might not necessarily be described as a set of tables connected via keys, and for the purpose of fast analysis of large amounts of data. Again, three main characteristics of these types of databases are of particular relevance here:
1. Typically, *NoSQL* databases are not row\-based, but follow a *column\-based*, document\-based, key\-value\-based, or graph\-based data model. In what follows, the column\-based model is most relevant.
2. *NoSQL* databases are designed for horizontal scaling. That is, scaling such a database out over many nodes of a computing cluster is usually straightforward.
3. They are optimized to give quick answers based on summarizing large amounts of data, such as frequency counts and averages (sometimes by using approximations rather than exact computations).
Figure [8\.1](data-collection-and-data-storage.html#fig:columnvsrow) illustrates the basic concept of row\-based vs. column\-based data storage.
Figure 8\.1: Schematic illustration of columnar vs. row\-based data storage.
**Aside: Row\-based vs. column\-based databases**
Conceptually, in a *row\-based database* individual values (cells) are contained in rows, which means changing one value requires updating a row. Row\-based databases (e.g., SQLite) are thus designed for efficient data reading and writing when users often access many columns but rather few observations. For example, for an operational database in the back\-end of a web application such as an online shop, a row\-based approach makes sense because hundreds or thousands of users (customers in that case) constantly add or query small amounts of data. In contrast, changing a value in *column\-based* databases means changing a column. Accessing all values in a particular column is much faster in comparison to row\-based databases.
This means that column\-based databases are useful when users tend to query rather few columns but massive numbers of observations, which is typically rather the case in an analytics context. Some well\-known data warehouse and data lake systems are therefore based on this principle (e.g., Google BigQuery). However, if analytics tasks involve a lot of (out\-of\-memory) table joins, column\-based solutions are likely to be slower than row\-based solutions.
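To build some intuition for this difference, the following minimal R sketch exploits the fact that R’s own `data.frame` is stored column\-wise in memory: reading one full column is cheap, while assembling many individual rows touches every column. This is only an in\-memory analogy, not a database benchmark.

```
# in-memory analogy for columnar vs. row-wise access (illustration only)
n <- 5e6
df <- data.frame(a = rnorm(n), b = rnorm(n), c = rnorm(n), d = rnorm(n))
# column-wise access: aggregate one full column
system.time(mean_a <- mean(df$a))
# row-wise access: assemble 10,000 individual rows (touches all columns)
system.time(rows <- df[sample(n, 1e4), ])
```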
In the following, we have a close look at using both column\-based and row\-based tools. Thereby we will particularly highlight the practical differences between using column\-based and row\-based data storage solutions.
### 8\.3\.1 RDBMS basics
RDBMSs have two key features that tackle the two efficiency concerns mentioned above:
* The *relational data model*: The overall dataset is split by columns (covariates) into tables in order to reduce the storage of redundant variable\-value repetitions. The resulting database tables are then linked via key\-variables (unique identifiers). Thus (simply put), each type of entity on which observations exist resides in its own database table. Within this table, each observation has its unique ID. Keeping the data in such a structure is very efficient in terms of storage space used.
* *Indexing*: The key\-columns of the database tables are indexed, meaning (in simple terms) ordered on disk. Indexing a table takes time, but it has to be performed only once (unless the content of the table changes). The resulting index is then stored on disk as part of the database. These indices substantially reduce the number of disk accesses required to query/find specific observations. Thus, they make the loading of specific parts of the data for analysis much more efficient.
The loading/querying of data from an RDBMS typically involves the selection of specific observations (rows) and covariates (columns) from different tables. Due to the indexing, observations are selected efficiently, and the defined relations between tables (via keys) facilitate the joining of columns to a new table (the queried data).
### 8\.3\.2 Efficient data access: Indices and joins in SQLite
So far we have only had a look at the very basics of writing SQL code. Let us now further explore SQLite as an easy\-to\-use and easy\-to\-set\-up relational database solution. In a second step we then look at how to connect to a local SQLite database from within R. First, we switch to the Terminal tab in RStudio, set up a new database called `air.sqlite`, and import the csv\-file `flights.csv` (used in previous chapters) as a first table.
```
# switch to data directory
cd data
# create database and run sqlite
sqlite3 air.sqlite
```
```
-- import csvs
.mode csv
.import flights.csv flights
```
We check whether everything worked out well via the `.tables` and `.schema` commands.
```
.tables
.schema flights
```
In `flights`, each row describes a flight (the day it took place, its origin, its destination, etc.). It contains a covariate `carrier` containing the unique ID of the respective airline/carrier carrying out the flight as well as the covariates `origin` and `dest`. The latter two variables contain the unique IATA\-codes of the airports from which the flights departed and where they arrived, respectively. In `flights` we thus have observations at the level of individual flights.
Now we extend our database in a meaningful way, following the relational data model idea. First we download two additional CSV files containing data that relate to the flights table:
* [`airports.csv`](http://stat-computing.org/dataexpo/2009/airports.csv): Describes the locations of US Airports (relates to `origin` and `dest`).
* [`carriers.csv`](http://stat-computing.org/dataexpo/2009/carriers.csv): A listing of carrier codes with full names (relates to the `carrier`\-column in `flights`).
In this code example, the two CSVs have already been downloaded to the `materials/data`\-folder.
```
-- import airport data
.mode csv
.import airports.csv airports
.import carriers.csv carriers
-- inspect the result
.tables
.schema airports
.schema carriers
```
Now we can run our first query involving the relation between tables. The aim of the exercise is to query flights data (information on departure delays per flight number and date, from the `flights` table) for all `United Air Lines Inc.` flights (information from the `carriers` table) departing from `Newark Intl` airport (information from the `airports` table). In addition, we want the resulting table ordered by flight number. For the sake of the exercise, we only show the first 10 results of this query (`LIMIT 10`).
```
SELECT
year,
month,
day,
dep_delay,
flight
FROM (flights INNER JOIN airports ON flights.origin=airports.iata)
INNER JOIN carriers ON flights.carrier = carriers.Code
WHERE carriers.Description = 'United Air Lines Inc.'
AND airports.airport = 'Newark Intl'
ORDER BY flight
LIMIT 10;
```
```
## year month day dep_delay flight
## 1 2013 1 4 0 1
## 2 2013 1 5 -2 1
## 3 2013 3 6 1 1
## 4 2013 2 13 -2 3
## 5 2013 2 16 -9 3
## 6 2013 2 20 3 3
## 7 2013 2 23 -5 3
## 8 2013 2 26 24 3
## 9 2013 2 27 10 3
## 10 2013 1 5 3 10
```
Note that this query has been executed without indexing any of the tables first. Thus SQLite could not take any ‘shortcuts’ when matching the ID columns in order to join the tables for the query output. That is, SQLite had to scan the full tables to find the matching rows. Now we index the respective ID columns and re\-run the query.
```
CREATE INDEX iata_airports ON airports (iata);
CREATE INDEX origin_flights ON flights (origin);
CREATE INDEX carrier_flights ON flights (carrier);
CREATE INDEX code_carriers ON carriers (code);
```
Note that SQLite optimizes the efficiency of the query without our explicit instructions. If there are indices it can use to speed up the query, it will do so.
```
SELECT
year,
month,
day,
dep_delay,
flight
FROM (flights INNER JOIN airports ON flights.origin=airports.iata)
INNER JOIN carriers ON flights.carrier = carriers.Code
WHERE carriers.Description = 'United Air Lines Inc.'
AND airports.airport = 'Newark Intl'
ORDER BY flight
LIMIT 10;
```
```
## year month day dep_delay flight
## 1 2013 1 4 0 1
## 2 2013 1 5 -2 1
## 3 2013 3 6 1 1
## 4 2013 2 13 -2 3
## 5 2013 2 16 -9 3
## 6 2013 2 20 3 3
## 7 2013 2 23 -5 3
## 8 2013 2 26 24 3
## 9 2013 2 27 10 3
## 10 2013 1 5 3 10
```
You can find the final `air.sqlite` database, including all the indices and tables, as `materials/data/air_final.sqlite` in the book’s code repository.
8\.4 Connecting R to an RDBMS
-----------------------------
The R\-package `RSQLite` ([Müller et al. 2022](#ref-RSQLite)) embeds SQLite in R. That is, it provides functions that allow us to use SQLite directly from within R. You will see that the combination of SQLite with R is a simple but very practical approach to working with very efficiently (and locally) stored datasets. In the following example, we explore how `RSQLite` can be used to set up and query the `air.sqlite` database shown in the example above.
### 8\.4\.1 Creating a new database with `RSQLite`
Similarly to the raw SQLite syntax, connecting to a database that does not exist yet actually creates this (empty) database. Note that for all interactions with the database from within R, we need to refer to the connection (here: `con_air`).
```
# load packages
library(RSQLite)
# initialize the database
con_air <- dbConnect(SQLite(), "data/air.sqlite")
```
### 8\.4\.2 Importing data
With `RSQLite` we can easily add `data.frame`s as SQLite tables to the database.
```
# import data into current R session
flights <- fread("data/flights.csv")
airports <- fread("data/airports.csv")
carriers <- fread("data/carriers.csv")
# add tables to database
dbWriteTable(con_air, "flights", flights)
dbWriteTable(con_air, "airports", airports)
dbWriteTable(con_air, "carriers", carriers)
```
### 8\.4\.3 Issuing queries
Now we can query the database from within R. By default, `RSQLite` returns the query results as `data.frame`s. Queries are simply character strings written in SQLite.
```
# define query
delay_query <-
"SELECT
year,
month,
day,
dep_delay,
flight
FROM (flights INNER JOIN airports ON flights.origin=airports.iata)
INNER JOIN carriers ON flights.carrier = carriers.Code
WHERE carriers.Description = 'United Air Lines Inc.'
AND airports.airport = 'Newark Intl'
ORDER BY flight
LIMIT 10;
"
# issue query
delays_df <- dbGetQuery(con_air, delay_query)
delays_df
# clean up
dbDisconnect(con_air)
```
When done working with the database, we close the connection to the database with `dbDisconnect(con_air)`.
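To double\-check from within R whether SQLite actually uses the indices created above, you can prepend `EXPLAIN QUERY PLAN` to a query. The following is a quick sketch; it assumes that the indices created in the `sqlite3` session above are present in `data/air.sqlite`.

```
# re-connect to the database
con_air <- dbConnect(SQLite(), "data/air.sqlite")
# ask SQLite for the query plan of the join query defined above
plan <- dbGetQuery(con_air, paste("EXPLAIN QUERY PLAN", delay_query))
plan # rows mentioning 'USING INDEX ...' indicate that an index is used
# clean up
dbDisconnect(con_air)
```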
8\.5 Cloud solutions for (big) data storage
-------------------------------------------
As outlined in the previous section, RDBMSs are a very practical tool for storing the structured data of an analytics project locally in a database. A local SQLite database can easily be set up and accessed via R, allowing one to write the whole data pipeline – from data gathering to filtering, aggregating, and finally analyzing – in R. In contrast to directly working with CSV files, using SQLite has the advantage of organizing the data access much more efficiently in terms of RAM. Only the final result of a query is really loaded fully into R’s memory.
If mass storage space is too scarce, or if RAM is nevertheless not sufficient even when organizing data access via SQLite, several cloud solutions come to the rescue. Although you could also rent a traditional web server and host a SQL database there, this is usually not worthwhile for a data analytics project. In the next section we thus look at three important cases of how to store data as part of an analytics project: *RDBMS in the cloud*, a serverless *data warehouse* solution for large datasets called *Google BigQuery*, and a simple storage service to use as a *data lake* called *AWS S3*. All of these solutions are discussed from a data analytics perspective, and for all of these solutions we will look at how to make use of them from within R.
### 8\.5\.1 Easy\-to\-use RDBMS in the cloud: AWS RDS
Once we have set up the RStudio server on an EC2 instance, we can run the SQLite examples shown above on it. There are no additional steps needed to install SQLite. However, when using RDBMSs in the cloud, we typically have a more sophisticated implementation than SQLite in mind. Particularly, we want to set up an actual RDBMS server running in the cloud to which several clients can connect (e.g., via RStudio Server).
AWS’s Relational Database Service (RDS) provides an easy way to set up and run a SQL database in the cloud. The great advantage for users new to RDBMS/SQL is that you do not have to manually set up a server (e.g., an EC2 instance) and install/configure the SQL server. Instead, you can directly set up a fully functioning relational database in the cloud.
As a first step, open the AWS console and search for/select “RDS” in the search bar. Then, click on “Create database” in the lower part of the landing page.
Figure 8\.2: Create a managed relational database on AWS RDS.
On the next page, select “Easy create”, “MySQL”, and the “Free tier” DB instance size. Further down you will have to set the database instance identifier, the user name, and a password.
Figure 8\.3: Easy creation of an RDS MySQL DB.
Once the database instance is ready, you will see it in the databases overview. Click on the DB identifier (the name of your database shown in the list of databases), and click on modify (button in the upper\-right corner). In the “Connectivity” panel under “Additional configuration”, select *Publicly accessible* (this is necessary to interact with the DB from your local machine), and save the settings. Back on the overview page of your database, under “Connectivity \& security”, click on the link under the VPC security groups, scroll down and select the “Inbound rules” tab. Edit the inbound rule to allow any IP4 inbound traffic.[45](#fn45)
Figure 8\.4: Allow all IP4 inbound traffic (set Source to `0.0.0.0/0`).
```
# load packages
library(RMySQL)
library(data.table)
# fix vars
# replace this with the Endpoint shown in the AWS RDS console
RDS_ENDPOINT <- "MY-ENDPOINT"
# replace this with the password you have set when initiating the RDS DB on AWS
PW <- "MY-PW"
# connect to DB
con_rds <- dbConnect(RMySQL::MySQL(),
host=RDS_ENDPOINT,
port=3306,
username="admin",
password=PW)
# create a new database on the MySQL RDS instance
dbSendQuery(con_rds, "CREATE DATABASE air")
# disconnect and re-connect directly to the new DB
dbDisconnect(con_rds)
con_rds <- dbConnect(RMySQL::MySQL(),
host=RDS_ENDPOINT,
port=3306,
username="admin",
dbname="air",
password=PW)
```
`RMySQL` and `RSQLite` both build on the `DBI` package, which generalizes how we can interact with SQL\-type databases via R. This makes it straightforward to apply what we have learned so far by interacting with our local SQLite database to interactions with other databases. As soon as the connection to the new database is established, we can essentially use the same R functions as before to create new tables and import data.
```
# import data into current R session
flights <- fread("data/flights.csv")
airports <- fread("data/airports.csv")
carriers <- fread("data/carriers.csv")
# add tables to database
dbWriteTable(con_rds, "flights", flights)
dbWriteTable(con_rds, "airports", airports)
dbWriteTable(con_rds, "carriers", carriers)
```
Finally, we can query our RDS MySQL database on AWS.
```
# define query
delay_query <-
"SELECT
year,
month,
day,
dep_delay,
flight
FROM (flights INNER JOIN airports ON flights.origin=airports.iata)
INNER JOIN carriers ON flights.carrier = carriers.Code
WHERE carriers.Description = 'United Air Lines Inc.'
AND airports.airport = 'Newark Intl'
ORDER BY flight
LIMIT 10;
"
# issue query
delays_df <- dbGetQuery(con_rds, delay_query)
delays_df
# clean up
dbDisconnect(con_rds)
```
8\.6 Column\-based analytics databases
--------------------------------------
As outlined in the discussion of row\-based vs. column\-based databases above, many data analytics tasks focus on few columns but many rows, hence making column\-based databases the better option for large\-scale analytics purposes. [Apache Druid](https://druid.apache.org/) ([Yang et al. 2014](#ref-Druid)) is one such solution that has particular advantages for the data analytics perspective taken in this book. It can easily be run on a local machine (Linux and Mac/OSX), or on a cluster in the cloud, and it easily allows for connections to external data, for example, data stored on Google Cloud Storage. Moreover, it can be interfaced by `RDruid` ([Metamarkets Group Inc. 2023](#ref-RDruid)) to run Druid queries from within R, or, yet again, Druid can be directly queried via SQL.
To get started with Apache Druid, navigate to <https://druid.apache.org/>. Under [downloads](https://druid.apache.org/downloads.html) you will find a link to download the latest stable release (at the time of writing this book: 25\.0\.0\). On the Apache Druid landing page, you will also find a link [Quickstart](https://druid.apache.org/docs/latest/tutorials/index.html) with all the details regarding the installation and setup. Importantly, at the time of writing this book, only Linux and macOS are supported (Windows is not).
### 8\.6\.1 Installation and start up
On Linux, follow these steps to set up Apache Druid on your machine. First, download and unpack the Druid binary in the location in which you want to work with Druid, either via the terminal or from within R, as sketched below.
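The following R sketch downloads and unpacks the release used in this chapter. The exact download URL is an assumption based on the Apache archive naming scheme, so verify the link on the Druid downloads page if it has changed.

```
# download and unpack Apache Druid 25.0.0 from within R
# (URL assumed from the Apache archive naming scheme; verify on the downloads page)
druid_url <-
  "https://archive.apache.org/dist/druid/25.0.0/apache-druid-25.0.0-bin.tar.gz"
druid_tar <- "apache-druid-25.0.0-bin.tar.gz"
download.file(druid_url, destfile = druid_tar)
untar(druid_tar) # creates the folder apache-druid-25.0.0
```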
Using Druid in its most basic form is then straightforward. Simply navigate to the unpacked folder and run `./bin/start-micro-quickstart`.
```
# navigate to local copy of druid
cd apache-druid-25.0.0
# start up druid (basic/minimal settings)
./bin/start-micro-quickstart
```
### 8\.6\.2 First steps via Druid’s GUI
Once all Druid services are running, open a new browser window and navigate to `http://localhost:8888`. This will open Druid’s graphical user interface (GUI). The GUI provides easy\-to\-use interfaces to all basic Druid services, ranging from the loading of data to querying via Druid’s SQL. Figure [8\.5](data-collection-and-data-storage.html#fig:druidstart) highlights the GUI buttons mentioned in the instructions below.
Figure 8\.5: Apache Druid GUI starting page. White boxes highlight buttons for the Druid services discussed in the main text (from left to right): the query editor (run Druid SQL queries on any of the loaded data sources directly here); the data load service (use this to import data from local files); and the Datasources console (lists all currently available data sources).
#### 8\.6\.2\.1 Load data into Druid
In a first step, we will import the TLC taxi trips dataset from the locally stored CSV file. To do so, click on *Load data/Batch \- classic*, then click on *Start new batch spec*, and then select *Local disk* and *Connect*. On the right side of the Druid GUI, a menu will open. In the `Base directory` field, enter the path to the local directory in which you have stored the TLC taxi trips CSV file used in the examples above (`../data/`).[46](#fn46) In the `File filter` field, enter `tlc_trips.csv`.[47](#fn47) Finally click on the *Apply* button.
Figure 8\.6: Apache Druid GUI: CSV parse menu for classic batch data ingestion.
The first few lines of the raw data will appear in the Druid console. In the lower\-right corner of the console, click on *Next: Parse data*. Druid will automatically guess the delimiter used in the CSV (following the examples above, this is `,`) and present the first few parsed rows.
If all looks good, click on *Next: Parse time* in the lower\-right corner of the console. Druid is implemented to work particularly fast on time\-series and panel data. To this end, it expects you to define a main time\-variable in your dataset, which then can be used to index and partition your overall dataset to speed up queries for specific time frames. Per default, Druid will suggest using the first column that looks like a time format (in the TLC\-data, this would be column 2, the pick\-up time of a trip, which seems very reasonable for the sake of this example). We move on with a click on *Next: Transform* in the lower right corner. Druid allows you, right at the step of loading data, to add or transform variables/columns. As we do not need to change anything at this point, we continue with *Next: Filter* in the lower\-right corner. At this stage you can filter out rows/observations that you are sure should not be included in any of the queries/analyses performed later via Druid.
For this example, we do not filter out any observations and continue via *Next: Configure schema* in the lower\-right corner. Druid guesses the schema/data types for each column based on sampling the first few observations in the dataset. Notice, for example, how Druid considers `vendor_name` to be a `string` and `Trip_distance` to be a `double` (a 64\-bit floating point number). In most applications of Druid for the data analytics perspective of this book, the guessed data types will be just fine. We will leave the data types as\-is and keep the original column/variable names. You can easily change names of variables/columns by double\-clicking on the corresponding column name, which will open a menu on the right\-hand side of the console. With this, all the main parameters to load the data are defined. What follows has to do with optimizing Druid’s performance.
Once you click on *Next: Partition* in the lower\-right corner, you will have to choose the primary partitioning, which is always based on time (again, this has to do with Druid being optimized to work on large time\-series and panel datasets). Basically, you need to decide whether the data should be organized into chunks per year, month, week, etc. For this example, we will segment the data according to months. To this end, from the drop\-down menu under `Segment granularity`, choose `month`. For the rest of the parameters, we keep the default values. Continue by clicking on *Next: Tune* (we do not change anything here) and then on *Next: Publish*. In the menu that appears, you can choose the name under which the TLC taxi trips data should be listed in the *Datasources* menu on your local Druid installation, once all the data is loaded/processed. Thinking of SQL queries when working with Druid, the `Datasource name` is what you then will use in the `FROM` statement of a Druid SQL query (in analogy to a table name in the case of RDBMSs like SQLite). We keep the suggested name `tlc_trips`. Thus, you can click on *Edit spec* in the lower\-right corner. An editor window will open and display all your load configurations as a JSON file. Only change anything at this step if you really know what you are doing. Finally, click on *Submit* in the lower\-right corner. This will trigger the loading of data into Druid. As in the case of the RDBMS covered above, the data ingestion or data loading process primarily involves indexing and writing data to disk. It does not mean importing data to RAM. Since the CSV file used in this example is rather large, this process can take several minutes on a modern laptop computer.
Once the data ingestion is finished, click on the *Datasources* tab in the top menu bar to verify the ingestion. The `tlc_trips` dataset should now appear in the list of data sources in Druid.
Figure 8\.7: Apache Druid: Datasources console.
#### 8\.6\.2\.2 Query Druid via the GUI SQL console
Once the data is loaded into Druid, we can directly query it via the SQL console in Druid’s GUI. To do this, navigate in Druid to *Query*. To illustrate the strengths of Druid as an analytic database, we run an extensive data aggregation query. Specifically, we count the number of cases (trips) per vendor and split the number of trips per vendor further by payment type.
```
SELECT
vendor_name,
Payment_Type,
COUNT(*) AS Count_trips
FROM tlc_trips
GROUP BY vendor_name, Payment_Type
```
Note that for such simple queries, Druid SQL is essentially identical to the SQL dialects covered in previous chapters and subsections, which makes it rather simple for beginners to start productively engaging with Druid. SQL queries can directly be entered in the query tab; a click on *Run* will send the query to Druid, and the results are shown right below.
Figure 8\.8: Apache Druid query console with Druid\-SQL example: count the number of cases per vendor and payment type.
Counting the number of taxi trips per vendor name and payment type implies using the entire dataset of over 27 million rows (1\.5GB). Nevertheless, Druid needs less than a second and hardly any RAM to compute the results.
### 8\.6\.3 Query Druid from R
Apache provides high\-level interfaces to Druid for several languages common in data science/data analytics. The `RDruid` package provides such a Druid connector for R. The package can be installed from GitHub via the `devtools` package.
```
# install devtools if necessary
if (!require("devtools")) {
install.packages("devtools")}
# install RDruid
devtools::install_github("druid-io/RDruid")
```
The `RDruid` package provides several high\-level functions to issue specific Druid queries; however, the syntax might not be straightforward for beginners, and the package has not been further developed for many years.
Thanks to Druid’s basic architecture as a web application, however, there is a simple alternative to the `RDruid` package. Druid accepts queries via HTTP POST calls (with SQL queries embedded in a JSON file sent in the HTTP body). The data is then returned as a compressed JSON string in the HTTP response to the POST request. We can build on this to implement our own simple `druid()` function to query Druid from R.
```
# create R function to query Druid (locally)
druid <-
function(query){
# dependencies
require(jsonlite)
require(httr)
require(data.table)
# basic POST body
base_query <-
'{
"context": {
"sqlOuterLimit": 1001,
"sqlQueryId": "1"},
"header": true,
"query": "",
"resultFormat": "csv",
"sqlTypesHeader": false,
"typesHeader": false
}'
param_list <- fromJSON(base_query)
# add SQL query
param_list$query <- query
# send query; parse result
resp <- POST("http://localhost:8888/druid/v2/sql",
body = param_list,
encode = "json")
parsed <- fread(content(resp, as = "text", encoding = "UTF-8"))
return(parsed)
}
```
Now we can send queries to our local Druid installation. Importantly, Druid needs to be started up in order to make this work. In the example below we start up Druid from within R via `system("apache-druid-25.0.0/bin/start-micro-quickstart")` (make sure that the working directory is set correctly before running this). Then, we send the same query as in the Druid GUI example from above.
```
# start Druid
system("apache-druid-25.0.0/bin/start-micro-quickstart",
intern = FALSE,
wait = FALSE)
Sys.sleep(30) # wait for Druid to start up
# query tlc data
query <-
'
SELECT
vendor_name,
Payment_Type,
COUNT(*) AS Count_trips
FROM tlc_trips
GROUP BY vendor_name, Payment_Type
'
result <- druid(query)
# inspect result
result
```
```
## vendor_name Payment_Type Count_trips
## 1: CMT Cash 9618583
## 2: CMT Credit 2737111
## 3: CMT Dispute 16774
## 4: CMT No Charge 82142
## 5: DDS CASH 1332901
## 6: DDS CREDIT 320411
## 7: VTS CASH 10264988
## 8: VTS Credit 3099625
```
8\.7 Data warehouses
--------------------
Unlike RDBMSs, data warehouses are usually designed for analytics rather than for the provision of data for everyday operations. Generally, data warehouses contain well\-organized and well\-structured data, but are not as stringent as RDBMSs when it comes to organizing data in relational tables. Typically, they build on a table\-based logic, but allow for nesting structures and more flexible storage approaches. They are designed to contain large amounts of data (via horizontal scaling) and are usually column\-based. From the perspective of Big Data Analytics taken in this book, there are several suitable and easily accessible data warehouse solutions provided in the cloud. In the following example, we will introduce one such solution called *Google BigQuery*.
### 8\.7\.1 Data warehouse for analytics: Google BigQuery example
Google BigQuery is flexible regarding the upload and export of data and can be set up straightforwardly for a data analytics project with hardly any set up costs. The pricing schema is usage\-based. Unless you store massive amounts of data on it, you will only be charged for the volume of data processed. Moreover, there is a straightforward R\-interface to Google BigQuery called [`bigrquery`](https://bigrquery.r-dbi.org/), which allows for the same R/SQL\-syntax as R’s interfaces to traditional relational databases.
**Get started with `bigrquery`**
To get started with Google BigQuery and `bigrquery` ([Wickham and Bryan 2022](#ref-bigrquery)), go to <https://cloud.google.com/bigquery>. Click on “Try Big Query” (if new to this) or “Go to console” (if used previously). Create a Google Cloud project to use BigQuery with. Note that, as in general for Google Cloud services, you need to have a credit card registered with the project to do this. However, for learning and testing purposes, Google Cloud offers 1TB of free queries per month. All the examples shown below combined will not exceed this free tier. Finally, run `install.packages("bigrquery")` in R.
To set up an R session to interface with BigQuery, you need to indicate which Google BigQuery project you want to use for the billing (the `BILLING` variable in the example below), as well as the Google BigQuery project in which the data is stored that you want to query (the `PROJECT` variable below). This distinction is very useful because it easily allows you to query data from a large array of publicly available datasets on BigQuery. In the set up example code below, we use this option in order to access an existing and publicly available dataset (provided in the `bigquery-public-data` project) called `google_analytics_sample`. In fact, this dataset provides the raw Google Analytics data used in the Big\-P example discussed in Chapter 2\.
Finally, all that is left to do is to connect to BigQuery via the already familiar `dbConnect()` function provided in `DBI`.[48](#fn48) When first connecting to and querying BigQuery with your Google Cloud account, a browser window will open, and you will be prompted to grant `bigrquery` access to your account/project. To do so, you will have to be logged in to your Google account. See the **Important details** section on [https://bigrquery.r\-dbi.org/](https://bigrquery.r-dbi.org/) for details on the authentication.
```
# load packages, credentials
library(bigrquery)
library(data.table)
library(DBI)
# fix vars
# the project ID on BigQuery (billing must be enabled)
BILLING <- "bda-examples"
# the project name on BigQuery
PROJECT <- "bigquery-public-data"
DATASET <- "google_analytics_sample"
# connect to DB on BigQuery
con <- dbConnect(
bigrquery::bigquery(),
project = PROJECT,
dataset = DATASET,
billing = BILLING
)
```
**Get familiar with BigQuery**
The basic query syntax is now essentially identical to what we have covered in the RDBMS examples above.[49](#fn49) In this first query, we count the number of times a Google merchandise shop visit originates from a given web domain on August 1, 2017 (hence the query to table `ga_sessions_20170801`). Note the way we refer to the specific table (in the `FROM` statement of the query below): `bigquery-public-data` is the pointer to the BigQuery project, `google_analytics_sample` is the name of the data warehouse, and `ga_sessions_20170801` is the name of the specific table we want to query data from. Finally, note the argument `page_size=15000` as part of the familiar `dbGetQuery()` function. This ensures that `bigrquery` does not exceed the limit of volume per second for downloads via the Google BigQuery API (on which `bigrquery` builds).
```
# run query
query <-
"
SELECT DISTINCT trafficSource.source AS origin,
COUNT(trafficSource.source) AS no_occ
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801`
GROUP BY trafficSource.source
ORDER BY no_occ DESC;
"
ga <- as.data.table(dbGetQuery(con, query, page_size=15000))
head(ga)
```
Note the output displayed in the console: `bigrquery` indicates how much data volume was processed as part of the query (which in turn indicates what will be charged to your billing project).
**Upload data to BigQuery**
Storing your entire raw dataset on BigQuery is straightforward with `bigrquery`. In the following simple example, we upload the previously gathered and locally stored TLC taxi trips data. To do so, we first create and connect to a new dataset on BigQuery. To keep things simple, we initialize the new dataset in the same project used for the billing.
```
# name of the dataset to be created
DATASET <- "tlc"
# connect and initialize a new dataset
con <- dbConnect(
bigrquery::bigquery(),
project = BILLING,
billing = BILLING,
dataset = DATASET
)
```
In a first step, we create the dataset to which we then can add the table.
```
tlc_ds <- bq_dataset(BILLING, DATASET)
bq_dataset_create(tlc_ds)
```
We then load the TLC dataset into R via `fread()` and upload it as a new table to your project/dataset on BigQuery via `bigrquery`. For the sake of the example, we only upload the first 10,000 rows.
```
# read data from csv
tlc <- fread("data/tlc_trips.csv.gz", nrows = 10000)
# write data to a new table
dbWriteTable(con, name = "tlc_trips", value = tlc)
```
Alternatively, you can easily upload data via the Google BigQuery console in the browser. Go to <https://console.cloud.google.com/bigquery>, select (or create) the project you want to upload data to, then in the *Explorer* section click on *\+ ADD DATA*, and select the file you want to upload. You can either upload the data from disk, from Google Cloud Storage, or from a third\-party connection. Uploading the data into BigQuery via Google Cloud Storage is particularly useful for large datasets.
Finally, we can test the newly created dataset/table with the following query:
```
test_query <-
"
SELECT *
FROM tlc.tlc_trips
LIMIT 10
"
test <- dbGetQuery(con, test_query)
```
**Tutorial: Retrieve and prepare Google Analytics data**
The following tutorial illustrates how the raw data for the Big\-P example in Chapter 2 was collected and prepared via Google BigQuery and R. Before we get started, note an important aspect of a data warehouse solution like BigQuery in contrast to common applications of RDBMSs. As data warehouses are used in a more flexible way than relational databases, it is not uncommon to store data files/tables containing the same variables separately in various tables, for example to store one table per day or year of a panel dataset. On Google BigQuery, this partitioning of datasets into several components can additionally make sense for cost reasons. Suppose you want to only compute summary statistics for certain variables over a given time frame. If all observations of a large dataset are stored in one standard BigQuery table, such a query results in processing GBs or TBs of data, as the observations from the corresponding time frame need to be filtered out of the entire dataset. Partitioning the data into several subsets helps avoid this, as BigQuery has several features that allow the definition of SQL queries to be run on partitioned data. The publicly available Google Analytics dataset is organized in such a partitioned way. The data is stored in several tables (one for each day of the observation period), whereby the last few characters of the table name contain the date of the corresponding observation day (such as the one used in the example above: `ga_sessions_20170801`). If we want to combine data from several of those tables, we can use the wildcard character (`*`) to indicate that BigQuery should consider all tables matching the table name up to the `*`: `FROM bigquery-public-data.google_analytics_sample.ga_sessions_*`.
We proceed by first connecting the R session with GoogleBigQuery.
```
# fix vars
# the project ID on BigQuery (billing must be enabled)
BILLING <- "YOUR-BILLING-PROJECT-ID"
# the project name on BigQuery
PROJECT <- "bigquery-public-data"
DATASET <- "google_analytics_sample"
# connect to DB on BigQuery
con <- dbConnect(
bigrquery::bigquery(),
project = PROJECT,
dataset = DATASET,
billing = BILLING
)
```
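Before pulling the full two\-year range, it can be useful to restrict the table suffix to a narrower window (e.g., a single month) in order to keep the processed data volume, and thus the cost, low. The following is a minimal sketch; it selects only two of the columns used in the full query below.

```
# sketch: query only the partitions for July 2017
query_month <-
"
SELECT
totals.visits,
trafficSource.source
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE _TABLE_SUFFIX BETWEEN '20170701' AND '20170731';
"
ga_july <- as.data.table(dbGetQuery(con, query_month, page_size=15000))
nrow(ga_july)
```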
The query combines all Google Analytics data recorded from the beginning of 2016 to the end of 2017 via `WHERE _TABLE_SUFFIX BETWEEN '20160101' AND '20171231'`. This gives us all the raw data used in the Big\-P analysis shown in Chapter 2\.
```
# run query
query <-
"
SELECT
totals.visits,
totals.transactions,
trafficSource.source,
device.browser,
device.isMobile,
geoNetwork.city,
geoNetwork.country,
channelGrouping
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE _TABLE_SUFFIX BETWEEN '20160101' AND '20171231';
"
ga <- as.data.table(dbGetQuery(con, query, page_size=15000))
```
Finally, we use `data.table` and basic R to prepare the final analytic dataset and write it on disk.
```
# further cleaning and coding via data.table and basic R
ga$transactions[is.na(ga$transactions)] <- 0
ga <- ga[ga$city!="not available in demo dataset",]
ga$purchase <- as.integer(0<ga$transactions)
ga$transactions <- NULL
ga_p <- ga[purchase==1]
ga_rest <- ga[purchase==0][sample(1:nrow(ga[purchase==0]), 45000)]
ga <- rbindlist(list(ga_p, ga_rest))
potential_sources <- table(ga$source)
potential_sources <- names(potential_sources[1<potential_sources])
ga <- ga[ga$source %in% potential_sources,]
# store dataset on local hard disk
fwrite(ga, file="data/ga.csv")
# clean up
dbDisconnect(con)
```
Note how we combine BigQuery as our data warehouse with basic R for data preparation. Solutions like BigQuery are particularly useful for this kind of approach as part of an analytics project: Large operations such as the selection of columns/variables from large\-scale data sources are handled within the warehouse in the cloud, and the refinement/cleaning steps can then be implemented locally on a much smaller subset.[50](#fn50)
Note that the wildcard character (`*`) in the query is used to fetch data from several partitions of the overall dataset.
8\.8 Data lakes and simple storage service
------------------------------------------
Broadly speaking a data lake is where all your data resides (these days, this is typically somewhere in the cloud). The data is simply stored in whatever file format and in simple terms organized in folders and sub\-folders. In the same data lake you might thus store CSV files, SQL database dumps, log files, image files, raw text, etc. In addition, you typically have many options to define access rights to files, including to easily make them accessible for download to the public. For a simple data analytics project in the context of economic research or business analytics, the data lake in the cloud concept is a useful tool to store all project\-related raw data files. On the one hand you avoid running into troubles with occupying gigabytes or terabytes of your local hard disk with files that are relevant but only rarely imported/worked with. On the other hand you can properly organize all the raw data for reproducibility purposes and easily share the files with colleagues (and eventually the public). For example, you can use one main folder (one “bucket”) for an entire analytics project, store all the raw data in one sub\-folder (for reproduction purposes), and store all the final analytic datasets in another sub\-folder for replication purposes and more frequent access as well as sharing across a team of co\-workers.
There are several types of cloud\-based data lake solutions available, many of which are primarily focused on corporate data storage and provide a variety of services (for example, AWS Lake Formation or Azure Data Lake) that might go well beyond the data analytics perspective taken in this book. However, most of these solutions build in the end on a so\-called simple storage service such as AWS S3 or Google Cloud Storage, which build the core of the lake – the place where the data is actually stored and accessed. In the following, we will look at how to use such a simple storage service (AWS S3\) as a data lake in simple analytics projects.[51](#fn51)
Finally, we will look at a very interesting approach to combine the concept of a data lake with the concept of a data warehouse. That is, we briefly look at solutions of how some analytics tools (specifically, a tool called Amazon Athena) can directly be used to query/analyze the data stored in the simple storage service.
### 8\.8\.1 AWS S3 with R: First steps
For the following first steps with AWS S3 and R, you will need an AWS account (the same as above for EC2\) and IAM credentials from your AWS account with the right to access S3\.[52](#fn52) Finally, you will have to install the `aws.s3` package in R in order to access S3 via R: `install.packages("aws.s3")`.
To initiate an R session in which you connect to S3, `aws.s3` ([Leeper 2020](#ref-aws.s3)) must be loaded and the following environment variables must be set:
* `AWS_ACCESS_KEY_ID`: your access key ID (of the keypair with rights to use S3\)
* `AWS_SECRET_KEY`: your access key (of the keypair with rights to use S3\)
* `REGION`: the region in which your S3 buckets are/will be located (e.g., `"eu-central-1"`)
```
# load packages
library(aws.s3)
# set environment variables with your AWS S3 credentials
Sys.setenv("AWS_ACCESS_KEY_ID" = AWS_ACCESS_KEY_ID,
"AWS_SECRET_ACCESS_KEY" = AWS_SECRET_KEY,
"AWS_DEFAULT_REGION" = REGION)
```
In a first step, we create a project bucket (the main repository for our project) to store all the data of our analytics project. All the raw data can be placed directly in this main folder. Then, we add two sub\-folders to this bucket: `raw_data` (for raw files you may want to keep separate) and `analytic_data` (for the cleaned/prepared datasets underlying the analyses in the project).[53](#fn53)
```
# fix variable for bucket name
BUCKET <- "tlc-trips"
# create project bucket
put_bucket(BUCKET)
# create folders
put_folder("raw_data", BUCKET)
put_folder("analytic_data", BUCKET)
```
### 8\.8\.2 Uploading data to S3
Now we can start uploading the data to the bucket (and the sub\-folder). For example, to remain within the context of the TLC taxi trips data, we upload the original Parquet files directly to the bucket and the prepared CSV file to `analytic_data`. For large files (larger than 100MB) it is recommended to use the multipart option (upload of file in several parts; `multipart=TRUE`).
```
# upload to bucket
# final analytic dataset
put_object(
file = "data/tlc_trips.csv", # the file you want to upload
object = "analytic_data/tlc_trips.csv", # name of the file in the bucket
bucket = BUCKET,
multipart = TRUE
)
# upload raw data
file_paths <- list.files("data/tlc_trips/raw_data", full.names = TRUE)
lapply(file_paths,
put_object,
bucket=BUCKET,
multipart=TRUE)
```
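To verify the upload and to retrieve files later (e.g., on another machine), you can list the bucket’s contents and download individual objects. A quick sketch:

```
# list the objects stored in the project bucket
bucket_contents <- get_bucket_df(BUCKET)
head(bucket_contents$Key)
# download the analytic dataset from the bucket to the local disk
save_object("analytic_data/tlc_trips.csv",
            bucket = BUCKET,
            file = "data/tlc_trips_from_s3.csv")
```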
### 8\.8\.3 More than just simple storage: S3 \+ Amazon Athena
There are several implementations of interfaces with Amazon Athena in R. Here, we will rely on `AWR.Athena` ([Fultz and Daróczi 2019](#ref-AWR.Athena)) (run `install.packages("AWR.Athena")`), which allows interacting with Amazon Athena via the familiar `DBI` package ([R Special Interest Group on Databases (R\-SIG\-DB), Wickham, and Müller 2022](#ref-DBI)).
```
# SET UP -------------------------
# load packages
library(DBI)
library(aws.s3)
# aws credentials with Athena and S3 rights and region
AWS_ACCESS_KEY_ID <- "YOUR_KEY_ID"
AWS_ACCESS_KEY <- "YOUR_KEY"
REGION <- "eu-central-1"
```
```
# establish AWS connection
Sys.setenv("AWS_ACCESS_KEY_ID" = AWS_ACCESS_KEY_ID,
"AWS_SECRET_ACCESS_KEY" = AWS_ACCESS_KEY,
"AWS_DEFAULT_REGION" = REGION)
```
Create a bucket for the output.
```
OUTPUT_BUCKET <- "bda-athena"
put_bucket(OUTPUT_BUCKET, region="us-east-1")
```
Now we can connect to Amazon Athena to query data from files in S3 via the `RJDBC` package ([Urbanek 2022](#ref-RJDBC)).
```
# load packages
library(RJDBC)
library(DBI)
# download Athena JDBC driver
URL <- "https://s3.amazonaws.com/athena-downloads/drivers/JDBC/"
VERSION <- "AthenaJDBC_1.1.0/AthenaJDBC41-1.1.0.jar"
DRV_FILE <- "AthenaJDBC41-1.1.0.jar"
download.file(paste0(URL, VERSION), destfile = DRV_FILE)
# connect to JDBC
athena <- JDBC(driverClass = "com.amazonaws.athena.jdbc.AthenaDriver",
DRV_FILE, identifier.quote = "'")
# connect to Athena
con <- dbConnect(athena, "jdbc:awsathena://athena.us-east-1.amazonaws.com:443/",
s3_staging_dir = "s3://bda-athena", user = AWS_ACCESS_KEY_ID,
password = AWS_ACCESS_KEY)
```
In order to query data stored in S3 via Amazon Athena, we need to create an *external table* in Athena, which will be based on data stored in S3\.
```
query_create_table <-
"
CREATE EXTERNAL TABLE default.trips (
`vendor_name` string,
`Trip_Pickup_DateTime` string,
`Trip_Dropoff_DateTime` string,
`Passenger_Count` int,
`Trip_Distance` double,
`Start_Lon` double,
`Start_Lat` double,
`Rate_Code` string,
`store_and_forward` string,
`End_Lon` double,
`End_Lat` double,
`Payment_Type` string,
`Fare_Amt` double,
`surcharge` double,
`mta_tax` string,
`Tip_Amt` double,
`Tolls_Amt` double,
`Total_Amt` double
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 's3://tlc-trips/analytic_data/'
"
dbSendQuery(con, query_create_table)
```
Run a test query to verify the table.
```
test_query <-
"
SELECT *
FROM default.trips
LIMIT 10
"
test <- dbGetQuery(con, test_query)
dim(test)
```
```
## [1] 10 18
```
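Since Athena runs the query engine on AWS’s side, we can also aggregate over the entire table stored in S3 without loading it into R; only the (small) result set is transferred. A sketch of such a query (column names follow the table definition above):
```
# count trips per vendor directly on the data stored in S3
agg_query <-
  "
  SELECT vendor_name, COUNT(*) AS n_trips
  FROM default.trips
  GROUP BY vendor_name
  "
vendor_counts <- dbGetQuery(con, agg_query)
vendor_counts
```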
Finally, close the connection.
```
dbDisconnect(con)
```
```
## [1] TRUE
```
8\.9 Wrapping up
----------------
* It is good practice to set up the entire high\-level *pipeline* in the same language (here, R). This substantially facilitates your workflow and makes your overall pipeline easier to maintain. Importantly, as illustrated in the sections above, this practice does not mean that all of the underlying data processing is actually done in R. We simply use R as the highest\-level layer and call a range of services under the hood to handle each of the pipeline components as efficiently as possible.
* *Apache Arrow* allows you to combine and clean raw data without exceeding RAM; in addition, it facilitates working with newer (big) data formats built for columnar data storage (like *Apache Parquet*).
* *RDBMSs* such as *SQLite* or *MySQL* and analytics databases such as *Druid* help you store and organize clean/structured data for analytics purposes locally or in the cloud.
* *RDBMSs* like SQLite are *row\-based* (changing a value means changing a row), while modern analytics databases are usually *column*\-based (changing a value means modifying one column).
* Row\-based databases are recommended when your analytics workflow includes a lot of tables, table joins, and frequent filtering for specific observations with variables from several tables. Column\-based databases are recommended for analytics workflows involving less frequent but large\-scale data aggregation tasks.
* *Data warehouse* solutions like *Google BigQuery* are useful to store and query large (semi\-)structured datasets and are more flexible regarding hierarchical data and file formats than traditional RDBMSs.
* *Data lakes* and simple storage services are the all\-purpose tools to store vast amounts of data in any format in the cloud. Typically, solutions like *AWS S3* are a great option to store all of the raw data related to a data analytics project.
8\.1 Gathering and compilation of raw data
------------------------------------------
The NYC Taxi \& Limousine Commission (TLC) provides detailed data on all trip records, including pick\-up and drop\-off times/locations. When combining all available trip records from 2009 to 2018, we get a rather large dataset of over 200GB. The code examples below illustrate how to collect and compile the entire dataset. In order to avoid long computing times, the code examples shown below are based on a small sub\-set of the actual raw data (however, all examples involving virtual memory are in theory scalable to the extent of the entire raw dataset).
The raw data consists of several monthly Parquet files and can be downloaded via the [TLC’s website](https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page). The following short R script automates the downloading of all available trip\-record files. *NOTE*: Downloading all files can take several hours and will occupy over 200GB!
```
# Fetch all TLC trip records Data source:
# https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page
# Input: Monthly Parquet files from urls
# SET UP -----------------
# packages
library(R.utils) # to create directories from within R
# fix vars
BASE_URL <- "https://d37ci6vzurychx.cloudfront.net/trip-data/"
FILE <- "yellow_tripdata_2018-01.parquet"
URL <- paste0(BASE_URL, FILE)
OUTPUT_PATH <- "data/tlc_trips/"
START_DATE <- as.Date("2009-01-01")
END_DATE <- as.Date("2018-06-01")
# BUILD URLS -----------
# parse base url
base_url <- gsub("2018-01.parquet", "", URL)
# build urls
dates <- seq(from = START_DATE, to = END_DATE, by = "month")
year_months <- gsub("-01$", "", as.character(dates))
data_urls <- paste0(base_url, year_months, ".parquet")
data_paths <- paste0(OUTPUT_PATH, year_months, ".parquet")
# FETCH PARQUET FILES ----------------
mkdirs(OUTPUT_PATH)
# download all Parquet files in the date range
for (i in 1:length(data_urls)) {
# download to disk
download.file(data_urls[i], data_paths[i])
}
```
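Because the full download takes several hours, it can be worth making the loop restartable. A minimal variation of the loop above skips files that are already on disk:
```
# download only the files that are not yet on disk
for (i in 1:length(data_urls)) {
     if (!file.exists(data_paths[i])) {
          download.file(data_urls[i], data_paths[i], mode = "wb")
     }
}
```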
8\.2 Stack/combine raw source files
-----------------------------------
In a next step, we parse and combine the downloaded data. Depending on how you want to further work with the gathered data, one or another storage format might be more convenient. For the sake of illustration (and the following examples building on the downloaded data), we store the downloaded data in one CSV file. To this end, we make use of the `arrow` package ([Richardson et al. 2022](#ref-richardson_etal2022)), an R interface to the Apache Arrow C\+\+ library (a platform to work with large\-scale columnar data). The aim of the exercise is to combine the downloaded Parquet files into one large CSV file, which will be more easily accessible for some of the libraries used in further examples.
We start by installing the `arrow` package in the following way.
```
# install arrow
Sys.setenv(LIBARROW_MINIMAL = "false") # to enable working with compressed files
install.packages("arrow") # might take a while
```
The setting `LIBARROW_MINIMAL= "false"` ensures that the installation of arrow is not restricted to the very basic functionality of the package. Specifically, for our context it will be important that the `arrow` installation allows for the reading of compressed files.
```
# SET UP ---------------------------
# load packages
library(arrow)
library(data.table)
library(purrr)
# fix vars
INPUT_PATH <- "data/tlc_trips/"
OUTPUT_FILE <- "data/tlc_trips.parquet"
OUTPUT_FILE_CSV <- "data/tlc_trips.csv"
# list of paths to downloaded Parquet files
all_files <- list.files(INPUT_PATH, full.names = TRUE)
# LOAD, COMBINE, STORE ----------------------
# read Parquet files
all_data <- lapply(all_files, read_parquet, as_data_frame = FALSE)
# combine all arrow tables into one
combined_data <- lift_dl(concat_tables)(all_data)
# write combined dataset to csv file
write_csv_arrow(combined_data,
file = OUTPUT_FILE_CSV,
include_header = TRUE)
```
Note that in the code example above we use `purrr::lift_dl()` to call `concat_tables()` on the list of Arrow tables (instead of passing each table as a separate argument). The `arrow` function `concat_tables()` combines several table objects into one table.
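Note also that `lift_dl()` has been deprecated in more recent versions of `purrr`. If it is not available in your installation, base R’s `do.call()` achieves the same result (a sketch, assuming `all_data` as created above):
```
# equivalent to lift_dl(concat_tables)(all_data):
# call concat_tables() with the list elements as individual arguments
combined_data <- do.call(arrow::concat_tables, all_data)
```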
**Aside: CSV import and memory allocation, read.csv vs. fread**
The time needed for the simple step of importing rather large CSV files can vary substantially in R, depending on the function/package used. The reason is that there are different ways to allocate RAM when reading data from a CSV file. Depending on the amount of data to be read in, one or another approach might be faster. We first investigate the RAM allocation in R with `mem_change()` and `mem_used()`.
```
# SET UP -----------------
# fix variables
DATA_PATH <- "data/flights.csv"
# load packages
library(pryr)
# check how much memory is used by R (overall)
mem_used()
```
```
## 1.73 GB
```
```
# DATA IMPORT ----------------
# check the change in memory due to each step
# and measure the time needed for the import
system.time(flights <- read.csv(DATA_PATH))
```
```
## user system elapsed
## 1.496 0.117 1.646
```
```
mem_used()
```
```
## 1.76 GB
```
```
# DATA PREPARATION --------
flights <- flights[,-1:-3]
# check how much memory is used by R now
mem_used()
```
```
## 1.76 GB
```
The last result is rather interesting. The object `flights` must have been larger right after importing it than at the end of the script. We have thrown out several variables, after all. Why does R still use that much memory? R does not by default ‘clean up’ memory unless it is really necessary (meaning no more memory is available). In this case, R still has much more memory available from the operating system; thus there is no need to ‘collect the garbage’ yet. However, we can force R to collect the garbage on the spot with `gc()`. This can be helpful to better keep track of the memory needed by an analytics script.
```
gc()
```
```
##              used (Mb) gc trigger   (Mb)  max used   (Mb)
## Ncells    7039856  376   11826824  631.7  11826824  631.7
## Vcells  170456635 1300  399556013 3048.4 399271901 3046.3
```
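As an aside, `pryr::mem_change()` reports the change in memory use caused by a single expression, which is handy for tracking individual steps of a script. A minimal sketch (removing the `flights` object created above; it is re\-imported in the next code chunk anyway):
```
# memory released by removing the flights object
mem_change(rm(flights))
```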
Now, let’s see how we can improve the performance of this script with regard to memory allocation. Most memory is allocated when importing the file. Obviously, any improvement of the script must still result in importing all the data. However, there are different ways to read data into RAM. `read.csv()` reads all lines of a csv file consecutively. In contrast, `data.table::fread()` first ‘maps’ the data file into memory and only then actually reads it in line by line. This involves an additional initial step, but the larger the file, the less relevant is this first step in the total time needed to read all the data into memory. By switching on the `verbose` option, we can actually see what `fread` is doing.
```
# load packages
library(data.table)
# DATA IMPORT ----------------
system.time(flights <- fread(DATA_PATH, verbose = TRUE))
```
```
## user system elapsed
## 0.357 0.000 0.067
```
The output displayed on the console shows what is involved in steps `[1]` to `[12]` of the parsing/import procedure. Note in particular the following line under step `[7]` in the procedure:
```
Estimated number of rows: 30960501 / 92.03 = 336403
Initial alloc = 370043 rows (336403 + 9%) using
bytes/max(mean-2*sd,min) clamped between [1.1*estn, 2.0*estn]
```
This is the result of the above\-mentioned preparatory step in the form of sampling. The `fread` CSV parser first estimates how large the dataset likely is and then creates an additional allocation (in this case of `370043 rows`). Only after this are the rows actually imported into RAM. The summary of the time allocated for the different steps shown at the bottom of the output nicely illustrates that the preparatory steps of memory mapping and allocation are rather fast compared with the time needed to actually read the data into RAM. Given the size of the dataset, `fread`’s approach to memory allocation results in a much faster import of the dataset than `read.csv`’s approach.
8\.3 Efficient local data storage
---------------------------------
In this section, we are concerned with a) how we can store large datasets permanently on a mass storage device in an efficient way (here, efficient can be understood as ‘not taking up too much space’) and b) how we can load (parts of) this dataset in an efficient way (here, efficient can be understood as fast) for analysis.
We look at this problem in two situations:
* The data needs to be stored locally (e.g., on the hard disk of our laptop).
* The data can be stored on a server ‘in the cloud’.
Various tools have been developed over the last few years to improve the efficiency of storing and accessing large amounts of data, many of which go beyond the scope implied by this book’s perspective on applied data analytics. Here, we focus on the basic concept of *SQL/Relational Database Management Systems (RDBMSs)*, as well as a few alternatives that can be summarized under the term *NoSQL (‘non\-SQL’, sometimes ‘Not only SQL’)* database systems. Conveniently (and contrary to what the latter name would suggest), most of these tools can be worked with by using basic SQL queries to load/query data.
The relational database system follows the relational data model, in which the data is organized in several tables that are connected via some unique data record identifiers (keys). Such systems, for example, SQLite introduced in Chapter 3, have been used for a long time in all kinds of business and analytics contexts. They are well\-tried and stable and have a large and diverse user base. There are many technicalities involved in how they work under the hood, but for our purposes three characteristics are most relevant:
1. All common RDBMSs, like SQLite and MySQL, are *row\-based* databases. That is, data is thought of as observations/data records stored in rows of a table. One record consists of one row.
2. They are typically made for storing clean data in a *clearly defined set of tables*, with clearly defined properties. The organizing of data in various tables has (at least for our perspective here) the aim of avoiding redundancies and thereby using the available storage space more efficiently.
3. Rows are *indexed* according to the unique identifiers of tables (or one or several other variables in a table). This allows for fast querying of specific records and efficient merging/joining of tables.
While these particular features work very well also with large amounts of data, particularly for exploration and data preparation (joining tables), in the age of Big Data they might be more relevant for operational databases (in the back\-end of web applications, or simply the operational database of a business) than for the specific purpose of data analytics.
On the one hand, the data basis of an analytics project might be simpler in terms of the number of tables involved. On the other hand, Big Data, as we have seen, might come in less structured and/or more complex forms than traditional table\-like/row\-based data. *NoSQL* databases have been developed for the purposes of storing more complex/less structured data, which might not necessarily be described as a set of tables connected via keys, and for the purpose of fast analysis of large amounts of data. Again, three main characteristics of these types of databases are of particular relevance here:
1. Typically, *NoSQL* databases are not row\-based, but follow a *column\-based*, document\-based, key\-value\-based, or graph\-based data model. In what follows, the column\-based model is most relevant.
2. *NoSQL* databases are designed for horizontal scaling. That is, scaling such a database out over many nodes of a computing cluster is usually straightforward.
3. They are optimized to give quick answers based on summarizing large amounts of data, such as frequency counts and averages (sometimes by using approximations rather than exact computations.)
Figure [8\.1](data-collection-and-data-storage.html#fig:columnvsrow) illustrates the basic concept of row\-based vs. column\-based data storage.
Figure 8\.1: Schematic illustration of columnar vs. row\-based data storage.
**Aside: Row\-based vs. column\-based databases**
Conceptually, in a *row\-based database* individual values (cells) are contained in rows, which means changing one value requires updating a row. Row\-based databases (e.g., SQLite) are thus designed for efficient data reading and writing when users often access many columns but rather few observations. For example, for an operational database in the back\-end of a web application such as an online shop, a row\-based approach makes sense because hundreds or thousands of users (customers in that case) constantly add or query small amounts of data. In contrast, changing a value in *column\-based* databases means changing a column. Accessing all values in a particular column is much faster in comparison to row\-based databases.
This means that column\-based databases are useful when users tend to query rather few columns but massive numbers of observations, which is typically rather the case in an analytics context. Some well\-known data warehouse and data lake systems are therefore based on this principle (e.g., Google BigQuery). However, if analytics tasks involve a lot of (out\-of\-memory) table joins, column\-based solutions are likely to be slower than row\-based solutions.
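To make the columnar idea concrete: with a columnar file format such as Parquet (see the `arrow` examples above), only the columns actually needed have to be read from disk. A minimal sketch, assuming one of the monthly Parquet files downloaded earlier (the exact column names can differ between years of the TLC data, so adjust them accordingly):
```
# load packages
library(arrow)
# read only two columns from a columnar (Parquet) file;
# with a row-based format, entire rows would have to be scanned instead
trips <- read_parquet("data/tlc_trips/2018-01.parquet",
                      col_select = c(passenger_count, trip_distance))
dim(trips)
```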
In the following, we have a closer look at using both column\-based and row\-based tools, highlighting the practical differences between the two types of data storage solutions.
### 8\.3\.1 RDBMS basics
RDBMSs have two key features that tackle the two efficiency concerns mentioned above:
* The *relational data model*: The overall dataset is split by columns (covariates) into tables in order to reduce the storage of redundant variable\-value repetitions. The resulting database tables are then linked via key\-variables (unique identifiers). Thus (simply put), each type of entity on which observations exist resides in its own database table. Within this table, each observation has its unique ID. Keeping the data in such a structure is very efficient in terms of storage space used.
* *Indexing*: The key\-columns of the database tables are indexed, meaning (in simple terms) ordered on disk. Indexing a table takes time, but it has to be performed only once (unless the content of the table changes). The resulting index is then stored on disk as part of the database. These indices substantially reduce the number of disk accesses required to query/find specific observations. Thus, they make the loading of specific parts of the data for analysis much more efficient.
The loading/querying of data from an RDBMS typically involves the selection of specific observations (rows) and covariates (columns) from different tables. Due to the indexing, observations are selected efficiently, and the defined relations between tables (via keys) facilitate the joining of columns to a new table (the queried data).
### 8\.3\.2 Efficient data access: Indices and joins in SQLite
So far we have only had a look at the very basics of writing SQL code. Let us now further explore SQLite as an easy\-to\-use and easy\-to\-set\-up relational database solution. In a second step we then look at how to connect to a local SQLite database from within R. First, we switch to the Terminal tab in RStudio, set up a new database called `air.sqlite`, and import the csv\-file `flights.csv` (used in previous chapters) as a first table.
```
# switch to data directory
cd data
# create database and run sqlite
sqlite3 air.sqlite
```
```
-- import csvs
.mode csv
.import flights.csv flights
```
We check whether everything worked out well via the `.tables` and `.schema` commands.
```
.tables
.schema flights
```
In `flights`, each row describes a flight (the day it took place, its origin, its destination, etc.). It contains a covariate `carrier` containing the unique ID of the respective airline/carrier carrying out the flight as well as the covariates `origin` and `dest`. The latter two variables contain the unique IATA\-codes of the airports from which the flights departed and where they arrived, respectively. In `flights` we thus have observations at the level of individual flights.
Now we extend our database in a meaningful way, following the relational data model idea. First we download two additional CSV files containing data that relate to the flights table:
* [`airports.csv`](http://stat-computing.org/dataexpo/2009/airports.csv): Describes the locations of US Airports (relates to `origin` and `dest`).
* [`carriers.csv`](http://stat-computing.org/dataexpo/2009/carriers.csv): A listing of carrier codes with full names (relates to the `carrier`\-column in `flights`).
In this code example, the two CSVs have already been downloaded to the `materials/data`\-folder.
```
-- import airport data
.mode csv
.import airports.csv airports
.import carriers.csv carriers
-- inspect the result
.tables
.schema airports
.schema carriers
```
Now we can run our first query involving the relation between tables. The aim of the exercise is to query flights data (information on departure delays per flight number and date, from the `flights` table) for all `United Air Lines Inc.` flights (information from the `carriers` table) departing from `Newark Intl` airport (information from the `airports` table). In addition, we want the resulting table ordered by flight number. For the sake of the exercise, we only show the first 10 results of this query (`LIMIT 10`).
```
SELECT
year,
month,
day,
dep_delay,
flight
FROM (flights INNER JOIN airports ON flights.origin=airports.iata)
INNER JOIN carriers ON flights.carrier = carriers.Code
WHERE carriers.Description = 'United Air Lines Inc.'
AND airports.airport = 'Newark Intl'
ORDER BY flight
LIMIT 10;
```
```
## year month day dep_delay flight
## 1 2013 1 4 0 1
## 2 2013 1 5 -2 1
## 3 2013 3 6 1 1
## 4 2013 2 13 -2 3
## 5 2013 2 16 -9 3
## 6 2013 2 20 3 3
## 7 2013 2 23 -5 3
## 8 2013 2 26 24 3
## 9 2013 2 27 10 3
## 10 2013 1 5 3 10
```
Note that this query has been executed without indexing any of the tables first. Thus, SQLite could not take any ‘shortcuts’ when matching the ID columns in order to join the tables for the query output; that is, SQLite had to scan the ID columns in full to find the matching rows. Now we index the respective ID columns and re\-run the query.
```
CREATE INDEX iata_airports ON airports (iata);
CREATE INDEX origin_flights ON flights (origin);
CREATE INDEX carrier_flights ON flights (carrier);
CREATE INDEX code_carriers ON carriers (code);
```
Note that SQLite optimizes the efficiency of the query without our explicit instructions. If there are indices it can use to speed up the query, it will do so.
```
SELECT
year,
month,
day,
dep_delay,
flight
FROM (flights INNER JOIN airports ON flights.origin=airports.iata)
INNER JOIN carriers ON flights.carrier = carriers.Code
WHERE carriers.Description = 'United Air Lines Inc.'
AND airports.airport = 'Newark Intl'
ORDER BY flight
LIMIT 10;
```
```
## year month day dep_delay flight
## 1 2013 1 4 0 1
## 2 2013 1 5 -2 1
## 3 2013 3 6 1 1
## 4 2013 2 13 -2 3
## 5 2013 2 16 -9 3
## 6 2013 2 20 3 3
## 7 2013 2 23 -5 3
## 8 2013 2 26 24 3
## 9 2013 2 27 10 3
## 10 2013 1 5 3 10
```
You can find the final `air.sqlite` database, including all the indices and tables, as `materials/data/air_final.sqlite` in the book’s code repository.
8\.4 Connecting R to an RDBMS
-----------------------------
The R\-package `RSQLite` ([Müller et al. 2022](#ref-RSQLite)) embeds SQLite in R. That is, it provides functions that allow us to use SQLite directly from within R. You will see that the combination of SQLite with R is a simple but very practical approach to working very efficiently with locally stored datasets. In the following example, we explore how `RSQLite` can be used to set up and query the `air.sqlite` database shown in the example above.
### 8\.4\.1 Creating a new database with `RSQLite`
As with the raw SQLite syntax, connecting to a database that does not yet exist actually creates this (empty) database. Note that for all interactions with the database from within R, we need to refer to the connection (here: `con_air`).
```
# load packages
library(RSQLite)
# initialize the database
con_air <- dbConnect(SQLite(), "data/air.sqlite")
```
### 8\.4\.2 Importing data
With `RSQLite` we can easily add `data.frame`s as SQLite tables to the database.
```
# import data into current R session
flights <- fread("data/flights.csv")
airports <- fread("data/airports.csv")
carriers <- fread("data/carriers.csv")
# add tables to database
dbWriteTable(con_air, "flights", flights)
dbWriteTable(con_air, "airports", airports)
dbWriteTable(con_air, "carriers", carriers)
```
### 8\.4\.3 Issuing queries
Now we can query the database from within R. By default, `RSQLite` returns the query results as `data.frame`s. Queries are simply character strings written in SQLite.
```
# define query
delay_query <-
"SELECT
year,
month,
day,
dep_delay,
flight
FROM (flights INNER JOIN airports ON flights.origin=airports.iata)
INNER JOIN carriers ON flights.carrier = carriers.Code
WHERE carriers.Description = 'United Air Lines Inc.'
AND airports.airport = 'Newark Intl'
ORDER BY flight
LIMIT 10;
"
# issue query
delays_df <- dbGetQuery(con_air, delay_query)
delays_df
# clean up
dbDisconnect(con_air)
```
When done working with the database, we close the connection to the database with `dbDisconnect(con_air)`, as in the last line of the code above.
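If you want to verify that SQLite actually exploits the indices created earlier, you can prefix a query with `EXPLAIN QUERY PLAN`. A short sketch (re\-opening the connection, and assuming the indices created in the SQLite examples above exist in `air.sqlite`):
```
# re-connect to the database
con_air <- dbConnect(SQLite(), "data/air.sqlite")
# inspect the query plan of the join query defined above
plan <- dbGetQuery(con_air, paste("EXPLAIN QUERY PLAN", delay_query))
plan
# clean up
dbDisconnect(con_air)
```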
8\.5 Cloud solutions for (big) data storage
-------------------------------------------
As outlined in the previous section, RDBMSs are a very practical tool for storing the structured data of an analytics project locally in a database. A local SQLite database can easily be set up and accessed via R, allowing one to write the whole data pipeline – from data gathering to filtering, aggregating, and finally analyzing – in R. In contrast to directly working with CSV files, using SQLite has the advantage of organizing the data access much more efficiently in terms of RAM. Only the final result of a query is really loaded fully into R’s memory.
If mass storage space is too scarce or if RAM is nevertheless not sufficient, even when organizing data access via SQLite, several cloud solutions come to the rescue. Although you could also rent a traditional web server and host a SQL database there, this is usually not worthwhile for a data analytics project. In what follows, we thus look at three important cases of how to store data as part of an analytics project: an *RDBMS in the cloud*, a serverless *data warehouse* solution for large datasets called *Google BigQuery*, and a simple storage service to use as a *data lake* called *AWS S3*. All of these solutions are discussed from a data analytics perspective, and for all of these solutions we will look at how to make use of them from within R.
### 8\.5\.1 Easy\-to\-use RDBMS in the cloud: AWS RDS
Once we have set up the RStudio server on an EC2 instance, we can run the SQLite examples shown above on it. There are no additional steps needed to install SQLite. However, when using RDBMSs in the cloud, we typically have a more sophisticated implementation than SQLite in mind. Particularly, we want to set up an actual RDBMS server running in the cloud to which several clients can connect (e.g., via RStudio Server).
AWS’s Relational Database Service (RDS) provides an easy way to set up and run a SQL database in the cloud. The great advantage for users new to RDBMS/SQL is that you do not have to manually set up a server (e.g., an EC2 instance) and install/configure the SQL server. Instead, you can directly set up a fully functioning relational database in the cloud.
As a first step, open the AWS console and search for/select “RDS” in the search bar. Then, click on “Create database” in the lower part of the landing page.
Figure 8\.2: Create a managed relational database on AWS RDS.
On the next page, select “Easy create”, “MySQL”, and the “Free tier” DB instance size. Further down you will have to set the database instance identifier, the user name, and a password.
Figure 8\.3: Easy creation of an RDS MySQL DB.
Once the database instance is ready, you will see it in the databases overview. Click on the DB identifier (the name of your database shown in the list of databases), and click on modify (button in the upper\-right corner). In the “Connectivity” panel under “Additional configuration”, select *Publicly accessible* (this is necessary to interact with the DB from your local machine), and save the settings. Back on the overview page of your database, under “Connectivity \& security”, click on the link under the VPC security groups, scroll down and select the “Inbound rules” tab. Edit the inbound rule to allow any IPv4 inbound traffic.[45](#fn45)
Figure 8\.4: Allow all IPv4 inbound traffic (set Source to `0.0.0.0/0`).
With the database instance up and running and publicly accessible, we can connect to it directly from R via the `RMySQL` package.
```
# load packages
library(RMySQL)
library(data.table)
# fix vars
# replace this with the Endpoint shown in the AWS RDS console
RDS_ENDPOINT <- "MY-ENDPOINT"
# replace this with the password you have set when initiating the RDS DB on AWS
PW <- "MY-PW"
# connect to DB
con_rds <- dbConnect(RMySQL::MySQL(),
host=RDS_ENDPOINT,
port=3306,
username="admin",
password=PW)
# create a new database on the MySQL RDS instance
dbSendQuery(con_rds, "CREATE DATABASE air")
# disconnect and re-connect directly to the new DB
dbDisconnect(con_rds)
con_rds <- dbConnect(RMySQL::MySQL(),
host=RDS_ENDPOINT,
port=3306,
username="admin",
dbname="air",
password=PW)
```
`RMySQL` and `RSQLite` both build on the `DBI` package, which generalizes how we can interact with SQL\-type databases via R. This makes it straightforward to apply what we have learned so far by interacting with our local SQLite database to interactions with other databases. As soon as the connection to the new database is established, we can essentially use the same R functions as before to create new tables and import data.
```
# import data into current R session
flights <- fread("data/flights.csv")
airports <- fread("data/airports.csv")
carriers <- fread("data/carriers.csv")
# add tables to database
dbWriteTable(con_rds, "flights", flights)
dbWriteTable(con_rds, "airports", airports)
dbWriteTable(con_rds, "carriers", carriers)
```
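A quick way to confirm that the three tables have arrived in the remote database is to list them via the `DBI` interface (using the connection created above):
```
# list the tables in the remote MySQL database
dbListTables(con_rds)
```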
Finally, we can query our RDS MySQL database on AWS.
```
# define query
delay_query <-
"SELECT
year,
month,
day,
dep_delay,
flight
FROM (flights INNER JOIN airports ON flights.origin=airports.iata)
INNER JOIN carriers ON flights.carrier = carriers.Code
WHERE carriers.Description = 'United Air Lines Inc.'
AND airports.airport = 'Newark Intl'
ORDER BY flight
LIMIT 10;
"
# issue query
delays_df <- dbGetQuery(con_rds, delay_query)
delays_df
# clean up
dbDisconnect(con_rds)
```
8\.6 Column\-based analytics databases
--------------------------------------
As outlined in the discussion of row\-based vs. column\-based databases above, many data analytics tasks focus on few columns but many rows, hence making column\-based databases the better option for large\-scale analytics purposes. [Apache Druid](https://druid.apache.org/) ([Yang et al. 2014](#ref-Druid)) is one such solution that has particular advantages for the data analytics perspective taken in this book. It can easily be run on a local machine (Linux and Mac/OSX) or on a cluster in the cloud, and it easily allows for connections to external data, for example, data stored on Google Cloud Storage. Moreover, it can be interfaced via `RDruid` ([Metamarkets Group Inc. 2023](#ref-RDruid)) to run Druid queries from within R; alternatively, Druid can be queried directly via SQL.
To get started with Apache Druid, navigate to <https://druid.apache.org/>. Under [downloads](https://druid.apache.org/downloads.html) you will find a link to download the latest stable release (at the time of writing this book: 25\.0\.0\). On the Apache Druid landing page, you will also find a link [Quickstart](https://druid.apache.org/docs/latest/tutorials/index.html) with all the details regarding the installation and setup. Importantly, at the time of writing, only Linux and MacOSX are supported (not Windows).
### 8\.6\.1 Installation and start up
On Linux, follow these steps to set up Apache Druid on your machine: open a terminal, download the Druid binary to the location in which you want to work with Druid, and unpack it there.
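The download step can also be done from within R; the sketch below assumes the archive URL for version 25\.0\.0 (the exact URL depends on the release, so check the Apache Druid downloads page):
```
# download and unpack the Druid binary (adjust URL/version as needed)
DRUID_URL <-
 "https://archive.apache.org/dist/druid/25.0.0/apache-druid-25.0.0-bin.tar.gz"
download.file(DRUID_URL, destfile = "apache-druid-25.0.0-bin.tar.gz")
untar("apache-druid-25.0.0-bin.tar.gz")
```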
Using Druid in its most basic form is then straightforward. Simply navigate to the unpacked folder and run `./bin/start-micro-quickstart`.
```
# navigate to local copy of druid
cd apache-druid-25.0.0
# start up druid (basic/minimal settings)
./bin/start-micro-quickstart
```
### 8\.6\.2 First steps via Druid’s GUI
Once all Druid services are running, open a new browser window and navigate to `http://localhost:8888`. This will open Druid’s graphical user interface (GUI). The GUI provides easy\-to\-use interfaces to all basic Druid services, ranging from the loading of data to querying via Druid’s SQL. Figure [8\.5](data-collection-and-data-storage.html#fig:druidstart) highlights the GUI buttons mentioned in the instructions below.
Figure 8\.5: Apache Druid GUI starting page. White boxes highlight buttons for the Druid services discussed in the main text (from left to right): the query editor (run Druid SQL queries on any of the loaded data sources directly here); the data load service (use this to import data from local files); and the Datasources console (lists all currently available data sources).
#### 8\.6\.2\.1 Load data into Druid
In a first step, we will import the TLC taxi trips dataset from the locally stored CSV file. To do so, click on *Load data/Batch \- classic*, then click on *Start new batch spec*, and then select *Local disk* and *Connect*. On the right side of the Druid GUI, a menu will open. In the `Base directory` field, enter the path to the local directory in which you have stored the TLC taxi trips CSV file used in the examples above (`../data/`).[46](#fn46) In the `File filter` field, enter `tlc_trips.csv`.[47](#fn47) Finally click on the *Apply* button.
Figure 8\.6: Apache Druid GUI: CSV parse menu for classic batch data ingestion.
The first few lines of the raw data will appear in the Druid console. In the lower\-right corner of the console, click on *Next: Parse data*. Druid will automatically guess the delimiter used in the CSV (following the examples above, this is `,`) and present the first few parsed rows.
If all looks good, click on *Next: Parse time* in the lower\-right corner of the console. Druid is implemented to work particularly fast on time\-series and panel data. To this end, it expects you to define a main time\-variable in your dataset, which then can be used to index and partition your overall dataset to speed up queries for specific time frames. Per default, Druid will suggest using the first column that looks like a time format (in the TLC\-data, this would be column 2, the pick\-up time of a trip, which seems very reasonable for the sake of this example). We move on with a click on *Next: Transform* in the lower right corner. Druid allows you, right at the step of loading data, to add or transform variables/columns. As we do not need to change anything at this point, we continue with *Next: Filter* in the lower\-right corner. At this stage you can filter out rows/observations that you are sure should not be included in any of the queries/analyses performed later via Druid.
For this example, we do not filter out any observations and continue via *Next: Configure schema* in the lower\-right corner. Druid guesses the schema/data types for each column based on sampling the first few observations in the dataset. Notice, for example, how Druid considers `vendor_name` to be a `string` and `Trip_distance` to be a `double` (a 64\-bit floating point number). In most applications of Druid for the data analytics perspective of this book, the guessed data types will be just fine. We will leave the data types as\-is and keep the original column/variable names. You can easily change names of variables/columns by double\-clicking on the corresponding column name, which will open a menu on the right\-hand side of the console. With this, all the main parameters to load the data are defined. What follows has to do with optimizing Druid’s performance.
Once you click on *Next: Partition* in the lower\-right corner, you will have to choose the primary partitioning, which is always based on time (again, this has to do with Druid being optimized to work on large time\-series and panel datasets). Basically, you need to decide whether the data should be organized into chunks per year, month, week, etc. For this example, we will segment the data according to months. To this end, from the drop\-down menu under `Segment granularity`, choose `month`. For the rest of the parameters, we keep the default values. Continue by clicking on *Next: Tune* (we do not change anything here) and then on *Next: Publish*. In the menu that appears, you can choose the name under which the TLC taxi trips data should be listed in the *Datasources* menu on your local Druid installation, once all the data is loaded/processed. Thinking of SQL queries when working with Druid, the `Datasource name` is what you then will use in the `FROM` statement of a Druid SQL query (in analogy to a table name in the case of RDBMSs like SQLite). We keep the suggested name `tlc_trips`. Thus, you can click on *Edit spec* in the lower\-right corner. An editor window will open and display all your load configurations as a JSON file. Only change anything at this step if you really know what you are doing. Finally, click on *Submit* in the lower\-right corner. This will trigger the loading of data into Druid. As in the case of the RDBMS covered above, the data ingestion or data loading process primarily involves indexing and writing data to disk. It does not mean importing data to RAM. Since the CSV file used in this example is rather large, this process can take several minutes on a modern laptop computer.
Once the data ingestion is finished, click on the *Datasources* tab in the top menu bar to verify the ingestion. The `tlc_trips` dataset should now appear in the list of data sources in Druid.
Figure 8\.7: Apache Druid: Datasources console.
#### 8\.6\.2\.2 Query Druid via the GUI SQL console
Once the data is loaded into Druid, we can directly query it via the SQL console in Druid’s GUI. To do this, navigate in Druid to *Query*. To illustrate the strengths of Druid as an analytic database, we run an extensive data aggregation query. Specifically, we count the number of cases (trips) per vendor and split the number of trips per vendor further by payment type.
```
SELECT
vendor_name,
Payment_Type,
COUNT(*) AS Count_trips
FROM tlc_trips
GROUP BY vendor_name, Payment_Type
```
Note that for such simple queries, Druid SQL is essentially identical to the SQL dialects covered in previous chapters and subsections, which makes it rather simple for beginners to start productively engaging with Druid. SQL queries can directly be entered in the query tab; a click on *Run* will send the query to Druid, and the results are shown right below.
Figure 8\.8: Apache Druid query console with Druid\-SQL example: count the number of cases per vendor and payment type.
Counting the number of taxi trips per vendor name and payment type implies using the entire dataset of over 27 million rows (1\.5GB). Nevertheless, Druid needs less than a second and hardly any RAM to compute the results.
### 8\.6\.3 Query Druid from R
Apache provides high\-level interfaces to Druid for several languages common in data science/data analytics. The `RDruid` package provides such a Druid connector for R. The package can be installed from GitHub via the `devtools` package.
```
# install devtools if necessary
if (!require("devtools")) {
install.packages("devtools")}
# install RDruid
devtools::install_github("druid-io/RDruid")
```
The `RDruid` package provides several high\-level functions to issue specific Druid queries; however, the syntax might not be straightforward for beginners, and the package has not been further developed for many years.
Thanks to Druid’s basic architecture as a web application, however, there is a simple alternative to the `RDruid` package. Druid accepts queries via HTTP POST calls (with SQL queries embedded in a JSON file sent in the HTTP body). The data is then returned as a compressed JSON string in the HTTP response to the POST request. We can build on this to implement our own simple `druid()` function to query Druid from R.
```
# create R function to query Druid (locally)
druid <-
function(query){
# dependencies
require(jsonlite)
require(httr)
require(data.table)
# basic POST body
base_query <-
'{
"context": {
"sqlOuterLimit": 1001,
"sqlQueryId": "1"},
"header": true,
"query": "",
"resultFormat": "csv",
"sqlTypesHeader": false,
"typesHeader": false
}'
param_list <- fromJSON(base_query)
# add SQL query
param_list$query <- query
# send query; parse result
resp <- POST("http://localhost:8888/druid/v2/sql",
body = param_list,
encode = "json")
parsed <- fread(content(resp, as = "text", encoding = "UTF-8"))
return(parsed)
}
```
Now we can send queries to our local Druid installation. Importantly, Druid needs to be started up in order to make this work. In the example below we start up Druid from within R via `system("apache-druid-25.0.0/bin/start-micro-quickstart")` (make sure that the working directory is set correctly before running this). Then, we send the same query as in the Druid GUI example from above.
```
# start Druid
system("apache-druid-25.0.0/bin/start-micro-quickstart",
intern = FALSE,
wait = FALSE)
Sys.sleep(30) # wait for Druid to start up
# query tlc data
query <-
'
SELECT
vendor_name,
Payment_Type,
COUNT(*) AS Count_trips
FROM tlc_trips
GROUP BY vendor_name, Payment_Type
'
result <- druid(query)
# inspect result
result
```
```
## vendor_name Payment_Type Count_trips
## 1: CMT Cash 9618583
## 2: CMT Credit 2737111
## 3: CMT Dispute 16774
## 4: CMT No Charge 82142
## 5: DDS CASH 1332901
## 6: DDS CREDIT 320411
## 7: VTS CASH 10264988
## 8: VTS Credit 3099625
```
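Since the data was partitioned by month during ingestion, time\-based aggregations are where Druid shines. As a final sketch (in Druid SQL, `__time` refers to the primary timestamp column chosen during ingestion, and `TIME_FLOOR()` truncates it to a given period; this assumes Druid is still running and the `druid()` function from above is defined):
```
# count trips per month, using the primary time dimension
query_monthly <-
  "
  SELECT
     TIME_FLOOR(__time, 'P1M') AS trip_month,
     COUNT(*) AS Count_trips
  FROM tlc_trips
  GROUP BY 1
  ORDER BY 1
  "
result_monthly <- druid(query_monthly)
result_monthly
```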
### 8\.6\.1 Installation and start up
On Linux, follow these steps to set up Apache Druid on your machine. First, open a terminal and download the Druid binary to the location in which you want to work with Druid. First, we download and unpack the current Apache Druid version via the terminal.
Using Druid in its most basic form is then straightforward. Simply navigate to the unpacked folder and run `./bin/start-micro-quickstart`.
```
# navigate to local copy of druid
cd apache-druid-25.0.0
# start up druid (basic/minimal settings)
./bin/start-micro-quickstart
```
### 8\.6\.2 First steps via Druid’s GUI
Once all Druid services are running, open a new browser window and navigate to `http://localhost:8888`. This will open Druid’s graphical user interface (GUI). The GUI provides easy\-to\-use interfaces to all basic Druid services, ranging from the loading of data to querying via Druid’s SQL. Figure [8\.5](data-collection-and-data-storage.html#fig:druidstart) highlights the GUI buttons mentioned in the instructions below.
Figure 8\.5: Apache Druid GUI starting page. White boxes highlight buttons for the Druid services discussed in the main text (from left to right): the query editor (run Druid SQL queries on any of the loaded data sources directly here); the data load service (use this to import data from local files); and the Datasources console (lists all currently available data sources).
#### 8\.6\.2\.1 Load data into Druid
In a first step, we will import the TLC taxi trips dataset from the locally stored CSV file. To do so, click on *Load data/Batch \- classic*, then click on *Start new batch spec*, and then select *Local disk* and *Connect*. On the right side of the Druid GUI, a menu will open. In the `Base directory` field, enter the path to the local directory in which you have stored the TLC taxi trips CSV file used in the examples above (`../data/`).[46](#fn46) In the `File filter` field, enter `tlc_trips.csv`.[47](#fn47) Finally click on the *Apply* button.
Figure 8\.6: Apache Druid GUI: CSV parse menu for classic batch data ingestion.
The first few lines of the raw data will appear in the Druid console. In the lower\-right corner of the console, click on *Next: Parse data*. Druid will automatically guess the delimiter used in the CSV (following the examples above, this is `,`) and present the first few parsed rows.
If all looks good, click on *Next: Parse time* in the lower\-right corner of the console. Druid is implemented to work particularly fast on time\-series and panel data. To this end, it expects you to define a main time\-variable in your dataset, which then can be used to index and partition your overall dataset to speed up queries for specific time frames. Per default, Druid will suggest using the first column that looks like a time format (in the TLC\-data, this would be column 2, the pick\-up time of a trip, which seems very reasonable for the sake of this example). We move on with a click on *Next: Transform* in the lower right corner. Druid allows you, right at the step of loading data, to add or transform variables/columns. As we do not need to change anything at this point, we continue with *Next: Filter* in the lower\-right corner. At this stage you can filter out rows/observations that you are sure should not be included in any of the queries/analyses performed later via Druid.
For this example, we do not filter out any observations and continue via *Next: Configure schema* in the lower\-right corner. Druid guesses the schema/data types for each column based on sampling the first few observations in the dataset. Notice, for example, how Druid considers `vendor_name` to be a `string` and `Trip_distance` to be a `double` (a 64\-bit floating point number). In most applications of Druid for the data analytics perspective of this book, the guessed data types will be just fine. We will leave the data types as\-is and keep the original column/variable names. You can easily change names of variables/columns by double\-clicking on the corresponding column name, which will open a menu on the right\-hand side of the console. With this, all the main parameters to load the data are defined. What follows has to do with optimizing Druid’s performance.
Once you click on *Next: Partition* in the lower\-right corner, you will have to choose the primary partitioning, which is always based on time (again, this has to do with Druid being optimized to work on large time\-series and panel datasets). Basically, you need to decide whether the data should be organized into chunks per year, month, week, etc. For this example, we will segment the data according to months. To this end, from the drop\-down menu under `Segment granularity`, choose `month`. For the rest of the parameters, we keep the default values. Continue by clicking on *Next: Tune* (we do not change anything here) and then on *Next: Publish*. In the menu that appears, you can choose the name under which the TLC taxi trips data should be listed in the *Datasources* menu on your local Druid installation, once all the data is loaded/processed. Thinking of SQL queries when working with Druid, the `Datasource name` is what you then will use in the `FROM` statement of a Druid SQL query (in analogy to a table name in the case of RDBMSs like SQLite). We keep the suggested name `tlc_trips`. Thus, you can click on *Edit spec* in the lower\-right corner. An editor window will open and display all your load configurations as a JSON file. Only change anything at this step if you really know what you are doing. Finally, click on *Submit* in the lower\-right corner. This will trigger the loading of data into Druid. As in the case of the RDBMS covered above, the data ingestion or data loading process primarily involves indexing and writing data to disk. It does not mean importing data to RAM. Since the CSV file used in this example is rather large, this process can take several minutes on a modern laptop computer.
Once the data ingestion is finished, click on the *Datasources* tab in the top menu bar to verify the ingestion. The `tlc_trips` dataset should now appear in the list of data sources in Druid.
Figure 8\.7: Apache Druid: Datasources console.
#### 8\.6\.2\.2 Query Druid via the GUI SQL console
Once the data is loaded into Druid, we can directly query it via the SQL console in Druid’s GUI. To do this, navigate in Druid to *Query*. To illustrate the strengths of Druid as an analytic database, we run an extensive data aggregation query. Specifically, we count the number of cases (trips) per vendor and split the number of trips per vendor further by payment type.
```
SELECT
vendor_name,
Payment_Type,
COUNT(*) AS Count_trips
FROM tlc_trips
GROUP BY vendor_name, Payment_Type
```
Note that for such simple queries, Druid SQL is essentially identical to the SQL dialects covered in previous chapters and subsections, which makes it rather simple for beginners to start productively engaging with Druid. SQL queries can directly be entered in the query tab; a click on *Run* will send the query to Druid, and the results are shown right below.
Figure 8\.8: Apache Druid query console with Druid\-SQL example: count the number of cases per vendor and payment type.
Counting the number of taxi trips per vendor name and payment type implies using the entire dataset of over 27 million rows (1\.5GB). Nevertheless, Druid needs less than a second and hardly any RAM to compute the results.
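Because we defined a primary time column and monthly segments at ingestion, aggregations and filters along the time dimension are particularly cheap. As a further minimal sketch (assuming the `tlc_trips` datasource from above; `__time` is the name Druid gives to the primary timestamp column defined at ingestion), the following Druid SQL query counts trips per month and vendor:
```
SELECT
  TIME_FLOOR(__time, 'P1M') AS trip_month,
  vendor_name,
  COUNT(*) AS Count_trips
FROM tlc_trips
GROUP BY TIME_FLOOR(__time, 'P1M'), vendor_name
ORDER BY trip_month
```
Restricting such a query to a specific time window (via a `WHERE` clause on `__time`) additionally allows Druid to skip entire monthly segments, which is exactly what the segment granularity chosen above is meant to enable.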
### 8\.6\.3 Query Druid from R
Apache provides high\-level interfaces to Druid for several languages common in data science/data analytics. The `RDruid` package provides such a Druid connector for R. The package can be installed from GitHub via the `devtools` package.
```
# install devtools if necessary
if (!require("devtools")) {
  install.packages("devtools")
}
# install RDruid
devtools::install_github("druid-io/RDruid")
```
The `RDruid` package provides several high\-level functions to issue specific Druid queries; however, the syntax might not be straightforward for beginners, and the package has not been further developed for many years.
Thanks to Druid’s basic architecture as a web application, however, there is a simple alternative to the `RDruid` package. Druid accepts queries via HTTP POST calls (with SQL queries embedded in a JSON file sent in the HTTP body). The data is then returned as a compressed JSON string in the HTTP response to the POST request. We can build on this to implement our own simple `druid()` function to query Druid from R.
```
# create R function to query Druid (locally)
druid <-
function(query){
# dependencies
require(jsonlite)
require(httr)
require(data.table)
# basic POST body
base_query <-
'{
"context": {
"sqlOuterLimit": 1001,
"sqlQueryId": "1"},
"header": true,
"query": "",
"resultFormat": "csv",
"sqlTypesHeader": false,
"typesHeader": false
}'
param_list <- fromJSON(base_query)
# add SQL query
param_list$query <- query
# send query; parse result
resp <- POST("http://localhost:8888/druid/v2/sql",
body = param_list,
encode = "json")
parsed <- fread(content(resp, as = "text", encoding = "UTF-8"))
return(parsed)
}
```
Now we can send queries to our local Druid installation. Importantly, Druid needs to be started up in order to make this work. In the example below we start up Druid from within R via `system("apache-druid-25.0.0/bin/start-micro-quickstart")` (make sure that the working directory is set correctly before running this). Then, we send the same query as in the Druid GUI example from above.
```
# start Druid
system("apache-druid-25.0.0/bin/start-micro-quickstart",
intern = FALSE,
wait = FALSE)
Sys.sleep(30) # wait for Druid to start up
# query tlc data
query <-
'
SELECT
vendor_name,
Payment_Type,
COUNT(*) AS Count_trips
FROM tlc_trips
GROUP BY vendor_name, Payment_Type
'
result <- druid(query)
# inspect result
result
```
```
## vendor_name Payment_Type Count_trips
## 1: CMT Cash 9618583
## 2: CMT Credit 2737111
## 3: CMT Dispute 16774
## 4: CMT No Charge 82142
## 5: DDS CASH 1332901
## 6: DDS CREDIT 320411
## 7: VTS CASH 10264988
## 8: VTS Credit 3099625
```
8\.7 Data warehouses
--------------------
Unlike RDBMSs, the main purpose of data warehouses is usually analytics and not the provision of data for everyday operations. Generally, data warehouses contain well\-organized and well\-structured data, but are not as stringent as RDBMSs when it comes to organizing data in relational tables. Typically, they build on a table\-based logic, but allow for nested structures and more flexible storage approaches. They are designed to contain large amounts of data (via horizontal scaling) and are usually column\-based. From the perspective of Big Data Analytics taken in this book, there are several suitable and easily accessible data warehouse solutions provided in the cloud. In the following example, we will introduce one such solution called *Google BigQuery*.
### 8\.7\.1 Data warehouse for analytics: Google BigQuery example
Google BigQuery is flexible regarding the upload and export of data and can be set up straightforwardly for a data analytics project with hardly any setup costs. The pricing scheme is usage\-based. Unless you store massive amounts of data on it, you will only be charged for the volume of data processed. Moreover, there is a straightforward R\-interface to Google BigQuery called [`bigrquery`](https://bigrquery.r-dbi.org/), which allows for the same R/SQL\-syntax as R’s interfaces to traditional relational databases.
**Get started with `bigrquery`**
To get started with Google BigQuery and `bigrquery` ([Wickham and Bryan 2022](#ref-bigrquery)), go to <https://cloud.google.com/bigquery>. Click on “Try Big Query” (if new to this) or “Go to console” (if used previously). Create a Google Cloud project to use BigQuery with. Note that, as in general for Google Cloud services, you need to have a credit card registered with the project to do this. However, for learning and testing purposes, Google Cloud offers 1TB of free queries per month. All the examples shown below combined will not exceed this free tier. Finally, run `install.packages("bigrquery")` in R.
To set up an R session to interface with BigQuery, you need to indicate which Google BigQuery project you want to use for the billing (the `BILLING` variable in the example below), as well as the Google BigQuery project in which the data you want to query is stored (the `PROJECT` variable below). This distinction is very useful because it easily allows you to query data from a large array of publicly available datasets on BigQuery. In the setup example code below, we use this option in order to access an existing and publicly available dataset (provided in the `bigquery-public-data` project) called `google_analytics_sample`. In fact, this dataset provides the raw Google Analytics data used in the Big\-P example discussed in Chapter 2\.
Finally, all that is left to do is to connect to BigQuery via the already familiar `dbConnect()` function provided in `DBI`.[48](#fn48) When first connecting to and querying BigQuery with your Google Cloud account, a browser window will open, and you will be prompted to grant `bigrquery` access to your account/project. To do so, you will have to be logged in to your Google account. See the **Important details** section on [https://bigrquery.r\-dbi.org/](https://bigrquery.r-dbi.org/) for details on the authentication.
```
# load packages, credentials
library(bigrquery)
library(data.table)
library(DBI)
# fix vars
# the project ID on BigQuery (billing must be enabled)
BILLING <- "bda-examples"
# the project name on BigQuery
PROJECT <- "bigquery-public-data"
DATASET <- "google_analytics_sample"
# connect to DB on BigQuery
con <- dbConnect(
bigrquery::bigquery(),
project = PROJECT,
dataset = DATASET,
billing = BILLING
)
```
**Get familiar with BigQuery**
The basic query syntax is now essentially identical to what we have covered in the RDBMS examples above.[49](#fn49) In this first query, we count the number of times a Google merchandise shop visit originates from a given web domain on August 1, 2017 (hence the query to table `ga_sessions_20170801`). Note the way we refer to the specific table (in the `FROM` statement of the query below): `bigquery-public-data` is the pointer to the BigQuery project, `google_analytics_sample` is the name of the data warehouse, and `ga_sessions_20170801` is the name of the specific table we want to query data from. Finally, note the argument `page_size=15000` as part of the familiar `dbGetQuery()` function. This ensures that `bigrquery` does not exceed the limit of volume per second for downloads via the Google BigQuery API (on which `bigrquery` builds).
```
# run query
query <-
"
SELECT DISTINCT trafficSource.source AS origin,
COUNT(trafficSource.source) AS no_occ
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801`
GROUP BY trafficSource.source
ORDER BY no_occ DESC;
"
ga <- as.data.table(dbGetQuery(con, query, page_size=15000))
head(ga)
```
Note the output displayed in the console: `bigrquery` indicates how much data volume was processed as part of the query, which determines what will be charged to your billing project.
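Since the connection is a regular `DBI` connection, generic `DBI` helpers can also be used to explore the dataset. The following minimal sketch lists the tables contained in the connected `google_analytics_sample` dataset, which illustrates how the data is split into one table per day (more on this partitioning below):
```
# list the tables (daily partitions) available in the connected dataset
ga_tables <- dbListTables(con)
head(ga_tables)
```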
**Upload data to BigQuery**
Storing your entire raw dataset on BigQuery is straightforward with `bigrquery`. In the following simple example, we upload the previously gathered and locally stored TLC taxi trips data. To do so, we first create and connect to a new dataset on BigQuery. To keep things simple, we initialize the new dataset in the same project used for the billing.
```
# name of the dataset to be created
DATASET <- "tlc"
# connect and initialize a new dataset
con <- dbConnect(
bigrquery::bigquery(),
project = BILLING,
billing = BILLING,
dataset = DATASET
)
```
In a first step, we create the dataset to which we then can add the table.
```
tlc_ds <- bq_dataset(BILLING, DATASET)
bq_dataset_create(tlc_ds)
```
We then load the TLC dataset into R via `fread()` and upload it as a new table to your project/dataset on BigQuery via `bigrquery`. For the sake of the example, we only upload the first 10,000 rows.
```
# read data from csv
tlc <- fread("data/tlc_trips.csv.gz", nrows = 10000)
# write data to a new table
dbWriteTable(con, name = "tlc_trips", value = tlc)
```
Alternatively, you can easily upload data via the Google BigQuery console in the browser. Go to <https://console.cloud.google.com/bigquery>, select (or create) the project you want to upload data to, then in the *Explorer* section click on *\+ ADD DATA*, and select the file you want to upload. You can either upload the data from disk, from Google Cloud Storage, or from a third\-party connection. Uploading the data into BigQuery via Google Cloud Storage is particularly useful for large datasets.
Finally, we can test the newly created dataset/table with the following query:
```
test_query <-
"
SELECT *
FROM tlc.tlc_trips
LIMIT 10
"
test <- dbGetQuery(con, test_query)
```
**Tutorial: Retrieve and prepare Google Analytics data**
The following tutorial illustrates how the raw data for the Big\-P example in Chapter 2 was collected and prepared via Google BigQuery and R. Before we get started, note an important aspect of a data warehouse solution like BigQuery in contrast to common applications of RDBMSs. As data warehouses are used in a more flexible way than relational databases, it is not uncommon to store data files/tables containing the same variables separately in various tables, for example to store one table per day or year of a panel dataset. On Google BigQuery, this partitioning of datasets into several components can additionally make sense for cost reasons. Suppose you only want to compute summary statistics for certain variables over a given time frame. If all observations of a large dataset are stored in one standard BigQuery table, such a query results in processing GBs or TBs of data, as the observations from the corresponding time frame need to be filtered out of the entire dataset. Partitioning the data into several subsets helps avoid this, as BigQuery has several features that allow the definition of SQL queries to be run on partitioned data. The publicly available Google Analytics dataset is organized in such a partitioned way. The data is stored in several tables (one for each day of the observation period), whereby the last few characters of the table name contain the date of the corresponding observation day (such as the one used in the example above: `ga_sessions_20170801`). If we want to combine data from several of those tables, we can use the wildcard character (`*`) to indicate that BigQuery should consider all tables matching the table name up to the `*`: `FROM bigquery-public-data.google_analytics_sample.ga_sessions_*`.
We proceed by first connecting the R session with Google BigQuery.
```
# fix vars
# the project ID on BigQuery (billing must be enabled)
BILLING <- "YOUR-BILLING-PROJECT-ID"
# the project name on BigQuery
PROJECT <- "bigquery-public-data"
DATASET <- "google_analytics_sample"
# connect to DB on BigQuery
con <- dbConnect(
bigrquery::bigquery(),
project = PROJECT,
dataset = DATASET,
billing = BILLING
)
```
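Before fetching the full observation period, it can be instructive to see how the `_TABLE_SUFFIX` filter limits the amount of data that is scanned (and thus billed). The following sketch, for example, only touches the daily tables of August 2016 (a hypothetical warm\-up query; the syntax is the same as in the full query below):
```
# example: restrict the scan to the daily tables of August 2016
query_aug16 <-
"
SELECT COUNT(*) AS n_sessions
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE _TABLE_SUFFIX BETWEEN '20160801' AND '20160831';
"
dbGetQuery(con, query_aug16)
```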
The query combines all Google Analytics data recorded from the beginning of 2016 to the end of 2017 via `WHERE _TABLE_SUFFIX BETWEEN '20160101' AND '20171231'`. This gives us all the raw data used in the Big\-P analysis shown in Chapter 2\.
```
# run query
query <-
"
SELECT
totals.visits,
totals.transactions,
trafficSource.source,
device.browser,
device.isMobile,
geoNetwork.city,
geoNetwork.country,
channelGrouping
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE _TABLE_SUFFIX BETWEEN '20160101' AND '20171231';
"
ga <- as.data.table(dbGetQuery(con, query, page_size=15000))
```
Finally, we use `data.table` and basic R to prepare the final analytic dataset and write it to disk.
```
# further cleaning and coding via data.table and basic R
ga$transactions[is.na(ga$transactions)] <- 0
ga <- ga[ga$city!="not available in demo dataset",]
ga$purchase <- as.integer(0<ga$transactions)
ga$transactions <- NULL
ga_p <- ga[purchase==1]
ga_rest <- ga[purchase==0][sample(1:nrow(ga[purchase==0]), 45000)]
ga <- rbindlist(list(ga_p, ga_rest))
potential_sources <- table(ga$source)
potential_sources <- names(potential_sources[1<potential_sources])
ga <- ga[ga$source %in% potential_sources,]
# store dataset on local hard disk
fwrite(ga, file="data/ga.csv")
# clean up
dbDisconnect(con)
```
Note how we combine BigQuery as our data warehouse with basic R for data preparation. Solutions like BigQuery are particularly useful for this kind of approach as part of an analytics project: Large operations such as the selection of columns/variables from large\-scale data sources are handled within the warehouse in the cloud, and the refinement/cleaning steps can then be implemented locally on a much smaller subset.[50](#fn50)
Note that the wildcard character (`*`) in the query is used to fetch data from several partitions of the overall dataset.
8\.8 Data lakes and simple storage service
------------------------------------------
Broadly speaking, a data lake is where all your data resides (these days, this is typically somewhere in the cloud). The data is simply stored in whatever file format and, in simple terms, organized in folders and sub\-folders. In the same data lake you might thus store CSV files, SQL database dumps, log files, image files, raw text, etc. In addition, you typically have many options to define access rights to files, including the option to easily make them publicly available for download. For a simple data analytics project in the context of economic research or business analytics, the concept of a data lake in the cloud is a useful tool to store all project\-related raw data files. On the one hand, you avoid the trouble of occupying gigabytes or terabytes of your local hard disk with files that are relevant but only rarely imported/worked with. On the other hand, you can properly organize all the raw data for reproducibility purposes and easily share the files with colleagues (and eventually the public). For example, you can use one main folder (one “bucket”) for an entire analytics project, store all the raw data in one sub\-folder (for reproduction purposes), and store all the final analytic datasets in another sub\-folder for replication purposes and more frequent access as well as sharing across a team of co\-workers.
There are several types of cloud\-based data lake solutions available, many of which are primarily focused on corporate data storage and provide a variety of services (for example, AWS Lake Formation or Azure Data Lake) that might go well beyond the data analytics perspective taken in this book. However, most of these solutions ultimately build on a so\-called simple storage service such as AWS S3 or Google Cloud Storage, which forms the core of the lake: the place where the data is actually stored and accessed. In the following, we will look at how to use such a simple storage service (AWS S3\) as a data lake in simple analytics projects.[51](#fn51)
Finally, we will look at an interesting approach that combines the concept of a data lake with that of a data warehouse: we briefly look at how some analytics tools (specifically, Amazon Athena) can be used to directly query/analyze the data stored in the simple storage service.
### 8\.8\.1 AWS S3 with R: First steps
For the following first steps with AWS S3 and R, you will need an AWS account (the same as above for EC2\) and IAM credentials from your AWS account with the right to access S3\.[52](#fn52) Finally, you will have to install the `aws.s3` package in R in order to access S3 via R: `install.packages("aws.s3")`.
To initiate an R session in which you connect to S3, `aws.s3` ([Leeper 2020](#ref-aws.s3)) must be loaded and the following environment variables must be set:
* `AWS_ACCESS_KEY_ID`: your access key ID (of the keypair with rights to use S3\)
* `AWS_SECRET_KEY`: your secret access key (of the keypair with rights to use S3\)
* `REGION`: the region in which your S3 buckets are/will be located (e.g., `"eu-central-1"`)
```
# load packages
library(aws.s3)
# set environment variables with your AWS S3 credentials
Sys.setenv("AWS_ACCESS_KEY_ID" = AWS_ACCESS_KEY_ID,
"AWS_SECRET_ACCESS_KEY" = AWS_SECRET_KEY,
"AWS_DEFAULT_REGION" = REGION)
```
In a first step, we create a project bucket (the main repository for our project) to store all the data of our analytics project. We then add two sub\-folders to this bucket: `raw_data` (for the original raw data files, for reproduction purposes) and `analytic_data` (for the cleaned/prepared datasets underlying the analyses in the project).[53](#fn53)
```
# fix variable for bucket name
BUCKET <- "tlc-trips"
# create project bucket
put_bucket(BUCKET)
# create folders
put_folder("raw_data", BUCKET)
put_folder("analytic_data", BUCKET)
```
### 8\.8\.2 Uploading data to S3
Now we can start uploading the data to the bucket (and the sub\-folder). For example, to remain within the context of the TLC taxi trips data, we upload the original Parquet files directly to the bucket and the prepared CSV file to `analytic_data`. For large files (larger than 100MB) it is recommended to use the multipart option (upload of file in several parts; `multipart=TRUE`).
```
# upload to bucket
# final analytic dataset
put_object(
file = "data/tlc_trips.csv", # the file you want to upload
object = "analytic_data/tlc_trips.csv", # name of the file in the bucket
bucket = BUCKET,
multipart = TRUE
)
# upload raw data
file_paths <- list.files("data/tlc_trips/raw_data", full.names = TRUE)
lapply(file_paths,
put_object,
bucket=BUCKET,
multipart=TRUE)
```
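To verify the upload (and, more generally, to keep an overview of what is stored in the project bucket), you can list the bucket’s contents directly from R. A minimal sketch using `aws.s3`’s `get_bucket_df()`, which returns the object listing as a data frame:
```
# list all objects in the project bucket
bucket_contents <- get_bucket_df(BUCKET)
# inspect object keys and sizes (in bytes)
bucket_contents[, c("Key", "Size")]
```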
### 8\.8\.3 More than just simple storage: S3 \+ Amazon Athena
There are several implementations of interfaces with Amazon Athena in R. Here, we will rely on `AWR.Athena` ([Fultz and Daróczi 2019](#ref-AWR.Athena)) (run `install.packages("AWR.Athena")`), which allows interacting with Amazon Athena via the familiar `DBI` package ([R Special Interest Group on Databases (R\-SIG\-DB), Wickham, and Müller 2022](#ref-DBI)).
```
# SET UP -------------------------
# load packages
library(DBI)
library(aws.s3)
# aws credentials with Athena and S3 rights and region
AWS_ACCESS_KEY_ID <- "YOUR_KEY_ID"
AWS_ACCESS_KEY <- "YOUR_KEY"
REGION <- "eu-central-1"
```
```
# establish AWS connection
Sys.setenv("AWS_ACCESS_KEY_ID" = AWS_ACCESS_KEY_ID,
"AWS_SECRET_ACCESS_KEY" = AWS_ACCESS_KEY,
"AWS_DEFAULT_REGION" = REGION)
```
Next, we create a bucket in which Athena can store its query output (the staging directory used in the connection below).
```
OUTPUT_BUCKET <- "bda-athena"
put_bucket(OUTPUT_BUCKET, region="us-east-1")
```
Now we can connect to Amazon Athena to query data from files in S3 via the `RJDBC` package ([Urbanek 2022](#ref-RJDBC)).
```
# load packages
library(RJDBC)
library(DBI)
# download Athena JDBC driver
URL <- "https://s3.amazonaws.com/athena-downloads/drivers/JDBC/"
VERSION <- "AthenaJDBC_1.1.0/AthenaJDBC41-1.1.0.jar"
DRV_FILE <- "AthenaJDBC41-1.1.0.jar"
download.file(paste0(URL, VERSION), destfile = DRV_FILE)
# connect to JDBC
athena <- JDBC(driverClass = "com.amazonaws.athena.jdbc.AthenaDriver",
DRV_FILE, identifier.quote = "'")
# connect to Athena
con <- dbConnect(athena, "jdbc:awsathena://athena.us-east-1.amazonaws.com:443/",
s3_staging_dir = "s3://bda-athena", user = AWS_ACCESS_KEY_ID,
password = AWS_ACCESS_KEY)
```
In order to query data stored in S3 via Amazon Athena, we need to create an *external table* in Athena, which will be based on data stored in S3\.
```
query_create_table <-
"
CREATE EXTERNAL TABLE default.trips (
`vendor_name` string,
`Trip_Pickup_DateTime` string,
`Trip_Dropoff_DateTime` string,
`Passenger_Count` int,
`Trip_Distance` double,
`Start_Lon` double,
`Start_Lat` double,
`Rate_Code` string,
`store_and_forward` string,
`End_Lon` double,
`End_Lat` double,
`Payment_Type` string,
`Fare_Amt` double,
`surcharge` double,
`mta_tax` string,
`Tip_Amt` double,
`Tolls_Amt` double,
`Total_Amt` double
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 's3://tlc-trips/analytic_data/'
"
dbSendQuery(con, query_create_table)
```
Run a test query to verify the table.
```
test_query <-
"
SELECT *
FROM default.trips
LIMIT 10
"
test <- dbGetQuery(con, test_query)
dim(test)
```
```
## [1] 10 18
```
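With the external table in place, we can run the same kind of aggregation on the S3\-hosted CSV file as in the Druid example above. A sketch, re\-using the column names defined in the `CREATE EXTERNAL TABLE` statement:
```
# count the number of trips per vendor and payment type directly on S3
agg_query <-
"
SELECT vendor_name,
Payment_Type,
COUNT(*) AS Count_trips
FROM default.trips
GROUP BY vendor_name, Payment_Type
"
agg <- dbGetQuery(con, agg_query)
agg
```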
Finally, close the connection.
```
dbDisconnect(con)
```
```
## [1] TRUE
```
8\.9 Wrapping up
----------------
* It is good practice to set up the entire high\-level *pipeline* in the same language (here R). This substantially facilitates your workflow and makes your overall pipeline easier to maintain. Importantly, as illustrated in the sections above, this practice does not mean that all of the underlying data processing is actually done in R. We simply use R as the highest\-level layer and call a range of services under the hood to handle each of the pipeline components as efficiently as possible.
* *Apache Arrow* allows you to combine and correct raw data without exceeding RAM; in addition it facilitates working with newer (big) data formats for columnar data storage systems (like *Apache Parquet*).
* *RDBMSs* such as *SQLite* or *MySQL* and analytics databases such as *Druid* help you store and organize clean/structured data for analytics purposes locally or in the cloud.
* *RDBMSs* like SQLite are *row\-based* (changing a value means changing a row), while modern analytics databases are usually *column*\-based (changing a value means modifying one column).
* Row\-based databases are recommended when your analytics workflow includes a lot of tables, table joins, and frequent filtering for specific observations with variables from several tables.
* Column\-based databases are recommended for analytics workflows involving less frequent but large\-scale data aggregation tasks.
* *Data warehouse* solutions like *Google BigQuery* are useful to store and query large (semi\-)structured datasets but are more flexible regarding hierarchical data and file formats than traditional RDBMSs.
* *Data lakes* and simple storage services are the all\-purpose tools to store vast amounts of data in any format in the cloud. Typically, solutions like *AWS S3* are a great option to store all of the raw data related to a data analytics project.
Chapter 9 Big Data Cleaning and Transformation
==============================================
Before the observations and variables of interest can be filtered, selected, aggregated, and further analyzed, data cleaning and transformation typically have to be run on large volumes of raw data. Typical data cleaning tasks involve the following (a short illustrative sketch follows the list):
* Normalization/standardization (across entities, categories, observation periods).
* Coding of additional variables (indicators, strings to categorical, etc.).
* Removing/adding covariates.
* Merging/joining datasets.
* Properly defining data types for each variable.
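The following sketch illustrates a few of these steps with `data.table` on a small toy table (the column names and values are made up for the example):
```
# purely illustrative cleaning steps on a toy dataset
library(data.table)
dt <- data.table(id = 1:4,
                 price = c("12.5", "9.0", "NA", "30.1"),
                 group = c("a", "b", "a", "b"))
# properly define the data type of a variable
dt[, price := as.numeric(price)]
# code an additional indicator variable
dt[, expensive := as.integer(10 < price)]
# standardize a variable within groups
dt[, price_std := (price - mean(price, na.rm = TRUE)) / sd(price, na.rm = TRUE),
   by = group]
dt
```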
All of these steps are very common tasks when working with data for analytics purposes, independent of the size of the dataset. However, as most of the techniques and software developed for such tasks are meant to process data in memory, performing these tasks on large datasets can be challenging. Data cleaning workflows you are perfectly familiar with might slow down substantially or crash due to a lack of memory (RAM), particularly if the data preparation step involves merging/joining two datasets. Other potential bottlenecks are the parsing of large files (CPU) or intensive reading from and writing to the hard disk (mass storage).
In practice, the most critical bottleneck of common data preparation tasks is often a lack of RAM. In the following, we thus explore two strategies that broadly build on the idea of *virtual memory* (using parts of the hard disk as RAM) and/or *lazy evaluation* (only loading/processing the part of a dataset really required).
9\.1 Out\-of\-memory strategies and lazy evaluation: Practical basics
---------------------------------------------------------------------
Virtual memory is, in simple terms, an approach to combining RAM and mass storage in order to cope with a lack of RAM. Modern operating systems come with a virtual memory manager that automatically handles the swapping between RAM and the hard disk when running processes that use up too much RAM. However, a virtual memory manager is not specifically developed to perform this task in the context of data analysis. Several strategies have thus been developed to build on the basic idea of *virtual memory* in the context of data analysis tasks.
* *Chunked data files on disk*: The data analytics software ‘partitions’ the dataset, and maps and stores the chunks of raw data on disk. What is actually ‘read’ into RAM when importing the data file with this approach is the mapping to the partitions of the actual dataset (the data structure) and some metadata describing the dataset. In R, this approach is implemented in the `ff` package ([Adler et al. 2022](#ref-ff)) and several packages building on `ff`. In this approach, the usage of disk space and the linking between RAM and files on disk is very explicit (and clearly visible to the user).
* *Memory mapped files and shared memory*: The data analytics software uses segments of virtual memory for the dataset and allows different programs/processes to access it in the same memory segment. Thus, virtual memory is explicitly allocated for one or several specific data analytics tasks. In R, this approach is notably implemented in the `bigmemory` package ([Kane, Emerson, and Weston 2013](#ref-bigmemory)) and several packages building on `bigmemory`.
A conceptually related but differently focused approach is the *lazy evaluation* implemented in Apache Arrow and the corresponding `arrow` package ([Richardson et al. 2022](#ref-richardson_etal2022)). While Apache Arrow is basically a platform for in\-memory columnar data, it is optimized for processing large amounts of data and working with datasets that actually do not fit into memory. This works by not evaluating instructions on what to do with a dataset step by step on the spot, but all together at the point of actually loading the data into R. That is, we can connect to a dataset via `arrow`, see its variables, etc., and give instructions on which observations to filter out and which columns to select, all before we read the dataset into RAM. In comparison to the strategies outlined above, this approach is usually much faster but might still run into memory limits.
In the following subsections we briefly look at how to set up an R session for data preparation purposes with any of these approaches (`ff`, `bigmemory`, `arrow`) and look at some of the conceptual basics behind the approaches.
### 9\.1\.1 Chunking data with the `ff` package
We first install and load the `ff` and `ffbase` ([de Jonge, Wijffels, and van der Laan 2023](#ref-ffbase)) packages, as well as the `pryr` package. We use the familiar `flights.csv` dataset.[54](#fn54) For the sake of the example, we only use a fraction of the original dataset.[55](#fn55) On disk, the dataset is about 30MB:
```
fs::file_size("data/flights.csv")
```
```
## 29.5M
```
However, loading an entire dataset of several GB would also work just fine with the `ff` approach.
When importing data via the `ff` package, we first have to set up a directory where `ff` can store the partitioned dataset (recall that this is explicitly/visibly done on disk). We call this new directory `ff_files`.
```
# SET UP --------------
# install.packages(c("ff", "ffbase"))
# you might have to install the ffbase package directly from GitHub:
# devtools::install_github("edwindj/ffbase", subdir="pkg")
# load packages
library(ff)
library(ffbase)
library(data.table) # for comparison
# create directory for ff chunks, and assign directory to ff
system("mkdir ff_files")
options(fftempdir = "ff_files")
```
Now we can read in the data with `read.table.ffdf`. In order to better understand the underlying concept, we also import the data into a common `data.table` object via `fread()` and then look at the size of the objects resulting from the two ‘import’ approaches in the R environment with `object.size()`.
```
# usual in-memory csv import
flights_dt <- fread("data/flights.csv")
# out-of-memory approach
flights <-
read.table.ffdf(file="data/flights.csv",
sep=",",
VERBOSE=TRUE,
header=TRUE,
next.rows=100000,
colClasses=NA)
```
```
## read.table.ffdf 1..100000 (100000) csv-read=0.609sec ffdf-write=0.065sec
## read.table.ffdf 100001..200000 (100000) csv-read=0.479sec ffdf-write=0.044sec
## read.table.ffdf 200001..300000 (100000) csv-read=0.446sec ffdf-write=0.046sec
## read.table.ffdf 300001..336776 (36776) csv-read=0.184sec ffdf-write=0.04sec
## csv-read=1.718sec ffdf-write=0.195sec TOTAL=1.913sec
```
```
# compare object sizes
object.size(flights) # out-of-memory approach
```
```
## 949976 bytes
```
```
object.size(flights_dt) # common data.table
```
```
## 32569024 bytes
```
Note that there are two substantial differences compared to what we have previously seen when using `fread()`: on the one hand, it takes much longer to import the CSV into the ff\_files structure; on the other hand, the RAM allocated to the resulting object is much smaller. This is exactly what we would expect, keeping in mind what `read.table.ffdf()` does in comparison to what `fread()` does. Now we can actually have a look at the data chunks created by `ff`.
```
# show the files in the directory keeping the chunks
head(list.files("ff_files"))
```
```
## [1] "ffdf42b781703dcfe.ff" "ffdf42b781d18cdf9.ff"
## [3] "ffdf42b781d5105fb.ff" "ffdf42b781d6ccc74.ff"
## [5] "ffdf42b782494a6a1.ff" "ffdf42b782df9670d.ff"
```
### 9\.1\.2 Memory mapping with `bigmemory`
The `bigmemory` package handles data in matrices and therefore only accepts data values of identical data type. Before importing data via the `bigmemory` package, we thus have to ensure that all variables in the raw data can be imported in a common type.
```
# SET UP ----------------
# load packages
library(bigmemory)
library(biganalytics)
# import the data
flights <- read.big.matrix("data/flights.csv",
type="integer",
header=TRUE,
backingfile="flights.bin",
descriptorfile="flights.desc")
```
Note that, similar to the `ff` example, `read.big.matrix()` creates a local file `flights.bin` on disk that is linked to the `flights` object in RAM. From looking at the imported file, we see that various variable values have been discarded. This is because we have forced all variables to be of type `"integer"` when importing the dataset.
```
object.size(flights)
```
```
## 696 bytes
```
```
str(flights)
```
```
## Formal class 'big.matrix' [package "bigmemory"] with 1 slot
## ..@ address:<externalptr>
```
Again, the object representing the dataset in R does not contain the actual data (it does not even take up a KB of memory).
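Since the backing file (`flights.bin`) and the descriptor file (`flights.desc`) persist on disk, the matrix can be re\-attached in a later R session without parsing the CSV again, and summary statistics can be computed directly on the file\-backed matrix. A minimal sketch, assuming the files created above (`attach.big.matrix()` comes with `bigmemory`, `colmean()` with `biganalytics`):
```
# re-attach the file-backed matrix via its descriptor file
library(bigmemory)
library(biganalytics)
flights_attached <- attach.big.matrix("flights.desc")
# compute a column mean on the file-backed matrix
# (column 6 corresponds to dep_delay in flights.csv)
colmean(flights_attached, cols = 6, na.rm = TRUE)
```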
### 9\.1\.3 Connecting to Apache Arrow
```
# SET UP ----------------
# load packages
library(arrow)
# import the data
flights <- read_csv_arrow("data/flights.csv",
as_data_frame = FALSE)
```
Note the `as_data_frame=FALSE` in the function call. This instructs Arrow to establish a connection to the file and read some of the data (to understand what is in the file), but not actually import the whole CSV.
```
summary(flights)
```
```
## Length Class Mode
## year 336776 ChunkedArray environment
## month 336776 ChunkedArray environment
## day 336776 ChunkedArray environment
## dep_time 336776 ChunkedArray environment
## sched_dep_time 336776 ChunkedArray environment
## dep_delay 336776 ChunkedArray environment
## arr_time 336776 ChunkedArray environment
## sched_arr_time 336776 ChunkedArray environment
## arr_delay 336776 ChunkedArray environment
## carrier 336776 ChunkedArray environment
## flight 336776 ChunkedArray environment
## tailnum 336776 ChunkedArray environment
## origin 336776 ChunkedArray environment
## dest 336776 ChunkedArray environment
## air_time 336776 ChunkedArray environment
## distance 336776 ChunkedArray environment
## hour 336776 ChunkedArray environment
## minute 336776 ChunkedArray environment
## time_hour 336776 ChunkedArray environment
```
```
object.size(flights)
```
```
## 488 bytes
```
Again, we notice that the `flights` object is much smaller than the actual dataset on disk.
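The point of the lazy evaluation approach becomes clear once we combine the Arrow connection with `dplyr` verbs: filtering and column selection are merely recorded as instructions, and only the (much smaller) result is pulled into RAM when we call `collect()`. A minimal sketch, assuming the `flights` object created above:
```
# load packages
library(dplyr)
# define the query lazily: nothing is read into RAM at this point
delayed_flights <-
  flights %>%
  filter(dep_delay > 60) %>%
  select(carrier, origin, dest, dep_delay)
# only now is the filtered subset materialized as a data frame in RAM
result <- collect(delayed_flights)
dim(result)
```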
9\.2 Big Data preparation tutorial with `ff`
--------------------------------------------
### 9\.2\.1 Set up
The following code and data examples build on Walkowiak ([2016](#ref-walkowiak_2016)), Chapter 3\.[56](#fn56) The setup for our analysis script involves the loading of the `ff` and `ffbase` packages, the initialization of fixed variables to hold the paths to the datasets, and the creation and assignment of a new local directory `ff_files` in which the binary flat file\-partitioned chunks of the original datasets will be stored.
```
## SET UP ------------------------
# create and set directory for ff files
system("mkdir ff_files")
options(fftempdir = "ff_files")
# load packages
library(ff)
library(ffbase)
library(pryr)
# fix vars
FLIGHTS_DATA <- "data/flights_sep_oct15.txt"
AIRLINES_DATA <- "data/airline_id.csv"
```
### 9\.2\.2 Data import
In a first step, we read (or ‘upload’) the data into R. This step involves the creation of the binary chunked files as well as the mapping of these files and the metadata. In comparison to the traditional `read.csv` approach, you will notice two things: on the one hand, the data import takes longer; on the other hand, it uses up much less RAM than with `read.csv`.
```
# DATA IMPORT ------------------
# check memory used
mem_used()
```
```
## 1.79 GB
```
```
# 1. Upload flights_sep_oct15.txt and airline_id.csv files from flat files.
system.time(flights.ff <- read.table.ffdf(file=FLIGHTS_DATA,
sep=",",
VERBOSE=TRUE,
header=TRUE,
next.rows=100000,
colClasses=NA))
```
```
## read.table.ffdf 1..100000 (100000) csv-read=0.564sec ffdf-write=0.095sec
## read.table.ffdf 100001..200000 (100000) csv-read=0.603sec ffdf-write=0.072sec
## read.table.ffdf 200001..300000 (100000) csv-read=0.611sec ffdf-write=0.068sec
## read.table.ffdf 300001..400000 (100000) csv-read=0.625sec ffdf-write=0.08sec
## read.table.ffdf 400001..500000 (100000) csv-read=0.626sec ffdf-write=0.072sec
## read.table.ffdf 500001..600000 (100000) csv-read=0.681sec ffdf-write=0.075sec
## read.table.ffdf 600001..700000 (100000) csv-read=0.638sec ffdf-write=0.069sec
## read.table.ffdf 700001..800000 (100000) csv-read=0.6sec ffdf-write=0.081sec
## read.table.ffdf 800001..900000 (100000) csv-read=0.612sec ffdf-write=0.075sec
## read.table.ffdf 900001..951111 (51111) csv-read=0.329sec ffdf-write=0.047sec
## csv-read=5.889sec ffdf-write=0.734sec TOTAL=6.623sec
```
```
## user system elapsed
## 5.659 0.750 6.626
```
```
system.time(airlines.ff <- read.csv.ffdf(file= AIRLINES_DATA,
VERBOSE=TRUE,
header=TRUE,
next.rows=100000,
colClasses=NA))
```
```
## read.table.ffdf 1..1607 (1607) csv-read=0.005sec ffdf-write=0.004sec
## csv-read=0.005sec ffdf-write=0.004sec TOTAL=0.009sec
```
```
## user system elapsed
## 0.009 0.001 0.010
```
```
# check memory used
mem_used()
```
```
## 1.79 GB
```
For comparison, we import the same files into RAM with base R’s `read.table()` and `read.csv()`:
```
# Using read.table()
system.time(flights.table <- read.table(FLIGHTS_DATA,
sep=",",
header=TRUE))
```
```
## user system elapsed
## 5.164 0.684 5.976
```
```
system.time(airlines.table <- read.csv(AIRLINES_DATA,
header = TRUE))
```
```
## user system elapsed
## 0.002 0.000 0.003
```
```
# check the memory used
mem_used()
```
```
## 1.93 GB
```
### 9\.2\.3 Inspect imported files
A particularly useful aspect of working with the `ff` package and the packages building on it is that many of the simple R functions that work on normal data.frames in RAM also work on `ff_files` objects. Hence, without actually having loaded the entire raw data of a large dataset into RAM, we can quickly get an overview of key characteristics, such as the number of observations and the number of variables.
```
# 2. Inspect the ff_files objects.
## For flights.ff object:
class(flights.ff)
```
```
## [1] "ffdf"
```
```
dim(flights.ff)
```
```
## [1] 951111 28
```
```
## For airlines.ff object:
class(airlines.ff)
```
```
## [1] "ffdf"
```
```
dim(airlines.ff)
```
```
## [1] 1607 2
```
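Other simple inspection functions work on the `ff_files` objects as well. For instance, we can list the variable names or pull a handful of rows into RAM as a small, regular `data.frame` (a brief sketch; only this tiny extract is loaded into memory):
```
# variable names of the chunked dataset
names(flights.ff)
# extracting a few rows loads only this small piece into RAM
flights.ff[1:3, ]
```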
### 9\.2\.4 Data cleaning and transformation
After inspecting the data, we go through several steps of cleaning and transformation, with the goal of then merging the two datasets. That is, we want to create a new dataset that contains detailed flight information but with additional information on the carriers/airlines. First, we want to rename some of the variables.
```
# step 1:
# Rename "Code" variable from airlines.ff
# to "AIRLINE_ID" and "Description" into "AIRLINE_NM".
names(airlines.ff) <- c("AIRLINE_ID", "AIRLINE_NM")
names(airlines.ff)
```
```
## [1] "AIRLINE_ID" "AIRLINE_NM"
```
```
str(airlines.ff[1:20,])
```
```
## 'data.frame': 20 obs. of 2 variables:
## $ AIRLINE_ID: int 19031 19032 19033 19034 19035 19036 19037 19038 19039 19040 ...
## $ AIRLINE_NM: Factor w/ 1607 levels "40-Mile Air: Q5",..: 945 1025 503 721 64 725 1194 99 1395 276 ...
```
Now we can join the two datasets via the unique airline identifier `"AIRLINE_ID"`. Note that these kinds of operations would usually take up substantially more RAM on the spot, if both original datasets were also fully loaded into RAM. As illustrated by the `mem_change()` function, this is not the case here. All that is needed is a small chunk of RAM to keep the metadata and mapping\-information of the new `ff_files` object; all the actual data is cached on the hard disk.
```
# merge of ff_files objects
mem_change(flights.data.ff <- merge.ffdf(flights.ff,
airlines.ff,
by="AIRLINE_ID"))
```
```
## 774 kB
```
```
#The new object is only 551.2 KB in size
class(flights.data.ff)
```
```
## [1] "ffdf"
```
```
dim(flights.data.ff)
```
```
## [1] 951111 29
```
```
names(flights.data.ff)
```
```
## [1] "YEAR" "MONTH"
## [3] "DAY_OF_MONTH" "DAY_OF_WEEK"
## [5] "FL_DATE" "UNIQUE_CARRIER"
## [7] "AIRLINE_ID" "TAIL_NUM"
## [9] "FL_NUM" "ORIGIN_AIRPORT_ID"
## [11] "ORIGIN" "ORIGIN_CITY_NAME"
## [13] "ORIGIN_STATE_NM" "ORIGIN_WAC"
## [15] "DEST_AIRPORT_ID" "DEST"
## [17] "DEST_CITY_NAME" "DEST_STATE_NM"
## [19] "DEST_WAC" "DEP_TIME"
## [21] "DEP_DELAY" "ARR_TIME"
## [23] "ARR_DELAY" "CANCELLED"
## [25] "CANCELLATION_CODE" "DIVERTED"
## [27] "AIR_TIME" "DISTANCE"
## [29] "AIRLINE_NM"
```
### 9\.2\.5 Inspect difference in in\-memory operation
In comparison to the `ff`\-approach, performing the merge in memory needs more resources:
```
##For flights.table:
names(airlines.table) <- c("AIRLINE_ID", "AIRLINE_NM")
names(airlines.table)
```
```
## [1] "AIRLINE_ID" "AIRLINE_NM"
```
```
str(airlines.table[1:20,])
```
```
## 'data.frame': 20 obs. of 2 variables:
## $ AIRLINE_ID: int 19031 19032 19033 19034 19035 19036 19037 19038 19039 19040 ...
## $ AIRLINE_NM: chr "Mackey International Inc.: MAC" "Munz Northern Airlines Inc.: XY" "Cochise Airlines Inc.: COC" "Golden Gate Airlines Inc.: GSA" ...
```
```
# check memory usage of merge in RAM
mem_change(flights.data.table <- merge(flights.table,
airlines.table,
by="AIRLINE_ID"))
```
```
## 161 MB
```
```
#The new object is already 105.7 MB in size
#A rapid spike in RAM use when processing
```
### 9\.2\.6 Subsetting
Now, we want to filter out some observations as well as select only specific variables for a subset of the overall dataset.
```
mem_used()
```
```
## 2.09 GB
```
```
# Subset the ff_files object flights.data.ff:
subs1.ff <-
subset.ffdf(flights.data.ff,
CANCELLED == 1,
select = c(FL_DATE,
AIRLINE_ID,
ORIGIN_CITY_NAME,
ORIGIN_STATE_NM,
DEST_CITY_NAME,
DEST_STATE_NM,
CANCELLATION_CODE))
dim(subs1.ff)
```
```
## [1] 4529 7
```
```
mem_used()
```
```
## 2.09 GB
```
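To get a quick impression of the subset without moving it into RAM, we can, for example, tabulate the cancellation codes with `table.ff()` from `ffbase` (a small sketch):
```
# frequency of cancellation codes among the cancelled flights
table.ff(subs1.ff$CANCELLATION_CODE)
```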
### 9\.2\.7 Save/load/export `ff` files
In order to better organize and easily reload the newly created `ff_files` objects, we can explicitly save them to disk.
```
# Save a newly created ff_files object to a data file:
# (7 files (one for each column) created in the ffdb directory)
save.ffdf(subs1.ff, overwrite = TRUE)
```
If we want to reload a previously saved `ff_files` object, we do not have to go through the chunking of the raw data file again but can very quickly load the data mapping and metadata into RAM in order to further work with the data (stored on disk).
```
# Loading previously saved ff_files files:
rm(subs1.ff)
#gc()
load.ffdf("ffdb")
# check the class and structure of the loaded data
class(subs1.ff)
```
```
## [1] "ffdf"
```
```
dim(subs1.ff)
```
```
## [1] 4529 7
```
```
dimnames(subs1.ff)
```
```
## [[1]]
## NULL
##
## [[2]]
## [1] "FL_DATE" "AIRLINE_ID"
## [3] "ORIGIN_CITY_NAME" "ORIGIN_STATE_NM"
## [5] "DEST_CITY_NAME" "DEST_STATE_NM"
## [7] "CANCELLATION_CODE"
```
If we want to store an `ff_files` dataset in a format more accessible for other users (such as CSV), we can do so as follows. This last step is also quite common in practice. The initial raw dataset is very large; thus we perform all the theoretically very memory\-intensive tasks of preparing the analytic dataset via `ff` and then store the (often much smaller) analytic dataset in a more accessible CSV file in order to later read it into RAM and run more computationally intensive analyses directly in RAM.
```
# Export subs1.ff into CSV and TXT files:
write.csv.ffdf(subs1.ff, "subset1.csv")
```
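Once the (much smaller) analytic dataset has been exported, a later analysis session can read this CSV straight into RAM, for instance with `data.table`’s `fread()`; a minimal sketch:
```
# read the exported analytic dataset back into RAM for in-memory analysis
library(data.table)
subs1 <- fread("subset1.csv")
dim(subs1)
```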
9\.3 Big Data preparation tutorial with `arrow`
-----------------------------------------------
We begin by initializing our R session as in the short `arrow` introduction above.
```
# SET UP ----------------
# load packages
library(arrow)
library(dplyr)
library(pryr) # for profiling
# fix vars
FLIGHTS_DATA <- "data/flights_sep_oct15.txt"
AIRLINES_DATA <- "data/airline_id.csv"
# import the data
flights <- read_csv_arrow(FLIGHTS_DATA,
as_data_frame = FALSE)
airlines <- read_csv_arrow(AIRLINES_DATA,
as_data_frame = FALSE)
```
Note how the data from the CSV files is not actually read into RAM yet. The created objects `flights` and `airlines` are not data frames (yet) and occupy hardly any RAM.
```
class(flights)
```
```
## [1] "Table" "ArrowTabular" "ArrowObject"
## [4] "R6"
```
```
class(airlines)
```
```
## [1] "Table" "ArrowTabular" "ArrowObject"
## [4] "R6"
```
```
object_size(flights)
```
```
## 283.62 kB
```
```
object_size(airlines)
```
```
## 283.62 kB
```
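Although hardly any data has been pulled into R, `arrow` already knows the structure of both files. As a quick sketch, we can inspect the column names and types via the tables’ schemas (`$schema` is a field of Arrow tables):
```
# inspect column names and types without importing the data
flights$schema
names(airlines)
```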
In analogy to the `ff` tutorial above, we go through the same data preparation steps. First, we rename the variables in `airlines` to ensure that the variable names are consistent with the `flights` data frame.
```
# step 1:
# Rename "Code" variable from airlines.ff to "AIRLINE_ID"
# and "Description" into "AIRLINE_NM".
names(airlines) <- c("AIRLINE_ID", "AIRLINE_NM")
names(airlines)
```
```
## [1] "AIRLINE_ID" "AIRLINE_NM"
```
In a second step, the two datasets are merged/joined. The `arrow` package follows the `dplyr` syntax regarding data preparation tasks. That is, we can directly build on `dplyr` functions such as `inner_join()`:
```
# merge the two datasets via Arrow
flights.data.ar <- inner_join(airlines, flights, by="AIRLINE_ID")
object_size(flights.data.ar)
```
```
## 647.74 kB
```
In a last step, we filter the resulting dataset for cancelled flights and select only some of the available variables. As `arrow` works with the `dplyr` back\-end, we can directly use the typical `dplyr` syntax to combine the selection of columns and the filtering of rows.
```
# Subset the arrow table flights.data.ar:
subs1.ar <-
flights.data.ar %>%
filter(CANCELLED == 1) %>%
select(FL_DATE,
AIRLINE_ID,
ORIGIN_CITY_NAME,
ORIGIN_STATE_NM,
DEST_CITY_NAME,
DEST_STATE_NM,
CANCELLATION_CODE)
object_size(subs1.ar)
```
```
## 591.21 kB
```
Again, this operation hardly affected RAM usage by R. Note, though, that in contrast to the `ff`\-approach, Arrow has actually not yet created the new subset `subs1.ar`. In fact, it has not even really imported the data or merged the two datasets. This is the effect of the lazy evaluation approach implemented in `arrow`. To further process the data in `subs1.ar` with other functions (outside of `arrow`), we need to actually trigger the evaluation of all the data preparation steps we have just instructed R to do. This is done via `collect()`.
```
mem_change(subs1.ar.df <- collect(subs1.ar))
```
```
## 2.47 MB
```
```
class(subs1.ar.df)
```
```
## [1] "tbl_df" "tbl" "data.frame"
```
```
object_size(subs1.ar.df)
```
```
## 57.15 kB
```
Note how in this tutorial the final subset is substantially smaller than the initial two datasets. Hence, in this case it is fine to actually load it into RAM as a data frame. However, this is not a necessary part of the workflow. Instead of calling `collect()`, you can trigger the computation of all the data preparation steps via `compute()` and, for example, write the resulting `arrow` table to a CSV file.
```
subs1.ar %>%
compute() %>%
write_csv_arrow(file="data/subs1.ar.csv")
```
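Alternatively, instead of exporting to CSV, the computed result can be stored in the more compact Parquet format, which also preserves the column types. A hedged sketch using `arrow`’s `write_parquet()` (the file name is ours):
```
# store the computed subset as a Parquet file instead of CSV
subs1.ar %>%
  compute() %>%
  write_parquet(sink = "data/subs1.ar.parquet")
```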
9\.4 Wrapping up
----------------
* Typically, the raw/uncleaned data is the critical bottleneck in terms of data volume, particularly because the selection and filtering steps that produce the analytic dataset can only work properly on cleaned data.
* *Out\-of\-memory* strategies are based on the concept of virtual memory and are key to cleaning large amounts of data locally.
* The *`ff` package* provides a high\-level R interface to an out\-of\-memory approach. Most functions in `ff` and the corresponding `ffbase` package come with a syntax very similar to the basic R syntax for data cleaning and manipulation.
* The basic idea behind `ff` is to store the data in chunked format in an easily accessible way on the hard disk and only keep the metadata of a dataset (e.g., variable names) in an R object in RAM while working on the dataset.
* The `arrow` package offers similar functionality based on a slightly different approach called *lazy evaluation* (only evaluate data manipulation/cleaning tasks once the data is pulled into R). Unlike `ff`, `arrow` closely follows the `dplyr` syntax rather than basic R syntax for data cleaning tasks.
| Big Data |
umatter.github.io | https://umatter.github.io/BigData/descriptive-statistics-and-aggregation.html |
Chapter 10 Descriptive Statistics and Aggregation
=================================================
10\.1 Data aggregation: The ‘split\-apply\-combine’ strategy
------------------------------------------------------------
The ‘split\-apply\-combine’ strategy plays an important role in many data analysis tasks, ranging from data preparation to summary statistics and model\-fitting.[57](#fn57) The strategy can be defined as “break up a problem into manageable pieces, operate on each piece independently, and then put all the pieces back together.” ([Wickham 2011, 1](#ref-wickham_2011))
Many R users are familiar with the basic concept of split\-apply\-combine implemented in the `plyr` package intended for normal in\-memory operations (dataset fits into RAM). Here, we explore the options for split\-apply\-combine approaches to large datasets that do not fit into RAM.
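For a small in\-memory dataset, the strategy can be spelled out with base R alone, which clarifies what the approaches for larger datasets below emulate. A minimal illustration using the built\-in `mtcars` data:
```
# split: break the data into one subset per number of cylinders
pieces <- split(mtcars, mtcars$cyl)
# apply: compute the average horsepower within each subset
results <- lapply(pieces, function(x) {
  data.frame(cyl = unique(x$cyl), mean_hp = mean(x$hp))
})
# combine: stack the per-group results into one data frame
do.call(rbind, results)
```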
10\.2 Data aggregation with chunked data files
----------------------------------------------
In this tutorial we explore the world of New York’s famous Yellow Cabs. In a first step, we will focus on the `ff`\-based approach to employ parts of the hard disk as ‘virtual memory’. This means that all of the examples are easily scalable without risking too much memory pressure. Given the size of the entire TLC database (over 200GB), we will only use one million taxi trip records.[58](#fn58)
**Data import**
First, we read the raw taxi trip records into R with the `ff` package.
```
# load packages
library(ff)
library(ffbase)
# set up the ff directory (for data file chunks)
if (!dir.exists("fftaxi")){
system("mkdir fftaxi")
}
options(fftempdir = "fftaxi")
# import the first one million observations
taxi <- read.table.ffdf(file = "data/tlc_trips.csv",
sep = ",",
header = TRUE,
next.rows = 100000,
# colClasses= col_classes,
nrows = 1000000
)
```
Following the data documentation provided by TLC, we give the columns of our dataset more meaningful names and remove the empty columns (some covariates are only collected in later years).
When inspecting the factor variables of the dataset, we notice that some of the values are not standardized/normalized, and the resulting factor levels are, therefore, somewhat ambiguous. We should clean this before getting into data aggregation tasks. Note the `ff`\-specific syntax needed to recode the factor.
```
# inspect the factor levels
levels(taxi$Payment_Type)
```
```
## [1] "Cash" "CASH" "Credit" "CREDIT"
## [5] "Dispute" "No Charge"
```
```
# recode them
levels(taxi$Payment_Type) <- tolower(levels(taxi$Payment_Type))
taxi$Payment_Type <- ff(taxi$Payment_Type,
levels = unique(levels(taxi$Payment_Type)),
ramclass = "factor")
# check result
levels(taxi$Payment_Type)
```
```
## [1] "cash" "credit" "dispute" "no charge"
```
**Aggregation with split\-apply\-combine**
First, we will have a look at whether trips paid with credit card tend to involve lower tip amounts than trips paid in cash. In order to do so, we create a table that shows the average amount of tip paid for each payment\-type category.
In simple words, this means we first split the dataset into subsets, each of which contains all observations belonging to a distinct payment type. Then, we compute the arithmetic mean of the tip\-column of each of these subsets. Finally, we combine all of these results into one table (i.e., the split\-apply\-combine strategy). When working with `ff`, the `ffdfply()` function in combination with the `doBy` package ([Højsgaard and Halekoh 2023](#ref-doBy)) provides a user\-friendly implementation of split\-apply\-combine types of tasks.
```
# load packages
library(doBy)
# split-apply-combine procedure on data file chunks
tip_pcategory <- ffdfdply(taxi,
split = taxi$Payment_Type,
BATCHBYTES = 100000000,
FUN = function(x) {
summaryBy(Tip_Amt~Payment_Type,
data = x,
FUN = mean,
na.rm = TRUE)})
```
Note how `ffdfdply()` describes in its console output, step by step, how the data chunks are processed. Now we can have a look at the resulting summary statistic in the form of a `data.frame`.
```
as.data.frame(tip_pcategory)
```
```
## Payment_Type Tip_Amt.mean
## 1 cash 0.0008162
## 2 credit 2.1619737
## 3 dispute 0.0035075
## 4 no charge 0.0041056
```
The result contradicts our initial hypothesis. However, the comparison is a little flawed. If trips paid by credit card also tend to be longer, the result is not too surprising. We should thus look at the share of tip (or percentage), given the overall amount paid for the trip.
We add an additional variable `percent_tip` and then repeat the aggregation exercise for this variable.
```
# add additional column with the share of tip
taxi$percent_tip <- (taxi$Tip_Amt/taxi$Total_Amt)*100
# recompute the aggregate stats
tip_pcategory <- ffdfdply(taxi,
split = taxi$Payment_Type,
BATCHBYTES = 100000000,
FUN = function(x) {
# note the difference here
summaryBy(percent_tip~Payment_Type,
data = x,
FUN = mean,
na.rm = TRUE)})
# show result as data frame
as.data.frame(tip_pcategory)
```
```
## Payment_Type percent_tip.mean
## 1 cash 0.005978
## 2 credit 16.004173
## 3 dispute 0.045660
## 4 no charge 0.040433
```
**Cross\-tabulation of `ff` vectors**
Also in relative terms, trips paid by credit card tend to be tipped more. However, are there actually many trips paid by credit card? In order to figure this out, we count the number of trips per payment type by applying the `table.ff` function provided in `ffbase`.
```
table.ff(taxi$Payment_Type)
```
```
##
## cash credit dispute no charge
## 781295 215424 536 2745
```
So trips paid in cash are much more frequent than trips paid by credit card. Again using the `table.ff` function, we investigate what factors might be correlated with payment types. First, we have a look at whether payment type is associated with the number of passengers in a trip.
```
# select the subset of observations only containing trips paid by
# credit card or cash
taxi_sub <- subset.ffdf(taxi, Payment_Type=="credit" | Payment_Type == "cash")
taxi_sub$Payment_Type <- ff(taxi_sub$Payment_Type,
levels = c("credit", "cash"),
ramclass = "factor")
# compute the cross tabulation
crosstab <- table.ff(taxi_sub$Passenger_Count,
taxi_sub$Payment_Type
)
# add names to the margins
names(dimnames(crosstab)) <- c("Passenger count", "Payment type")
# show result
crosstab
```
```
## Payment type
## Passenger count credit cash
## 0 2 44
## 1 149990 516828
## 2 32891 133468
## 3 7847 36439
## 4 2909 17901
## 5 20688 73027
## 6 1097 3588
```
From the raw numbers it is hard to see whether there are significant differences between the categories cash and credit. We therefore use a visualization technique called a ‘mosaic plot’ (provided in the `vcd` package; see Meyer, Zeileis, and Hornik ([2023](#ref-vcd)), Meyer, Zeileis, and Hornik ([2006](#ref-strucplot)), and Zeileis, Meyer, and Hornik ([2007](#ref-shading))) to visualize the cross\-tabulation.
```
# install.packages(vcd)
# load package for mosaic plot
library(vcd)
# generate a mosaic plot
mosaic(crosstab, shade = TRUE)
```
The plot suggests that trips involving more than one passenger tend to be paid by cash rather than by credit card.
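Since the question is whether the two payment types really differ systematically, the cross\-tabulation computed above can also be checked with a simple chi\-squared test (a sketch; `crosstab` is the contingency table from the previous step):
```
# test the independence of passenger count and payment type
chisq.test(crosstab)
```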
10\.3 High\-speed in\-memory data aggregation with `arrow`
----------------------------------------------------------
For large datasets that (at least in part) fit into RAM, the `arrow` package again provides an attractive alternative to `ff`.
**Data import**
We use the already familiar `read_csv_arrow()` to import the same first million observations from the taxi trips records.
```
# load packages
library(arrow)
library(dplyr)
# read the CSV file
taxi <- read_csv_arrow("data/tlc_trips.csv",
as_data_frame = FALSE)
```
**Data preparation and ‘split\-apply\-combine’**
We prepare/clean the data as in the `ff`\-approach above.
As `arrow` builds on a `dplyr` back\-end, basic computations can be easily done through the common `dplyr` syntax. Note, however, that not all of the `dplyr` functions are covered in `arrow` (as of the writing of this book).[59](#fn59)
```
# clean the categorical variable; aggregate by group
taxi <-
taxi %>%
mutate(Payment_Type = tolower(Payment_Type))
```
```
taxi_summary <-
taxi %>%
mutate(percent_tip = (Tip_Amt/Total_Amt)*100 ) %>%
group_by(Payment_Type) %>%
summarize(avg_percent_tip = mean(percent_tip)) %>%
collect()
```
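Note that the `dplyr` verbs applied to the `arrow` object are evaluated lazily: they only build up a query, and the actual computation is triggered by `collect()` (as in the chunk above). The following minimal sketch makes this explicit; the object name `tip_query` is our own.
```
# the dplyr verbs only define a query; nothing is computed yet
tip_query <-
  taxi %>%
  mutate(percent_tip = (Tip_Amt/Total_Amt)*100 ) %>%
  group_by(Payment_Type) %>%
  summarize(avg_percent_tip = mean(percent_tip))
# the object is an arrow query, not a data.frame/tibble
class(tip_query)
# only collect() executes the query and pulls the result into RAM
collect(tip_query)
```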
We can also compute a cross\-tabulation in this framework: we first compute the group\-wise counts with `dplyr` verbs and then reshape the result into a cross\-table with `tidyr`’s `pivot_wider()`.
```
library(tidyr)
# compute the frequencies; pull result into R
ct <- taxi %>%
filter(Payment_Type %in% c("credit", "cash")) %>%
group_by(Passenger_Count, Payment_Type) %>%
summarize(n=n())%>%
collect()
# present as cross-tabulation
pivot_wider(data=ct,
names_from="Passenger_Count",
values_from = "n")
```
```
## # A tibble: 2 × 11
## Payment_Type `1` `3` `5` `2` `4` `6`
## <chr> <int> <int> <int> <int> <int> <int>
## 1 cash 1.42e7 972341 1.89e6 3.57e6 473783 96920
## 2 credit 4.34e6 221648 5.63e5 9.23e5 82800 28853
## # ℹ 4 more variables: `0` <int>, `208` <int>,
## # `129` <int>, `113` <int>
```
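The extra columns named `208`, `129`, and `113` in the output above likely stem from implausible passenger counts recorded in the raw data. As a hedged variant of the query (the cut\-off of six passengers is our own assumption), we can filter these out before reshaping.
```
# drop implausible passenger counts before reshaping
ct_clean <- taxi %>%
  filter(Payment_Type %in% c("credit", "cash"),
         Passenger_Count <= 6) %>%
  group_by(Passenger_Count, Payment_Type) %>%
  summarize(n = n()) %>%
  collect()
# present as cross-tabulation
pivot_wider(data = ct_clean,
            names_from = "Passenger_Count",
            values_from = "n")
```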
10\.4 High\-speed in\-memory data aggregation with `data.table`
---------------------------------------------------------------
For large datasets that still fit into RAM, the `data.table` package ([Dowle and Srinivasan 2022](#ref-data.table)) provides very fast and elegant functions to compute aggregate statistics.
**Data import**
We use the already familiar `fread()` to import the same first million observations from the taxi trip records.
```
# load packages
library(data.table)
# import data into RAM (needs around 200MB)
taxi <- fread("data/tlc_trips.csv",
nrows = 1000000)
```
**Data preparation and `data.table` syntax for ‘split\-apply\-combine’**
We prepare/clean the data as in the `ff` approach above.
```
# clean the factor levels
taxi$Payment_Type <- tolower(taxi$Payment_Type)
taxi$Payment_Type <- factor(taxi$Payment_Type,
levels = unique(taxi$Payment_Type))
```
Note the simpler syntax for essentially the same cleaning step, now performed entirely in memory.
**`data.table`\-syntax for ‘split\-apply\-combine’ operations**
With the `[]`\-syntax we usually index/subset `data.frame` objects in R. When working with `data.table`s, much more can be done within this ‘subsetting’ step.[60](#fn60)
For example, we can directly compute on columns.
```
taxi[, mean(Tip_Amt/Total_Amt)]
```
```
## [1] 0.03452
```
Moreover, in the same step, we can ‘split’ the rows *by* specific groups and apply the function to each subset.
```
taxi[, .(percent_tip = mean((Tip_Amt/Total_Amt)*100)), by = Payment_Type]
```
```
## Payment_Type percent_tip
## 1: cash 0.005978
## 2: credit 16.004173
## 3: no charge 0.040433
## 4: dispute 0.045660
```
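The `by` (or `keyby`) argument also accepts several grouping variables at once. As a small sketch extending the example above, the following computes the number of trips and the average tip share per payment type and passenger count in one pass.
```
# group by two variables; .N gives the number of rows per group
taxi[, .(n_trips = .N,
         percent_tip = mean((Tip_Amt/Total_Amt)*100)),
     keyby = .(Payment_Type, Passenger_Count)]
```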
Similarly, we can use `data.table`’s `dcast()` for cross\-tabulation\-like operations.
```
dcast(taxi[Payment_Type %in% c("credit", "cash")],
Passenger_Count~Payment_Type,
fun.aggregate = length,
value.var = "vendor_name")
```
```
## Passenger_Count cash credit
## 1: 0 44 2
## 2: 1 516828 149990
## 3: 2 133468 32891
## 4: 3 36439 7847
## 5: 4 17901 2909
## 6: 5 73027 20688
## 7: 6 3588 1097
```
10\.5 Wrapping up
-----------------
* Similar to the MapReduce idea in the context of distributed systems, the *split\-apply\-combine* approach is key in many Big Data aggregation procedures on normal machines (laptop/desktop computers). The idea is to split the overall data into subsets based on a categorical variable, apply a function (e.g., mean) on each subset, and then combine the results into one object. Thus, the approach allows for parallelization and working on separate data chunks.
* As computing descriptive statistics on various subsets of a large dataset can be very memory\-intensive, it is recommended to use out\-of\-memory strategies, lazy evaluation, or a classical SQL\-database approach for this.
* There are several options available, such as `ffdfdply()` (running on chunked data files), `arrow` in combination with `dplyr`’s `group_by()` and `summarize()`, and `data.table`’s grouped aggregation via `by`.
| Big Data |
umatter.github.io | https://umatter.github.io/BigData/big-data-visualization.html |
Chapter 11 (Big) Data Visualization
===================================
Visualizing certain characteristics and patterns in large datasets is primarily challenging for two reasons. First, depending on the type of plot, plotting raw data consisting of many observations can take a long time (and lead to large figure files). Second, patterns might be harder to recognize due to the sheer amount of data displayed in a plot. Both of these challenges are particularly related to the visualization of raw data for explorative or descriptive purposes. Visualizations of already\-computed aggregations or estimates, in contrast, are typically very similar whether working with large or small datasets.
The following sections thus particularly highlight the issue of generating plots based on raw data with many observations, with the aim of exploring the data in order to discover patterns that can then be further investigated in more sophisticated statistical analyses. We will do so in three steps. First, we will look into a few important conceptual aspects where generating plots with a large number of observations becomes difficult, and then we will look at potentially helpful tools to address these difficulties. Based on these insights, the next section presents a data exploration tutorial based on the already familiar TLC taxi trips dataset, looking into different approaches to visualize relations between variables. Finally, the last section of this chapter covers an area of data visualization that has become more and more relevant in applied economic research with the availability of highly detailed observational data on economic and social activities (due to the digitization of many aspects of modern life): the plotting of geo\-spatial information on economic activity.
All illustrations of concepts and visualization examples in this chapter build on the Grammar of Graphics ([Wilkinson et al. 2005](#ref-wilkinson2005grammar)) concept implemented in the `ggplot2` package ([Wickham 2016](#ref-ggplot2)). The choice of this plotting package/framework is motivated by the large variety of plot\-types covered in `ggplot2` (ranging from simple scatterplots to hexbin\-plots and geographic maps), as well as the flexibility to build and modify plots step by step (an aspect that is particularly interesting when exploring large datasets visually).
11\.1 Challenges of Big Data visualization
------------------------------------------
Generating a plot in an interactive R session means generating a new object in the R environment (RAM), which can (in the case of large datasets) take up a considerable amount of memory. Moreover, depending on how the plot function is called, RStudio will directly render the plot in the Plots tab (which again needs memory and processing). Consider the following simple example, in which we plot two vectors of random numbers against each other.[61](#fn61)
```
# load package
library(ggplot2) # for plotting
library(pryr) # for profiling
library(bench) # for profiling
library(fs) # for profiling
# random numbers generation
x <- rnorm(10^6, mean=5)
y <- 1 + 1.4*x + rnorm(10^6)
plotdata <- data.frame(x=x, y=y)
object_size(plotdata)
```
```
## 16.00 MB
```
```
# generate scatter plot
splot <-
ggplot(plotdata, aes(x=x, y=y))+
geom_point()
object_size(splot)
```
```
## 16.84 MB
```
The plot object, not surprisingly, takes up an additional slice of RAM of the size of the original dataset, plus some overhead. Now when we instruct ggplot to generate/plot the visualization on canvas, even more memory is needed. Moreover, rather a lot of data processing is needed to place one million points on the canvas (also, note that one million observations would not be considered a lot in the context of this book…).
```
mem_used()
```
```
## 2.26 GB
```
```
system.time(print(splot))
```
```
## user system elapsed
## 12.27 0.09 12.36
```
```
mem_used()
```
```
## 2.36 GB
```
First, to generate this one plot, an average modern laptop needs around 12 seconds (see the `elapsed` time above). This would not be very comfortable in an interactive session to explore the data visually. Second, and even more striking, comparing the `mem_used()` output from before and after the plot was rendered shows that plotting to the canvas increased the total amount of memory used by R by roughly 100MB. Note that this increase alone is considerably larger than the dataset and the ggplot\-object combined. Creating the same plot based on 100 million observations would likely crash or freeze your R session. Finally, when we output the plot to a file (for example, a pdf), the generated vector\-based graphic file is also rather large.
```
ggsave("splot.pdf", device="pdf", width = 5, height = 5)
file_size("splot.pdf")
```
```
## 54.8M
```
Hence generating plots visualizing large amounts of raw data tends to use up a lot of computing time, memory, and (ultimately) storage space for the generated plot file. There are a couple of solutions to address these performance issues.
**Avoid fancy symbols (costly rendering)**
It turns out that one aspect of the problem is the particular symbols/characters used in ggplot (and other plot functions in R) for the points in such a scatter\-plot. Thus, one solution is to override the default set of characters directly when calling `ggplot()`. A reasonable choice of character for this purpose is simply the full point (`.`).
```
# generate scatter plot
splot2 <-
ggplot(plotdata, aes(x=x, y=y))+
geom_point(pch=".")
```
```
mem_used()
```
```
## 2.26 GB
```
```
system.time(print(splot2))
```
```
## user system elapsed
## 1.862 0.018 1.882
```
```
mem_used()
```
```
## 2.37 GB
```
The increase in memory due to the plot call is comparatively smaller, and plotting is substantially faster.
**Use rasterization (bitmap graphics) instead of vector graphics**
By default, most data visualization libraries, including `ggplot2`, are implemented to generate vector\-based graphics. Conceptually, this makes a lot of sense for any type of plot when the number of observations plotted is small or moderate. In simple terms, vector\-based graphics define lines and shapes as vectors in a coordinate system. In the case of a scatter\-plot, the x and y coordinates of every point need to be recorded. In contrast, bitmap files contain image information in the form of a matrix (or several matrices if colors are involved), whereby each cell of the matrix represents a pixel and contains information about the pixel’s color. While a vector\-based representation of a plot with few observations is likely more memory\-efficient than a high\-resolution bitmap representation of the same plot, it might well be the other way around when we are plotting millions of observations.
Thus, an alternative solution to save time and memory is to directly use a bitmap format instead of a vector\-based format. This could be done by plotting directly to a bitmap\-format file and then opening the file to look at the plot. However, this is somewhat clumsy as part of a data visualization workflow to explore the data. Luckily there is a ready\-made solution by Kratochvíl et al. ([2020](#ref-kratochvil_etal2020)) that builds on the idea of rasterizing scatter\-plots, but that then displays the bitmap image directly in R. The approach is implemented in the `scattermore` package ([Kratochvil 2022](#ref-scattermore)) and can straightforwardly be used in combination with `ggplot`.
```
# install.packages("scattermore")
library(scattermore)
# generate scatter plot
splot3 <-
ggplot()+
geom_scattermore(aes(x=x, y=y), data=plotdata)
# show plot in interactive session
system.time(print(splot3))
```
```
## user system elapsed
## 0.703 0.019 0.727
```
```
# plot to file
ggsave("splot3.pdf", device="pdf", width = 5, height = 5)
file_size("splot3.pdf")
```
```
## 13.2K
```
This approach is faster by an order of magnitude, and the resulting pdf takes up only a fraction of the storage space needed for `splot.pdf`, which is based on the classical `geom_point()` and a vector\-based image.
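The resolution of the rasterized point layer can be adjusted as well. The following sketch relies on `geom_scattermore()`’s `pointsize` and `pixels` arguments (the specific values below are illustrative choices, not recommendations) to produce a higher\-resolution bitmap, for example for print output.
```
# a higher-resolution rasterized scatter plot (values are illustrative)
splot3_hires <-
  ggplot()+
  geom_scattermore(aes(x=x, y=y),
                   data = plotdata,
                   pointsize = 1,
                   pixels = c(2000, 2000)) # resolution of the raster layer
# write to a bitmap file
ggsave("splot3_hires.png", splot3_hires, width = 5, height = 5, dpi = 300)
```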
**Use aggregates instead of raw data**
Depending on what pattern/aspect of the data you want to inspect visually, you might not actually need to plot all observations directly but rather the result of aggregating the observations first. There are several options to do this, but in the context of scatter plots based on many observations, a two\-dimensional bin plot can be a good starting point. The idea behind this approach is to divide the canvas into grid\-cells (typically in the form of rectangles or hexagons), compute for each grid cell the number of observations/points that would fall into it (in a scatter plot), and then indicate the number of observations per grid cell via the cell’s shading. Such a 2D bin plot of the same data as above can be generated via `geom_hex()`:
```
# generate scatter plot
splot4 <-
ggplot(plotdata, aes(x=x, y=y))+
geom_hex()
```
```
mem_used()
```
```
## 2.26 GB
```
```
system.time(print(splot4))
```
```
## user system elapsed
## 0.465 0.008 0.496
```
```
mem_used()
```
```
## 2.27 GB
```
Obviously, this approach is much faster and uses up much less memory than the `geom_point()` approach. Moreover, note that this approach to visualizing a potential relation between two variables based on many observations might even have another advantage over the approaches taken above. In all of the scatter plots, it was not visible whether the point cloud contains areas with substantially more observations (more density). There were simply too many points plotted over each other to recognize much more than the contour of the overall point cloud. With the 2D bin plot implemented with `geom_hex()`, we recognize immediately that there are many more observations located in the center of the cloud than further away from the center.
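The granularity and the fill scale of such 2D\-bin plots can be tuned further. As a sketch (the bin count and the use of a log\-scaled fill are our own choices), smaller hexagons combined with a logarithmic fill scale make the less dense regions easier to distinguish.
```
# 2D bins with smaller hexagons and a log-scaled fill
splot5 <-
  ggplot(plotdata, aes(x=x, y=y))+
  geom_hex(bins = 50) +
  scale_fill_continuous(trans = "log10")
system.time(print(splot5))
```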
11\.2 Data exploration with `ggplot2`
-------------------------------------
In this tutorial we will work with the TLC data used in the data aggregation session. The raw data consists of several monthly CSV files and can be downloaded via the [TLC’s website](https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page). Again, we work only with the first million observations.
In order to better understand the large dataset at hand (particularly regarding the determinants of tips paid), we use `ggplot2` to visualize some key aspects of the data.
First, let’s look at the raw relationship between fare paid and the tip paid. We set up the canvas with `ggplot`.
```
# load packages
library(ggplot2)
# set up the canvas
taxiplot <- ggplot(taxi, aes(y=Tip_Amt, x= Fare_Amt))
taxiplot
```
Now we visualize the co\-distribution of the two variables with a simple scatter\-plot. To speed things up, we use `geom_scattermore()` but increase the point size.[62](#fn62)
```
# simple x/y plot
taxiplot + geom_scattermore(pointsize = 3)
```
Note that this took quite a while, as R had to literally plot one million dots on the canvas. Moreover, many dots fall within the same area, making it impossible to recognize how much mass there actually is. This is typical for visualization exercises with large datasets. One way to improve this is by making the dots more transparent by setting the `alpha` parameter.
```
# simple x/y plot
taxiplot + geom_scattermore(pointsize = 3, alpha=0.2)
```
Alternatively, we can compute two\-dimensional bins. Here, we use `geom_bin2d()` (an alternative to `geom_hex` used above) in which the canvas is split into rectangles and the number of observations falling into each respective rectangle is computed. The visualization is then based on plotting the rectangles with counts greater than 0, and the shading of the rectangles indicates the count values.
```
# two-dimensional bins
taxiplot + geom_bin2d()
```
A large proportion of the tip/fare observations seem to be in the very lower\-left corner of the pane, while most other trips seem to be evenly distributed. However, we fail to see smaller differences in this visualization. In order to reduce the dominance of the 2D bins with very high counts, we display the natural logarithm of counts and display the bins as points.
```
# two-dimensional bins
taxiplot +
stat_bin_2d(geom="point",
mapping= aes(size = log(after_stat(count)))) +
guides(fill = "none")
```
We note that there are many cases with very low fare amounts, many cases with no or hardly any tip, and quite a lot of cases with very high tip amounts (in relation to the rather low fare amount). In the following, we dissect this picture by having a closer look at ‘typical’ tip amounts and whether they differ by type of payment.
```
# compute frequency of per tip amount and payment method
taxi[, n_same_tip:= .N, by= c("Tip_Amt", "Payment_Type")]
frequencies <- unique(taxi[Payment_Type %in% c("credit", "cash"),
c("n_same_tip",
"Tip_Amt",
"Payment_Type")][order(n_same_tip,
decreasing = TRUE)])
# plot top 20 frequent tip amounts
fare <- ggplot(data = frequencies[1:20], aes(x = factor(Tip_Amt),
y = n_same_tip))
fare + geom_bar(stat = "identity")
```
Indeed, paying no tip at all is quite frequent, overall.[63](#fn63) The bar plot also indicates that there seem to be some ‘focal points’ in the amount of tip paid. Clearly, paying one USD or two USD is more common than paying fractions. However, fractions of dollars might be more likely if tips are paid in cash and customers simply add some loose change to the fare amount paid.
```
fare + geom_bar(stat = "identity") +
facet_wrap("Payment_Type")
```
Clearly, it looks as if trips paid in cash tend not to be tipped (at least in this sub\-sample).
Let’s try to tease this information out of the initial points plot. Trips paid in cash are often not tipped; we thus should indicate the payment method. Moreover, tips paid in full dollar amounts might indicate a habit.
```
# indicate natural numbers
taxi[, dollar_paid := ifelse(Tip_Amt == round(Tip_Amt,0), "Full", "Fraction"),]
# extended x/y plot
taxiplot +
geom_scattermore(pointsize = 3, alpha=0.2, aes(color=Payment_Type)) +
facet_wrap("dollar_paid") +
theme(legend.position="bottom")
```
Now the picture is getting clearer. Paying a tip seems to follow certain rules of thumb. Certain fixed amounts tend to be paid independent of the fare amount (visible in the straight lines of dots on the right\-hand panel). At the same time, the pattern in the left panel indicates another habit: computing the amount of the tip as a linear function of the total fare amount (‘pay 10% tip’). A third habit might be to determine the amount of tip by ‘rounding up’ the total amount paid. In the following, we try to tease the latter out, only focusing on credit card payments.
```
taxi[, rounded_up := ifelse(Fare_Amt + Tip_Amt == round(Fare_Amt + Tip_Amt, 0),
"Rounded up",
"Not rounded")]
# extended x/y plot
taxiplot +
geom_scattermore(data= taxi[Payment_Type == "credit"],
pointsize = 3, alpha=0.2, aes(color=rounded_up)) +
facet_wrap("dollar_paid") +
theme(legend.position="bottom")
```
Now we can start modeling. A reasonable first shot is to model the tip amount as a linear function of the fare amount, conditional on non\-zero tip amounts paid as fractions of a dollar.
```
modelplot <- ggplot(data= taxi[Payment_Type == "credit" &
dollar_paid == "Fraction" &
0 < Tip_Amt],
aes(x = Fare_Amt, y = Tip_Amt))
modelplot +
geom_scattermore(pointsize = 3, alpha=0.2, color="darkgreen") +
geom_smooth(method = "lm", colour = "black") +
theme(legend.position="bottom")
```
Finally, we prepare the plot for reporting. `ggplot2` provides several predefined ‘themes’ for plots that define all kinds of aspects of a plot (background color, line colors, font size, etc.). The easiest way to tweak the design of your final plot in a certain direction is to just add such a pre\-defined theme at the end of your plot. Some of the pre\-defined themes allow you to change a few aspects, such as the font type and the base size of all the texts in the plot (labels, tick numbers, etc.). Here, we use the `theme_bw()`, increase the font size, and switch to a serif\-type font. `theme_bw()` is one of the complete themes that ships with the basic `ggplot2` installation.[64](#fn64) Many more themes can be found in additional R packages (see, for example, the [`ggthemes` package](https://cran.r-project.org/web/packages/ggthemes/index.html)).
```
modelplot <- ggplot(data= taxi[Payment_Type == "credit"
& dollar_paid == "Fraction"
& 0 < Tip_Amt],
aes(x = Fare_Amt, y = Tip_Amt))
modelplot +
geom_scattermore(pointsize = 3, alpha=0.2, color="darkgreen") +
geom_smooth(method = "lm", colour = "black") +
ylab("Amount of tip paid (in USD)") +
xlab("Amount of fare paid (in USD)") +
theme_bw(base_size = 18, base_family = "serif")
```
**Aside: modify and create themes**
*Simple modifications of themes*
Apart from using pre\-defined themes as illustrated above, we can use the `theme()` function to further modify the design of a plot. For example, we can print the axis labels (‘axis titles’) in bold.
```
modelplot <- ggplot(data= taxi[Payment_Type == "credit"
& dollar_paid == "Fraction"
& 0 < Tip_Amt],
aes(x = Fare_Amt, y = Tip_Amt))
modelplot +
geom_scattermore(pointsize = 3, alpha=0.2, color="darkgreen") +
geom_smooth(method = "lm", colour = "black") +
ylab("Amount of tip paid (in USD)") +
xlab("Amount of fare paid (in USD)") +
theme_bw(base_size = 18, base_family = "serif") +
theme(axis.title = element_text(face="bold"))
```
There is a large list of plot design aspects that can be modified in this way (see `?theme()` for details).
*Create your own themes*
Extensive design modifications via `theme()` can involve many lines of code, making your plot code harder to read/understand. In practice, you might want to define your specific theme once and then apply this theme to all of your plots. In order to do so it makes sense to choose one of the existing themes as a basis and then modify its design aspects until you have the design you are looking for. Following the design choices in the examples above, we can create our own `theme_my_serif()` as follows.
```
# 'define' a new theme
theme_my_serif <-
theme_bw(base_size = 18, base_family = "serif") +
theme(axis.title = element_text(face="bold"))
# apply it
modelplot +
geom_scattermore(pointsize = 3, alpha=0.2, color="darkgreen") +
geom_smooth(method = "lm", colour = "black") +
ylab("Amount of tip paid (in USD)") +
xlab("Amount of fare paid (in USD)") +
theme_my_serif
```
This practical approach does not require you to define every aspect of a theme. If you indeed want to completely define every aspect of a theme, you can set `complete=TRUE` when calling the theme function.
```
# 'define' a new theme
my_serif_theme <-
theme_bw(base_size = 18, base_family = "serif") +
theme(axis.title = element_text(face="bold"), complete = TRUE)
# apply it
modelplot +
geom_scattermore(pointsize = 3, alpha=0.2, color="darkgreen") +
geom_smooth(method = "lm", colour = "black") +
ylab("Amount of tip paid (in USD)") +
xlab("Amount of fare paid (in USD)") +
my_serif_theme
```
Note that since we have only defined one aspect (bold axis titles), the rest of the elements follow the default theme.
*Implementing actual themes as functions*
Importantly, the approach outlined above does not technically create a new theme like `theme_bw()`, as these pre\-defined themes are implemented as functions. Note that we add the new theme to the plot simply with `+ theme_my_serif` (no parentheses). In practice this is the simplest approach, and it provides all the functionality you need in order to apply your own ‘theme’ to each of your plots. If you want to implement a theme as a function, the following blueprint can get you started.
```
# define own theme
theme_my_serif <-
function(base_size = 15,
base_family = "",
base_line_size = base_size/170,
base_rect_size = base_size/170){
# use theme_bw() as a basis but replace some design elements
theme_bw(base_size = base_size,
base_family = base_family,
base_line_size = base_line_size,
base_rect_size = base_rect_size) %+replace%
theme(
axis.title = element_text(face="bold")
)
}
# apply the theme
modelplot +
geom_scattermore(pointsize = 3, alpha=0.2, color="darkgreen") +
geom_smooth(method = "lm", colour = "black") +
ylab("Amount of tip paid (in USD)") +
xlab("Amount of fare paid (in USD)") +
theme_my_serif(base_size = 18, base_family="serif")
```
11\.3 Visualizing time and space
--------------------------------
The previous visualization exercises were focused on visually exploring patterns in the tipping behavior of people taking a NYC yellow cab ride. Based on the same dataset, we will explore the time and spatial dimensions of the TLC Yellow Cab data. That is, we explore where trips tend to start and end, depending on the time of the day.
### 11\.3\.1 Preparations
For the visualization of spatial data, we first load additional packages that give R some GIS features.
```
# load GIS packages
library(rgdal)
library(rgeos)
```
Moreover, we download and import a so\-called [‘shape file’](https://en.wikipedia.org/wiki/Shapefile) (a geospatial data format) of New York City. This will be the basis for our visualization of the spatial dimension of taxi trips. The file is downloaded from [New York’s Department of City Planning](https://www1.nyc.gov/site/planning/index.page) and indicates the city’s community district borders.[65](#fn65)
```
# download the zipped shapefile to a temporary file; unzip
BASE_URL <-
"https://www1.nyc.gov/assets/planning/download/zip/data-maps/open-data/"
FILE <- "nycd_19a.zip"
URL <- paste0(BASE_URL, FILE)
tmp_file <- tempfile()
download.file(URL, tmp_file)
file_path <- unzip(tmp_file, exdir= "data")
# delete the temporary file
unlink(tmp_file)
```
Now we can import the shape file and have a look at how the GIS data is structured.
```
# read GIS data
nyc_map <- readOGR(file_path[1], verbose = FALSE)
# have a look at the GIS data
summary(nyc_map)
```
```
## Object of class SpatialPolygonsDataFrame
## Coordinates:
## min max
## x 913175 1067383
## y 120122 272844
## Is projected: TRUE
## proj4string :
## [+proj=lcc +lat_0=40.1666666666667 +lon_0=-74
## +lat_1=41.0333333333333 +lat_2=40.6666666666667
## +x_0=300000 +y_0=0 +datum=NAD83 +units=us-ft
## +no_defs]
## Data attributes:
## BoroCD Shape_Leng Shape_Area
## Min. :101 Min. : 23963 Min. :2.43e+07
## 1st Qu.:206 1st Qu.: 36611 1st Qu.:4.84e+07
## Median :308 Median : 52246 Median :8.27e+07
## Mean :297 Mean : 74890 Mean :1.19e+08
## 3rd Qu.:406 3rd Qu.: 85711 3rd Qu.:1.37e+08
## Max. :595 Max. :270660 Max. :5.99e+08
```
Note that the coordinates are not in the usual longitude and latitude units. The original map uses a different projection than the TLC data of taxi trip records. Before plotting, we thus have to change the projection to be in line with the TLC data.
```
# transform the projection
p <- CRS("+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0")
nyc_map <- spTransform(nyc_map, p)
# check result
summary(nyc_map)
```
```
## Object of class SpatialPolygonsDataFrame
## Coordinates:
## min max
## x -74.26 -73.70
## y 40.50 40.92
## Is projected: FALSE
## proj4string : [+proj=longlat +datum=WGS84 +no_defs]
## Data attributes:
## BoroCD Shape_Leng Shape_Area
## Min. :101 Min. : 23963 Min. :2.43e+07
## 1st Qu.:206 1st Qu.: 36611 1st Qu.:4.84e+07
## Median :308 Median : 52246 Median :8.27e+07
## Mean :297 Mean : 74890 Mean :1.19e+08
## 3rd Qu.:406 3rd Qu.: 85711 3rd Qu.:1.37e+08
## Max. :595 Max. :270660 Max. :5.99e+08
```
One last preparatory step is to convert the map data to a `data.frame` for plotting with `ggplot`.
```
nyc_map <- fortify(nyc_map)
```
### 11\.3\.2 Pick\-up and drop\-off locations
Since trips might actually start or end outside of NYC, we first restrict the sample of trips to those within the bounding box of the map. For the sake of the exercise, we only select a random sample of `50000` trips from the remaining trip records.
```
# taxi trips plot data
taxi_trips <- taxi[Start_Lon <= max(nyc_map$long) &
Start_Lon >= min(nyc_map$long) &
End_Lon <= max(nyc_map$long) &
End_Lon >= min(nyc_map$long) &
Start_Lat <= max(nyc_map$lat) &
Start_Lat >= min(nyc_map$lat) &
End_Lat <= max(nyc_map$lat) &
End_Lat >= min(nyc_map$lat)
]
taxi_trips <- taxi_trips[base::sample(1:nrow(taxi_trips), 50000)]
```
In order to visualize how the cab traffic is changing over the course of the day, we add an additional variable called `start_time` in which we store the time (hour) of the day a trip started.
```
taxi_trips$start_time <- lubridate::hour(taxi_trips$Trip_Pickup_DateTime)
```
In particular, we want to look at differences between morning, afternoon, and evening/night.
```
# define new variable for facets
taxi_trips$time_of_day <- "Morning"
taxi_trips[start_time > 12 & start_time < 17]$time_of_day <- "Afternoon"
taxi_trips[start_time %in% c(17:24, 0:5)]$time_of_day <- "Evening/Night"
taxi_trips$time_of_day <-
factor(taxi_trips$time_of_day,
levels = c("Morning", "Afternoon", "Evening/Night"))
```
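Before plotting, a quick sanity check (a minimal sketch using the `data.table` syntax introduced in the previous chapter) shows how many of the sampled trips fall into each time\-of\-day bin.
```
# number of sampled trips per time-of-day category
taxi_trips[, .N, by = time_of_day]
```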
We create the plot by first setting up the canvas with our taxi trip data. Then, we add the map as a first layer.
```
# set up the canvas
locations <- ggplot(taxi_trips, aes(x=long, y=lat))
# add the map geometry
locations <- locations + geom_map(data = nyc_map,
map = nyc_map,
aes(map_id = id))
locations
```
Now we can start adding the pick\-up and drop\-off locations of cab trips.
```
# add pick-up locations to plot
locations +
geom_scattermore(aes(x=Start_Lon, y=Start_Lat),
color="orange",
pointsize = 1,
alpha = 0.2)
```
As is to be expected, most of the trips start in Manhattan. Now let’s look at where trips end.
```
# add drop-off locations to plot
locations +
geom_scattermore(aes(x=End_Lon, y=End_Lat),
color="steelblue",
pointsize = 1,
alpha = 0.2) +
geom_scattermore(aes(x=Start_Lon, y=Start_Lat),
color="orange",
pointsize = 1,
alpha = 0.2)
```
In fact, more trips tend to end outside of Manhattan, and the destinations seem to be more broadly spread across the city than the pick\-up locations. Most destinations are still in Manhattan, though.
Now let’s have a look at how this picture changes depending on the time of the day.
```
# pick-up locations
locations +
geom_scattermore(aes(x=Start_Lon, y=Start_Lat),
color="orange",
pointsize =1,
alpha = 0.2) +
facet_wrap(vars(time_of_day))
```
```
# drop-off locations
locations +
geom_scattermore(aes(x=End_Lon, y=End_Lat),
color="steelblue",
pointsize = 1,
alpha = 0.2) +
facet_wrap(vars(time_of_day))
```
Alternatively, we can plot the hours on a continuous scale.
```
# drop-off locations
locations +
geom_scattermore(aes(x=End_Lon, y=End_Lat, color = start_time),
pointsize = 1,
alpha = 0.2) +
scale_colour_gradient2( low = "red", mid = "yellow", high = "red",
midpoint = 12)
```
**Aside: change color schemes**
In the example above we use `scale_colour_gradient2()` to modify the color gradient used to visualize the start time of taxi trips. By default, ggplot would plot the following (default gradient color setting):
```
# drop-off locations
locations +
geom_scattermore(aes(x=End_Lon, y=End_Lat, color = start_time ),
pointsize = 1,
alpha = 0.2)
```
`ggplot2` offers various functions to modify the color scales used in a plot. In the case of the example above, we visualize values of a continuous variable. Hence we use a gradient color scale. In the case of categorical variables, we need to modify the default discrete color scale.
Recall the plot illustrating tipping behavior, where we highlight in which observations the client paid with credit card, cash, etc.
```
# indicate natural numbers
taxi[, dollar_paid := ifelse(Tip_Amt == round(Tip_Amt,0),
"Full",
"Fraction"),]
# extended x/y plot
taxiplot +
geom_scattermore(alpha=0.2,
pointsize=3,
aes(color=Payment_Type)) +
facet_wrap("dollar_paid") +
theme(legend.position="bottom")
```
Since we do not further specify the discrete color scheme to be used, ggplot simply uses its default color scheme for this plot. We can change this as follows.
```
# indicate natural numbers
taxi[, dollar_paid := ifelse(Tip_Amt == round(Tip_Amt,0),
"Full",
"Fraction"),]
# extended x/y plot
taxiplot +
geom_scattermore(alpha=0.2, pointsize = 3,
aes(color=Payment_Type)) +
facet_wrap("dollar_paid") +
scale_color_discrete(type = c("red",
"steelblue",
"orange",
"purple")) +
theme(legend.position="bottom")
```
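Instead of specifying the colors manually, we could also rely on one of the predefined palettes available through `ggplot2`’s brewer scales (a sketch; the palette name is an arbitrary choice).
```
# same plot, but with a predefined ColorBrewer palette
taxiplot +
  geom_scattermore(alpha=0.2, pointsize = 3,
                   aes(color=Payment_Type)) +
  facet_wrap("dollar_paid") +
  scale_color_brewer(palette = "Set1") +
  theme(legend.position="bottom")
```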
11\.4 Wrapping up
-----------------
* *`ggplot`* offers a unified approach to generating a variety of plots common in the Big Data context: heatmaps, GIS\-like maps, density plots, 2D\-bin plots, etc.
* Building on the concept of the *Grammar of Graphics* ([Wilkinson et al. 2005](#ref-wilkinson2005grammar)), `ggplot2` follows the paradigm of creating plots layer\-by\-layer, which offers great flexibility regarding the visualization of complex (big) data.
* Standard plotting facilities in R (including in `ggplot`) are based on the concept of vector images (where each dot, line, and area is defined as in a coordinate system). While vector images have the advantage of flexible scaling (no reliance on a specific resolution), when plotting many observations, the computational load to generate and store/hold such graphics in memory can be substantial.
* Plotting of large amounts of data can be made more efficient by relying on less complex shapes (e.g., for dots in a scatter\-plot) or through *rasterization* and conversion of the plot into a *bitmap\-image (a raster\-based image)*. In contrast to vector images, raster images are created with a specific resolution that defines the size of a matrix of pixels that constitutes the image. If plotting a scatter\-plot based on many observations, this data structure is much more memory\-efficient than defining each dot in a vector image.
* Specific types of plots, such as hex\-bin plots and other 2D\-bin plots, facilitate plotting large amounts of data independent of the type of image (vector or raster). Moreover, they can be useful to show/highlight specific patterns in large amounts of data that could not be seen in standard scatter plots.
11\.1 Challenges of Big Data visualization
------------------------------------------
Generating a plot in an interactive R session means generating a new object in the R environment (RAM), which can (in the case of large datasets) take up a considerable amount of memory. Moreover, depending on how the plot function is called, RStudio will directly render the plot in the Plots tab (which again needs memory and processing). Consider the following simple example, in which we plot two vectors of random numbers against each other.[61](#fn61)
```
# load package
library(ggplot2) # for plotting
library(pryr) # for profiling
library(bench) # for profiling
library(fs) # for profiling
# random numbers generation
x <- rnorm(10^6, mean=5)
y <- 1 + 1.4*x + rnorm(10^6)
plotdata <- data.frame(x=x, y=y)
object_size(plotdata)
```
```
## 16.00 MB
```
```
# generate scatter plot
splot <-
ggplot(plotdata, aes(x=x, y=y))+
geom_point()
object_size(splot)
```
```
## 16.84 MB
```
The plot object, not surprisingly, takes up an additional slice of RAM of the size of the original dataset, plus some overhead. Now when we instruct ggplot to generate/plot the visualization on canvas, even more memory is needed. Moreover, rather a lot of data processing is needed to place one million points on the canvas (also, note that one million observations would not be considered a lot in the context of this book…).
```
mem_used()
```
```
## 2.26 GB
```
```
system.time(print(splot))
```
```
## user system elapsed
## 12.27 0.09 12.36
```
```
mem_used()
```
```
## 2.36 GB
```
First, to generate this one plot, an average modern laptop needs about 13\.6 seconds. This would not be very comfortable in an interactive session to explore the data visually. Second, and even more striking, before the plot was generated, `mem_used()` indicated the total amount of memory (in MBs) used by R was around 160MB, while right after plotting to the canvas, R had used around 270MB. Note that this is larger than the dataset and the ggplot\-object by an order of magnitude. Creating the same plot based on 100 million observations would likely crash or freeze your R session. Finally, when we output the plot to a file (for example, a pdf), the generated vector\-based graphic file is also rather large.
```
ggsave("splot.pdf", device="pdf", width = 5, height = 5)
file_size("splot.pdf")
```
```
## 54.8M
```
Hence generating plots visualizing large amounts of raw data tends to use up a lot of computing time, memory, and (ultimately) storage space for the generated plot file. There are a couple of solutions to address these performance issues.
**Avoid fancy symbols (costly rendering)**
It turns out that one aspect of the problem is the particular symbols/characters used in ggplot (and other plot functions in R) for the points in such a scatter\-plot. Thus, one solution is to override the default set of characters directly when calling `ggplot()`. A reasonable choice of character for this purpose is simply the full point (`.`).
```
# generate scatter plot
splot2 <-
ggplot(plotdata, aes(x=x, y=y))+
geom_point(pch=".")
```
```
mem_used()
```
```
## 2.26 GB
```
```
system.time(print(splot2))
```
```
## user system elapsed
## 1.862 0.018 1.882
```
```
mem_used()
```
```
## 2.37 GB
```
The increase in memory due to the plot call is comparatively smaller, and plotting is substantially faster.
**Use rasterization (bitmap graphics) instead of vector graphics**
By default, most data visualization libraries, including `ggplot2`, are implemented to generate vector\-based graphics. Conceptually, this makes a lot of sense for any type of plot when the number of observations plotted is small or moderate. In simple terms, vector\-based graphics define lines and shapes as vectors in a coordinate system. In the case of a scatter\-plot, the x and y coordinates of every point need to be recorded. In contrast, bitmap files contain image information in the form of a matrix (or several matrices if colors are involved), whereby each cell of the matrix represents a pixel and contains information about the pixel’s color. While a vector\-based representation of plot of few observations is likely more memory\-efficient than a high\-resolution bitmap representation of the same plot, it might well be the other way around when we are plotting millions of observations.
Thus, an alternative solution to save time and memory is to directly use a bitmap format instead of a vector\-based format. This could be done by plotting directly to a bitmap\-format file and then opening the file to look at the plot. However, this is somewhat clumsy as part of a data visualization workflow to explore the data. Luckily there is a ready\-made solution by Kratochvíl et al. ([2020](#ref-kratochvil_etal2020)) that builds on the idea of rasterizing scatter\-plots, but that then displays the bitmap image directly in R. The approach is implemented in the `scattermore` package ([Kratochvil 2022](#ref-scattermore)) and can straightforwardly be used in combination with `ggplot`.
```
# install.packages("scattermore")
library(scattermore)
# generate scatter plot
splot3 <-
ggplot()+
geom_scattermore(aes(x=x, y=y), data=plotdata)
# show plot in interactive session
system.time(print(splot3))
```
```
## user system elapsed
## 0.703 0.019 0.727
```
```
# plot to file
ggsave("splot3.pdf", device="pdf", width = 5, height = 5)
file_size("splot3.pdf")
```
```
## 13.2K
```
This approach is faster by an order of magnitude, and the resulting pdf takes up only a fraction of the storage space needed for `splot.pdf`, which is based on the classical `geom_points()` and a vector\-based image.
**Use aggregates instead of raw data**
Depending on what pattern/aspect of the data you want to inspect visually, you might not actually need to plot all observations directly but rather the result of aggregating the observations first. There are several options to do this, but in the context of scatter plots based on many observations, a two\-dimensional bin plot can be a good starting point. The idea behind this approach is to divide the canvas into grid\-cells (typically in the form of rectangles or hexagons), compute for each grid cell the number of observations/points that would fall into it (in a scatter plot), and then indicate the number of observations per grid cell via the cell’s shading. Such a 2D bin plot of the same data as above can be generated via `geom_hex()`:
```
# generate scatter plot
splot4 <-
ggplot(plotdata, aes(x=x, y=y))+
geom_hex()
```
```
mem_used()
```
```
## 2.26 GB
```
```
system.time(print(splot4))
```
```
## user system elapsed
## 0.465 0.008 0.496
```
```
mem_used()
```
```
## 2.27 GB
```
Obviously, this approach is much faster and uses up much less memory than the `geom_point()` approach. Moreover, note that this approach to visualizing a potential relation between two variables based on many observations might even have another advantage over the approaches taken above. In all of the scatter plots, it was not visible whether the point cloud contains areas with substantially more observations (more density). There were simply too many points plotted over each other to recognize much more than the contour of the overall point cloud. With the 2D bin plot implemented with `geom_hex()`, we recognize immediately that there are many more observations located in the center of the cloud than further away from the center.
11\.2 Data exploration with `ggplot2`
-------------------------------------
In this tutorial we will work with the TLC data used in the data aggregation session. The raw data consists of several monthly CSV files and can be downloaded via the [TLC’s website](https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page). Again, we work only with the first million observations.
In order to better understand the large dataset at hand (particularly regarding the determinants of tips paid), we use `ggplot2` to visualize some key aspects of the data.
First, let’s look at the raw relationship between fare paid and the tip paid. We set up the canvas with `ggplot`.
```
# load packages
library(ggplot2)
# set up the canvas
taxiplot <- ggplot(taxi, aes(y=Tip_Amt, x= Fare_Amt))
taxiplot
```
Now we visualize the co\-distribution of the two variables with a simple scatter\-plot. to speed things up, we use `geom_scattermore()` but increase the point size.[62](#fn62)
```
# simple x/y plot
taxiplot + geom_scattermore(pointsize = 3)
```
Note that this took quite a while, as R had to literally plot one million dots on the canvas. Moreover, many dots fall within the same area, making it impossible to recognize how much mass there actually is. This is typical for visualization exercises with large datasets. One way to improve this is by making the dots more transparent by setting the `alpha` parameter.
```
# simple x/y plot
taxiplot + geom_scattermore(pointsize = 3, alpha=0.2)
```
Alternatively, we can compute two\-dimensional bins. Here, we use `geom_bin2d()` (an alternative to `geom_hex` used above) in which the canvas is split into rectangles and the number of observations falling into each respective rectangle is computed. The visualization is then based on plotting the rectangles with counts greater than 0, and the shading of the rectangles indicates the count values.
```
# two-dimensional bins
taxiplot + geom_bin2d()
```
A large proportion of the tip/fare observations seem to be in the very lower\-left corner of the pane, while most other trips seem to be evenly distributed. However, we fail to see smaller differences in this visualization. In order to reduce the dominance of the 2D bins with very high counts, we display the natural logarithm of counts and display the bins as points.
```
# two-dimensional bins
taxiplot +
stat_bin_2d(geom="point",
mapping= aes(size = log(after_stat(count)))) +
guides(fill = "none")
```
We note that there are many cases with very low fare amounts, many cases with no or hardly any tip, and quite a lot of cases with very high tip amounts (in relation to the rather low fare amount). In the following, we dissect this picture by having a closer look at ‘typical’ tip amounts and whether they differ by type of payment.
```
# compute frequency of per tip amount and payment method
taxi[, n_same_tip:= .N, by= c("Tip_Amt", "Payment_Type")]
frequencies <- unique(taxi[Payment_Type %in% c("credit", "cash"),
c("n_same_tip",
"Tip_Amt",
"Payment_Type")][order(n_same_tip,
decreasing = TRUE)])
# plot top 20 frequent tip amounts
fare <- ggplot(data = frequencies[1:20], aes(x = factor(Tip_Amt),
y = n_same_tip))
fare + geom_bar(stat = "identity")
```
Indeed, paying no tip at all is quite frequent, overall.[63](#fn63) The bar plot also indicates that there seem to be some ‘focal points’ in the amount of tip paid. Clearly, paying one USD or two USD is more common than paying fractions. However, fractions of dollars might be more likely if tips are paid in cash and customers simply add some loose change to the fare amount paid.
```
fare + geom_bar(stat = "identity") +
facet_wrap("Payment_Type")
```
Clearly, it looks as if trips paid in cash tend not to be tipped (at least in this sub\-sample).
Let’s try to tease this information out of the initial points plot. Trips paid in cash are often not tipped; we thus should indicate the payment method. Moreover, tips paid in full dollar amounts might indicate a habit.
```
# indicate natural numbers
taxi[, dollar_paid := ifelse(Tip_Amt == round(Tip_Amt,0), "Full", "Fraction"),]
# extended x/y plot
taxiplot +
geom_scattermore(pointsize = 3, alpha=0.2, aes(color=Payment_Type)) +
facet_wrap("dollar_paid") +
theme(legend.position="bottom")
```
Now the picture is getting clearer. Paying a tip seems to follow certain rules of thumb. Certain fixed amounts tend to be paid independent of the fare amount (visible in the straight lines of dots on the right\-hand panel). At the same time, the pattern in the left panel indicates another habit: computing the amount of the tip as a linear function of the total fare amount (‘pay 10% tip’). A third habit might be to determine the amount of tip by ‘rounding up’ the total amount paid. In the following, we try to tease the latter out, only focusing on credit card payments.
```
taxi[, rounded_up := ifelse(Fare_Amt + Tip_Amt == round(Fare_Amt + Tip_Amt, 0),
"Rounded up",
"Not rounded")]
# extended x/y plot
taxiplot +
geom_scattermore(data= taxi[Payment_Type == "credit"],
pointsize = 3, alpha=0.2, aes(color=rounded_up)) +
facet_wrap("dollar_paid") +
theme(legend.position="bottom")
```
Now we can start modeling. A reasonable first shot is to model the tip amount as a linear function of the fare amount, conditional on non\-zero tip amounts paid as fractions of a dollar.
```
modelplot <- ggplot(data= taxi[Payment_Type == "credit" &
dollar_paid == "Fraction" &
0 < Tip_Amt],
aes(x = Fare_Amt, y = Tip_Amt))
modelplot +
geom_scattermore(pointsize = 3, alpha=0.2, color="darkgreen") +
geom_smooth(method = "lm", colour = "black") +
theme(legend.position="bottom")
```
Finally, we prepare the plot for reporting. `ggplot2` provides several predefined ‘themes’ for plots that define all kinds of aspects of a plot (background color, line colors, font size, etc.). The easiest way to tweak the design of your final plot in a certain direction is to just add such a pre\-defined theme at the end of your plot. Some of the pre\-defined themes allow you to change a few aspects, such as the font type and the base size of all the texts in the plot (labels, tick numbers, etc.). Here, we use `theme_bw()`, increase the font size, and switch to a serif\-type font. `theme_bw()` is one of the complete themes that ship with the basic `ggplot2` installation.[64](#fn64) Many more themes can be found in additional R packages (see, for example, the [`ggthemes` package](https://cran.r-project.org/web/packages/ggthemes/index.html)).
```
modelplot <- ggplot(data= taxi[Payment_Type == "credit"
& dollar_paid == "Fraction"
& 0 < Tip_Amt],
aes(x = Fare_Amt, y = Tip_Amt))
modelplot +
geom_scattermore(pointsize = 3, alpha=0.2, color="darkgreen") +
geom_smooth(method = "lm", colour = "black") +
ylab("Amount of tip paid (in USD)") +
xlab("Amount of fare paid (in USD)") +
theme_bw(base_size = 18, base_family = "serif")
```
**Aside: modify and create themes**
*Simple modifications of themes*
Apart from using pre\-defined themes as illustrated above, we can use the `theme()` function to further modify the design of a plot. For example, we can print the axis labels (‘axis titles’) in bold.
```
modelplot <- ggplot(data= taxi[Payment_Type == "credit"
& dollar_paid == "Fraction"
& 0 < Tip_Amt],
aes(x = Fare_Amt, y = Tip_Amt))
modelplot +
geom_scattermore(pointsize = 3, alpha=0.2, color="darkgreen") +
geom_smooth(method = "lm", colour = "black") +
ylab("Amount of tip paid (in USD)") +
xlab("Amount of fare paid (in USD)") +
theme_bw(base_size = 18, base_family = "serif") +
theme(axis.title = element_text(face="bold"))
```
There is a large list of plot design aspects that can be modified in this way (see `?theme()` for details).
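For illustration, a few such modifications could look as follows. This is only a sketch building on the `modelplot` object from above; the specific design choices (dropping the minor grid lines, removing the legend, tilting the tick labels) are arbitrary.
```
# a sketch: further theme() tweaks (illustrative design choices only)
modelplot +
 geom_scattermore(pointsize = 3, alpha=0.2, color="darkgreen") +
 geom_smooth(method = "lm", colour = "black") +
 theme_bw(base_size = 18, base_family = "serif") +
 theme(panel.grid.minor = element_blank(),
 legend.position = "none",
 axis.text.x = element_text(angle = 45, hjust = 1))
```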
*Create your own themes*
Extensive design modifications via `theme()` can involve many lines of code, making your plot code harder to read/understand. In practice, you might want to define your specific theme once and then apply this theme to all of your plots. In order to do so it makes sense to choose one of the existing themes as a basis and then modify its design aspects until you have the design you are looking for. Following the design choices in the examples above, we can create our own `theme_my_serif()` as follows.
```
# 'define' a new theme
theme_my_serif <-
theme_bw(base_size = 18, base_family = "serif") +
theme(axis.title = element_text(face="bold"))
# apply it
modelplot +
geom_scattermore(pointsize = 3, alpha=0.2, color="darkgreen") +
geom_smooth(method = "lm", colour = "black") +
ylab("Amount of tip paid (in USD)") +
xlab("Amount of fare paid (in USD)") +
theme_my_serif
```
This practical approach does not require you to define every aspect of a theme. If you indeed want to completely define every aspect of a theme, you can set `complete=TRUE` when calling the theme function.
```
# 'define' a new theme
my_serif_theme <-
theme_bw(base_size = 18, base_family = "serif") +
theme(axis.title = element_text(face="bold"), complete = TRUE)
# apply it
modelplot +
geom_scattermore(pointsize = 3, alpha=0.2, color="darkgreen") +
geom_smooth(method = "lm", colour = "black") +
ylab("Amount of tip paid (in USD)") +
xlab("Amount of fare paid (in USD)") +
my_serif_theme
```
Note that since we have only defined one aspect (bold axis titles), the rest of the elements follow the default theme.
*Implementing actual themes as functions*
Importantly, the approach outlined above does not technically create a new theme like `theme_bw()`, as these pre\-defined themes are implemented as functions. Note that we add the new theme to the plot simply with `+ theme_my_serif` (no parentheses). In practice this is the simplest approach, and it provides all the functionality you need in order to apply your own ‘theme’ to each of your plots. If you want to implement a theme as a function, the following blueprint can get you started.
```
# define own theme
theme_my_serif <-
function(base_size = 15,
base_family = "",
base_line_size = base_size/170,
base_rect_size = base_size/170){
# use theme_bw() as a basis but replace some design elements
theme_bw(base_size = base_size,
base_family = base_family,
base_line_size = base_line_size,
base_rect_size = base_rect_size) %+replace%
theme(
axis.title = element_text(face="bold")
)
}
# apply the theme
modelplot +
geom_scattermore(pointsize = 3, alpha=0.2, color="darkgreen") +
geom_smooth(method = "lm", colour = "black") +
ylab("Amount of tip paid (in USD)") +
xlab("Amount of fare paid (in USD)") +
theme_my_serif(base_size = 18, base_family="serif")
```
11\.3 Visualizing time and space
--------------------------------
The previous visualization exercises were focused on visually exploring patterns in the tipping behavior of people taking a NYC yellow cab ride. Based on the same dataset, we will explore the temporal and spatial dimensions of the TLC Yellow Cab data. That is, we explore where trips tend to start and end, depending on the time of the day.
### 11\.3\.1 Preparations
For the visualization of spatial data, we first load additional packages that give R some GIS features.
```
# load GIS packages
library(rgdal)
library(rgeos)
```
Moreover, we download and import a so\-called [‘shape file’](https://en.wikipedia.org/wiki/Shapefile) (a geospatial data format) of New York City. This will be the basis for our visualization of the spatial dimension of taxi trips. The file is downloaded from [New York’s Department of City Planning](https://www1.nyc.gov/site/planning/index.page) and indicates the city’s community district borders.[65](#fn65)
```
# download the zipped shapefile to a temporary file; unzip
BASE_URL <-
"https://www1.nyc.gov/assets/planning/download/zip/data-maps/open-data/"
FILE <- "nycd_19a.zip"
URL <- paste0(BASE_URL, FILE)
tmp_file <- tempfile()
download.file(URL, tmp_file)
file_path <- unzip(tmp_file, exdir= "data")
# delete the temporary file
unlink(tmp_file)
```
Now we can import the shape file and have a look at how the GIS data is structured.
```
# read GIS data
nyc_map <- readOGR(file_path[1], verbose = FALSE)
# have a look at the GIS data
summary(nyc_map)
```
```
## Object of class SpatialPolygonsDataFrame
## Coordinates:
## min max
## x 913175 1067383
## y 120122 272844
## Is projected: TRUE
## proj4string :
## [+proj=lcc +lat_0=40.1666666666667 +lon_0=-74
## +lat_1=41.0333333333333 +lat_2=40.6666666666667
## +x_0=300000 +y_0=0 +datum=NAD83 +units=us-ft
## +no_defs]
## Data attributes:
## BoroCD Shape_Leng Shape_Area
## Min. :101 Min. : 23963 Min. :2.43e+07
## 1st Qu.:206 1st Qu.: 36611 1st Qu.:4.84e+07
## Median :308 Median : 52246 Median :8.27e+07
## Mean :297 Mean : 74890 Mean :1.19e+08
## 3rd Qu.:406 3rd Qu.: 85711 3rd Qu.:1.37e+08
## Max. :595 Max. :270660 Max. :5.99e+08
```
Note that the coordinates are not in the usual longitude and latitude units. The original map uses a different projection than the TLC data of taxi trip records. Before plotting, we thus have to change the projection to be in line with the TLC data.
```
# transform the projection
p <- CRS("+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0")
nyc_map <- spTransform(nyc_map, p)
# check result
summary(nyc_map)
```
```
## Object of class SpatialPolygonsDataFrame
## Coordinates:
## min max
## x -74.26 -73.70
## y 40.50 40.92
## Is projected: FALSE
## proj4string : [+proj=longlat +datum=WGS84 +no_defs]
## Data attributes:
## BoroCD Shape_Leng Shape_Area
## Min. :101 Min. : 23963 Min. :2.43e+07
## 1st Qu.:206 1st Qu.: 36611 1st Qu.:4.84e+07
## Median :308 Median : 52246 Median :8.27e+07
## Mean :297 Mean : 74890 Mean :1.19e+08
## 3rd Qu.:406 3rd Qu.: 85711 3rd Qu.:1.37e+08
## Max. :595 Max. :270660 Max. :5.99e+08
```
One last preparatory step is to convert the map data to a `data.frame` for plotting with `ggplot`.
```
nyc_map <- fortify(nyc_map)
```
### 11\.3\.2 Pick\-up and drop\-off locations
Since trips might actually start or end outside of NYC, we first restrict the sample of trips to those within the bounding box of the map. For the sake of the exercise, we only select a random sample of `50000` trips from the remaining trip records.
```
# taxi trips plot data
taxi_trips <- taxi[Start_Lon <= max(nyc_map$long) &
Start_Lon >= min(nyc_map$long) &
End_Lon <= max(nyc_map$long) &
End_Lon >= min(nyc_map$long) &
Start_Lat <= max(nyc_map$lat) &
Start_Lat >= min(nyc_map$lat) &
End_Lat <= max(nyc_map$lat) &
End_Lat >= min(nyc_map$lat)
]
taxi_trips <- taxi_trips[base::sample(1:nrow(taxi_trips), 50000)]
```
In order to visualize how the cab traffic is changing over the course of the day, we add an additional variable called `start_time` in which we store the time (hour) of the day a trip started.
```
taxi_trips$start_time <- lubridate::hour(taxi_trips$Trip_Pickup_DateTime)
```
Particularly, we want to look at differences between morning, afternoon, and evening/night.
```
# define new variable for facets
taxi_trips$time_of_day <- "Morning"
taxi_trips[start_time > 12 & start_time < 17]$time_of_day <- "Afternoon"
taxi_trips[start_time %in% c(17:24, 0:5)]$time_of_day <- "Evening/Night"
taxi_trips$time_of_day <-
factor(taxi_trips$time_of_day,
levels = c("Morning", "Afternoon", "Evening/Night"))
```
We create the plot by first setting up the canvas with our taxi trip data. Then, we add the map as a first layer.
```
# set up the canvas
locations <- ggplot(taxi_trips, aes(x=long, y=lat))
# add the map geometry
locations <- locations + geom_map(data = nyc_map,
map = nyc_map,
aes(map_id = id))
locations
```
Now we can start adding the pick\-up and drop\-off locations of cab trips.
```
# add pick-up locations to plot
locations +
geom_scattermore(aes(x=Start_Lon, y=Start_Lat),
color="orange",
pointsize = 1,
alpha = 0.2)
```
As is to be expected, most of the trips start in Manhattan. Now let’s look at where trips end.
```
# add drop-off locations to plot
locations +
geom_scattermore(aes(x=End_Lon, y=End_Lat),
color="steelblue",
pointsize = 1,
alpha = 0.2) +
geom_scattermore(aes(x=Start_Lon, y=Start_Lat),
color="orange",
pointsize = 1,
alpha = 0.2)
```
In fact, more trips tend to end outside of Manhattan. And the destinations seem to be more broadly spread across the city than the pick\-up locations. Most destinations are still in Manhattan, though.
Now let’s have a look at how this picture changes depending on the time of the day.
```
# pick-up locations
locations +
geom_scattermore(aes(x=Start_Lon, y=Start_Lat),
color="orange",
pointsize =1,
alpha = 0.2) +
facet_wrap(vars(time_of_day))
```
```
# drop-off locations
locations +
geom_scattermore(aes(x=End_Lon, y=End_Lat),
color="steelblue",
pointsize = 1,
alpha = 0.2) +
facet_wrap(vars(time_of_day))
```
Alternatively, we can plot the hours on a continuous scale.
```
# drop-off locations
locations +
geom_scattermore(aes(x=End_Lon, y=End_Lat, color = start_time),
pointsize = 1,
alpha = 0.2) +
scale_colour_gradient2( low = "red", mid = "yellow", high = "red",
midpoint = 12)
```
**Aside: change color schemes**
In the example above we use `scale_colour_gradient2()` to modify the color gradient used to visualize the start time of taxi trips. By default, ggplot would plot the following (default gradient color setting):
```
# drop-off locations
locations +
geom_scattermore(aes(x=End_Lon, y=End_Lat, color = start_time ),
pointsize = 1,
alpha = 0.2)
```
`ggplot2` offers various functions to modify the color scales used in a plot. In the case of the example above, we visualize values of a continuous variable. Hence we use a gradient color scale. In the case of categorical variables, we need to modify the default discrete color scale.
Recall the plot illustrating tipping behavior, where we highlight in which observations the client paid with credit card, cash, etc.
```
# indicate natural numbers
taxi[, dollar_paid := ifelse(Tip_Amt == round(Tip_Amt,0),
"Full",
"Fraction"),]
# extended x/y plot
taxiplot +
geom_scattermore(alpha=0.2,
pointsize=3,
aes(color=Payment_Type)) +
facet_wrap("dollar_paid") +
theme(legend.position="bottom")
```
Since we do not further specify the discrete color scheme to be used, ggplot simply uses its default color scheme for this plot. We can change this as follows.
```
# indicate natural numbers
taxi[, dollar_paid := ifelse(Tip_Amt == round(Tip_Amt,0),
"Full",
"Fraction"),]
# extended x/y plot
taxiplot +
geom_scattermore(alpha=0.2, pointsize = 3,
aes(color=Payment_Type)) +
facet_wrap("dollar_paid") +
scale_color_discrete(type = c("red",
"steelblue",
"orange",
"purple")) +
theme(legend.position="bottom")
```
11\.4 Wrapping up
-----------------
* *`ggplot`* offers a unified approach to generating a variety of plots common in the Big Data context: heatmaps, GIS\-like maps, density plots, 2D\-bin plots, etc.
* Building on the concept of the *Grammar of Graphics* ([Wilkinson et al. 2005](#ref-wilkinson2005grammar)), `ggplot2` follows the paradigm of creating plots layer\-by\-layer, which offers great flexibility regarding the visualization of complex (big) data.
* Standard plotting facilities in R (including in `ggplot`) are based on the concept of vector images (where each dot, line, and area is defined as in a coordinate system). While vector images have the advantage of flexible scaling (no reliance on a specific resolution), when plotting many observations, the computational load to generate and store/hold such graphics in memory can be substantial.
* Plotting of large amounts of data can be made more efficient by relying on less complex shapes (e.g., for dots in a scatter\-plot) or through *rasterization* and conversion of the plot into a *bitmap\-image (a raster\-based image)*. In contrast to vector images, raster images are created with a specific resolution that defines the size of the matrix of pixels that constitutes the image. When plotting a scatter\-plot based on many observations, this data structure is much more memory\-efficient than defining each dot in a vector image (see the sketch following this list).
* Specific types of plots, such as hex\-bin plots and other 2D\-bin plots, facilitate plotting large amounts of data independent of the type of image (vector or raster). Moreover, they can be useful to show/highlight specific patterns in large amounts of data that could not be seen in standard scatter plots.
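To make this concrete, one could export the same scatter\-plot once as a vector image and once as a raster image and compare the resulting file sizes. The snippet below is only a rough sketch: it assumes the `taxi` data and the `taxiplot` object from the visualization section are still available, and the file names are purely illustrative.
```
# a sketch: compare vector vs. raster output of the same scatter-plot
# (assumes `taxiplot` from above and the scattermore package)
library(ggplot2)
library(scattermore)
p_vector <- taxiplot + geom_point()        # every dot stored as a vector object
p_raster <- taxiplot + geom_scattermore()  # dots rasterized into a bitmap layer
ggsave("points_vector.pdf", p_vector)             # vector image
ggsave("points_raster.png", p_raster, dpi = 150)  # raster image, fixed resolution
# compare file sizes on disk (in bytes)
file.size("points_vector.pdf")
file.size("points_raster.png")
```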
Chapter 12 Bottlenecks in Everyday Data Analytics Tasks
=======================================================
This chapter presents three examples of how the lessons from the previous chapters can be applied in everyday data analytics tasks. The first section focuses on the statistics perspective: compute something in a different way (with a different algorithm) but end up with essentially the same result. It also illustrates how diverse the already implemented solutions for working with large data in applied econometrics are in the R\-universe, and why it often makes sense to first look for a more efficient algorithm/statistical procedure before directly using specialized packages such as `bigmemory` or even scaling up in the cloud. The second section is a reminder (in an extremely simple setting) of how we can use R more efficiently when taking basic R characteristics into consideration. It is a detailed illustration of how adapting a few simple coding habits with basic R can substantially improve the efficiency of your code for larger workloads. Finally, the third section in this chapter revisits the topics of scaling up both locally and in the cloud.
12\.1 Case study: Efficient fixed effects estimation
----------------------------------------------------
In this case study we look into a very common computational problem in applied econometrics: estimation of a fixed effects model with various fixed\-effects units (i.e., many intercepts). The aim of this case study is to give an illustration of how a specific statistical procedure can help us reduce the computational burden substantially (here, by reducing the number of columns in the model matrix and therefore the burden of computing the inverse of a huge model matrix). The context of this tutorial builds on a study called [“Friends in High Places”](https://www.aeaweb.org/articles?id=10.1257/pol.6.3.63) by Cohen and Malloy ([2014](#ref-cohen_malloy)). Cohen and Malloy show that US Senators who are alumni of the same university/college tend to help each other out in votes on industrial policies if the corresponding policy is highly relevant for the state of one senator but not relevant for the state of the other senator. The data is provided along with the published article and can be accessed here: [http://doi.org/10\.3886/E114873V1](http://doi.org/10.3886/E114873V1). The data (and code) is provided in STATA format. We can import the main dataset with the `foreign` package ([R Core Team 2022](#ref-foreign)). For data handling we load the `data.table` package, and for hypothesis tests we load the `lmtest` package ([Zeileis and Hothorn 2002](#ref-lmtest)).
```
# SET UP ------------------
# load packages
library(foreign)
library(data.table)
library(lmtest)
# fix vars
DATA_PATH <- "data/data_for_tables.dta"
# import data
cm <- as.data.table(read.dta(DATA_PATH))
# keep only clean obs
cm <- cm[!(is.na(yes)
|is.na(pctsumyessameparty)
|is.na(pctsumyessameschool)
|is.na(pctsumyessamestate))]
```
As part of this case study, we will replicate parts of Table 3 of the main article (p. 73\). Specifically, we will estimate specifications (1\) and (2\). In both specifications, the dependent variable is an indicator `yes` that is equal to 1 if the corresponding senator voted Yes on the given bill and 0 otherwise. The main explanatory variables of interest are `pctsumyessameschool` (the percentage of senators from the same school as the corresponding senator who voted Yes on the given bill), `pctsumyessamestate` (the percentage of senators from the same state as the corresponding senator who voted Yes on the given bill), and `pctsumyessameparty` (the percentage of senators from the same party as the corresponding senator who voted Yes on the given bill). Specification 1 accounts for congress (time) fixed effects and senator (individual) fixed effects, and specification 2 accounts for congress\-session\-vote fixed effects and senator fixed effects.
First, let us look at a very simple example to highlight where the computational burden in the estimation of such specifications comes from. For specification (1\), the fixed effects setup means that we introduce an indicator variable (an intercept) for \\(N\-1\\) senators and \\(M\-1\\) congresses. By comparison, the simple model matrix (\\(X\\)) without accounting for fixed effects has dimensions \\(425653\\times4\\).
```
# pooled model (no FE)
model0 <- yes ~
pctsumyessameschool +
pctsumyessamestate +
pctsumyessameparty
dim(model.matrix(model0, data=cm))
```
```
## [1] 425653 4
```
In contrast, the model matrix of specification (1\) has dimensions \\(425653\\times221\\), and the model matrix of specification (2\) even has dimensions \\(425653\\times6929\\).
```
model1 <-
yes ~ pctsumyessameschool +
pctsumyessamestate +
pctsumyessameparty +
factor(congress) +
factor(id) -1
mm1 <- model.matrix(model1, data=cm)
dim(mm1)
```
```
## [1] 425653 168
```
Using OLS to estimate such a model thus involves the computation of a very large matrix inversion (because \\(\\hat{\\beta}\_{OLS} \= (\\mathbf{X}^\\intercal\\mathbf{X})^{\-1}\\mathbf{X}^{\\intercal}\\mathbf{y}\\)). In addition, the model matrix for specification 2 is about 22GB, which might further slow down the computer due to a lack of physical memory or even crash the R session altogether.
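As a rough plausibility check (this calculation is not part of the original case study), the memory footprint of such a dense numeric model matrix can be approximated from its dimensions, since each cell occupies 8 bytes.
```
# back-of-the-envelope memory footprint of a dense numeric model matrix
n <- 425653
(8 * n * 221) / 1024^3   # specification (1): well below 1 GB
(8 * n * 6929) / 1024^3  # specification (2): roughly 22 GB
# for an already materialized matrix, object.size() reports the actual footprint
format(object.size(mm1), units = "GB")
```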
In order to set a point of reference, we first estimate specification (1\) with standard OLS.
```
# fit specification (1)
runtime <- system.time(fit1 <- lm(data = cm, formula = model1))
coeftest(fit1)[2:4,]
```
```
## Estimate Std. Error t value
## pctsumyessamestate 0.11861 0.001085 109.275
## pctsumyessameparty 0.92640 0.001397 662.910
## factor(congress)101 -0.01458 0.006429 -2.269
## Pr(>|t|)
## pctsumyessamestate 0.0000
## pctsumyessameparty 0.0000
## factor(congress)101 0.0233
```
```
# elapsed time needed for estimation (in seconds)
runtime[3]
```
```
## elapsed
## 6.486
```
As expected, this takes quite some time to compute. However, there is an alternative approach to estimating such models that substantially reduces the computational burden by “sweeping out the fixed effects dummies”. In the simple case of only one fixed effect variable (e.g., only individual fixed effects), the trick is called “within transformation” or “demeaning” and is quite simple to implement. For each of the categories in the fixed effect variable, compute the mean of the covariate and subtract the mean from the covariate’s value.
```
# illustration of within transformation for the senator fixed effects
cm_within <-
with(cm, data.table(yes = yes - ave(yes, id),
pctsumyessameschool = pctsumyessameschool -
ave(pctsumyessameschool, id),
pctsumyessamestate = pctsumyessamestate -
ave(pctsumyessamestate, id),
pctsumyessameparty = pctsumyessameparty -
ave(pctsumyessameparty, id)
))
# comparison of dummy fixed effects estimator and within estimator
dummy_time <- system.time(fit_dummy <-
lm(yes ~ pctsumyessameschool +
pctsumyessamestate +
pctsumyessameparty +
factor(id) -1, data = cm
))
within_time <- system.time(fit_within <-
lm(yes ~ pctsumyessameschool +
pctsumyessamestate +
pctsumyessameparty -1,
data = cm_within))
# computation time comparison
as.numeric(within_time[3])/as.numeric(dummy_time[3])
```
```
## [1] 0.009609
```
```
# comparison of estimates
coeftest(fit_dummy)[1:3,]
```
```
## Estimate Std. Error t value
## pctsumyessameschool 0.04424 0.001352 32.73
## pctsumyessamestate 0.11864 0.001085 109.30
## pctsumyessameparty 0.92615 0.001397 662.93
## Pr(>|t|)
## pctsumyessameschool 1.205e-234
## pctsumyessamestate 0.000e+00
## pctsumyessameparty 0.000e+00
```
```
coeftest(fit_within)
```
```
##
## t test of coefficients:
##
## Estimate Std. Error t value
## pctsumyessameschool 0.04424 0.00135 32.7
## pctsumyessamestate 0.11864 0.00109 109.3
## pctsumyessameparty 0.92615 0.00140 663.0
## Pr(>|t|)
## pctsumyessameschool <2e-16 ***
## pctsumyessamestate <2e-16 ***
## pctsumyessameparty <2e-16 ***
## ---
## Signif. codes:
## 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Unfortunately, we cannot simply apply the same procedure in a specification with several fixed effects variables. However, Gaure ([2013b](#ref-GAURE20138)) provides a generalization of the linear within\-estimator to several fixed effects variables. This method is implemented in the `lfe` package ([Gaure 2013a](#ref-gaure_2013)). With this package, we can easily estimate both fixed\-effect specifications (as well as the corresponding cluster\-robust standard errors) in order to replicate the original results by Cohen and Malloy ([2014](#ref-cohen_malloy)).
```
library(lfe)
# model and clustered SE specifications
model1 <- yes ~ pctsumyessameschool +
pctsumyessamestate +
pctsumyessameparty |congress+id|0|id
model2 <- yes ~ pctsumyessameschool +
pctsumyessamestate +
pctsumyessameparty |congress_session_votenumber+id|0|id
# estimation
fit1 <- felm(model1, data=cm)
fit2 <- felm(model2, data=cm)
```
Finally we can display the regression table.
```
stargazer::stargazer(fit1,fit2,
type="text",
dep.var.labels = "Vote (yes/no)",
covariate.labels = c("School Connected Votes",
"State Votes",
"Party Votes"),
keep.stat = c("adj.rsq", "n"))
```
```
##
## ===================================================
## Dependent variable:
## ----------------------------
## Vote (yes/no)
## (1) (2)
## ---------------------------------------------------
## School Connected Votes 0.045*** 0.052***
## (0.016) (0.016)
##
## State Votes 0.119*** 0.122***
## (0.013) (0.012)
##
## Party Votes 0.926*** 0.945***
## (0.022) (0.024)
##
## ---------------------------------------------------
## Observations 425,653 425,653
## Adjusted R2 0.641 0.641
## ===================================================
## Note: *p<0.1; **p<0.05; ***p<0.01
```
12\.2 Case study: Loops, memory, and vectorization
--------------------------------------------------
We first read the `economics` dataset into R and extend it by duplicating its rows to get a slightly larger dataset (this step can easily be adapted to create a very large dataset).
```
# read dataset into R
economics <- read.csv("data/economics.csv")
# have a look at the data
head(economics, 2)
```
```
## date pce pop psavert uempmed unemploy
## 1 1967-07-01 507.4 198712 12.5 4.5 2944
## 2 1967-08-01 510.5 198911 12.5 4.7 2945
```
```
# create a 'large' dataset out of this
for (i in 1:3) {
economics <- rbind(economics, economics)
}
dim(economics)
```
```
## [1] 4592 6
```
The goal of this code example is to compute real personal consumption expenditures, assuming that `pce` in the `economics` dataset provides nominal personal consumption expenditures. Thus, we divide each value in the vector `pce` by a deflator of `1.05`.
### 12\.2\.1 Naïve approach (ignorant of R)
The first approach we take is based on a simple `for` loop. In each iteration one element in `pce` is divided by the `deflator`, and the resulting value is stored as a new element in the vector `pce_real`.
```
# Naïve approach (ignorant of R)
deflator <- 1.05 # define deflator
# iterate through each observation
pce_real <- c()
n_obs <- length(economics$pce)
for (i in 1:n_obs) {
pce_real <- c(pce_real, economics$pce[i]/deflator)
}
# look at the result
head(pce_real, 2)
```
```
## [1] 483.2 486.2
```
How long does it take?
```
# Naïve approach (ignorant of R)
deflator <- 1.05 # define deflator
# iterate through each observation
pce_real <- c()
n_obs <- length(economics$pce)
time_elapsed <-
system.time(
for (i in 1:n_obs) {
pce_real <- c(pce_real, economics$pce[i]/deflator)
})
time_elapsed
```
```
## user system elapsed
## 0.108 0.000 0.110
```
Assuming a linear time algorithm (\\(O(n)\\)), we need that much time for one additional row of data:
```
time_per_row <- time_elapsed[3]/n_obs
time_per_row
```
```
## elapsed
## 2.395e-05
```
If we are dealing with Big Data, say 100 million rows, that is
```
# in seconds
(time_per_row*100^4)
```
```
## elapsed
## 2395
```
```
# in minutes
(time_per_row*100^4)/60
```
```
## elapsed
## 39.92
```
```
# in hours
(time_per_row*100^4)/60^2
```
```
## elapsed
## 0.6654
```
Can we improve this?
### 12\.2\.2 Improvement 1: Pre\-allocation of memory
In the naïve approach taken above, each iteration of the loop causes R to re\-allocate memory because the number of elements in the vector `pce_real` is changing. In simple terms, this means that R needs to execute more steps in each iteration. We can improve this with a simple trick: initialize the vector to the right size to begin with (filled with `NA` values).
```
# Improve memory allocation (still somewhat ignorant of R)
deflator <- 1.05 # define deflator
n_obs <- length(economics$pce)
# allocate memory beforehand
# Initialize the vector to the right size
pce_real <- rep(NA, n_obs)
# iterate through each observation
time_elapsed <-
system.time(
for (i in 1:n_obs) {
pce_real[i] <- economics$pce[i]/deflator
})
```
Let’s see if this helped to make the code faster.
```
time_per_row <- time_elapsed[3]/n_obs
time_per_row
```
```
## elapsed
## 2.178e-06
```
Again, we can extrapolate (approximately) the computation time, assuming the dataset had 100 million rows.
```
# in seconds
(time_per_row*100^4)
```
```
## elapsed
## 217.8
```
```
# in minutes
(time_per_row*100^4)/60
```
```
## elapsed
## 3.63
```
```
# in hours
(time_per_row*100^4)/60^2
```
```
## elapsed
## 0.06049
```
This looks much better, but we can do even better.
### 12\.2\.3 Improvement 2: Exploit vectorization
In this approach, we exploit the fact that in R, ‘everything is a vector’ and that many of the basic R functions (such as math operators) are *vectorized*. In simple terms, this means that a vectorized operation is implemented in such a way that it can take advantage of the similarity of each of the vector’s elements. That is, R only has to figure out once how to apply a given function to a vector element in order to apply it to all elements of the vector. In a simple loop, R has to go through the same ‘preparatory’ steps again and again in each iteration; this is time\-intensive.
In this example, we specifically exploit that the division operator `/` is actually a vectorized function. Thus, the division by our `deflator` is applied to each element of `economics$pce`.
```
# Do it 'the R way'
deflator <- 1.05 # define deflator
# Exploit R's vectorization
time_elapsed <-
system.time(
pce_real <- economics$pce/deflator
)
# same result
head(pce_real, 2)
```
```
## [1] 483.2 486.2
```
Now this is much faster. In fact, `system.time()` is not precise enough to capture the time elapsed. In order to quantify the improvement, we use `microbenchmark::microbenchmark()` to measure the elapsed time in microseconds (millionths of a second).
```
library(microbenchmark)
# measure elapsed time in microseconds (avg.)
time_elapsed <-
summary(microbenchmark(pce_real <- economics$pce/deflator))$mean
# per row (in sec)
time_per_row <- (time_elapsed/n_obs)/10^6
```
Now we get a more precise picture regarding the improvement due to vectorization:
```
# in seconds
(time_per_row*100^4)
```
```
## [1] 0.1868
```
```
# in minutes
(time_per_row*100^4)/60
```
```
## [1] 0.003113
```
```
# in hours
(time_per_row*100^4)/60^2
```
```
## [1] 5.189e-05
```
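To make the gains more tangible, one can also benchmark the pre\-allocated loop directly against the vectorized computation on the same data. The following is a minimal sketch, assuming `economics`, `deflator`, and `n_obs` are still defined as above.
```
# compare the pre-allocated loop with the vectorized approach directly
library(microbenchmark)
microbenchmark(
 loop_preallocated = {
 pce_real <- rep(NA, n_obs)
 for (i in 1:n_obs) {
 pce_real[i] <- economics$pce[i]/deflator
 }
 },
 vectorized = economics$pce/deflator,
 times = 10
)
```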
12\.3 Case study: Bootstrapping and parallel processing
-------------------------------------------------------
In this example, we estimate a simple regression model that aims to assess racial discrimination in the context of police stops.[66](#fn66) The example is based on the ‘Minneapolis Police Department 2017 Stop Dataset’, containing data on nearly all stops made by the Minneapolis Police Department for the year 2017\.
We start by importing the data into R.
```
url <-
"https://vincentarelbundock.github.io/Rdatasets/csv/carData/MplsStops.csv"
stopdata <- data.table::fread(url)
```
We specify a simple linear probability model that aims to test whether a person identified as ‘white’ is less likely to have their vehicle searched when stopped by the police. In order to take into account level differences between different police precincts, we add precinct indicators to the regression specification.
First, let’s remove observations with missing entries (`NA`) and code our main explanatory variable and the dependent variable.
```
# remove incomplete obs
stopdata <- na.omit(stopdata)
# code dependent var
stopdata$vsearch <- 0
stopdata$vsearch[stopdata$vehicleSearch=="YES"] <- 1
# code explanatory var
stopdata$white <- 0
stopdata$white[stopdata$race=="White"] <- 1
```
We specify our baseline model as follows.
```
model <- vsearch ~ white + factor(policePrecinct)
```
Then we estimate the linear probability model via OLS (the `lm` function).
```
fit <- lm(model, stopdata)
summary(fit)
```
```
##
## Call:
## lm(formula = model, data = stopdata)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.1394 -0.0633 -0.0547 -0.0423 0.9773
##
## Coefficients:
## Estimate Std. Error t value
## (Intercept) 0.05473 0.00515 10.62
## white -0.01955 0.00446 -4.38
## factor(policePrecinct)2 0.00856 0.00676 1.27
## factor(policePrecinct)3 0.00341 0.00648 0.53
## factor(policePrecinct)4 0.08464 0.00623 13.58
## factor(policePrecinct)5 -0.01246 0.00637 -1.96
## Pr(>|t|)
## (Intercept) < 2e-16 ***
## white 1.2e-05 ***
## factor(policePrecinct)2 0.21
## factor(policePrecinct)3 0.60
## factor(policePrecinct)4 < 2e-16 ***
## factor(policePrecinct)5 0.05 .
## ---
## Signif. codes:
## 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.254 on 19078 degrees of freedom
## Multiple R-squared: 0.025, Adjusted R-squared: 0.0248
## F-statistic: 97.9 on 5 and 19078 DF, p-value: <2e-16
```
A potential problem with this approach (and there might be many more in this simple example) is that observations stemming from different police precincts might be correlated over time. If that is the case, we likely underestimate the coefficient’s standard errors. There is a standard approach to computing estimates for so\-called *cluster\-robust* standard errors, which would take the problem of correlation over time within clusters into consideration (and deliver a more conservative estimate of the SEs). However, this approach only works well if the number of clusters in the data is roughly 50 or more. Here we only have five.
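For reference, the conventional cluster\-robust estimator mentioned above could be computed along the following lines. This is a sketch assuming the `sandwich` package is installed; given that we only have five clusters, the resulting standard errors should be treated with caution.
```
# cluster-robust SEs, clustered at the precinct level (illustration only)
library(sandwich)
library(lmtest)
coeftest(fit, vcov = vcovCL(fit, cluster = stopdata$policePrecinct))
```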
The alternative approach is to compute bootstrapped clustered standard errors. That is, we apply the [bootstrap resampling procedure](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)) at the cluster level. Specifically, we draw \\(B\\) samples (with replacement), estimate and record the coefficient vector for each bootstrap\-sample, and then estimate \\(SE\_{boot}\\) based on the standard deviation of all respective estimated coefficient values.
```
# load packages
library(data.table)
# set the 'seed' for random numbers (makes the example reproducible)
set.seed(2)
# set number of bootstrap iterations
B <- 10
# get selection of precincts
precincts <- unique(stopdata$policePrecinct)
# container for coefficients
boot_coefs <- matrix(NA, nrow = B, ncol = 2)
# draw bootstrap samples, estimate model for each sample
for (i in 1:B) {
# draw sample of precincts (cluster level)
precincts_i <- base::sample(precincts, size = 5, replace = TRUE)
# get observations
bs_i <-
lapply(precincts_i, function(x){
stopdata[stopdata$policePrecinct==x,]
} )
bs_i <- rbindlist(bs_i)
# estimate model and record coefficients
boot_coefs[i,] <- coef(lm(model, bs_i))[1:2] # ignore FE-coefficients
}
```
Finally, let’s compute \\(SE\_{boot}\\).
```
se_boot <- apply(boot_coefs,
MARGIN = 2,
FUN = sd)
se_boot
```
```
## [1] 0.004043 0.004690
```
Note that even with a very small \\(B\\), computing \\(SE\_{boot}\\) takes some time. When setting \\(B\\) to over 500, the computation time will be substantial. Also note that running this code hardly uses more memory than the very simple approach without bootstrapping (after all, in each bootstrap iteration the dataset used to estimate the model is approximately the same size as the original dataset). There is little we can do to improve the script’s performance regarding memory. However, we can tell R how to allocate CPU resources more efficiently to handle that many regression estimates.
In particular, we can make use of the fact that most modern computing environments (such as a laptop) have CPUs with several *cores*. We can exploit this fact by instructing the computer to run the computations *in parallel* (simultaneously computing on several cores). The following code is a parallel implementation of our bootstrap procedure that does exactly that.
```
# load packages for parallel processing
library(doSNOW)
# get the number of cores available
ncores <- parallel::detectCores()
# set cores for parallel processing
ctemp <- makeCluster(ncores) #
registerDoSNOW(ctemp)
# set number of bootstrap iterations
B <- 10
# get selection of precincts
precincts <- unique(stopdata$policePrecinct)
# container for coefficients
boot_coefs <- matrix(NA, nrow = B, ncol = 2)
# bootstrapping in parallel
boot_coefs <-
foreach(i = 1:B, .combine = rbind, .packages="data.table") %dopar% {
# draw sample of precincts (cluster level)
precincts_i <- base::sample(precincts, size = 5, replace = TRUE)
# get observations
bs_i <- lapply(precincts_i, function(x) {
stopdata[stopdata$policePrecinct==x,]
})
bs_i <- rbindlist(bs_i)
# estimate model and record coefficients
coef(lm(model, bs_i))[1:2] # ignore FE-coefficients
}
# be a good citizen and stop the snow clusters
stopCluster(cl = ctemp)
```
As a last step, we again compute \\(SE\_{boot}\\).
```
se_boot <- apply(boot_coefs,
MARGIN = 2,
FUN = sd)
se_boot
```
```
## (Intercept) white
## 0.002446 0.003017
```
### 12\.3\.1 Parallelization with an EC2 instance
This short tutorial illustrates how to scale up the computation of clustered standard errors shown above by running it on an AWS EC2 instance. Note that there are a few things that we need to keep in mind to make the script run on an AWS EC2 instance in RStudio Server. First, our EC2 instance is a Linux machine. When running R on a Linux machine, there is an additional step to install R packages (at least for most of the packages): R packages need to be compiled before they can be installed. The command to install packages is exactly the same (`install.packages()`), and normally you only notice a slight difference in the output shown in the R console during installation (and the installation process takes a little longer than you are used to). Apart from that, using R via RStudio Server in the cloud looks/feels very similar if not identical to when using R/RStudio locally. For this step of the case study, first follow the instructions of how to set up an AWS EC2 instance with R/RStudio Server in Chapter 7\. Then, open a browser window, log in to RStudio Server on the EC2 instance, and copy and paste the code below to a new R\-file on the EC2 instance (note that you might have to install the `data.table` and `doSNOW` packages before running the code).
When executing the code below line\-by\-line, you will notice that essentially all parts of the script work exactly as on your local machine. This is one of the great advantages of running R/RStudio Server in the cloud. You can implement your entire data analysis locally (based on a small sample), test it locally, and then move it to the cloud and run it at a larger scale in exactly the same way (even with the same Graphical User Interface (GUI)).
```
# install packages
install.packages("data.table")
install.packages("doSNOW")
# load packages
library(data.table)
# fetch the data
url <- "https://vincentarelbundock.github.io/Rdatasets/csv/carData/MplsStops.csv"
stopdata <- read.csv(url)
# remove incomplete obs
stopdata <- na.omit(stopdata)
# code dependent var
stopdata$vsearch <- 0
stopdata$vsearch[stopdata$vehicleSearch == "YES"] <- 1
# code explanatory var
stopdata$white <- 0
stopdata$white[stopdata$race == "White"] <- 1
# model fit
model <- vsearch ~ white + factor(policePrecinct)
fit <- lm(model, stopdata)
summary(fit)
# bootstrapping: normal approach
# set the 'seed' for random numbers (makes the example reproducible)
set.seed(2)
# set number of bootstrap iterations
B <- 50
# get selection of precincts
precincts <- unique(stopdata$policePrecinct)
# container for coefficients
boot_coefs <- matrix(NA, nrow = B, ncol = 2)
# draw bootstrap samples, estimate model for each sample
for (i in 1:B) {
# draw sample of precincts (cluster level)
precincts_i <- base::sample(precincts, size = 5, replace = TRUE)
# get observations
bs_i <- lapply(precincts_i, function(x) {
stopdata[stopdata$policePrecinct == x, ]
})
bs_i <- rbindlist(bs_i)
# estimate model and record coefficients
boot_coefs[i, ] <- coef(lm(model, bs_i))[1:2] # ignore FE-coefficients
}
se_boot <- apply(boot_coefs, MARGIN = 2, FUN = sd)
se_boot
```
So far, we have only demonstrated that the simple implementation (non\-parallel) works both locally and in the cloud. However, the real purpose of using an EC2 instance in this example is to make use of the fact that we can scale up our instance to have more CPU cores available for the parallel implementation of our bootstrap procedure. Recall that running the script below on our local machine will employ all cores available to us and compute the bootstrap resampling in parallel on all these cores. Exactly the same thing happens when running the code below on our simple `t2.micro` instance. However, this type of EC2 instance only has one core. You can check this by running the following line of code in RStudio Server (assuming the `doSNOW` package is installed and loaded): `parallel::detectCores()`.
When running the entire parallel implementation below, you will thus notice that it won’t compute the bootstrap SE any faster than the non\-parallel version above. However, by simply launching another EC2 instance type with more cores, we can distribute the workload across many CPU cores, using exactly the same R script.
```
# bootstrapping: parallel approach
# install.packages("doSNOW", "parallel")
# load packages for parallel processing
library(doSNOW)
# set cores for parallel processing
ncores <- parallel::detectCores()
ctemp <- makeCluster(ncores)
registerDoSNOW(ctemp)
# set number of bootstrap iterations
B <- 50
# get selection of precincts
precincts <- unique(stopdata$policePrecinct)
# container for coefficients
boot_coefs <- matrix(NA, nrow = B, ncol = 2)
# bootstrapping in parallel
boot_coefs <-
foreach(i = 1:B, .combine = rbind, .packages="data.table") %dopar% {
# draw sample of precincts (cluster level)
precincts_i <- base::sample(precincts, size = 5, replace = TRUE)
# get observations
bs_i <- lapply(precincts_i, function(x){
stopdata[stopdata$policePrecinct==x,]
})
bs_i <- rbindlist(bs_i)
# estimate model and record coefficients
coef(lm(model, bs_i))[1:2] # ignore FE-coefficients
}
# be a good citizen and stop the snow clusters
stopCluster(cl = ctemp)
# compute the bootstrapped standard errors
se_boot <- apply(boot_coefs,
MARGIN = 2,
FUN = sd)
```
12\.1 Case study: Efficient fixed effects estimation
----------------------------------------------------
In this case study we look into a very common computational problem in applied econometrics: estimation of a fixed effects model with various fixed\-effects units (i.e., many intercepts). The aim of this case study is to give an illustration of how a specific statistical procedure can help us reduce the computational burden substantially (here, by reducing the number of columns in the model matrix and therefore the burden of computing the inverse of a huge model matrix). The context of this tutorial builds on a study called [“Friends in High Places”](https://www.aeaweb.org/articles?id=10.1257/pol.6.3.63) by Cohen and Malloy ([2014](#ref-cohen_malloy)). Cohen and Malloy show that US Senators who are alumni of the same university/college tend to help each other out in votes on industrial policies if the corresponding policy is highly relevant for the state of one senator but not relevant for the state of the other senator. The data is provided along with the published article and can be accessed here: [http://doi.org/10\.3886/E114873V1](http://doi.org/10.3886/E114873V1). The data (and code) is provided in STATA format. We can import the main dataset with the `foreign` package ([R Core Team 2022](#ref-foreign)). For data handling we load the `data.table` package and for hypotheses tests we load the `lmtest` package ([Zeileis and Hothorn 2002](#ref-lmtest)).
```
# SET UP ------------------
# load packages
library(foreign)
library(data.table)
library(lmtest)
# fix vars
DATA_PATH <- "data/data_for_tables.dta"
# import data
cm <- as.data.table(read.dta(DATA_PATH))
# keep only clean obs
cm <- cm[!(is.na(yes)
|is.na(pctsumyessameparty)
|is.na(pctsumyessameschool)
|is.na(pctsumyessamestate))]
```
As part of this case study, we will replicate parts of Table 3 of the main article (p. 73\). Specifically, we will estimate specifications (1\) and (2\). In both specifications, the dependent variable is an indicator `yes` that is equal to 1 if the corresponding senator voted Yes on the given bill and 0 otherwise. The main explanatory variables of interest are `pctsumyessameschool` (the percentage of senators from the same school as the corresponding senator who voted Yes on the given bill), `pctsumyessamestate` (the percentage of senators from the same state as the corresponding senator who voted Yes on the given bill), and `pctsumyessameparty` (the percentage of senators from the same party as the corresponding senator who voted Yes on the given bill). Specification 1 accounts for congress (time) fixed effects and senator (individual) fixed effects, and specification 2 accounts for congress\-session\-vote fixed effects and senator fixed effects.
First, let us look at a very simple example to highlight where the computational burden in the estimation of such specifications is coming from. In terms of the regression model 1, the fixed effect specification means that we introduce an indicator variable (an intercept) for \\(N\-1\\) senators and \\(M\-1\\) congresses. That is, the simple model matrix (\\(X\\)) without accounting for fixed effects has dimensions \\(425653\\times4\\).
```
# pooled model (no FE)
model0 <- yes ~
pctsumyessameschool +
pctsumyessamestate +
pctsumyessameparty
dim(model.matrix(model0, data=cm))
```
```
## [1] 425653 4
```
In contrast, the model matrix of specification (1\) is of dimensions \\(425653\\times221\\), and the model matrix of specification (2\) even of \\(425653\\times6929\\).
```
model1 <-
yes ~ pctsumyessameschool +
pctsumyessamestate +
pctsumyessameparty +
factor(congress) +
factor(id) -1
mm1 <- model.matrix(model1, data=cm)
dim(mm1)
```
```
## [1] 425653 168
```
Using OLS to estimate such a model thus involves the computation of a very large matrix inversion (because \\(\\hat{\\beta}\_{OLS} \= (\\mathbf{X}^\\intercal\\mathbf{X})^{\-1}\\mathbf{X}^{\\intercal}\\mathbf{y}\\)). In addition, the model matrix for specification 2 is about 22GB, which might further slow down the computer due to a lack of physical memory or even crash the R session altogether.
In order to set a point of reference, we first estimate specification (1\) with standard OLS.
```
# fit specification (1)
runtime <- system.time(fit1 <- lm(data = cm, formula = model1))
coeftest(fit1)[2:4,]
```
```
## Estimate Std. Error t value
## pctsumyessamestate 0.11861 0.001085 109.275
## pctsumyessameparty 0.92640 0.001397 662.910
## factor(congress)101 -0.01458 0.006429 -2.269
## Pr(>|t|)
## pctsumyessamestate 0.0000
## pctsumyessameparty 0.0000
## factor(congress)101 0.0233
```
```
# median amount of time needed for estimation
runtime[3]
```
```
## elapsed
## 6.486
```
As expected, this takes quite some time to compute. However, there is an alternative approach to estimating such models that substantially reduces the computational burden by “sweeping out the fixed effects dummies”. In the simple case of only one fixed effect variable (e.g., only individual fixed effects), the trick is called “within transformation” or “demeaning” and is quite simple to implement. For each of the categories in the fixed effect variable, compute the mean of the covariate and subtract the mean from the covariate’s value.
```
# illustration of within transformation for the senator fixed effects
cm_within <-
with(cm, data.table(yes = yes - ave(yes, id),
pctsumyessameschool = pctsumyessameschool -
ave(pctsumyessameschool, id),
pctsumyessamestate = pctsumyessamestate -
ave(pctsumyessamestate, id),
pctsumyessameparty = pctsumyessameparty -
ave(pctsumyessameparty, id)
))
# comparison of dummy fixed effects estimator and within estimator
dummy_time <- system.time(fit_dummy <-
lm(yes ~ pctsumyessameschool +
pctsumyessamestate +
pctsumyessameparty +
factor(id) -1, data = cm
))
within_time <- system.time(fit_within <-
lm(yes ~ pctsumyessameschool +
pctsumyessamestate +
pctsumyessameparty -1,
data = cm_within))
# computation time comparison
as.numeric(within_time[3])/as.numeric(dummy_time[3])
```
```
## [1] 0.009609
```
```
# comparison of estimates
coeftest(fit_dummy)[1:3,]
```
```
## Estimate Std. Error t value
## pctsumyessameschool 0.04424 0.001352 32.73
## pctsumyessamestate 0.11864 0.001085 109.30
## pctsumyessameparty 0.92615 0.001397 662.93
## Pr(>|t|)
## pctsumyessameschool 1.205e-234
## pctsumyessamestate 0.000e+00
## pctsumyessameparty 0.000e+00
```
```
coeftest(fit_within)
```
```
##
## t test of coefficients:
##
## Estimate Std. Error t value
## pctsumyessameschool 0.04424 0.00135 32.7
## pctsumyessamestate 0.11864 0.00109 109.3
## pctsumyessameparty 0.92615 0.00140 663.0
## Pr(>|t|)
## pctsumyessameschool <2e-16 ***
## pctsumyessamestate <2e-16 ***
## pctsumyessameparty <2e-16 ***
## ---
## Signif. codes:
## 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Unfortunately, we cannot simply apply the same procedure in a specification with several fixed effects variables. However, Gaure ([2013b](#ref-GAURE20138)) provides a generalization of the linear within\-estimator to several fixed effects variables. This method is implemented in the `lfe` package ([Gaure 2013a](#ref-gaure_2013)). With this package, we can easily estimate both fixed\-effect specifications (as well as the corresponding cluster\-robust standard errors) in order to replicate the original results by Cohen and Malloy ([2014](#ref-cohen_malloy)).
```
library(lfe)
# model and clustered SE specifications
model1 <- yes ~ pctsumyessameschool +
pctsumyessamestate +
pctsumyessameparty |congress+id|0|id
model2 <- yes ~ pctsumyessameschool +
pctsumyessamestate +
pctsumyessameparty |congress_session_votenumber+id|0|id
# estimation
fit1 <- felm(model1, data=cm)
fit2 <- felm(model2, data=cm)
```
Finally, we can display the regression table.
```
stargazer::stargazer(fit1,fit2,
type="text",
dep.var.labels = "Vote (yes/no)",
covariate.labels = c("School Connected Votes",
"State Votes",
"Party Votes"),
keep.stat = c("adj.rsq", "n"))
```
```
##
## ===================================================
## Dependent variable:
## ----------------------------
## Vote (yes/no)
## (1) (2)
## ---------------------------------------------------
## School Connected Votes 0.045*** 0.052***
## (0.016) (0.016)
##
## State Votes 0.119*** 0.122***
## (0.013) (0.012)
##
## Party Votes 0.926*** 0.945***
## (0.022) (0.024)
##
## ---------------------------------------------------
## Observations 425,653 425,653
## Adjusted R2 0.641 0.641
## ===================================================
## Note: *p<0.1; **p<0.05; ***p<0.01
```
12\.2 Case study: Loops, memory, and vectorization
--------------------------------------------------
We first read the `economics` dataset into R and extend it by duplicating its rows to get a slightly larger dataset (this step can easily be adapted to create a very large dataset).
```
# read dataset into R
economics <- read.csv("data/economics.csv")
# have a look at the data
head(economics, 2)
```
```
## date pce pop psavert uempmed unemploy
## 1 1967-07-01 507.4 198712 12.5 4.5 2944
## 2 1967-08-01 510.5 198911 12.5 4.7 2945
```
```
# create a 'large' dataset out of this
for (i in 1:3) {
economics <- rbind(economics, economics)
}
dim(economics)
```
```
## [1] 4592 6
```
The goal of this code example is to compute real personal consumption expenditures, assuming that `pce` in the `economics` dataset provides nominal personal consumption expenditures. Thus, we divide each value in the vector `pce` by a deflator `1.05`.
### 12\.2\.1 Naïve approach (ignorant of R)
The first approach we take is based on a simple `for` loop. In each iteration one element in `pce` is divided by the `deflator`, and the resulting value is stored as a new element in the vector `pce_real`.
```
# Naïve approach (ignorant of R)
deflator <- 1.05 # define deflator
# iterate through each observation
pce_real <- c()
n_obs <- length(economics$pce)
for (i in 1:n_obs) {
pce_real <- c(pce_real, economics$pce[i]/deflator)
}
# look at the result
head(pce_real, 2)
```
```
## [1] 483.2 486.2
```
How long does it take?
```
# Naïve approach (ignorant of R)
deflator <- 1.05 # define deflator
# iterate through each observation
pce_real <- list()
n_obs <- length(economics$pce)
time_elapsed <-
system.time(
for (i in 1:n_obs) {
pce_real <- c(pce_real, economics$pce[i]/deflator)
})
time_elapsed
```
```
## user system elapsed
## 0.108 0.000 0.110
```
Assuming a linear time algorithm (\\(O(n)\\)), we need that much time for one additional row of data:
```
time_per_row <- time_elapsed[3]/n_obs
time_per_row
```
```
## elapsed
## 2.395e-05
```
If we are dealing with Big Data, say 100 million rows, that is
```
# in seconds
(time_per_row*100^4)
```
```
## elapsed
## 2395
```
```
# in minutes
(time_per_row*100^4)/60
```
```
## elapsed
## 39.92
```
```
# in hours
(time_per_row*100^4)/60^2
```
```
## elapsed
## 0.6654
```
Can we improve this?
### 12\.2\.2 Improvement 1: Pre\-allocation of memory
In the naïve approach taken above, each iteration of the loop causes R to re\-allocate memory because the number of elements in the vector `pce_real` changes. In simple terms, this means that R needs to execute more steps in each iteration. We can improve on this with a simple trick: initialize the vector at its final length to begin with (filled with `NA` values).
```
# Improve memory allocation (still somewhat ignorant of R)
deflator <- 1.05 # define deflator
n_obs <- length(economics$pce)
# allocate memory beforehand
# Initialize the vector to the right size
pce_real <- rep(NA, n_obs)
# iterate through each observation
time_elapsed <-
system.time(
for (i in 1:n_obs) {
pce_real[i] <- economics$pce[i]/deflator
})
```
Let’s see if this helped to make the code faster.
```
time_per_row <- time_elapsed[3]/n_obs
time_per_row
```
```
## elapsed
## 2.178e-06
```
Again, we can extrapolate (approximately) the computation time, assuming the dataset had 100 million rows.
```
# in seconds
(time_per_row*100^4)
```
```
## elapsed
## 217.8
```
```
# in minutes
(time_per_row*100^4)/60
```
```
## elapsed
## 3.63
```
```
# in hours
(time_per_row*100^4)/60^2
```
```
## elapsed
## 0.06049
```
This looks much better, but we can do even better.
### 12\.2\.3 Improvement 2: Exploit vectorization
In this approach, we exploit the fact that in R, ‘everything is a vector’ and that many of the basic R functions (such as math operators) are *vectorized*. In simple terms, this means that a vectorized operation is implemented in such a way that it can take advantage of the fact that all of a vector’s elements are of the same data type. That is, R only has to figure out once how to apply a given function to a vector element in order to apply it to all elements of the vector. In a simple loop, R has to go through the same ‘preparatory’ steps again and again in each iteration; this is time\-intensive.
In this example, we specifically exploit that the division operator `/` is actually a vectorized function. Thus, the division by our `deflator` is applied to each element of `economics$pce`.
```
# Do it 'the R way'
deflator <- 1.05 # define deflator
# Exploit R's vectorization
time_elapsed <-
system.time(
pce_real <- economics$pce/deflator
)
# same result
head(pce_real, 2)
```
```
## [1] 483.2 486.2
```
Now this is much faster. In fact, `system.time()` is not precise enough to capture the time elapsed. In order to measure the improvement, we use `microbenchmark::microbenchmark()` to measure the elapsed time in microseconds (millionths of a second).
```
library(microbenchmark)
# measure elapsed time in microseconds (avg.)
time_elapsed <-
summary(microbenchmark(pce_real <- economics$pce/deflator))$mean
# per row (in sec)
time_per_row <- (time_elapsed/n_obs)/10^6
```
Now we get a more precise picture regarding the improvement due to vectorization:
```
# in seconds
(time_per_row*100^4)
```
```
## [1] 0.1868
```
```
# in minutes
(time_per_row*100^4)/60
```
```
## [1] 0.003113
```
```
# in hours
(time_per_row*100^4)/60^2
```
```
## [1] 5.189e-05
```
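To wrap up this case study, the three approaches can also be compared head\-to\-head in a single benchmark. The following sketch (timings will of course differ across machines) runs each variant a few times via `microbenchmark()`; it assumes that the `economics` data frame from above is still in memory.
```
# head-to-head comparison of the three approaches (sketch)
library(microbenchmark)
deflator <- 1.05
n_obs <- length(economics$pce)
microbenchmark(
  naive = {
    pce_real <- c()
    for (i in 1:n_obs) pce_real <- c(pce_real, economics$pce[i]/deflator)
  },
  preallocated = {
    pce_real <- rep(NA, n_obs)
    for (i in 1:n_obs) pce_real[i] <- economics$pce[i]/deflator
  },
  vectorized = {
    pce_real <- economics$pce/deflator
  },
  times = 5
)
```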
12\.3 Case study: Bootstrapping and parallel processing
-------------------------------------------------------
In this example, we estimate a simple regression model that aims to assess racial discrimination in the context of police stops.[66](#fn66) The example is based on the ‘Minneapolis Police Department 2017 Stop Dataset’, containing data on nearly all stops made by the Minneapolis Police Department for the year 2017\.
We start by importing the data into R.
```
url <-
"https://vincentarelbundock.github.io/Rdatasets/csv/carData/MplsStops.csv"
stopdata <- data.table::fread(url)
```
We specify a simple linear probability model that aims to test whether a person identified as ‘white’ is less likely to have their vehicle searched when stopped by the police. In order to take into account level differences between different police precincts, we add precinct indicators to the regression specification.
First, let’s remove observations with missing entries (`NA`) and code our main explanatory variable and the dependent variable.
```
# remove incomplete obs
stopdata <- na.omit(stopdata)
# code dependent var
stopdata$vsearch <- 0
stopdata$vsearch[stopdata$vehicleSearch=="YES"] <- 1
# code explanatory var
stopdata$white <- 0
stopdata$white[stopdata$race=="White"] <- 1
```
We specify our baseline model as follows.
```
model <- vsearch ~ white + factor(policePrecinct)
```
and estimate the linear probability model via OLS (the `lm` function).
```
fit <- lm(model, stopdata)
summary(fit)
```
```
##
## Call:
## lm(formula = model, data = stopdata)
##
## Residuals:
## Min 1Q Median 3Q Max
## -0.1394 -0.0633 -0.0547 -0.0423 0.9773
##
## Coefficients:
## Estimate Std. Error t value
## (Intercept) 0.05473 0.00515 10.62
## white -0.01955 0.00446 -4.38
## factor(policePrecinct)2 0.00856 0.00676 1.27
## factor(policePrecinct)3 0.00341 0.00648 0.53
## factor(policePrecinct)4 0.08464 0.00623 13.58
## factor(policePrecinct)5 -0.01246 0.00637 -1.96
## Pr(>|t|)
## (Intercept) < 2e-16 ***
## white 1.2e-05 ***
## factor(policePrecinct)2 0.21
## factor(policePrecinct)3 0.60
## factor(policePrecinct)4 < 2e-16 ***
## factor(policePrecinct)5 0.05 .
## ---
## Signif. codes:
## 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.254 on 19078 degrees of freedom
## Multiple R-squared: 0.025, Adjusted R-squared: 0.0248
## F-statistic: 97.9 on 5 and 19078 DF, p-value: <2e-16
```
A potential problem with this approach (and there might be many more in this simple example) is that observations stemming from the same police precinct might be correlated over time. If that is the case, we likely underestimate the coefficients’ standard errors. There is a standard approach to computing estimates for so\-called *cluster\-robust* standard errors, which would take the problem of correlation over time within clusters into consideration (and deliver a more conservative estimate of the SEs). However, this approach only works well if the number of clusters in the data is roughly 50 or more. Here we only have five.
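For comparison, conventional cluster\-robust standard errors for this model could be computed along the following lines. This is only a sketch based on the `sandwich` and `lmtest` packages, and with merely five clusters the resulting estimates should be taken with a grain of salt.
```
# conventional cluster-robust SEs (sketch; only five clusters!)
library(sandwich)
library(lmtest)
vcov_cluster <- vcovCL(fit, cluster = stopdata$policePrecinct)
coeftest(fit, vcov. = vcov_cluster)[1:2, ]
```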
The alternative approach is to compute bootstrapped clustered standard errors. That is, we apply the [bootstrap resampling procedure](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)) at the cluster level. Specifically, we draw \\(B\\) samples (with replacement), estimate and record the coefficient vector for each bootstrap\-sample, and then estimate \\(SE\_{boot}\\) based on the standard deviation of all respective estimated coefficient values.
```
# load packages
library(data.table)
# set the 'seed' for random numbers (makes the example reproducible)
set.seed(2)
# set number of bootstrap iterations
B <- 10
# get selection of precincts
precincts <- unique(stopdata$policePrecinct)
# container for coefficients
boot_coefs <- matrix(NA, nrow = B, ncol = 2)
# draw bootstrap samples, estimate model for each sample
for (i in 1:B) {
# draw sample of precincts (cluster level)
precincts_i <- base::sample(precincts, size = 5, replace = TRUE)
# get observations
bs_i <-
lapply(precincts_i, function(x){
stopdata[stopdata$policePrecinct==x,]
} )
bs_i <- rbindlist(bs_i)
# estimate model and record coefficients
boot_coefs[i,] <- coef(lm(model, bs_i))[1:2] # ignore FE-coefficients
}
```
Finally, let’s compute \\(SE\_{boot}\\).
```
se_boot <- apply(boot_coefs,
MARGIN = 2,
FUN = sd)
se_boot
```
```
## [1] 0.004043 0.004690
```
Note that even with a very small \\(B\\), computing \\(SE\_{boot}\\) takes some time. When setting \\(B\\) to over 500, computation time will be substantial. Also note that running this code hardly uses up more memory than the very simple approach without bootstrapping (after all, in each bootstrap iteration the dataset used to estimate the model is approximately the same size as the original dataset). There is little we can do to improve the script’s performance regarding memory. However, we can tell R how to allocate CPU resources more efficiently to handle that many regression estimates.
In particular, we can make use of the fact that most modern computing environments (such as a laptop) have CPUs with several *cores*. We can exploit this fact by instructing the computer to run the computations *in parallel* (simultaneously computing on several cores). The following code is a parallel implementation of our bootstrap procedure that does exactly that.
```
# load packages for parallel processing
library(doSNOW)
# get the number of cores available
ncores <- parallel::detectCores()
# set cores for parallel processing
ctemp <- makeCluster(ncores)
registerDoSNOW(ctemp)
# set number of bootstrap iterations
B <- 10
# get selection of precincts
precincts <- unique(stopdata$policePrecinct)
# container for coefficients
boot_coefs <- matrix(NA, nrow = B, ncol = 2)
# bootstrapping in parallel
boot_coefs <-
foreach(i = 1:B, .combine = rbind, .packages="data.table") %dopar% {
# draw sample of precincts (cluster level)
precincts_i <- base::sample(precincts, size = 5, replace = TRUE)
# get observations
bs_i <- lapply(precincts_i, function(x) {
stopdata[stopdata$policePrecinct==x,]
})
bs_i <- rbindlist(bs_i)
# estimate model and record coefficients
coef(lm(model, bs_i))[1:2] # ignore FE-coefficients
}
# be a good citizen and stop the snow clusters
stopCluster(cl = ctemp)
```
As a last step, we again compute \\(SE\_{boot}\\).
```
se_boot <- apply(boot_coefs,
MARGIN = 2,
FUN = sd)
se_boot
```
```
## (Intercept) white
## 0.002446 0.003017
```
### 12\.3\.1 Parallelization with an EC2 instance
This short tutorial illustrates how to scale up the computation of clustered standard errors shown above by running it on an AWS EC2 instance. Note that there are a few things that we need to keep in mind to make the script run on an AWS EC2 instance in RStudio Server. First, our EC2 instance is a Linux machine. When running R on a Linux machine, there is an additional step to install R packages (at least for most of the packages): R packages need to be compiled before they can be installed. The command to install packages is exactly the same (`install.packages()`), and normally you only notice a slight difference in the output shown in the R console during installation (and the installation process takes a little longer than you are used to). Apart from that, using R via RStudio Server in the cloud looks/feels very similar if not identical to when using R/RStudio locally. For this step of the case study, first follow the instructions of how to set up an AWS EC2 instance with R/RStudio Server in Chapter 7\. Then, open a browser window, log in to RStudio Server on the EC2 instance, and copy and paste the code below to a new R\-file on the EC2 instance (note that you might have to install the `data.table` and `doSNOW` packages before running the code).
When executing the code below line\-by\-line, you will notice that essentially all parts of the script work exactly as on your local machine. This is one of the great advantages of running R/RStudio Server in the cloud. You can implement your entire data analysis locally (based on a small sample), test it locally, and then move it to the cloud and run it at a larger scale in exactly the same way (even with the same Graphical User Interface (GUI)).
```
# install packages
install.packages("data.table")
install.packages("doSNOW")
# load packages
library(data.table)
# fetch the data
url <- "https://vincentarelbundock.github.io/Rdatasets/csv/carData/MplsStops.csv"
stopdata <- read.csv(url)
# remove incomplete obs
stopdata <- na.omit(stopdata)
# code dependent var
stopdata$vsearch <- 0
stopdata$vsearch[stopdata$vehicleSearch == "YES"] <- 1
# code explanatory var
stopdata$white <- 0
stopdata$white[stopdata$race == "White"] <- 1
# model fit
model <- vsearch ~ white + factor(policePrecinct)
fit <- lm(model, stopdata)
summary(fit)
# bootstrapping: normal approach
# set the 'seed' for random numbers (makes the example reproducible)
set.seed(2)
# set number of bootstrap iterations
B <- 50
# get selection of precincts
precincts <- unique(stopdata$policePrecinct)
# container for coefficients
boot_coefs <- matrix(NA, nrow = B, ncol = 2)
# draw bootstrap samples, estimate model for each sample
for (i in 1:B) {
# draw sample of precincts (cluster level)
precincts_i <- base::sample(precincts, size = 5, replace = TRUE)
# get observations
bs_i <- lapply(precincts_i, function(x) {
stopdata[stopdata$policePrecinct == x, ]
})
bs_i <- rbindlist(bs_i)
# estimate model and record coefficients
boot_coefs[i, ] <- coef(lm(model, bs_i))[1:2] # ignore FE-coefficients
}
se_boot <- apply(boot_coefs, MARGIN = 2, FUN = sd)
se_boot
```
So far, we have only demonstrated that the simple implementation (non\-parallel) works both locally and in the cloud. However, the real purpose of using an EC2 instance in this example is to make use of the fact that we can scale up our instance to have more CPU cores available for the parallel implementation of our bootstrap procedure. Recall that running the script below on our local machine will employ all cores available to us and compute the bootstrap resampling in parallel on all these cores. Exactly the same thing happens when running the code below on our simple `t2.micro` instance. However, this type of EC2 instance only has one core. You can check this when running the following line of code in RStudio Server (assuming the `doSNOW` package is installed and loaded): `parallel::detectCores()`.
When running the entire parallel implementation below, you will thus notice that it won’t compute the bootstrap SE any faster than with the non\-parallel version above. However, by simply initiating another EC2 type with more cores, we can distribute the workload across many CPU cores, using exactly the same R script.
```
# bootstrapping: parallel approach
# install.packages(c("doSNOW", "parallel"))
# load packages for parallel processing
library(doSNOW)
# set cores for parallel processing
ncores <- parallel::detectCores()
ctemp <- makeCluster(ncores)
registerDoSNOW(ctemp)
# set number of bootstrap iterations
B <- 50
# get selection of precincts
precincts <- unique(stopdata$policePrecinct)
# container for coefficients
boot_coefs <- matrix(NA, nrow = B, ncol = 2)
# bootstrapping in parallel
boot_coefs <-
foreach(i = 1:B, .combine = rbind, .packages="data.table") %dopar% {
# draw sample of precincts (cluster level)
precincts_i <- base::sample(precincts, size = 5, replace = TRUE)
# get observations
    bs_i <- lapply(precincts_i, function(x){
      stopdata[stopdata$policePrecinct==x,]
    })
bs_i <- rbindlist(bs_i)
# estimate model and record coefficients
coef(lm(model, bs_i))[1:2] # ignore FE-coefficients
}
# be a good citizen and stop the snow clusters
stopCluster(cl = ctemp)
# compute the bootstrapped standard errors
se_boot <- apply(boot_coefs,
MARGIN = 2,
FUN = sd)
```
Chapter 13 Econometrics with GPUs
=================================
GPUs have been used for a while in computational economics (see Aldrich ([2014](#ref-aldrich_2014)) for an overview of early applications in economics). However, until recently most of the work building on GPUs in economics has focused on solving economic models numerically (see, e.g., Aldrich et al. ([2011](#ref-aldrich_etal2011))) and more broadly on Monte Carlo simulation. In this chapter, we first look at very basic GPU computation with R before turning to what is nowadays the most common application of GPUs in applied econometrics: machine learning with neural networks.
13\.1 OLS on GPUs
-----------------
In a first simple tutorial, we have a look at how GPUs can be used to speed up basic econometric functions, such as the implementation of the OLS estimator. We build on the `gpuR` package introduced in Chapter 5\. To keep the example code simple, we follow the same basic set\-up to implement and test our own OLS estimator function as in Chapter 3\. That is, we first generate a sample based on (pseudo\-)random numbers: we define the sample size parameters `n` (the number of observations in our pseudo\-sample) and `p` (the number of variables describing each of these observations) and then initialize the dataset `X`.
```
set.seed(1)
# set parameter values
n <- 100000
p <- 4
# generate a design matrix (~ our 'dataset')
# with p variables and n observations
X <- matrix(rnorm(n*p, mean = 10), ncol = p)
# add column for intercept
#X <- cbind(rep(1, n), X)
```
Following exactly the same code as in Chapter 3, we can now define what the real linear model that we have in mind looks like and compute the output `y` of this model, given the input `X`.
```
# MC model
y <- 1.5*X[,1] + 4*X[,2] - 3.5*X[,3] + 0.5*X[,4] + rnorm(n)
```
Now we re\-implement our `beta_ols` function from Chapter 3 such that the OLS estimation is run on our local GPU. Recall that when computing on the GPU, we have the choice between keeping the data objects that go into the computation in RAM, or we can transfer the corresponding objects to GPU memory (which will further speed up the GPU computation). In the implementation of our `beta_ols_gpu`, I have added a parameter that allows switching between these two approaches. While setting `gpu_memory=TRUE` is likely faster, it might fail due to a lack of GPU memory (in all common desktop and laptop computers, RAM will be substantially larger than the GPU’s own memory). Hence, `gpu_memory` is set to `FALSE` by default.
```
beta_ols_gpu <-
function(X, y, gpu_memory=FALSE) {
require(gpuR)
if (!gpu_memory){
# point GPU to matrix (matrix stored in non-GPU memory)
vclX <- vclMatrix(X, type = "float")
vcly <- vclVector(y, type = "float")
# compute cross products and inverse
XXi <- solve(crossprod(vclX,vclX))
Xy <- crossprod(vclX, vcly)
} else {
# point GPU to matrix (matrix stored in non-GPU memory)
gpuX <- gpuMatrix(X, type = "float")
gpuy <- gpuVector(y, type = "float")
# compute cross products and inverse
XXi <- solve(crossprod(gpuX,gpuX))
Xy <- t(gpuX) %*% gpuy
}
beta_hat <- as.vector(XXi %*% Xy)
return(beta_hat)
}
```
Now we can verify whether the implemented GPU\-run OLS estimator works as expected.
```
beta_ols_gpu(X,y)
```
```
## [1] 1.5037 3.9997 -3.5036 0.5003
```
```
beta_ols_gpu(X,y, gpu_memory = TRUE)
```
```
## [1] 1.5033 3.9991 -3.5029 0.5005
```
Note how the coefficient estimates are very close to the true values. We can rest assured that our implementation of a GPU\-based OLS estimator works fairly well. Also note how simple it is to implement matrix\-based operations on the GPU through the `gpuR` package.
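As an additional sanity check, the same estimates can be computed on the CPU with base R’s matrix operations. The helper below is a sketch of such a CPU\-based estimator (essentially what the `beta_ols()` function from Chapter 3 does, but not necessarily its exact implementation); since the GPU code uses single\-precision floats, the two sets of estimates will agree only up to floating\-point precision.
```
# CPU-based OLS estimator for comparison (sketch)
beta_ols_cpu <- function(X, y) {
  as.vector(solve(crossprod(X)) %*% crossprod(X, y))
}
beta_ols_cpu(X, y)
```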
13\.2 A word of caution
-----------------------
From just comparing the number of threads of a modern CPU with the number of threads of a modern GPU, one might get the impression that parallelizable tasks should always be implemented for GPU computing. However, whether one approach or the other is faster can depend a lot on the overall task and the data at hand. Moreover, the parallel implementation of tasks can be done more or less well on either system. Really efficient parallel implementation of tasks can take a lot of coding time (particularly when done for GPUs).[67](#fn67)
As it turns out, the GPU OLS implementation above is actually a good example of a potential pitfall. While, as demonstrated in Chapter 4, matrix operations per se are likely much faster on GPUs than CPUs, the simple `beta_ols_gpu()` function implemented above involves more than the simple matrix operations. The model matrix as well as the vector of the dependent variable first had to be prepared for these operations (either a pointer for the GPU to the object in RAM had to be created or the objects had to be transferred to GPU memory). Finally, the computed values need to be transferred back to a normal R\-object (at least if we want to make the output consistent with our simple `beta_ols()` implementation from Chapter 3\). All of these steps create an additional overhead in terms of computing time.[68](#fn68) Depending on the problem at hand, this overhead resulting from preparatory steps before running the actual computations on the GPU might be dwarfed by the efficiency gain if the computing task is much more demanding than what is involved in OLS. The section on TensorFlow/Keras below points to exactly such a setting, where GPUs are typically much faster than CPUs.
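To make the overhead argument concrete, one can benchmark the GPU\-based estimator against the CPU\-based helper sketched at the end of the previous section. The code below is only an illustration and assumes a working `gpuR` installation; for a problem of this modest size, the CPU version is likely to win, precisely because of the preparation and transfer overhead.
```
# benchmark GPU vs. CPU OLS for this (small) problem (sketch)
library(microbenchmark)
microbenchmark(
  cpu = beta_ols_cpu(X, y),
  gpu = beta_ols_gpu(X, y),
  times = 10
)
```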
13\.3 Higher\-level interfaces for basic econometrics with GPUs
---------------------------------------------------------------
The [CRAN Task View on High\-Performance and Parallel Computing with R](https://cran.r-project.org/web/views/HighPerformanceComputing.html) lists several projects that provide easy\-to\-use interfaces to canned implementations of regression and machine learning algorithms running on GPUs. For example, the `tfestimators` package provides an R interface to use the TensorFlow Estimators framework by Cheng et al. ([2017](#ref-cheng_etal2017)). The package provides various canned estimators to be run on GPUs (through TensorFlow).[69](#fn69) Note, however, that this framework is only compatible with TensorFlow version 1\. As we will build on the latest version of TensorFlow (version 2\) in the following example (and as most applications now build on version 2\), we will not go into details of how to work with `tfestimators`. However, there are excellent vignettes provided with the package that help you get started.[70](#fn70)
13\.4 TensorFlow/Keras example: Predict housing prices
------------------------------------------------------
The most common application of GPUs in modern econometrics is machine learning, in particular deep learning (a type of machine learning based on artificial neural networks). Training deep learning models can be very computationally intensive and to a great extent depends on tensor (matrix) multiplications. This is also an area where you might come across highly parallelized computing based on GPUs without even noticing it, as the now commonly used software to build and train deep neural nets ([TensorFlow](https://www.tensorflow.org/); Abadi et al. ([2015](#ref-tensorflow2015-whitepaper)), and the high\-level [Keras](https://keras.io/) API; Chollet et al. ([2015](#ref-chollet2015keras))) can easily be run on a CPU or GPU without any further configuration/preparation (apart from the initial installation of these programs). In this chapter, we look at a simple example of using GPUs with Keras in the context of predictive econometrics.
In this example we train a simple sequential model with two hidden layers to predict the median value of owner\-occupied homes (in USD 1,000\) in the Boston area (data is from the 1970s). The original data and a detailed description can be found here: [https://www.cs.toronto.edu/\~delve/data/boston/bostonDetail.html](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html). The example closely follows [this Keras tutorial](https://keras.rstudio.com/articles/tutorial_basic_regression.html#the-boston-housing-prices-dataset) published by RStudio. See [RStudio’s Keras installation guide](https://keras.rstudio.com/index.html) for how to install Keras (and TensorFlow) and the corresponding R package `keras` ([Allaire and Chollet 2022](#ref-keras)).[71](#fn71) While the purpose of the example here is to demonstrate a typical (but very simple!) use case of GPUs in machine learning, the same code should also run on a normal machine (without using GPUs) with a default installation of Keras.
Apart from `keras`, we load packages to prepare the data and visualize the output. Via `dataset_boston_housing()`, we load the dataset (shipped with the Keras installation) in the format preferred by the `keras` library.
```
# load packages
library(keras)
library(tibble)
library(ggplot2)
library(tfdatasets)
# load data
boston_housing <- dataset_boston_housing()
str(boston_housing)
```
```
## List of 2
## $ train:List of 2
## ..$ x: num [1:404, 1:13] 1.2325 0.0218 4.8982 0.0396 3.6931 ...
## ..$ y: num [1:404(1d)] 15.2 42.3 50 21.1 17.7 18.5 11.3 15.6 15.6 14.4 ...
## $ test :List of 2
## ..$ x: num [1:102, 1:13] 18.0846 0.1233 0.055 1.2735 0.0715 ...
## ..$ y: num [1:102(1d)] 7.2 18.8 19 27 22.2 24.5 31.2 22.9 20.5 23.2 ...
```
### 13\.4\.1 Data preparation
In a first step, we split the data into a training set and a test set. The latter is used to monitor the out\-of\-sample performance of the model fit. Testing the validity of an estimated model by looking at how it performs out\-of\-sample is of particular relevance when working with (deep) neural networks, as they can easily lead to over\-fitting. Validity checks based on the test sample are, therefore, often an integral part of modeling with TensorFlow/Keras.
```
# assign training and test data/labels
c(train_data, train_labels) %<-% boston_housing$train
c(test_data, test_labels) %<-% boston_housing$test
```
In order to better understand and interpret the dataset, we add the original variable names and convert it to a `tibble`.
```
library(dplyr)
column_names <- c('CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE',
'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT')
train_df <- train_data %>%
as_tibble(.name_repair = "minimal") %>%
setNames(column_names) %>%
mutate(label = train_labels)
test_df <- test_data %>%
as_tibble(.name_repair = "minimal") %>%
setNames(column_names) %>%
mutate(label = test_labels)
```
Next, we have a close look at the data. Note the usage of the term ‘label’ for what is usually called the ‘dependent variable’ in econometrics.[72](#fn72) As the aim of the exercise is to predict median prices of homes, the output of the model will be a continuous value (‘labels’).
```
# check training data dimensions and content
dim(train_df)
```
```
## [1] 404 14
```
```
head(train_df)
```
```
## # A tibble: 6 × 14
## CRIM ZN INDUS CHAS NOX RM AGE DIS
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1.23 0 8.14 0 0.538 6.14 91.7 3.98
## 2 0.0218 82.5 2.03 0 0.415 7.61 15.7 6.27
## 3 4.90 0 18.1 0 0.631 4.97 100 1.33
## 4 0.0396 0 5.19 0 0.515 6.04 34.5 5.99
## 5 3.69 0 18.1 0 0.713 6.38 88.4 2.57
## 6 0.284 0 7.38 0 0.493 5.71 74.3 4.72
## # ℹ 6 more variables: RAD <dbl>, TAX <dbl>,
## # PTRATIO <dbl>, B <dbl>, LSTAT <dbl>,
## # label <dbl[1d]>
```
As the dataset contains variables ranging from per capita crime rate to indicators for highway access, the variables are obviously measured in different units and hence displayed on different scales. This is not a problem per se for the fitting procedure. However, fitting is more efficient when all features (variables) are normalized.
```
spec <- feature_spec(train_df, label ~ . ) %>%
step_numeric_column(all_numeric(), normalizer_fn = scaler_standard()) %>%
fit()
```
### 13\.4\.2 Model specification
We specify the model as a linear stack of layers, the input (all 13 explanatory variables), two densely connected hidden layers (each with a 64\-dimensional output space), and finally the one\-dimensional output layer (the ‘dependent variable’).
```
# Create the model
# model specification
input <- layer_input_from_dataset(train_df %>% select(-label))
output <- input %>%
layer_dense_features(dense_features(spec)) %>%
layer_dense(units = 64, activation = "relu") %>%
layer_dense(units = 64, activation = "relu") %>%
layer_dense(units = 1)
model <- keras_model(input, output)
```
In order to fit the model, we first have to compile it (configure it for training). At this step we set the configuration parameters that will guide the training/optimization procedure. We use the mean squared error loss function (`mse`), which is typically used for regressions, and we choose the [RMSProp](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) optimizer to find the minimum loss.
```
# compile the model
model %>%
compile(
loss = "mse",
optimizer = optimizer_rmsprop(),
metrics = list("mean_absolute_error")
)
```
Now we can get a summary of the model we are about to fit to the data.
```
# get a summary of the model
model
```
### 13\.4\.3 Training and prediction
Given the relatively simple model and small dataset, we set the maximum number of epochs to 500\.
```
# Set max. number of epochs
epochs <- 500
```
Finally, we fit the model while preserving the training history, and visualize the training progress.
```
# Fit the model and store training stats
history <- model %>% fit(
x = train_df %>% select(-label),
y = train_df$label,
epochs = epochs,
validation_split = 0.2,
verbose = 0
)
plot(history)
```
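To complete the ‘prediction’ part of this subsection, the fitted model can be evaluated on the held\-out test set and used to predict prices for unseen observations. The following sketch uses the `evaluate()` and `predict()` generics of the `keras` package (following the same RStudio tutorial as above).
```
# evaluate out-of-sample performance on the test set
model %>% evaluate(test_df %>% select(-label),
                   test_df$label,
                   verbose = 0)
# predict median home values for the test observations
test_predictions <- model %>% predict(test_df %>% select(-label))
head(test_predictions)
```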
13\.5 Wrapping up
-----------------
* `gpuR` provides a straightforward interface for applied econometrics run on GPUs. While working with `gpuR`, be aware of the necessary computational overhead to run commands on the GPU via this interface. For example, implementing the OLS estimator with `gpuR` is a good exercise but does not really pay off in terms of performance.
* There are several ongoing projects in the R world to bring GPU computation closer to basic data analytics tasks, providing high\-level interfaces to work with GPUs (see the [CRAN Task View on High\-Performance and Parallel Computing with R](https://cran.r-project.org/web/views/HighPerformanceComputing.html) for some of those).
* A typical application of GPU computation in applied econometrics is the training of neural nets, particularly deep neural nets (deep learning). The `keras` and `tensorflow` packages provide excellent R interfaces to work with the deep learning libraries TensorFlow and Keras. Both of those libraries are implemented to directly work with GPUs; a quick way to check whether TensorFlow detects a GPU is sketched below.
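As a minimal check of whether TensorFlow actually sees a GPU on your machine, the following sketch can be run (assuming TensorFlow 2 and the `tensorflow` R package are installed); it returns an empty list if no GPU is detected.
```
# check whether TensorFlow detects a GPU (sketch)
library(tensorflow)
tf$config$list_physical_devices("GPU")
```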
13\.1 OLS on GPUs
-----------------
In a first simple tutorial, we have a look at how GPUs can be used to speed up basic econometric functions, such as the implementation of the OLS estimator. To this end, we will build on the `gpuR` package introduced in Chapter 5\. To keep the example code simple, we follow the same basic set\-up to implement and test our own OLS estimator function as in Chapter 3\. That is, we first generate a sample based on (pseudo\-)random numbers. To this end, we first define the sample size parameters `n` (the number of observations in our pseudo\-sample) and `p` (the number of variables describing each of these observations) and then initialize the dataset `X`.
```
set.seed(1)
# set parameter values
n <- 100000
p <- 4
# generate a design matrix (~ our 'dataset')
# with p variables and n observations
X <- matrix(rnorm(n*p, mean = 10), ncol = p)
# add column for intercept
#X <- cbind(rep(1, n), X)
```
Following exactly the same code as in Chapter 3, we can now define what the real linear model that we have in mind looks like and compute the output `y` of this model, given the input `X`.
```
# MC model
y <- 1.5*X[,1] + 4*X[,2] - 3.5*X[,3] + 0.5*X[,4] + rnorm(n)
```
Now we re\-implement our `beta_ols` function from Chapter 3 such that the OLS estimation is run on our local GPU. Recall that when computing on the GPU, we have the choice between keeping the data objects that go into the computation in RAM, or we can transfer the corresponding objects to GPU memory (which will further speed up the GPU computation). In the implementation of our `beta_ols_gpu`, I have added a parameter that allows switching between these two approaches. While setting `gpu_memory=TRUE` is likely faster, it might fail due to a lack of GPU memory (in all common desktop and laptop computers, RAM will be substantially larger than the GPU’s own memory). Hence, `gpu_memory` is set to `FALSE` by default.
```
beta_ols_gpu <-
function(X, y, gpu_memory=FALSE) {
require(gpuR)
if (!gpu_memory){
# point GPU to matrix (matrix stored in non-GPU memory)
vclX <- vclMatrix(X, type = "float")
vcly <- vclVector(y, type = "float")
# compute cross products and inverse
XXi <- solve(crossprod(vclX,vclX))
Xy <- crossprod(vclX, vcly)
} else {
# point GPU to matrix (matrix stored in non-GPU memory)
gpuX <- gpuMatrix(X, type = "float")
gpuy <- gpuVector(y, type = "float")
# compute cross products and inverse
XXi <- solve(crossprod(gpuX,gpuX))
Xy <- t(gpuX) %*% gpuy
}
beta_hat <- as.vector(XXi %*% Xy)
return(beta_hat)
}
```
Now we can verify whether the implemented GPU\-run OLS estimator works as expected.
```
beta_ols_gpu(X,y)
```
```
## [1] 1.5037 3.9997 -3.5036 0.5003
```
```
beta_ols_gpu(X,y, gpu_memory = TRUE)
```
```
## [1] 1.5033 3.9991 -3.5029 0.5005
```
Note how the coefficient estimates are very close to the true values. We can rest assured that our implementation of a GPU\-based OLS estimator works fairly well. Also note how simple the basic implementation of functions to compute matrix\-based operations on the GPU is through the `gpuR` package.
13\.2 A word of caution
-----------------------
From just comparing the number of threads of a modern CPU with the number of threads of a modern GPU, one might get the impression that parallelizable tasks should always be implemented for GPU computing. However, whether one approach or the other is faster can depend a lot on the overall task and the data at hand. Moreover, the parallel implementation of tasks can be done more or less well on either system. Really efficient parallel implementation of tasks can take a lot of coding time (particularly when done for GPUs).[67](#fn67)
As it turns out, the GPU OLS implementation above is actually a good example of a potential pitfall. While, as demonstrated in Chapter 4, matrix operations per se are likely much faster on GPUs than CPUs, the simple `beta_ols_gpu()` function implemented above involves more than the simple matrix operations. The model matrix as well as the vector of the dependent variable first had to be prepared for these operations (either a pointer for the GPU to the object in RAM had to be created or the objects had to be transferred to GPU memory). Finally, the computed values need to be transferred back to a normal R\-object (at least if we want to make the output consistent with our simple `beta_ols()` implementation from Chapter 3\). All of these steps create an additional overhead in terms of computing time.[68](#fn68) Depending on the problem at hand, this overhead resulting from preparatory steps before running the actual computations on the GPU might be dwarfed by the efficiency gain if the computing task is much more demanding then what is involved in OLS. The section on TensorFlow/Keras below points to exactly such a setting, where GPUs are typically much faster than CPUs.
13\.3 Higher\-level interfaces for basic econometrics with GPUs
---------------------------------------------------------------
The [CRAN Task View on High\-Performance and Parallel Computing with R](https://cran.r-project.org/web/views/HighPerformanceComputing.html) lists several projects that provide easy\-to\-use interfaces to canned implementations of regression and machine learning algorithms running on GPUs. For example, the `tfestimators` package provides an R interface to use the TensorFlow Estimators framework by Cheng et al. ([2017](#ref-cheng_etal2017)). The package provides various canned estimators to be run on GPUs (through TensorFlow).[69](#fn69) Note, however, that this framework is only compatible with TensorFlow version 1\. As we will build on the latest version of TensorFlow (version 2\) in the following example (and as most applications now build on version 2\), we will not go into details of how to work with `tfestimators`. However, there are excellent vignettes provided with the package that help you get started.[70](#fn70)
13\.4 TensorFlow/Keras example: Predict housing prices
------------------------------------------------------
The most common application of GPUs in modern econometrics is machine learning, in particular deep learning (a type of machine learning based on artificial neural networks). Training deep learning models can be very computationally intensive and to a great extent depends on tensor (matrix) multiplications. This is also an area where you might come across highly parallelized computing based on GPUs without even noticing it, as the now commonly used software to build and train deep neural nets ([TensorFlow](https://www.tensorflow.org/); Abadi et al. ([2015](#ref-tensorflow2015-whitepaper)), and the high\-level [Keras](https://keras.io/) API; Chollet et al. ([2015](#ref-chollet2015keras))) can easily be run on a CPU or GPU without any further configuration/preparation (apart from the initial installation of these programs). In this chapter, we look at a simple example of using GPUs with Keras in the context of predictive econometrics.
In this example we train a simple sequential model with two hidden layers to predict the median value of owner\-occupied homes (in USD 1,000\) in the Boston area (data is from the 1970s). The original data and a detailed description can be found here: [https://www.cs.toronto.edu/\~delve/data/boston/bostonDetail.html](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html). The example closely follows [this Keras tutorial](https://keras.rstudio.com/articles/tutorial_basic_regression.html#the-boston-housing-prices-dataset) published by RStudio. See [RStudio’s Keras installation guide](https://keras.rstudio.com/index.html) for how to install Keras (and TensorFlow) and the corresponding R package `keras` ([Allaire and Chollet 2022](#ref-keras)).[71](#fn71) While the purpose of the example here is to demonstrate a typical (but very simple!) use case of GPUs in machine learning, the same code should also run on a normal machine (without using GPUs) with a default installation of Keras.
Apart from `keras`, we load packages to prepare the data and visualize the output. Via `dataset_boston_housing()`, we load the dataset (shipped with the Keras installation) in the format preferred by the `keras` library.
```
# load packages
library(keras)
library(tibble)
library(ggplot2)
library(tfdatasets)
# load data
boston_housing <- dataset_boston_housing()
str(boston_housing)
```
```
## List of 2
## $ train:List of 2
## ..$ x: num [1:404, 1:13] 1.2325 0.0218 4.8982 0.0396 3.6931 ...
## ..$ y: num [1:404(1d)] 15.2 42.3 50 21.1 17.7 18.5 11.3 15.6 15.6 14.4 ...
## $ test :List of 2
## ..$ x: num [1:102, 1:13] 18.0846 0.1233 0.055 1.2735 0.0715 ...
## ..$ y: num [1:102(1d)] 7.2 18.8 19 27 22.2 24.5 31.2 22.9 20.5 23.2 ...
```
### 13\.4\.1 Data preparation
In a first step, we split the data into a training set and a test set. The latter is used to monitor the out\-of\-sample performance of the model fit. Testing the validity of an estimated model by looking at how it performs out\-of\-sample is of particular relevance when working with (deep) neural networks, as they can easily lead to over\-fitting. Validity checks based on the test sample are, therefore, often an integral part of modeling with TensorFlow/Keras.
```
# assign training and test data/labels
c(train_data, train_labels) %<-% boston_housing$train
c(test_data, test_labels) %<-% boston_housing$test
```
In order to better understand and interpret the dataset, we add the original variable names and convert it to a `tibble`.
```
library(dplyr)
column_names <- c('CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE',
'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT')
train_df <- train_data %>%
as_tibble(.name_repair = "minimal") %>%
setNames(column_names) %>%
mutate(label = train_labels)
test_df <- test_data %>%
as_tibble(.name_repair = "minimal") %>%
setNames(column_names) %>%
mutate(label = test_labels)
```
Next, we have a close look at the data. Note the usage of the term ‘label’ for what is usually called the ‘dependent variable’ in econometrics.[72](#fn72) As the aim of the exercise is to predict median prices of homes, the output of the model will be a continuous value (‘labels’).
```
# check training data dimensions and content
dim(train_df)
```
```
## [1] 404 14
```
```
head(train_df)
```
```
## # A tibble: 6 × 14
## CRIM ZN INDUS CHAS NOX RM AGE DIS
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1.23 0 8.14 0 0.538 6.14 91.7 3.98
## 2 0.0218 82.5 2.03 0 0.415 7.61 15.7 6.27
## 3 4.90 0 18.1 0 0.631 4.97 100 1.33
## 4 0.0396 0 5.19 0 0.515 6.04 34.5 5.99
## 5 3.69 0 18.1 0 0.713 6.38 88.4 2.57
## 6 0.284 0 7.38 0 0.493 5.71 74.3 4.72
## # ℹ 6 more variables: RAD <dbl>, TAX <dbl>,
## # PTRATIO <dbl>, B <dbl>, LSTAT <dbl>,
## # label <dbl[1d]>
```
As the dataset contains variables ranging from per capita crime rate to indicators for highway access, the variables are obviously measured in different units and hence displayed on different scales. This is not a problem per se for the fitting procedure. However, fitting is more efficient when all features (variables) are normalized.
```
spec <- feature_spec(train_df, label ~ . ) %>%
step_numeric_column(all_numeric(), normalizer_fn = scaler_standard()) %>%
fit()
```
### 13\.4\.2 Model specification
We specify the model as a linear stack of layers, the input (all 13 explanatory variables), two densely connected hidden layers (each with a 64\-dimensional output space), and finally the one\-dimensional output layer (the ‘dependent variable’).
```
# Create the model
# model specification
input <- layer_input_from_dataset(train_df %>% select(-label))
output <- input %>%
layer_dense_features(dense_features(spec)) %>%
layer_dense(units = 64, activation = "relu") %>%
layer_dense(units = 64, activation = "relu") %>%
layer_dense(units = 1)
model <- keras_model(input, output)
```
In order to fit the model, we first have to compile it (configure it for training). At this step we set the configuration parameters that will guide the training/optimization procedure. We use the mean squared errors loss function (`mse`) typically used for regressions, and we chose the [RMSProp](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) optimizer to find the minimum loss.
```
# compile the model
model %>%
compile(
loss = "mse",
optimizer = optimizer_rmsprop(),
metrics = list("mean_absolute_error")
)
```
Now we can get a summary of the model we are about to fit to the data.
```
# get a summary of the model
model
```
### 13\.4\.3 Training and prediction
Given the relatively simple model and small dataset, we set the maximum number of epochs to 500\.
```
# Set max. number of epochs
epochs <- 500
```
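With a maximum of 500 epochs, it can be useful to stop training once the out\-of\-sample (validation) error stops improving. This is not used in the fit below, but it could be added via a Keras callback, for example:
```
# optional: stop training if the validation loss has not improved
# for 20 consecutive epochs (not used in the fit below)
early_stop <- callback_early_stopping(monitor = "val_loss", patience = 20)
# this callback would then be passed to fit() via `callbacks = list(early_stop)`
```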
Finally, we fit the model while preserving the training history, and visualize the training progress.
```
# Fit the model and store training stats
history <- model %>% fit(
x = train_df %>% select(-label),
y = train_df$label,
epochs = epochs,
validation_split = 0.2,
verbose = 0
)
plot(history)
```
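The held\-out test set can then be used to assess the out\-of\-sample performance of the trained model and to generate predictions. A minimal sketch of these two steps (following the same workflow as above) could look as follows:
```
# evaluate out-of-sample performance on the test set
# (returns the loss and the mean absolute error on the test data)
model %>% evaluate(test_df %>% select(-label), test_df$label, verbose = 0)
# predict median home values for the test observations
test_predictions <- model %>% predict(test_df %>% select(-label))
head(test_predictions)
```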
13\.5 Wrapping up
-----------------
* `gpuR` provides a straightforward interface for applied econometrics run on GPUs. While working with `gpuR`, be aware of the necessary computational overhead to run commands on the GPU via this interface. For example, implementing the OLS estimator with `gpuR` is a good exercise but does not really pay off in terms of performance.
* There are several ongoing projects in the R world to bring GPU computation closer to basic data analytics tasks, providing high\-level interfaces to work with GPUs (see the [CRAN Task View on High\-Performance and Parallel Computing with R](https://cran.r-project.org/web/views/HighPerformanceComputing.html) for some of those).
* A typical application of GPU computation in applied econometrics is the training of neural nets, particularly deep neural nets (deep learning). The `keras` and `tensorflow` packages provide excellent R interfaces to work with the deep learning libraries TensorFlow and Keras. Both of those libraries are implemented to directly work with GPUs.
| Big Data |
umatter.github.io | https://umatter.github.io/BigData/regression-analysis-and-categorization-with-spark-and-r.html |
Chapter 14 Regression Analysis and Categorization with Spark and R
==================================================================
Regression analysis, particularly simple linear regression (OLS), is the backbone of applied econometrics. As discussed in previous chapters, regression analysis can be computationally very intensive with a dataset of many observations and variables, as it involves matrix operations on a very large model matrix. Chapter 12 discusses, in a case study, the special case of a large model matrix caused by fixed\-effects dummy variables. In this chapter, we first look at a generally applicable approach for estimating linear regression models with large datasets (when the model matrix cannot be held in RAM). Building on the same `sparklyr` framework ([Luraschi et al. 2022](#ref-sparklyr)) as for the simple linear regression case, we then turn to classification models, such as logit and random forest. Finally, we show how regression analysis and machine learning tasks can be organized in machine learning pipelines that can be run, stored/reloaded, and updated flexibly.
14\.1 Simple linear regression analysis
---------------------------------------
Suppose we want to conduct a correlation study of what factors are associated with longer or shorter arrival delays in air travel. Via its built\-in ‘MLlib’ library, Spark provides several high\-level functions to conduct regression analyses. When calling these functions via `sparklyr` (or `SparkR`), their usage is very similar to that of the functions commonly used to run regressions in R.
As a simple point of reference, we first estimate a linear model with the usual R approach (all computed in the R environment). First, we load the data as a common `data.table`. We could also convert a copy of the entire `SparkDataFrame` object to a `data.frame` or `data.table` and get essentially the same outcome. However, collecting the data from the RDD structure would take much longer than parsing the CSV with `fread`. In addition, we only import the first 300 rows. Running regression analysis with relatively large datasets in Spark on a small local machine might fail or be rather slow.[73](#fn73)
```
# flights_r <- collect(flights) # very slow!
flights_r <- data.table::fread("data/flights.csv", nrows = 300)
```
Now we run a simple linear regression (OLS) and show the summary output.
```
# specify the linear model
model1 <- arr_delay ~ dep_delay + distance
# fit the model with OLS
fit1 <- lm(model1, flights_r)
# compute t-tests etc.
summary(fit1)
```
```
##
## Call:
## lm(formula = model1, data = flights_r)
##
## Residuals:
## Min 1Q Median 3Q Max
## -42.39 -9.96 -1.91 9.87 48.02
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.182662 1.676560 -0.11 0.91
## dep_delay 0.989553 0.017282 57.26 <2e-16 ***
## distance 0.000114 0.001239 0.09 0.93
## ---
## Signif. codes:
## 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 15.5 on 297 degrees of freedom
## Multiple R-squared: 0.917, Adjusted R-squared: 0.917
## F-statistic: 1.65e+03 on 2 and 297 DF, p-value: <2e-16
```
Now we aim to compute essentially the same model estimate in `sparklyr`.[74](#fn74) In order to use Spark via the `sparklyr` package, we need to first load the package and establish a connection with Spark (similar to `SparkR::sparkR.session()`).
```
library(sparklyr)
# connect with default configuration
sc <- spark_connect(master="local")
```
We then copy the data.table `flights_r` (previously loaded into our R session) to Spark. Again, working on a normal laptop this seems trivial, but the exact same command would allow us (when connected with Spark on a cluster computer in the cloud) to properly load and distribute the data.table on the cluster. Finally, we fit the model with `ml_linear_regression()` and compute the summary statistics.
```
# load data to spark
flights_spark <- copy_to(sc, flights_r, "flights_spark")
# fit the model
fit1_spark <- ml_linear_regression(flights_spark, formula = model1)
# compute summary stats
summary(fit1_spark)
```
```
Deviance Residuals:
Min 1Q Median 3Q Max
-42.386 -9.965 -1.911 9.866 48.024
Coefficients:
(Intercept) dep_delay distance
-0.1826622687 0.9895529018 0.0001139616
R-Squared: 0.9172
Root Mean Squared Error: 15.42
```
Alternatively, we can use the `spark_apply()` function to run the regression analysis in R via the original R `lm()` function.[75](#fn75)
```
# fit the model
spark_apply(flights_spark,
function(df){
broom::tidy(lm(arr_delay ~ dep_delay + distance, df))},
names = c("term",
"estimate",
"std.error",
"statistic",
"p.value")
)
```
```
# Source: spark<?> [?? x 5]
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -0.183 1.68 -0.109 9.13e- 1
2 dep_delay 0.990 0.0173 57.3 1.63e-162
3 distance 0.000114 0.00124 0.0920 9.27e- 1
```
Finally, the `parsnip` package ([Kuhn and Vaughan 2022](#ref-parsnip)) (together with the `tidymodels` package; Kuhn and Wickham ([2020](#ref-tidymodels))) provides a simple interface to run the same model (or similar specifications) on different “engines” (estimators/fitting algorithms), and several of the `parsnip` models are also supported in `sparklyr`. This significantly facilitates the transition from local testing (with a small subset of the data) to running the estimation on the entire dataset on Spark.
```
library(tidymodels)
library(parsnip)
# simple local linear regression example from above
# via tidymodels/parsnip
fit1 <- fit(linear_reg(engine="lm"), model1, data=flights_r)
tidy(fit1)
```
```
# A tibble: 3 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -0.183 1.68 -0.109 9.13e- 1
2 dep_delay 0.990 0.0173 57.3 1.63e-162
3 distance 0.000114 0.00124 0.0920 9.27e- 1
```
```
# run the same on Spark
fit1_spark <- fit(linear_reg(engine="spark"), model1, data=flights_spark)
tidy(fit1_spark)
```
```
# A tibble: 3 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 (Intercept) -0.183 1.68 -0.109 9.13e- 1
2 dep_delay 0.990 0.0173 57.3 1.63e-162
3 distance 0.000114 0.00124 0.0920 9.27e- 1
```
We will further build on this interface in the next section where we look at different machine learning procedures for a classification problem.
14\.2 Machine learning for classification
-----------------------------------------
Building on `sparklyr`, `tidymodels`, and `parsnip`, we test a set of machine learning models on the classification problem discussed in Varian ([2014](#ref-varian_2014)), predicting Titanic survivors. The data for this exercise can be downloaded from here: [http://doi.org/10\.3886/E113925V1](http://doi.org/10.3886/E113925V1).
We import and prepare the data in R.
```
# load into R, select variables of interest, remove missing
titanic_r <- read.csv("data/titanic3.csv")
titanic_r <- na.omit(titanic_r[, c("survived",
"pclass",
"sex",
"age",
"sibsp",
"parch")])
titanic_r$survived <- ifelse(titanic_r$survived==1, "yes", "no")
```
In order to assess the performance of the classifiers later on, we split the sample into training and test datasets. We do so with the help of the `rsample` package ([Frick et al. 2022](#ref-rsample)), which provides a number of high\-level functions to facilitate this kind of pre\-processing.
```
library(rsample)
# split into training and test set
titanic_r <- initial_split(titanic_r)
ti_training <- training(titanic_r)
ti_testing <- testing(titanic_r)
```
For the training and assessment of the classifiers, we transfer the two datasets to the Spark cluster.
```
# load data to spark
ti_training_spark <- copy_to(sc, ti_training, "ti_training_spark")
ti_testing_spark <- copy_to(sc, ti_testing, "ti_testing_spark")
```
Now we can set up a ‘horse race’ between different ML approaches to find the best\-performing model. Overall, we will consider the following models/algorithms:
* Logistic regression
* Boosted trees
* Random forest
```
# models to be used
models <- list(logit=logistic_reg(engine="spark", mode = "classification"),
btree=boost_tree(engine = "spark", mode = "classification"),
rforest=rand_forest(engine = "spark", mode = "classification"))
# train/fit the models
fits <- lapply(models, fit, formula=survived~., data=ti_training_spark)
```
The fitted models (trained algorithms) can now be assessed with the help of the test dataset. To this end, we use the high\-level `accuracy` function provided in the `yardstick` package ([Kuhn, Vaughan, and Hvitfeldt 2022](#ref-yardstick)) to compute the accuracy of the fitted models. We proceed in three steps. First, we use the fitted models to predict the outcomes (we classify cases into survived/did not survive) of the *test set*. Then we fetch the predictions from the Spark cluster, format the variables, and add the actual outcomes as an additional column.
```
# run predictions
predictions <- lapply(fits, predict, new_data=ti_testing_spark)
# fetch predictions from Spark, format, add actual outcomes
pred_outcomes <-
lapply(1:length(predictions), function(i){
x_r <- collect(predictions[[i]]) # load into local R environment
x_r$pred_class <- as.factor(x_r$pred_class) # format for predictions
x_r$survived <- as.factor(ti_testing$survived) # add true outcomes
return(x_r)
})
```
Finally, we compute the accuracy of the models, stack the results, and display them (ordered from best\-performing to worst\-performing).
```
acc <- lapply(pred_outcomes, accuracy, truth="survived", estimate="pred_class")
acc <- bind_rows(acc)
acc$model <- names(fits)
acc[order(acc$.estimate, decreasing = TRUE),]
```
```
# A tibble: 3 × 4
.metric .estimator .estimate model
<chr> <chr> <dbl> <chr>
1 accuracy binary 0.817 rforest
2 accuracy binary 0.790 btree
3 accuracy binary 0.779 logit
```
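Accuracy is only one of several performance metrics provided by `yardstick`. As a sketch building on the `pred_outcomes` list computed above (the third element corresponds to the random forest, the best\-performing model here), one could also inspect the confusion matrix as well as precision, recall, and the F\-measure:
```
# sketch: additional performance metrics for the random forest predictions
conf_mat(pred_outcomes[[3]], truth = survived, estimate = pred_class)
# precision, recall, and F-measure in one go
cls_metrics <- metric_set(precision, recall, f_meas)
cls_metrics(pred_outcomes[[3]], truth = survived, estimate = pred_class)
```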
In this simple example, all models perform similarly well. However, none of them really performs outstandingly. In a next step, we might want to learn about which variables are considered more or less important for the predictions. Here, the `tidy()` function is very useful. As long as the model types are comparable (here `btree` and `rforest`), `tidy()` delivers essentially the same type of summary for different models.
```
tidy(fits[["btree"]])
```
```
# A tibble: 5 × 2
feature importance
<chr> <dbl>
1 age 0.415
2 sex_male 0.223
3 pclass 0.143
4 sibsp 0.120
5 parch 0.0987
```
```
tidy(fits[["rforest"]])
```
```
# A tibble: 5 × 2
feature importance
<chr> <dbl>
1 sex_male 0.604
2 pclass 0.188
3 age 0.120
4 sibsp 0.0595
5 parch 0.0290
```
Finally, we clean up and disconnect from the Spark cluster.
```
spark_disconnect(sc)
```
14\.3 Building machine learning pipelines with R and Spark
----------------------------------------------------------
Spark provides a framework to implement machine learning pipelines called [ML Pipelines](https://spark.apache.org/docs/latest/ml-pipeline.html), with the aim of facilitating the combination of various preparatory steps and ML algorithms into a pipeline/workflow. `sparklyr` provides a straightforward interface to ML Pipelines that allows implementing and testing the entire ML workflow in R and then easily deploying the final pipeline to a Spark cluster or more generally to the production environment. In the following example, we will revisit the e\-commerce purchase prediction model (Google Analytics data from the Google Merchandise Shop) introduced in Chapter 1\. That is, we want to prepare the Google Analytics data and then use lasso to find a set of important predictors for purchase decisions, all built into a machine learning pipeline.
### 14\.3\.1 Set up and data import
All of the key ingredients are provided in `sparklyr`. However, I recommend using the ‘piping’ syntax provided in `dplyr` ([Wickham et al. 2023](#ref-dplyr)) to implement the machine learning pipeline. In this context, using this syntax is particularly helpful to make the code easy to read and understand.
```
# load packages
library(sparklyr)
library(dplyr)
# fix vars
INPUT_DATA <- "data/ga.csv"
```
Recall that the Google Analytics dataset is a small subset of the overall data generated by Google Analytics on a moderately sized e\-commerce site. Hence, it makes perfect sense to first implement and test the pipeline locally (on a local Spark installation) before deploying it on an actual Spark cluster in the cloud. In a first step, we thus copy the imported data to the local Spark instance.
```
# import to local R session, prepare raw data
ga <- na.omit(read.csv(INPUT_DATA))
#ga$purchase <- as.factor(ifelse(ga$purchase==1, "yes", "no"))
# connect to, and copy the data to the local cluster
sc <- spark_connect(master = "local")
ga_spark <- copy_to(sc, ga, "ga_spark", overwrite = TRUE)
```
### 14\.3\.2 Building the pipeline
The pipeline object is initialized via `ml_pipeline()`, in which we refer to the connection to the local Spark cluster. We then add the model specification (the formula) to the pipeline with `ft_r_formula()`. `ft_r_formula` essentially transforms the data in accordance with the common specification syntax in R (here: `purchase ~ .`). Among other things, this takes care of properly setting up the model matrix. Finally, we add the model via `ml_logistic_regression()`. We can set the penalization parameters via `elastic_net_param` (with `alpha=1`, we get the lasso).
```
# ml pipeline
ga_pipeline <-
ml_pipeline(sc) %>%
ft_string_indexer(input_col="city",
output_col="city_output",
handle_invalid = "skip") %>%
ft_string_indexer(input_col="country",
output_col="country_output",
handle_invalid = "skip") %>%
ft_string_indexer(input_col="source",
output_col="source_output",
handle_invalid = "skip") %>%
ft_string_indexer(input_col="browser",
output_col="browser_output",
handle_invalid = "skip") %>%
ft_r_formula(purchase ~ .) %>%
ml_logistic_regression(elastic_net_param = list(alpha=1))
```
Finally, we create a cross\-validator object to train the model with k\-fold cross\-validation, and then fit the model.
For the sake of the example, we use a 30\-fold cross\-validation (run in parallel on 8 cores).
```
# specify the hyperparameter grid
# (parameter values to be considered in optimization)
ga_params <- list(logistic_regression=list(max_iter=80))
# create the cross-validator object
set.seed(1)
cv_lasso <- ml_cross_validator(sc,
estimator=ga_pipeline,
estimator_param_maps = ga_params,
ml_binary_classification_evaluator(sc),
num_folds = 30,
parallelism = 8)
# train/fit the model
cv_lasso_fit <- ml_fit(cv_lasso, ga_spark)
# note: this takes several minutes to run on a local machine (1 node, 8 cores)
```
Finally, we can inspect and further process the results – in particular the model’s performance.
```
# pipeline summary
# cv_lasso_fit
# average performance
cv_lasso_fit$avg_metrics_df
```
```
areaUnderROC max_iter_1
1 0.8666304 80
```
Before closing the connection to the Spark cluster, we can save the entire pipeline to work further with it later on.
```
# save the entire pipeline/fit
ml_save(
cv_lasso_fit,
"ga_cv_lasso_fit",
overwrite = TRUE
)
```
To reload the pipeline later on, run `ml_load(sc, "ga_cv_lasso_fit")`.
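A minimal sketch of how the reloaded fit could be reused to score new data is shown below; the name of the new dataset is an assumption for illustration, and the Spark connection `sc` must be open.
```
# sketch: reload the stored pipeline fit and score new observations
# (assumes an open Spark connection `sc` and a Spark table `ga_new_spark`
#  with the same columns as the training data)
reloaded_fit <- ml_load(sc, "ga_cv_lasso_fit")
# the reloaded cross-validator fit can then be used to generate predictions,
# for example via ml_transform():
# predictions <- ml_transform(reloaded_fit, ga_new_spark)
```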
14\.4 Wrapping up
-----------------
The key take\-aways from this chapter are:
* When running econometric analysis such as linear or logistic regressions with massive amounts of data, `sparklyr` provides all the basic functions you need.
* You can test your code on your local Spark installation by connecting to the local ‘cluster’: `spark_connect(master="local")`. This allows you to test your entire regression analysis script locally (on a sub\-sample) before running the exact same script via a connection to a large Spark cluster on AWS EMR. To do so, simply connect to the cluster via `spark_connect(master = "yarn")` from RStudio server, following the setup introduced in Section 8\.4\.
* The `rsample` package provides easy\-to\-use high\-level functions to split your dataset into training and test datasets: See `?initial_split`, `?training`, and `?testing`.
* The `parsnip` and `broom` packages ([Robinson, Hayes, and Couch 2022](#ref-broom)) provide a way to easily standardize regression output. This is very helpful if you want to verify your regression analysis implementation for Spark with the more familiar R regression frameworks such as `lm()`. For example, compare the standard R OLS output with the linear regression output computed on a Spark cluster: `fit(linear_reg(engine="lm"), model1, data=flights_r)` for R’s standard OLS; `fit(linear_reg(engine="spark"), model1, data=flights_spark)` for Spark.
* For more advanced users, `sparklyr` provides a straightforward way to efficiently implement entire Spark machine learning pipelines in an R script via `ml_pipeline(sc)` and the `dplyr`\-style pipe operators `%>%`, including model specification, data preparation, and selection and specification of the estimator.
| Big Data |
umatter.github.io | https://umatter.github.io/BigData/large-scale-text-analysis-with-sparklyr.html |
Chapter 15 Large\-scale Text Analysis with sparklyr
===================================================
Text analysis/natural language processing (NLP) often involves rather large amounts of data and is particularly challenging for in\-memory processing. `sparklyr` provides several easy\-to\-use functions to run some of the computationally most demanding text data handling on a Spark cluster. In this chapter, we explore these functions and the corresponding workflows to do text analysis on an AWS EMR cluster running Spark. In doing so, we focus on the first few key components of a modern NLP pipeline. Figure [15\.1](large-scale-text-analysis-with-sparklyr.html#fig:nlppipeline) presents an overview of the main components of such a pipeline.
Figure 15\.1: Illustration of an NLP (Natural Language Processing) pipeline.
Up until the deployment of an NLP model, all the steps involved constitute the typical workflow of economic research projects based on text data. Conveniently, all these first crucial steps of analyzing text data are covered by a few high\-level functions provided in the `sparklyr` package. Implementing these steps and running them on massive amounts of text data on an AWS EMR cluster is thus straightforward.
To get familiar with the basic syntax, the following subsection covers the first steps in such a pipeline based on a very simple text example.
15\.1 Getting started: Import, pre\-processing, and word count
--------------------------------------------------------------
The following example briefly guides the reader through some of the most common first steps when processing text data for NLP. In the code example, we process Friedrich Schiller’s “Wilhelm Tell” (English edition; Project Gutenberg Book ID 2782\), which we download from [Project Gutenberg](https://www.gutenberg.org/) by means of the `gutenbergr` package ([Johnston and Robinson 2022](#ref-gutenbergr)). The example can easily be extended to process many more books.
The example is set up to work straightforwardly on an AWS EMR cluster. However, given the relatively small amount of data processed here, you can also run it locally. If you want to run it on EMR, simply follow the steps in Chapter 6\.4 to set up the cluster and log in to RStudio on the master node. The `sparklyr` package is already installed on EMR (if you use the bootstrap\-script introduced in Chapter 6\.4 for the set\-up of the cluster), but other packages might still have to be installed.
We first load the packages and connect the RStudio session to the cluster (if you run this locally, use `spark_connect(master="local")`).
```
# install additional packages
# install.packages("gutenbergr") # download book from Project Gutenberg
# install.packages("dplyr") # for the data preparatory steps
# load packages
library(sparklyr)
library(gutenbergr)
library(dplyr)
# fix vars
TELL <- "https://www.gutenberg.org/cache/epub/6788/pg6788.txt"
# connect rstudio session to cluster
sc <- spark_connect(master = "yarn")
```
We fetch the raw text of the book and copy it to the Spark cluster. Note that you can do this sequentially for many books without exhausting the master node’s RAM and then further process the data on the cluster.
```
# Data gathering and preparation
# fetch Schiller's Tell, load to cluster
tmp_file <- tempfile()
download.file(TELL, tmp_file)
raw_text <- readLines(tmp_file)
tell <- data.frame(raw_text=raw_text)
tell_spark <- copy_to(sc, tell,
"tell_spark",
overwrite = TRUE)
```
The text data will be processed in a `SparkDataFrame` column behind the `tell_spark` object. First, we remove empty lines of text, select the column containing all the text, and then remove all non\-numeric and non\-alphabetical characters. The last step is an important text cleaning step as we want to avoid special characters being considered words or parts of words later on.
```
# data cleaning
tell_spark <- filter(tell_spark, raw_text!="")
tell_spark <- select(tell_spark, raw_text)
tell_spark <- mutate(tell_spark,
raw_text = regexp_replace(raw_text, "[^0-9a-zA-Z]+", " "))
```
Now we can split the lines of text in the column `raw_text` into individual words (sequences of characters separated by white space). To this end we can call a Spark feature transformation routine called tokenization, which essentially breaks text into individual terms. Specifically, each line of raw text in the column `raw_text` will be split into words. The overall result (stored in a new column specified with `output_col`), is then a nested list in which each word is an element of the corresponding line element.
```
# split into words
tell_spark <- ft_tokenizer(tell_spark,
input_col = "raw_text",
output_col = "words")
```
Now we can call another feature transformer called “stop words remover”, which excludes all the stop words (words often occurring in a text but not carrying much information) from the nested word list.
```
# remove stop-words
tell_spark <- ft_stop_words_remover(tell_spark,
input_col = "words",
output_col = "words_wo_stop")
```
Finally, we combine all of the words in one column by ‘exploding’ the nested word list into a new object called “all\_tell\_words”, and add some final cleaning steps (keeping only the word column and dropping words with fewer than three characters).
```
# unnest words, combine in one row
all_tell_words <- mutate(tell_spark,
word = explode(words_wo_stop))
# final cleaning
all_tell_words <- select(all_tell_words, word)
all_tell_words <- filter(all_tell_words, 2<nchar(word))
```
Based on this cleaned set of words, we can compute the word count for the entire book.
```
# get word count and store result in Spark memory
compute(count(all_tell_words, word), "wordcount_tell")
```
```
## # Source: spark<wordcount_tell> [?? x 2]
## word n
## <chr> <int>
## 1 located 7
## 2 using 6
## 3 martin 2
## 4 language 2
## 5 baron 8
## 6 nephew 3
## 7 hofe 3
## 8 reding 16
## 9 fisherman 39
## 10 baumgarten 32
## # ℹ more rows
```
Finally, we can disconnect the R session from the Spark cluster.
```
spark_disconnect(sc)
```
15\.2 Tutorial: political slant
-------------------------------
The tutorial below shows how to use `sparklyr` (in conjunction with AWS EMR) to run the entire raw text processing of academic research projects in economics. We will replicate the data preparation and computation of the *slant measure* for congressional speeches suggested by Gentzkow and Shapiro ([2010](#ref-gentzkow_shapiro2010)) in the tutorial. To keep things simple, we’ll use the data compiled by Gentzkow, Shapiro, and Taddy ([2019](#ref-gentzkow_etal2019)) and made available at <https://data.stanford.edu/congress_text>.
### 15\.2\.1 Data download and import
To begin, we download the corresponding zip\-file to the EMR master node (if using EMR) or to your local RStudio working directory if using a local Spark installation. The unzipped folder contains, among other things, text data from all speeches delivered from the 97th to the 114th US Congress. We will primarily work with the raw speeches text data, which is stored in files with the naming pattern `"speeches CONGRESS.txt"`, where `CONGRESS` is the number of the corresponding US Congress. To make things easier, we put all of the `speeches`\-files in a subdirectory called `'speeches'`. The code below simply illustrates one method for downloading and rearranging the data files; the subsequent code chunks require that all files containing the text of speeches be stored in `data/text/speeches` and all speaker information be stored in `data/text/speakers`.
```
# download and unzip the raw text data
URL <- "https://stacks.stanford.edu/file/druid:md374tz9962/hein-daily.zip"
PATH <- "data/hein-daily.zip"
system(paste0("curl ",
URL,
" > ",
PATH,
" && unzip ",
PATH))
# move the speeches files
system("mkdir data/text/ && mkdir data/text/speeches")
system("mv hein-daily/speeches* data/text/speeches/")
# move the speaker files
system("mkdir data/text/speakers")
system("mv hein-daily/*SpeakerMap.txt data/text/speakers/")
```
In addition, we download an extra file in which the authors kept only valid phrases (after removing procedural phrases that often occur in congressional speeches but that do not contribute to finding partisan phrases). Thus we can later use this additional file to filter out invalid bigrams.[76](#fn76)
```
# download and unzip procedural phrases data
URL_P <- "https://stacks.stanford.edu/file/druid:md374tz9962/vocabulary.zip"
PATH_P <- "data/vocabulary.zip"
system(paste0("curl ",
URL_P,
" > ",
PATH_P,
" && unzip ",
PATH_P))
# move the procedural vocab file
system("mv vocabulary/vocab.txt data/text/")
```
We begin by loading the corresponding packages, defining some fixed variables, and connecting the R session to the Spark cluster, using the same basic pipeline structure as in the previous section’s introductory example. Typically, you would first test these steps on a local Spark installation before feeding in more data to process on a Spark cluster in the cloud. In the following example, we only process the congressional speeches from the 97th to the 114th US Congress. The original source provides data for almost the entire history of Congress (see <https://data.stanford.edu/congress_text> for details). Recall that for local tests/working with the local Spark installation, you can connect your R session with `sc <- spark_connect(master = "local")`. Since even the limited speeches set we work with locally is several GBs in size, we set the memory available to our local Spark node to 16GB. This can be done by fetching the config file via `spark_config()` and then setting the `driver-memory` accordingly before initializing the Spark connection with the adapted configuration object (see `config = conf` in the `spark_connect()` command).
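A sketch of the corresponding set\-up is shown below. It is based directly on the description above; the two input\-path variables point to the folder structure created in the download step and are used by the import code that follows.
```
# load packages
library(sparklyr)
library(dplyr)
# fix vars (location of the raw text files prepared above)
INPUT_PATH_SPEECHES <- "data/text/speeches/"
INPUT_PATH_SPEAKERS <- "data/text/speakers/"
# configuration: allow the local Spark driver to use up to 16GB of memory
conf <- spark_config()
conf$`sparklyr.shell.driver-memory` <- "16G"
# connect the R session to Spark with the adapted configuration
# (for local tests; use master = "yarn" when working on an EMR cluster)
sc <- spark_connect(master = "local", config = conf)
```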
Unlike in the simple introductory example above, the raw data is distributed in multiple files. By default, Spark expects to load data from multiple files in the same directory. Thus, we can use the `spark_read_csv()` function to specify where all of the raw data is located in order to read in all of the raw text data at once. The data in this example is essentially stored in CSV format, but the pipe symbol `|` is used to separate columns instead of the more common commas. By specifying `delimiter="|"`, we ensure that the data structure is correctly captured.
```
# LOAD TEXT DATA --------------------
# load data
speeches <- spark_read_csv(sc,
name = "speeches",
path = INPUT_PATH_SPEECHES,
delimiter = "|")
speakers <- spark_read_csv(sc,
name = "speakers",
path = INPUT_PATH_SPEAKERS,
delimiter = "|")
```
### 15\.2\.2 Cleaning speeches data
The intermediate goal of the data preparation steps is to determine the number of bigrams per party. That is, we want to know how frequently members of a particular political party have used a two\-word phrase. As a first step, we must combine the speeches and speaker data to obtain the party label per speech and then clean the raw text to extract words and create and count bigrams. Congressional speeches frequently include references to dates, bill numbers, years, and so on. This introduces a slew of tokens made up entirely of digits, single characters, and special characters. The cleaning steps that follow are intended to remove the majority of those.
```
# JOIN --------------------
speeches <-
inner_join(speeches,
speakers,
by="speech_id") %>%
filter(party %in% c("R", "D"), chamber=="H") %>%
mutate(congress=substr(speech_id, 1,3)) %>%
select(speech_id, speech, party, congress)
# CLEANING ----------------
# clean text: numbers, letters (bill IDs, etc.
speeches <-
mutate(speeches, speech = tolower(speech)) %>%
mutate(speech = regexp_replace(speech,
"[_\"\'():;,.!?\\-]",
"")) %>%
mutate(speech = regexp_replace(speech, "\\\\(.+\\\\)", " ")) %>%
mutate(speech = regexp_replace(speech, "[0-9]+", " ")) %>%
mutate(speech = regexp_replace(speech, "<[a-z]+>", " ")) %>%
mutate(speech = regexp_replace(speech, "<\\w+>", " ")) %>%
mutate(speech = regexp_replace(speech, "_", " ")) %>%
mutate(speech = trimws(speech))
```
### 15\.2\.3 Create a bigrams count per party
Based on the cleaned text, we now split the text into words (tokenization), remove stopwords, and create a list of bigrams (2\-word phrases). Finally, we unnest the bigram list and keep the party and bigram column. The resulting Spark table contains a row for each bigram mentioned in any of the speeches along with the information of whether the speech in which the bigram was mentioned was given by a Democrat or a Republican.
```
# TOKENIZATION, STOPWORDS REMOVAL, NGRAMS ----------------
# stopwords list
stop <- readLines("http://snowball.tartarus.org/algorithms/english/stop.txt")
stop <- trimws(gsub("\\|.*", "", stop))
stop <- stop[stop!=""]
# clean text: numbers, letters (bill IDs, etc.
bigrams <-
ft_tokenizer(speeches, "speech", "words") %>%
ft_stop_words_remover("words", "words_wo_stop",
stop_words = stop ) %>%
ft_ngram("words_wo_stop", "bigram_list", n=2) %>%
mutate(bigram=explode(bigram_list)) %>%
mutate(bigram=trim(bigram)) %>%
mutate(n_words=as.numeric(length(bigram) -
length(replace(bigram, ' ', '')) + 1)) %>%
filter(3<nchar(bigram), 1<n_words) %>%
select(party, congress, bigram)
```
Before counting the bigrams by party, we need an additional context\-specific cleaning step in which we remove procedural phrases from the speech bigrams.
```
# load the list of valid (non-procedural) phrases
valid_vocab <- spark_read_csv(sc,
path="data/text/vocab.txt",
name = "valid_vocab",
delimiter = "|",
header = FALSE)
# keep only bigrams contained in the valid vocabulary (via inner join)
bigrams <- inner_join(bigrams, valid_vocab, by= c("bigram"="V1"))
```
### 15\.2\.4 Find “partisan” phrases
At this point, we have all the pieces in place to compute the bigram count (how often a certain bigram was mentioned by a member of either party). As this is an important intermediate result, we evaluate the entire operation for all the data and cache it in Spark memory through `compute()`. Note that if you run this code on your local machine, it can take a while to process.
```
# BIGRAM COUNT PER PARTY ---------------
bigram_count <-
count(bigrams, party, bigram, congress) %>%
compute("bigram_count")
```
Finally, we can turn to the actual method/analysis suggested by Gentzkow and Shapiro ([2010](#ref-gentzkow_shapiro2010)). They suggest a simple chi\-squared test to find the most partisan bigrams. For each bigram, we compute the corresponding chi\-squared value.
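Spelled out in the notation used in the code below (where \\(f\_{pl,r}\\) and \\(f\_{pl,d}\\) denote how often phrase \\(pl\\) is used by Republicans and Democrats, respectively, and \\(f\_{\\neg pl,r}\\) and \\(f\_{\\neg pl,d}\\) denote the corresponding counts of all other phrases), the test statistic for each bigram is
\\[
\\chi^2\_{pl} \= \\frac{\\left(f\_{pl,r} \\, f\_{\\neg pl,d} \- f\_{pl,d} \\, f\_{\\neg pl,r}\\right)^2}{\\left(f\_{pl,r} \+ f\_{pl,d}\\right)\\left(f\_{pl,r} \+ f\_{\\neg pl,r}\\right)\\left(f\_{pl,d} \+ f\_{\\neg pl,d}\\right)\\left(f\_{\\neg pl,r} \+ f\_{\\neg pl,d}\\right)}.
\\]
Larger values of \\(\\chi^2\_{pl}\\) indicate phrases whose usage differs more strongly between the two parties.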
```
# FIND MOST PARTISAN BIGRAMS ------------
# compute frequencies and chi-squared values
freqs <-
bigram_count %>%
group_by(party, congress) %>%
mutate(total=sum(n), f_npl=total-n)
freqs_d <-
filter(freqs, party=="D") %>%
rename(f_pld=n, f_npld=f_npl) %>%
select(bigram, congress, f_pld, f_npld)
```
```
## Adding missing grouping variables: `party`
```
```
freqs_r <-
filter(freqs, party=="R") %>%
rename(f_plr=n, f_nplr=f_npl) %>%
select(bigram, congress, f_plr, f_nplr)
```
```
## Adding missing grouping variables: `party`
```
Based on the computed bigram frequencies, we can compute the chi\-squared test as follows.
```
pol_bigrams <-
inner_join(freqs_d, freqs_r, by=c("bigram", "congress")) %>%
group_by(bigram, congress) %>%
mutate(x2=((f_plr*f_npld-f_pld*f_nplr)^2)/
((f_plr + f_pld)*(f_plr + f_nplr)*
(f_pld + f_npld)*(f_nplr + f_npld))) %>%
select(bigram, congress, x2, f_pld, f_plr) %>%
compute("pol_bigrams")
```
### 15\.2\.5 Results: Most partisan phrases by congress
In order to present a first glimpse at the results, we first select the 2,000 most partisan phrases per Congress according to the procedure above. To do so, we first need to create an index column in the corresponding Spark table.[77](#fn77) We then collect the 2,000 most partisan bigrams.[78](#fn78)
```
# create output data frame
output <- pol_bigrams %>%
group_by(congress) %>%
arrange(desc(x2)) %>%
sdf_with_sequential_id(id="index") %>%
filter(index<=2000) %>%
mutate(Party=ifelse(f_pld<f_plr, "R", "D"))%>%
select(bigram, congress, Party, x2) %>%
collect()
# disconnect from cluster
spark_disconnect(sc)
```
From the subset of the 2,000 most partisan bigrams, we then select the top 5 most partisan bigrams per Congress and visualize them over time.
```
# packages to prepare and plot
library(data.table)
library(ggplot2)
# select top five per congress, clean
output <- as.data.table(output)
topten <- output[order(congress, x2, decreasing = TRUE),
rank:=1:.N, by=list(congress)][rank %in% (1:5)]
topten[, congress:=gsub("990", "99", congress)]
topten[, congress:=gsub("980", "98", congress)]
topten[, congress:=gsub("970", "97", congress)]
# plot a visualization of the most partisan terms
ggplot(topten, mapping=aes(x=as.integer(congress), y=log(x2), color=Party)) +
geom_text(aes(label=bigram), nudge_y = 1)+
ylab("Partisanship score (Ln of Chisq. value)") +
xlab("Congress") +
scale_color_manual(values=c("D"="blue", "R"="red"), name="Party") +
guides(color=guide_legend(title.position="top")) +
scale_x_continuous(breaks=as.integer(unique(topten$congress))) +
theme_minimal() +
theme(axis.text.x = element_text(angle = 90, hjust = 1),
axis.text.y = element_text(hjust = 1),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
panel.background = element_blank())
```
15\.3 Natural Language Processing at Scale
------------------------------------------
The examples above merely scratch the surface of what is possible these days in the realm of text analysis. With the increasing availability of Big Data and the recent boost in deep learning, the field of Natural Language Processing (NLP) has put forward several very powerful (and very large) language models for various text prediction tasks. Because these models improve substantially when trained on massive amounts of text data, and because many of them are rather generic in their application, it has become common practice to work directly with a pre\-trained model. That is, we do not actually train the algorithm on our own training dataset but rather build on a model that has been trained on a large text corpus and then made available to the public. In this section, we look at one straightforward way to build on such models with `sparklyr`.
### 15\.3\.1 Preparatory steps
Specifically, we look at a few brief examples based on the `sparknlp` package ([Kincaid and Kuo 2023](#ref-sparknlp)), which provides a `sparklyr` extension for using the [John Snow Labs Spark NLP](https://www.johnsnowlabs.com/spark-nlp) library. The package can be installed directly from GitHub with `devtools::install_github("r-spark/sparknlp")`. To begin, we load the corresponding packages and initialize a pre\-trained NLP pipeline for sentiment analysis, which we will then apply to the congressional speeches data. Note that the `sparknlp` package needs to be loaded before we connect the R session to the Spark cluster. In the following code chunk we thus first load the package and initiate the session by connecting again to the local Spark node. In addition to loading the `sparklyr` and `dplyr` packages, we also load the `sparklyr.nested` package ([Pollock 2023](#ref-sparklyr.nested)). The latter is useful when working with `sparknlp`’s pipelines because the results are often returned as nested lists (in Spark table columns).
```
# load packages
library(dplyr)
library(sparklyr)
library(sparknlp)
library(sparklyr.nested)
# configuration of local spark cluster
conf <- spark_config()
conf$`sparklyr.shell.driver-memory` <- "16g"
# connect rstudio session to cluster
sc <- spark_connect(master = "local",
config = conf)
```
The goal of this brief example of `sparknlp` is to demonstrate how we can easily tap into very powerful pre\-trained models to categorize text. To keep things simple, we return to the previous context (congressional speeches) and reload the speeches dataset. To make the following chunks of code run smoothly and relatively fast on a local Spark installation (for test purposes), we use `sample_n()` for a random draw of 10,000 speeches.
```
# LOAD --------------------
# load speeches
INPUT_PATH_SPEECHES <- "data/text/speeches/"
speeches <-
spark_read_csv(sc,
name = "speeches",
path = INPUT_PATH_SPEECHES,
delimiter = "|",
overwrite = TRUE) %>%
sample_n(10000, replace = FALSE) %>%
compute("speeches")
```
### 15\.3\.2 Sentiment annotation
In this short tutorial, we’ll examine the tone (sentiment) of the congressional speeches. Sentiment analysis is a fairly common task in NLP, but it is frequently a computationally demanding task with numerous preparatory steps. `sparknlp` provides a straightforward interface for creating the necessary NLP pipeline in R and massively scaling the analysis on Spark. Let’s begin by loading the pretrained NLP pipeline for sentiment analysis provided in `sparknlp`.
```
# load the nlp pipeline for sentiment analysis
pipeline <- nlp_pretrained_pipeline(sc, "analyze_sentiment", "en")
```
We can easily feed in the entire speech corpus via the `target` argument and point to the column containing the raw text (here `"speech"`). The code below divides the text into sentences and tokens (words) and returns the sentiment annotation for each sentence.
```
speeches_a <-
nlp_annotate(pipeline,
target = speeches,
column = "speech")
```
The sentiment of the sentences is then extracted for each corresponding speech ID and coded with two additional indicator variables, indicating whether a sentence was classified as positive or negative.
```
# extract sentiment coding per speech
sentiments <-
speeches_a %>%
sdf_select(speech_id, sentiments=sentiment.result) %>%
sdf_explode(sentiments) %>%
mutate(pos = as.integer(sentiments=="positive"),
neg = as.integer(sentiments=="negative")) %>%
select(speech_id, pos, neg)
```
15\.4 Aggregation and visualization
-----------------------------------
Finally, we compute the proportion of sentences with a positive sentiment per speech and export the aggregate sentiment analysis result to the R environment for further processing.[79](#fn79)
```
# aggregate and download to R environment -----
sentiments_aggr <-
sentiments %>%
select(speech_id, pos, neg) %>%
group_by(speech_id) %>%
mutate(rel_pos = sum(pos)/(sum(pos) + sum(neg))) %>%
filter(0<rel_pos) %>%
select(speech_id, rel_pos) %>%
sdf_distinct(name = "sentiments_aggr") %>%
collect()
```
```
# disconnect from cluster
spark_disconnect(sc)
```
We can easily plot the aggregate speech sentiment over time because the speech ID is based on the Congress number and the sequential number of speeches in this Congress. This allows us to compare (in the simple setup of this tutorial) the sentiment of congressional speeches over time.
```
# clean
library(data.table)
sa <- as.data.table(sentiments_aggr)
sa[, congress:=substr(speech_id, 1,3)]
sa[, congress:=gsub("990", "99", congress)]
sa[, congress:=gsub("980", "98", congress)]
sa[, congress:=gsub("970", "97", congress)]
# visualize results
library(ggplot2)
ggplot(sa, aes(x=as.integer(congress),
y=rel_pos,
group=congress)) +
geom_boxplot() +
ylab("Share of sentences with positive tone") +
xlab("Congress") +
theme_minimal()
```
15\.5 `sparklyr` and lazy evaluation
------------------------------------
When running the code examples above, you may have noticed that the execution times vary significantly between the different code chunks, and maybe not always in the expected way. When using Apache Spark via the `sparklyr`/`dplyr`\-interface as we did above, the evaluation of the code is intentionally (very) lazy. That is, unless a line of code really requires data to be processed (for example, because the results are printed to the console or because of an explicit call to `collect()`), Spark will not be triggered to run the actual processing of the entire data involved. When working with extremely large datasets, it makes sense to modify one’s workflow to accommodate this behavior. A reasonable workflow is to write the pipeline so that the heavy processing happens at the very end (that final step can then take several minutes, during which you have time for other things).
The following short example taken from the script developed above illustrates this point. The arguably computationally most intensive part of the previous section was the sentiment annotation via `nlp_annotate()`:
```
system.time(
speeches_a <-
nlp_annotate(pipeline,
target = speeches,
column = "speech")
)
```
```
## user system elapsed
## 0.066 0.021 0.213
```
Remember that the pre\-trained pipeline used in this example includes many steps, such as breaking down speeches into sentences and words, cleaning the text, and predicting the sentiment of each sentence. When you run the code above, you’ll notice that this was not the most time\-consuming part to compute. That chunk of code runs in less than a second on my machine (with a local Spark node). Because we do not request the sentiment analysis results at this point, the pipeline is not actually run. It is only executed when we request it, for example, by adding the `compute()` call at the end.
```
system.time(
speeches_a <-
nlp_annotate(pipeline,
target = speeches,
column = "speech") %>%
compute(name= "speeches_a")
)
```
```
## user system elapsed
## 0.388 0.117 31.800
```
As you can see, this takes an order of magnitude longer, which makes perfect sense given that the pipeline is now running for the entire dataset fed into it. Unless you require the intermediate results (for example, for inspection), it thus makes sense to only process the big workload at the end of your `sparklyr`\-analytics script.
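Another convenient way to see this laziness at work is to inspect the SQL that `sparklyr`/`dbplyr` builds up without executing it. The following small sketch assumes the Spark connection and the `sentiments` table from the sentiment\-analysis section above are still available; `show_query()` only prints the translated query, so no data is processed.
```
# a lazy query on the sentiments table: the dplyr verbs are only
# translated into a SQL query plan, nothing is executed yet
q <- sentiments %>%
  group_by(speech_id) %>%
  summarise(n_pos = sum(pos, na.rm = TRUE))
# inspect the translated SQL; still no data is processed
show_query(q)
# only a call like collect() or compute() triggers the actual work
# collect(q)
```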
| Big Data |
umatter.github.io | https://umatter.github.io/BigData/appendix-a-github.html |
Appendix A: GitHub
==================
GitHub can be a very useful platform to organize, store, and share the code of your analytics projects, even though it is typically used for collaborative software development. If you are unfamiliar with Git or GitHub, the steps below will assist you in getting started.
Initiate a new repository
-------------------------
1. Log in to your GitHub account and click on the plus sign in the upper right corner. From the drop\-down menu select `New repository`.
2. Give your repository a name, for example, `bigdatastat`. Then, click on the big green button, `Create repository`. You have just created a new repository.
3. Open RStudio, and navigate to a place on your hard\-disk where you want to have the local copy of your repository.
4. Then create the local repository as suggested by GitHub (see the page shown right after you have clicked on `Create repository`: “…or create a new repository on the command line”). In order to do so, you have to switch to the Terminal window in RStudio and type (or copy and paste) the commands as given by GitHub. This should look similar to the following code chunk:
```
echo "# bigdatastat" >> README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin \
https://github.com/YOUR-GITHUB-ACCOUNTNAME/bigdatastat.git
git push -u origin master
```
Remember to replace `YOUR-GITHUB-ACCOUNTNAME` with your GitHub account name, before running the code above.
5. Refresh the page of your newly created GitHub repository. You should now see the result of your first commit.
6. Open `README.md` in RStudio, and add a few words describing what this repository is all about.
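To record and publish this edit, you can use the same three steps as for any change (adjust the commit message as you see fit):
```
git add README.md
git commit -m "describe the repository"
git push
```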
Clone this book’s repository
----------------------------
1. In RStudio, navigate to a folder on your hard\-disk where you want to have a local copy of this book’s GitHub repository.
2. Open a new browser window, and go to <https://github.com/umatter/BigData>.
3. Click on `Clone or download` and copy the link.
4. In RStudio, switch to the Terminal, and type the following command (pasting the copied link).
```
git clone https://github.com/umatter/BigData.git
```
You now have a local copy of the repository which is linked to the one on GitHub. You can see this by changing to the newly created directory, containing the local copy of the repository:
```
cd BigData
```
Whenever there are some updates to the book’s repository on GitHub, you can update your local copy with:
```
git pull
```
(Make sure you are in the `BigData` folder when running `git pull`.)
Fork this book’s repository
---------------------------
1. Go to <https://github.com/umatter/BigData>, and click on the ‘Fork’ button in the upper\-right corner (follow the instructions).
2. Clone the forked repository (see the cloning of a repository above for details). Assuming you called your forked repository `BigData-forked`, you run the following command in the terminal (replacing `<yourgithubusername>`):
```
git clone https://github.com/<yourgithubusername>/BigData-forked.git
```
3. Switch into the newly created directory:
```
cd BigData-forked
```
4. Set a remote connection to the *original* repository:
```
git remote add upstream https://github.com/umatter/BigData.git
```
You can verify the remotes of your local clone of your forked repository as follows:
```
git remote -v
```
You should see something like
```
origin https://github.com/<yourgithubusername>/BigData-forked.git (fetch)
origin https://github.com/<yourgithubusername>/BigData-forked.git (push)
upstream https://github.com/umatter/BigData.git (fetch)
upstream https://github.com/umatter/BigData.git (push)
```
5. Fetch changes from the original repository. New material has been added to the original book repository, and you want to merge it with your forked repository. In order to do so, you first fetch the changes from the original repository:
```
git fetch upstream
```
6. Make sure you are on the master branch of your local repository:
```
git checkout master
```
7. Merge the changes fetched from the original repo with the master of your (local clone of the) forked repository:
```
git merge upstream/master
```
8. Push the changes to your forked repository on GitHub:
```
git push
```
Now your forked repo on GitHub also contains the commits (changes) in the original repository. If you make changes to the files in your forked repo, you can add, commit, and push them as in any repository. Example: open `README.md` in a text editor (e.g. RStudio), add `# HELLO WORLD` to the last line of `README.md`, and save the changes. Then:
```
git add README.md
git commit -m "hello world"
git push
```
| Big Data |
umatter.github.io | https://umatter.github.io/BigData/appendix-b-r-basics.html |
Appendix B: R Basics
====================
This appendix provides an overview of various key R properties, including data types and data structures.
Data types and memory/storage
-----------------------------
Data loaded into RAM can be interpreted differently by R depending on the data *type*. Some operators or functions in R only accept data of a specific type as arguments. For example, we can store the numeric values `1.5` and `3` in the variables `a` and `b`, respectively.
```
a <- 1.5
b <- 3
a + b
```
```
## [1] 4.5
```
R interprets this data as type `double` (class ‘numeric’):
```
typeof(a)
```
```
## [1] "double"
```
```
class(a)
```
```
## [1] "numeric"
```
```
object.size(a)
```
```
## 56 bytes
```
If, however, we define `a` and `b` as follows, R will interpret the values stored in `a` and `b` as text (`character`).
```
a <- "1.5"
b <- "3"
a + b
```
```
## Error in a + b: non-numeric argument to binary operator
```
```
typeof(a)
```
```
## [1] "character"
```
```
class(a)
```
```
## [1] "character"
```
```
object.size(a)
```
```
## 112 bytes
```
Note that the symbols `1.5` take up more or less memory depending on the data\-type they are stored in. This directly links to how data/information is stored/represented in binary code, which in turn is reflected in how much memory is used to store these symbols in an object as well as what we can do with it.
### Example: Data types and information storage
Given the fact that computers only understand `0`s and `1`s, different approaches are taken to map these digital values to other symbols or images (text, decimal numbers, pictures, etc.) that we humans can more easily make sense of. Regarding text and numbers, these mappings involve *character encodings* (in which combinations of `0`s and `1`s represent a character in a specific alphabet) and *data types*.
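As a brief aside (this illustration is not part of the original example), base R lets you inspect the raw bytes and integer code points behind a character string, which makes the idea of a character encoding concrete:
```
# hexadecimal bytes used to encode the characters "1", "3", "9" (ASCII/UTF-8)
charToRaw("139")
# the corresponding integer code points
utf8ToInt("139")
```
On a standard setup, `charToRaw("139")` returns the bytes `31 33 39` (hexadecimal), i.e., the ASCII codes of the three characters.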
Let’s illustrate the main concepts with the simple numerical example from above. When we see the decimal number `139` written somewhere, we know that it means ‘one\-hundred\-and\-thirty\-nine’. The fact that our computer is able to print `139` on the screen means that our computer can somehow map a sequence of `0`s and `1`s to the symbols `1`, `3`, and `9`. Depending on what we want to do with the data value `139` on our computer, there are different ways of how the computer can represent this value internally. Inter alia, we could load it into RAM as a *string* (‘text’/‘character’) or as an *integer* (‘natural number’) or *double* (numeric, floating point number). All of them can be printed on screen but only the latter two can be used for arithmetic computations. This concept can easily be illustrated in R.
We initiate a new variable with the value `139`. By using this syntax, R by default initiates the variable as an object of type `double`. We then can use this variable in arithmetic operations.
```
my_number <- 139
# check the class
typeof(my_number)
```
```
## [1] "double"
```
```
# arithmetic
my_number*2
```
```
## [1] 278
```
When we change the *data type* to ‘character’ (string) such operations are not possible.
```
# change and check type/class
my_number_string <- as.character(my_number)
typeof(my_number_string)
```
```
## [1] "character"
```
```
# try to multiply
my_number_string*2
```
```
## Error in my_number_string * 2: non-numeric argument to binary operator
```
If we change the variable to type `integer`, we can still use math operators.
```
# change and check type/class
my_number_int <- as.integer(my_number)
typeof(my_number_int)
```
```
## [1] "integer"
```
```
# arithmetics
my_number_int*2
```
```
## [1] 278
```
Having all variables stored in the correct type is important for data analytics, regardless of the sample size.
However, because different data types must be represented differently internally, different types may take up more or less memory, affecting performance when dealing with massive amounts of data.
We can illustrate this point with `object.size()`:
```
object.size("139")
```
```
## 112 bytes
```
```
object.size(139)
```
```
## 56 bytes
```
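As a small extension of this comparison (not in the original text), the difference between the `integer` and `double` types only becomes visible for longer vectors, because a length\-one vector is dominated by R's fixed per\-object overhead:
```
# roughly 4 bytes per element for integers vs. 8 bytes per element for doubles
# (plus a constant per-object overhead)
object.size(rep(139L, 1000))
object.size(rep(139, 1000))
```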
Data structures
---------------
For the time being, we have only looked at individual bytes of data. A single dataset can contain gigabytes of data and both text and numeric values. R has several classes of objects that provide different data structures. The data types and data structures used to store data can both affect how much memory is required to hold a dataset in RAM.
### Vectors vs. Factors in R
Vectors are collections of values of the same type. They can contain either all numeric values or all character values.
For example, we can initiate a character vector containing information on the hometowns of persons participating in a survey.
```
hometown <- c("St.Gallen", "Basel", "St.Gallen")
hometown
```
```
## [1] "St.Gallen" "Basel" "St.Gallen"
```
```
object.size(hometown)
```
```
## 200 bytes
```
Unlike in the data types example above, storing these values as type `numeric` to save memory is unlikely to be practical.
R would be unable to convert these strings into floating point numbers. Alternatively, we could consider a correspondence table in which each unique town name in the dataset is assigned a numeric (id) code. We would save memory this way, but it would require more effort to work with the data. Fortunately, the data structure ‘factor’ in basic R already implements this idea in a user\-friendly manner.
Factors are sets of categories. Thus, the values are drawn from a fixed set of possible values.
Considering the same example as above, we can store the same information in an object of type class `factor`.
```
hometown_f <- factor(c("St.Gallen", "Basel", "St.Gallen"))
hometown_f
```
```
## [1] St.Gallen Basel St.Gallen
## Levels: Basel St.Gallen
```
```
object.size(hometown_f)
```
```
## 584 bytes
```
At first glance, the fact that `hometown_f` consumes more memory than its character vector sibling appears strange.
But we’ve seen this kind of ‘paradox’ before. Once again, the more sophisticated approach has an overhead (here not in terms of computing time but in terms of structure encoded in an object). `hometown_f` has more structure (i.e., a number\-to\-‘factor level’/category label mapping).
This additional structure is also data that must be saved somewhere. This disadvantage, as in previous examples of overhead costs, diminishes with larger datasets:
```
# create a large character vector
hometown_large <- rep(hometown, times = 1000)
# and the same content as factor
hometown_large_f <- factor(hometown_large)
# compare size
object.size(hometown_large)
```
```
## 24168 bytes
```
```
object.size(hometown_large_f)
```
```
## 12568 bytes
```
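To see the 'correspondence table' idea at work (a short sketch that goes beyond the original example), you can inspect how a factor is stored internally: as an integer vector of codes plus a lookup table of level labels.
```
# factors are stored as integer codes ...
typeof(hometown_f)
as.integer(hometown_f)
# ... plus a lookup table of level labels
levels(hometown_f)
```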
### Matrices/Arrays
Matrices are two\-dimensional collections of values of the same type; arrays are higher\-dimensional collections of values of the same type.
For example, we can initiate a three\-row/two\-column numeric matrix as follows.
```
my_matrix <- matrix(c(1,2,3,4,5,6), nrow = 3)
my_matrix
```
```
## [,1] [,2]
## [1,] 1 4
## [2,] 2 5
## [3,] 3 6
```
And a simple numeric array as follows. Note that `dim = 3` yields a one\-dimensional array of length three; a truly higher\-dimensional array requires a vector of dimensions, as sketched further below.
```
my_array <- array(c(1,2,3,4,5,6), dim = 3)
my_array
```
```
## [1] 1 2 3
```
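A genuinely three\-dimensional array requires a vector of dimensions. The following sketch (not part of the original example) creates a 2\-by\-3\-by\-2 array:
```
# a three-dimensional array: 2 rows, 3 columns, 2 'layers'
my_array_3d <- array(1:12, dim = c(2, 3, 2))
dim(my_array_3d)
```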
### Data frames, tibbles, and data tables
Remember that in R, data frames are the most common way to represent a (table\-like) dataset. Each column can contain a vector of a specific data type (or a factor), but all columns must be the same length. In the context of data analysis, each row of a data frame contains an observation, and each column contains a characteristic of that observation.
The original implementation of data frames in R made it difficult to work with large datasets.[80](#fn80) Several newer R implementations of the data\-frame concept were introduced with the aim of speeding up data processing. One is known as `tibble`, and it is implemented and used in the `tidyverse` packages. The other is known as `data.table`, and it is implemented in the `data.table` package. Most of the shortcomings of the original ‘data.frame’ implementation, however, have been addressed in subsequent R versions, making traditional `data.frame`s, `tibble`s, and `data.table`s similarly suitable for working with large datasets (for in\-memory processing).
Here is how we define a `data.table` in R:
```
# load package
library(data.table)
# initiate a data.table
dt <- data.table(person = c("Alice", "Ben"),
                 age = c(50, 30),
                 gender = c("f", "m"))
dt
```
```
## person age gender
## 1: Alice 50 f
## 2: Ben 30 m
```
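For comparison (a sketch added here, not part of the original example), the same small dataset can be stored as a `tibble`; the `tibble` package is part of the `tidyverse`:
```
# load package
library(tibble)
# initiate a tibble with the same content
tbl <- tibble(person = c("Alice", "Ben"),
              age = c(50, 30),
              gender = c("f", "m"))
tbl
```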
### Lists
Similar to data frames and data tables, lists can contain different types of data in each element. For example, a list could contain several other lists, data frames, and vectors with differing numbers of elements.
This flexibility can easily be demonstrated by combining some of the data structures created in the examples above:
```
my_list <- list(my_array, my_matrix, dt)
my_list
```
```
## [[1]]
## [1] 1 2 3
##
## [[2]]
## [,1] [,2]
## [1,] 1 4
## [2,] 2 5
## [3,] 3 6
##
## [[3]]
## person age gender
## 1: Alice 50 f
## 2: Ben 30 m
```
R\-tools to investigate structures and types
--------------------------------------------
| package | function | purpose |
| --- | --- | --- |
| `utils` | `str()` | Compactly display the structure of an arbitrary R object. |
| `base` | `class()` | Prints the class(es) of an R object. |
| `base` | `typeof()` | Determines the (R\-internal) type or storage mode of an object. |
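Applied to some of the objects created above (assuming they are still in your R session), these functions might be used as follows:
```
# compact overview of a nested structure
str(my_list)
# class vs. internal storage mode
class(dt)
typeof(hometown_f)
```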
| Big Data |
umatter.github.io | https://umatter.github.io/BigData/appendix-c-install-hadoop.html |
Appendix C: Install Hadoop
==========================
You might wish to install Hadoop locally on your computer in order to perform the Hadoop example in Chapter 6\.
The next few steps assist you in configuring everything. Please be aware that between the time I wrote this book and the time you read it, Hadoop may have undergone some changes. Consult <https://hadoop.apache.org/> for further details on releases and to install the most recent version.
However, the installation instructions below should remain largely applicable. Please note that the steps below assume you are using *Ubuntu Linux*. See the README file in <https://github.com/umatter/bigdata> for additional hints regarding the installation of software used in this book.
```
# download binary
wget https://dlcdn.apache.org/hadoop/common/hadoop-2.10.1/hadoop-2.10.1.tar.gz
# download checksum
wget \
https://dlcdn.apache.org/hadoop/common/hadoop-2.10.1/hadoop-2.10.1.tar.gz.sha512
# run the verification
shasum -a 512 hadoop-2.10.1.tar.gz
# compare with the value in the .sha512 file
cat hadoop-2.10.1.tar.gz.sha512
# if all is fine, unpack
tar -xzvf hadoop-2.10.1.tar.gz
# move to proper place
sudo mv hadoop-2.10.1 /usr/local/hadoop
# then point Hadoop to the Java installation:
# open the file /usr/local/hadoop/etc/hadoop/hadoop-env.sh
```
```
# in a text editor and set JAVA_HOME there by adding/replacing the line 'export JAVA_HOME=...' as follows:
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
# clean up
rm hadoop-2.10.1.tar.gz
rm hadoop-2.10.1.tar.gz.sha512
```
After running all of the steps above, run the following line in the terminal to check the installation:
```
# check installation
/usr/local/hadoop/bin/hadoop
```
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/introduction-to-vroom.html |
1 Introduction to `vroom`
=========================
1\.1 `vroom` basics
-------------------
*Load data into R using `vroom`*
1. Load the `vroom` library
```
library(vroom)
```
2. Use the `vroom()` function to read the **transactions\_1\.csv** file from the **/usr/share/class/files** folder
```
vroom("/usr/share/class/files/transactions_1.csv")
```
```
## Observations: 50,000
## Variables: 14
## chr [4]: customer_name, customer_phone, date_month_name, date_day
## dbl [9]: order_id, customer_id, customer_cc, customer_lon, customer_lat, date_year, date_...
## date [1]: date
##
## Call `spec()` for a copy-pastable column specification
## Specify the column types with `col_types` to quiet this message
```
```
## # A tibble: 50,000 x 14
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <dbl> <dbl> <chr> <chr> <dbl> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 7 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 8 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 9 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## 10 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## # … with 49,990 more rows, and 8 more variables: customer_lat <dbl>, date <date>,
## # date_year <dbl>, date_month <dbl>, date_month_name <chr>, date_day <chr>,
## # product_id <dbl>, price <dbl>
```
3. Use the `id` argument to add the file name to the data frame. Use **file\_name** as the argument’s value
```
vroom("/usr/share/class/files/transactions_1.csv", id = "file_name")
```
```
## Observations: 50,000
## Variables: 15
## chr [4]: customer_name, customer_phone, date_month_name, date_day
## dbl [9]: order_id, customer_id, customer_cc, customer_lon, customer_lat, date_year, date_...
## date [1]: date
##
## Call `spec()` for a copy-pastable column specification
## Specify the column types with `col_types` to quiet this message
```
```
## # A tibble: 50,000 x 15
## file_name order_id customer_id customer_name customer_phone customer_cc
## <chr> <dbl> <dbl> <chr> <chr> <dbl>
## 1 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 2 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 3 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 4 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 5 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 6 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 7 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 8 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 9 /usr/sha… 1003 80 Jessee Rodri… 539.176.3896 4.50e12
## 10 /usr/sha… 1003 80 Jessee Rodri… 539.176.3896 4.50e12
## # … with 49,990 more rows, and 9 more variables: customer_lon <dbl>,
## # customer_lat <dbl>, date <date>, date_year <dbl>, date_month <dbl>,
## # date_month_name <chr>, date_day <chr>, product_id <dbl>, price <dbl>
```
4. Assign the result of the prior command to a variable called `vr_transactions`
```
vr_transactions <- vroom("/usr/share/class/files/transactions_1.csv", id = "file_name")
```
```
## Observations: 50,000
## Variables: 15
## chr [4]: customer_name, customer_phone, date_month_name, date_day
## dbl [9]: order_id, customer_id, customer_cc, customer_lon, customer_lat, date_year, date_...
## date [1]: date
##
## Call `spec()` for a copy-pastable column specification
## Specify the column types with `col_types` to quiet this message
```
```
vr_transactions
```
```
## # A tibble: 50,000 x 15
## file_name order_id customer_id customer_name customer_phone customer_cc
## <chr> <dbl> <dbl> <chr> <chr> <dbl>
## 1 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 2 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 3 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 4 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 5 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 6 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 7 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 8 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 9 /usr/sha… 1003 80 Jessee Rodri… 539.176.3896 4.50e12
## 10 /usr/sha… 1003 80 Jessee Rodri… 539.176.3896 4.50e12
## # … with 49,990 more rows, and 9 more variables: customer_lon <dbl>,
## # customer_lat <dbl>, date <date>, date_year <dbl>, date_month <dbl>,
## # date_month_name <chr>, date_day <chr>, product_id <dbl>, price <dbl>
```
5. Load the file spec into a variable called `vr_spec`, using the `spec()` command
```
vr_spec <- spec(vr_transactions)
vr_spec
```
```
## cols(
## order_id = col_double(),
## customer_id = col_double(),
## customer_name = col_character(),
## customer_phone = col_character(),
## customer_cc = col_double(),
## customer_lon = col_double(),
## customer_lat = col_double(),
## date = col_date(format = ""),
## date_year = col_double(),
## date_month = col_double(),
## date_month_name = col_character(),
## date_day = col_character(),
## product_id = col_double(),
## price = col_double()
## )
```
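The saved specification can be reused whenever the same file (or a file with the same layout) is read again; the following call is only a sketch of that idea, not one of the exercise steps:
```
vroom("/usr/share/class/files/transactions_1.csv",
      id = "file_name",
      col_types = vr_spec)
```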
1\.2 Load multiple files
------------------------
1. Load the `fs` and `dplyr` libraries
```
library(fs)
library(dplyr)
```
2. List files in the **/usr/share/class/files** folder using the `dir_ls()` function
```
dir_ls("/usr/share/class/files")
```
```
## /usr/share/class/files/transactions_1.csv
## /usr/share/class/files/transactions_2.csv
## /usr/share/class/files/transactions_3.csv
## /usr/share/class/files/transactions_4.csv
## /usr/share/class/files/transactions_5.csv
```
3. In the `dir_ls()` function, use the `glob` argument to pass a wildcard so that only CSV files are listed. Load the result into a variable named `files`
```
files <- dir_ls("/usr/share/class/files", glob = "*.csv")
```
4. Pass the `files` variable to `vroom`. Set the `n_max` argument to 1,000 to limit the data load for now
```
vroom(files, n_max = 1000)
```
```
## Observations: 5,000
## Variables: 14
## chr [4]: customer_name, customer_phone, date_month_name, date_day
## dbl [9]: order_id, customer_id, customer_cc, customer_lon, customer_lat, date_year, date_...
## date [1]: date
##
## Call `spec()` for a copy-pastable column specification
## Specify the column types with `col_types` to quiet this message
```
```
## # A tibble: 5,000 x 14
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <dbl> <dbl> <chr> <chr> <dbl> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 7 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 8 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 9 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## 10 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## # … with 4,990 more rows, and 8 more variables: customer_lat <dbl>, date <date>,
## # date_year <dbl>, date_month <dbl>, date_month_name <chr>, date_day <chr>,
## # product_id <dbl>, price <dbl>
```
5. Add a `col_types` argument with `vr_spec` as its value
```
vroom(files, n_max = 1000, col_types = vr_spec)
```
```
## # A tibble: 5,000 x 14
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <dbl> <dbl> <chr> <chr> <dbl> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 7 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 8 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 9 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## 10 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## # … with 4,990 more rows, and 8 more variables: customer_lat <dbl>, date <date>,
## # date_year <dbl>, date_month <dbl>, date_month_name <chr>, date_day <chr>,
## # product_id <dbl>, price <dbl>
```
6. Use the `col_select` argument to pass a `list` object containing the following variables: order\_id, date, customer\_name, and price
```
vroom(files, n_max = 1000, col_types = vr_spec,
      col_select = list(order_id, date, customer_name, price))
```
```
## # A tibble: 5,000 x 4
## order_id date customer_name price
## <dbl> <date> <chr> <dbl>
## 1 1001 2016-01-01 Dr. Birdie Kessler 9.88
## 2 1001 2016-01-01 Dr. Birdie Kessler 7.53
## 3 1001 2016-01-01 Dr. Birdie Kessler 5.64
## 4 1001 2016-01-01 Dr. Birdie Kessler 4.89
## 5 1002 2016-01-01 Meggan Bruen 6.48
## 6 1002 2016-01-01 Meggan Bruen 6.7
## 7 1002 2016-01-01 Meggan Bruen 4.27
## 8 1002 2016-01-01 Meggan Bruen 7.38
## 9 1003 2016-01-01 Jessee Rodriguez Jr. 7.53
## 10 1003 2016-01-01 Jessee Rodriguez Jr. 5.21
## # … with 4,990 more rows
```
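As an aside (not one of the exercise steps), `col_select` also understands tidyselect helpers, so related columns can be picked by name pattern:
```
# read only the customer-related columns
vroom(files, n_max = 1000, col_types = vr_spec,
      col_select = starts_with("customer"))
```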
1\.3 Load and modify multiple files
-----------------------------------
*For sets of files that are too large to hold in memory all at once, keep only a summarization of each file*
1. Use a `for()` loop to print the content of each vector inside `files`
```
for(i in seq_along(files)) {
print(files[i])
}
```
```
## /usr/share/class/files/transactions_1.csv
## /usr/share/class/files/transactions_2.csv
## /usr/share/class/files/transactions_3.csv
## /usr/share/class/files/transactions_4.csv
## /usr/share/class/files/transactions_5.csv
```
2. Replace the `print()` command with a `vroom()` call, using the same arguments as before but with the current element of `files` as the file to read. Load the results into a variable called `transactions`.
```
for(i in seq_along(files)) {
  transactions <- vroom(files[i], n_max = 1000, col_types = vr_spec,
                        col_select = list(order_id, date, customer_name, price))
}
```
3. Group `transactions` by `order_id` and get the total of `price` and the number of records. Name them `total_sales` and `no_items` respectively. Name the new variable `orders`
```
for(i in seq_along(files)) {
  transactions <- vroom(files[i], n_max = 1000, col_types = vr_spec,
                        col_select = list(order_id, date, customer_name, price))
  orders <- transactions %>%
    group_by(order_id) %>%
    summarise(total_sales = sum(price), no_items = n())
}
```
4. Define the `orders` variable as `NULL` prior to the for loop and add a `bind_rows()` step to `orders` to preserve each summarized view.
```
orders <- NULL
for(i in seq_along(files)) {
  transactions <- vroom(files[i], n_max = 1000, col_types = vr_spec,
                        col_select = list(order_id, date, customer_name, price))
  orders <- transactions %>%
    group_by(order_id) %>%
    summarise(total_sales = sum(price), no_items = n()) %>%
    bind_rows(orders)
}
```
5. Remove the `transactions` variable at the end of each iteration
```
orders <- NULL
for(i in seq_along(files)) {
  transactions <- vroom(files[i], n_max = 1000, col_types = vr_spec,
                        col_select = list(order_id, date, customer_name, price))
  orders <- transactions %>%
    group_by(order_id) %>%
    summarise(total_sales = sum(price), no_items = n()) %>%
    bind_rows(orders)
  rm(transactions)
}
```
6. Preview the `orders` variable
```
orders
```
```
## # A tibble: 715 x 3
## order_id total_sales no_items
## <dbl> <dbl> <int>
## 1 41865 50.9 8
## 2 41866 97.4 14
## 3 41867 123. 16
## 4 41868 91.9 14
## 5 41869 63.2 10
## 6 41870 75.2 10
## 7 41871 70.6 10
## 8 41872 60.4 8
## 9 41873 76.2 10
## 10 41874 75.7 12
## # … with 705 more rows
```
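An equivalent way to write this per\-file summarize\-then\-combine pattern uses `purrr` instead of an explicit `for()` loop; this is only an alternative sketch, not part of the exercise:
```
library(purrr)
orders_alt <- map_dfr(files, function(f) {
  vroom(f, n_max = 1000, col_types = vr_spec,
        col_select = list(order_id, date, customer_name, price)) %>%
    group_by(order_id) %>%
    summarise(total_sales = sum(price), no_items = n())
})
```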
1\.1 `vroom` basics
-------------------
*Load data into R using `vroom`*
1. Load the `vroom()` library
```
library(vroom)
```
2. Use the `vroom()` function to read the **transactions\_1\.csv** file from the **/usr/share/class/files** folder
```
vroom("/usr/share/class/files/transactions_1.csv")
```
```
## Observations: 50,000
## Variables: 14
## chr [4]: customer_name, customer_phone, date_month_name, date_day
## dbl [9]: order_id, customer_id, customer_cc, customer_lon, customer_lat, date_year, date_...
## date [1]: date
##
## Call `spec()` for a copy-pastable column specification
## Specify the column types with `col_types` to quiet this message
```
```
## # A tibble: 50,000 x 14
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <dbl> <dbl> <chr> <chr> <dbl> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 7 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 8 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 9 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## 10 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## # … with 49,990 more rows, and 8 more variables: customer_lat <dbl>, date <date>,
## # date_year <dbl>, date_month <dbl>, date_month_name <chr>, date_day <chr>,
## # product_id <dbl>, price <dbl>
```
3. Use the `id` argument to add the file name to the data frame. Use **file\_name** as the argument’s value
```
vroom("/usr/share/class/files/transactions_1.csv", id = "file_name")
```
```
## Observations: 50,000
## Variables: 15
## chr [4]: customer_name, customer_phone, date_month_name, date_day
## dbl [9]: order_id, customer_id, customer_cc, customer_lon, customer_lat, date_year, date_...
## date [1]: date
##
## Call `spec()` for a copy-pastable column specification
## Specify the column types with `col_types` to quiet this message
```
```
## # A tibble: 50,000 x 15
## file_name order_id customer_id customer_name customer_phone customer_cc
## <chr> <dbl> <dbl> <chr> <chr> <dbl>
## 1 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 2 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 3 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 4 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 5 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 6 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 7 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 8 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 9 /usr/sha… 1003 80 Jessee Rodri… 539.176.3896 4.50e12
## 10 /usr/sha… 1003 80 Jessee Rodri… 539.176.3896 4.50e12
## # … with 49,990 more rows, and 9 more variables: customer_lon <dbl>,
## # customer_lat <dbl>, date <date>, date_year <dbl>, date_month <dbl>,
## # date_month_name <chr>, date_day <chr>, product_id <dbl>, price <dbl>
```
4. Load the prior command into a variable called `vr_transactions`
```
vr_transactions <- vroom("/usr/share/class/files/transactions_1.csv", id = "file_name")
```
```
## Observations: 50,000
## Variables: 15
## chr [4]: customer_name, customer_phone, date_month_name, date_day
## dbl [9]: order_id, customer_id, customer_cc, customer_lon, customer_lat, date_year, date_...
## date [1]: date
##
## Call `spec()` for a copy-pastable column specification
## Specify the column types with `col_types` to quiet this message
```
```
vr_transactions
```
```
## # A tibble: 50,000 x 15
## file_name order_id customer_id customer_name customer_phone customer_cc
## <chr> <dbl> <dbl> <chr> <chr> <dbl>
## 1 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 2 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 3 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 4 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 5 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 6 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 7 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 8 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 9 /usr/sha… 1003 80 Jessee Rodri… 539.176.3896 4.50e12
## 10 /usr/sha… 1003 80 Jessee Rodri… 539.176.3896 4.50e12
## # … with 49,990 more rows, and 9 more variables: customer_lon <dbl>,
## # customer_lat <dbl>, date <date>, date_year <dbl>, date_month <dbl>,
## # date_month_name <chr>, date_day <chr>, product_id <dbl>, price <dbl>
```
5. Load the file spec into a variable called `vr_spec`, using the `spec()` command
```
vr_spec <- spec(vr_transactions)
vr_spec
```
```
## cols(
## order_id = col_double(),
## customer_id = col_double(),
## customer_name = col_character(),
## customer_phone = col_character(),
## customer_cc = col_double(),
## customer_lon = col_double(),
## customer_lat = col_double(),
## date = col_date(format = ""),
## date_year = col_double(),
## date_month = col_double(),
## date_month_name = col_character(),
## date_day = col_character(),
## product_id = col_double(),
## price = col_double()
## )
```
1\.2 Load multiple files
------------------------
1. Load the `fs` and `dplyr` libraries
```
library(fs)
library(dplyr)
```
2. List files in the **/usr/share/class/files** folder using the `dir_ls()` function
```
dir_ls("/usr/share/class/files")
```
```
## /usr/share/class/files/transactions_1.csv
## /usr/share/class/files/transactions_2.csv
## /usr/share/class/files/transactions_3.csv
## /usr/share/class/files/transactions_4.csv
## /usr/share/class/files/transactions_5.csv
```
3. In the `dir_ls()` function, use the `glob` argument to pass a wildcard to list CSV files only. Load to a variable named `files`
```
files <- dir_ls("/usr/share/class/files", glob = "*.csv")
```
4. Pass the `files` variable to `vroom`. Set the `n_max` argument to 1,000 to limit the data load for now
```
vroom(files, n_max = 1000)
```
```
## Observations: 5,000
## Variables: 14
## chr [4]: customer_name, customer_phone, date_month_name, date_day
## dbl [9]: order_id, customer_id, customer_cc, customer_lon, customer_lat, date_year, date_...
## date [1]: date
##
## Call `spec()` for a copy-pastable column specification
## Specify the column types with `col_types` to quiet this message
```
```
## # A tibble: 5,000 x 14
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <dbl> <dbl> <chr> <chr> <dbl> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 7 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 8 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 9 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## 10 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## # … with 4,990 more rows, and 8 more variables: customer_lat <dbl>, date <date>,
## # date_year <dbl>, date_month <dbl>, date_month_name <chr>, date_day <chr>,
## # product_id <dbl>, price <dbl>
```
5. Add a `col_types` argument with `vr_specs` as its value
```
vroom(files, n_max = 1000, col_types = vr_spec)
```
```
## # A tibble: 5,000 x 14
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <dbl> <dbl> <chr> <chr> <dbl> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 7 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 8 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 9 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## 10 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## # … with 4,990 more rows, and 8 more variables: customer_lat <dbl>, date <date>,
## # date_year <dbl>, date_month <dbl>, date_month_name <chr>, date_day <chr>,
## # product_id <dbl>, price <dbl>
```
6. Use the `col_select` argument to pass a `list` object containing the following variables: order\_id, date, customer\_name, and price
```
vroom(files, n_max = 1000, col_types = vr_spec,
col_select = list(order_id, date, customer_name, price)
)
```
```
## # A tibble: 5,000 x 4
## order_id date customer_name price
## <dbl> <date> <chr> <dbl>
## 1 1001 2016-01-01 Dr. Birdie Kessler 9.88
## 2 1001 2016-01-01 Dr. Birdie Kessler 7.53
## 3 1001 2016-01-01 Dr. Birdie Kessler 5.64
## 4 1001 2016-01-01 Dr. Birdie Kessler 4.89
## 5 1002 2016-01-01 Meggan Bruen 6.48
## 6 1002 2016-01-01 Meggan Bruen 6.7
## 7 1002 2016-01-01 Meggan Bruen 4.27
## 8 1002 2016-01-01 Meggan Bruen 7.38
## 9 1003 2016-01-01 Jessee Rodriguez Jr. 7.53
## 10 1003 2016-01-01 Jessee Rodriguez Jr. 5.21
## # … with 4,990 more rows
```
1\.3 Load and modify multiple files
-----------------------------------
*For files that are too large to have in memory, keep a summarization*
1. Use a `for()` loop to print the content of each vector inside `files`
```
for(i in seq_along(files)) {
print(files[i])
}
```
```
## /usr/share/class/files/transactions_1.csv
## /usr/share/class/files/transactions_2.csv
## /usr/share/class/files/transactions_3.csv
## /usr/share/class/files/transactions_4.csv
## /usr/share/class/files/transactions_5.csv
```
2. Switch the `print()` command with the `vroom` command, using the same arguments, except the file name. Use the `files` variable. Load the results into a variable called `transactions`.
```
for(i in seq_along(files)) {
transactions <- vroom(files[i], n_max = 1000, col_types = vr_spec,
col_select = list(order_id, date, customer_name, price))
}
```
3. Group `transactions` by `order_id` and get the total of `price` and the number of records. Name them `total_sales` and `no_items` respectively. Name the new variable `orders`
```
for(i in seq_along(files)) {
transactions <- vroom(files[i], n_max = 1000, col_types = vr_spec,
col_select = list(order_id, date, customer_name, price))
orders <- transactions %>%
group_by(order_id) %>%
summarise(total_sales = sum(price), no_items = n())
}
```
4. Define the `orders` variable as `NULL` prior to the for loop and add a `bind_rows()` step to `orders` to preserve each summarized view.
```
orders <- NULL
for(i in seq_along(files)) {
transactions <- vroom(files[i], n_max = 1000, col_types = vr_spec,
col_select = list(order_id, date, customer_name, price))
orders <- transactions %>%
group_by(order_id) %>%
summarise(total_sales = sum(price), no_items = n()) %>%
bind_rows(orders)
}
```
5. Remove the `transactions` variable at the end of each cycle
```
orders <- NULL
for(i in seq_along(files)) {
transactions <- vroom(files[i], n_max = 1000, col_types = vr_spec,
col_select = list(order_id, date, customer_name, price))
orders <- transactions %>%
group_by(order_id) %>%
summarise(total_sales = sum(price), no_items = n()) %>%
bind_rows(orders)
rm(transactions)
}
```
6. Preview the `orders` variable
```
orders
```
```
## # A tibble: 715 x 3
## order_id total_sales no_items
## <dbl> <dbl> <int>
## 1 41865 50.9 8
## 2 41866 97.4 14
## 3 41867 123. 16
## 4 41868 91.9 14
## 5 41869 63.2 10
## 6 41870 75.2 10
## 7 41871 70.6 10
## 8 41872 60.4 8
## 9 41873 76.2 10
## 10 41874 75.7 12
## # … with 705 more rows
```
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/introduction-to-vroom.html |
1 Introduction to `vroom`
=========================
1\.1 `vroom` basics
-------------------
*Load data into R using `vroom`*
1. Load the `vroom()` library
```
library(vroom)
```
2. Use the `vroom()` function to read the **transactions\_1\.csv** file from the **/usr/share/class/files** folder
```
vroom("/usr/share/class/files/transactions_1.csv")
```
```
## Observations: 50,000
## Variables: 14
## chr [4]: customer_name, customer_phone, date_month_name, date_day
## dbl [9]: order_id, customer_id, customer_cc, customer_lon, customer_lat, date_year, date_...
## date [1]: date
##
## Call `spec()` for a copy-pastable column specification
## Specify the column types with `col_types` to quiet this message
```
```
## # A tibble: 50,000 x 14
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <dbl> <dbl> <chr> <chr> <dbl> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 7 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 8 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 9 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## 10 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## # … with 49,990 more rows, and 8 more variables: customer_lat <dbl>, date <date>,
## # date_year <dbl>, date_month <dbl>, date_month_name <chr>, date_day <chr>,
## # product_id <dbl>, price <dbl>
```
3. Use the `id` argument to add the file name to the data frame. Use **file\_name** as the argument’s value
```
vroom("/usr/share/class/files/transactions_1.csv", id = "file_name")
```
```
## Observations: 50,000
## Variables: 15
## chr [4]: customer_name, customer_phone, date_month_name, date_day
## dbl [9]: order_id, customer_id, customer_cc, customer_lon, customer_lat, date_year, date_...
## date [1]: date
##
## Call `spec()` for a copy-pastable column specification
## Specify the column types with `col_types` to quiet this message
```
```
## # A tibble: 50,000 x 15
## file_name order_id customer_id customer_name customer_phone customer_cc
## <chr> <dbl> <dbl> <chr> <chr> <dbl>
## 1 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 2 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 3 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 4 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 5 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 6 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 7 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 8 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 9 /usr/sha… 1003 80 Jessee Rodri… 539.176.3896 4.50e12
## 10 /usr/sha… 1003 80 Jessee Rodri… 539.176.3896 4.50e12
## # … with 49,990 more rows, and 9 more variables: customer_lon <dbl>,
## # customer_lat <dbl>, date <date>, date_year <dbl>, date_month <dbl>,
## # date_month_name <chr>, date_day <chr>, product_id <dbl>, price <dbl>
```
4. Load the prior command into a variable called `vr_transactions`
```
vr_transactions <- vroom("/usr/share/class/files/transactions_1.csv", id = "file_name")
```
```
## Observations: 50,000
## Variables: 15
## chr [4]: customer_name, customer_phone, date_month_name, date_day
## dbl [9]: order_id, customer_id, customer_cc, customer_lon, customer_lat, date_year, date_...
## date [1]: date
##
## Call `spec()` for a copy-pastable column specification
## Specify the column types with `col_types` to quiet this message
```
```
vr_transactions
```
```
## # A tibble: 50,000 x 15
## file_name order_id customer_id customer_name customer_phone customer_cc
## <chr> <dbl> <dbl> <chr> <chr> <dbl>
## 1 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 2 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 3 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 4 /usr/sha… 1001 22 Dr. Birdie K… 684.226.0455 6.01e18
## 5 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 6 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 7 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 8 /usr/sha… 1002 6 Meggan Bruen 326-151-4331 4.96e15
## 9 /usr/sha… 1003 80 Jessee Rodri… 539.176.3896 4.50e12
## 10 /usr/sha… 1003 80 Jessee Rodri… 539.176.3896 4.50e12
## # … with 49,990 more rows, and 9 more variables: customer_lon <dbl>,
## # customer_lat <dbl>, date <date>, date_year <dbl>, date_month <dbl>,
## # date_month_name <chr>, date_day <chr>, product_id <dbl>, price <dbl>
```
5. Load the file spec into a variable called `vr_spec`, using the `spec()` command
```
vr_spec <- spec(vr_transactions)
vr_spec
```
```
## cols(
## order_id = col_double(),
## customer_id = col_double(),
## customer_name = col_character(),
## customer_phone = col_character(),
## customer_cc = col_double(),
## customer_lon = col_double(),
## customer_lat = col_double(),
## date = col_date(format = ""),
## date_year = col_double(),
## date_month = col_double(),
## date_month_name = col_character(),
## date_day = col_character(),
## product_id = col_double(),
## price = col_double()
## )
```
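The saved spec can be handed back to `vroom()` on a later read so it no longer has to guess column types (and stops printing the guessing message). A minimal sketch, reusing `vr_spec` and the same file path as above:
```
# Re-read the file with the saved spec; vroom() skips type guessing
vroom("/usr/share/class/files/transactions_1.csv",
      id = "file_name",
      col_types = vr_spec)
```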
1\.2 Load multiple files
------------------------
1. Load the `fs` and `dplyr` libraries
```
library(fs)
library(dplyr)
```
2. List files in the **/usr/share/class/files** folder using the `dir_ls()` function
```
dir_ls("/usr/share/class/files")
```
```
## /usr/share/class/files/transactions_1.csv
## /usr/share/class/files/transactions_2.csv
## /usr/share/class/files/transactions_3.csv
## /usr/share/class/files/transactions_4.csv
## /usr/share/class/files/transactions_5.csv
```
3. In the `dir_ls()` function, use the `glob` argument to pass a wildcard to list CSV files only. Load to a variable named `files`
```
files <- dir_ls("/usr/share/class/files", glob = "*.csv")
```
4. Pass the `files` variable to `vroom`. Set the `n_max` argument to 1,000 to limit the data load for now
```
vroom(files, n_max = 1000)
```
```
## Observations: 5,000
## Variables: 14
## chr [4]: customer_name, customer_phone, date_month_name, date_day
## dbl [9]: order_id, customer_id, customer_cc, customer_lon, customer_lat, date_year, date_...
## date [1]: date
##
## Call `spec()` for a copy-pastable column specification
## Specify the column types with `col_types` to quiet this message
```
```
## # A tibble: 5,000 x 14
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <dbl> <dbl> <chr> <chr> <dbl> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 7 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 8 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 9 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## 10 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## # … with 4,990 more rows, and 8 more variables: customer_lat <dbl>, date <date>,
## # date_year <dbl>, date_month <dbl>, date_month_name <chr>, date_day <chr>,
## # product_id <dbl>, price <dbl>
```
5. Add a `col_types` argument with `vr_spec` as its value
```
vroom(files, n_max = 1000, col_types = vr_spec)
```
```
## # A tibble: 5,000 x 14
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <dbl> <dbl> <chr> <chr> <dbl> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6.01e18 -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 7 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 8 1002 6 Meggan Bruen 326-151-4331 4.96e15 -122.
## 9 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## 10 1003 80 Jessee Rodri… 539.176.3896 4.50e12 -122.
## # … with 4,990 more rows, and 8 more variables: customer_lat <dbl>, date <date>,
## # date_year <dbl>, date_month <dbl>, date_month_name <chr>, date_day <chr>,
## # product_id <dbl>, price <dbl>
```
6. Use the `col_select` argument to pass a `list` object containing the following variables: order\_id, date, customer\_name, and price
```
vroom(files, n_max = 1000, col_types = vr_spec,
col_select = list(order_id, date, customer_name, price)
)
```
```
## # A tibble: 5,000 x 4
## order_id date customer_name price
## <dbl> <date> <chr> <dbl>
## 1 1001 2016-01-01 Dr. Birdie Kessler 9.88
## 2 1001 2016-01-01 Dr. Birdie Kessler 7.53
## 3 1001 2016-01-01 Dr. Birdie Kessler 5.64
## 4 1001 2016-01-01 Dr. Birdie Kessler 4.89
## 5 1002 2016-01-01 Meggan Bruen 6.48
## 6 1002 2016-01-01 Meggan Bruen 6.7
## 7 1002 2016-01-01 Meggan Bruen 4.27
## 8 1002 2016-01-01 Meggan Bruen 7.38
## 9 1003 2016-01-01 Jessee Rodriguez Jr. 7.53
## 10 1003 2016-01-01 Jessee Rodriguez Jr. 5.21
## # … with 4,990 more rows
```
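Because `col_select` follows `dplyr::select()` semantics, tidyselect helpers should also work for picking columns by pattern. A short sketch under that assumption, reusing `files` and `vr_spec` from the previous steps:
```
# Keep order_id plus every column whose name starts with "date"
vroom(files, n_max = 1000, col_types = vr_spec,
      col_select = list(order_id, starts_with("date")))
```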
1\.3 Load and modify multiple files
-----------------------------------
*For files that are too large to hold in memory, keep only a summarized version*
1. Use a `for()` loop to print each element of `files`
```
for(i in seq_along(files)) {
print(files[i])
}
```
```
## /usr/share/class/files/transactions_1.csv
## /usr/share/class/files/transactions_2.csv
## /usr/share/class/files/transactions_3.csv
## /usr/share/class/files/transactions_4.csv
## /usr/share/class/files/transactions_5.csv
```
2. Replace the `print()` command with the `vroom()` command, using the same arguments as in the previous section but passing `files[i]` as the file to read. Load the results into a variable called `transactions`.
```
for(i in seq_along(files)) {
transactions <- vroom(files[i], n_max = 1000, col_types = vr_spec,
col_select = list(order_id, date, customer_name, price))
}
```
3. Group `transactions` by `order_id` and get the total of `price` and the number of records. Name them `total_sales` and `no_items` respectively. Assign the result to a new variable called `orders`
```
for(i in seq_along(files)) {
transactions <- vroom(files[i], n_max = 1000, col_types = vr_spec,
col_select = list(order_id, date, customer_name, price))
orders <- transactions %>%
group_by(order_id) %>%
summarise(total_sales = sum(price), no_items = n())
}
```
4. Define the `orders` variable as `NULL` prior to the for loop and add a `bind_rows()` step to `orders` to preserve each summarized view.
```
orders <- NULL
for(i in seq_along(files)) {
transactions <- vroom(files[i], n_max = 1000, col_types = vr_spec,
col_select = list(order_id, date, customer_name, price))
orders <- transactions %>%
group_by(order_id) %>%
summarise(total_sales = sum(price), no_items = n()) %>%
bind_rows(orders)
}
```
5. Remove the `transactions` variable at the end of each cycle (an equivalent `purrr`-based approach is sketched after this list)
```
orders <- NULL
for(i in seq_along(files)) {
transactions <- vroom(files[i], n_max = 1000, col_types = vr_spec,
col_select = list(order_id, date, customer_name, price))
orders <- transactions %>%
group_by(order_id) %>%
summarise(total_sales = sum(price), no_items = n()) %>%
bind_rows(orders)
rm(transactions)
}
```
6. Preview the `orders` variable
```
orders
```
```
## # A tibble: 715 x 3
## order_id total_sales no_items
## <dbl> <dbl> <int>
## 1 41865 50.9 8
## 2 41866 97.4 14
## 3 41867 123. 16
## 4 41868 91.9 14
## 5 41869 63.2 10
## 6 41870 75.2 10
## 7 41871 70.6 10
## 8 41872 60.4 8
## 9 41873 76.2 10
## 10 41874 75.7 12
## # … with 705 more rows
```
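The loop above can also be written functionally. A sketch using `purrr::map_dfr()` (the `purrr` package and the `orders_purrr` name are assumptions for illustration) that reads, summarizes, and row-binds each file in one step:
```
library(purrr)

# Same read-summarize-combine pattern as the for() loop,
# with map_dfr() handling the row binding
orders_purrr <- map_dfr(files, function(file) {
  vroom(file, n_max = 1000, col_types = vr_spec,
        col_select = list(order_id, date, customer_name, price)) %>%
    group_by(order_id) %>%
    summarise(total_sales = sum(price), no_items = n())
})
```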
| Field Specific |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/introduction-to-dtplyr.html |
2 Introduction to `dtplyr`
==========================
2\.1 `dtplyr` basics
--------------------
*Load data into R via `data.table`, and then wrap it with `dtplyr`*
1. Load the `data.table`, `dplyr`, `dtplyr`, `purrr` and `fs` libraries
```
library(data.table)
library(dplyr)
library(dtplyr)
library(purrr)
library(fs)
```
2. Read the transaction CSV files from the **/usr/share/class/files** folder. Use `dir_ls()` to list them, `fread()` to read each file, and `rbindlist()` to combine the results into a variable called `transactions`
```
transactions <- dir_ls("/usr/share/class/files", glob = "*.csv") %>%
map(fread) %>%
rbindlist()
```
3. Preview the data using `glimpse()`
```
glimpse(transactions)
```
```
## Observations: 250,000
## Variables: 14
## $ order_id <int> 1001, 1001, 1001, 1001, 1002, 1002, 1002, 1002, 1003, 1…
## $ customer_id <int> 22, 22, 22, 22, 6, 6, 6, 6, 80, 80, 80, 80, 80, 80, 55,…
## $ customer_name <chr> "Dr. Birdie Kessler", "Dr. Birdie Kessler", "Dr. Birdie…
## $ customer_phone <chr> "684.226.0455", "684.226.0455", "684.226.0455", "684.22…
## $ customer_cc <int64> 6011608753104063698, 6011608753104063698, 60116087531…
## $ customer_lon <dbl> -122.484, -122.484, -122.484, -122.484, -122.429, -122.…
## $ customer_lat <dbl> 37.7395, 37.7395, 37.7395, 37.7395, 37.7298, 37.7298, 3…
## $ date <chr> "2016-01-01", "2016-01-01", "2016-01-01", "2016-01-01",…
## $ date_year <int> 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2…
## $ date_month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ date_month_name <chr> "Jan", "Jan", "Jan", "Jan", "Jan", "Jan", "Jan", "Jan",…
## $ date_day <chr> "Friday", "Friday", "Friday", "Friday", "Friday", "Frid…
## $ product_id <int> 6, 27, 30, 2, 17, 1, 5, 4, 27, 16, 30, 6, 11, 30, 30, 1…
## $ price <dbl> 9.88, 7.53, 5.64, 4.89, 6.48, 6.70, 4.27, 7.38, 7.53, 5…
```
4. Use `lazy_dt()` to “wrap” the `transactions` variable into a new variable called `dt_transactions`
```
dt_transactions <- lazy_dt(transactions)
```
5. View `dt_transactions` structure with `glimpse()`
```
glimpse(dt_transactions)
```
```
## List of 7
## $ parent :Classes 'data.table' and 'data.frame': 250000 obs. of 14 variables:
## ..$ order_id : int [1:250000] 1001 1001 1001 1001 1002 1002 1002 1002 1003 1003 ...
## ..$ customer_id : int [1:250000] 22 22 22 22 6 6 6 6 80 80 ...
## ..$ customer_name : chr [1:250000] "Dr. Birdie Kessler" "Dr. Birdie Kessler" "Dr. Birdie Kessler" "Dr. Birdie Kessler" ...
## ..$ customer_phone : chr [1:250000] "684.226.0455" "684.226.0455" "684.226.0455" "684.226.0455" ...
## ..$ customer_cc :integer64 [1:250000] 6011608753104063698 6011608753104063698 6011608753104063698 6011608753104063698 4964180480255037 4964180480255037 4964180480255037 4964180480255037 ...
## ..$ customer_lon : num [1:250000] -122 -122 -122 -122 -122 ...
## ..$ customer_lat : num [1:250000] 37.7 37.7 37.7 37.7 37.7 ...
## ..$ date : chr [1:250000] "2016-01-01" "2016-01-01" "2016-01-01" "2016-01-01" ...
## ..$ date_year : int [1:250000] 2016 2016 2016 2016 2016 2016 2016 2016 2016 2016 ...
## ..$ date_month : int [1:250000] 1 1 1 1 1 1 1 1 1 1 ...
## ..$ date_month_name: chr [1:250000] "Jan" "Jan" "Jan" "Jan" ...
## ..$ date_day : chr [1:250000] "Friday" "Friday" "Friday" "Friday" ...
## ..$ product_id : int [1:250000] 6 27 30 2 17 1 5 4 27 16 ...
## ..$ price : num [1:250000] 9.88 7.53 5.64 4.89 6.48 6.7 4.27 7.38 7.53 5.21 ...
## ..- attr(*, ".internal.selfref")=<externalptr>
## $ vars : chr [1:14] "order_id" "customer_id" "customer_name" "customer_phone" ...
## $ groups : chr(0)
## $ implicit_copy: logi FALSE
## $ needs_copy : logi FALSE
## $ env :<environment: R_GlobalEnv>
## $ name : symbol _DT1
## - attr(*, "class")= chr [1:2] "dtplyr_step_first" "dtplyr_step"
```
2\.2 Object sizes
-----------------
*Confirm that `dtplyr` is not making copies of the original `data.table`*
1. Load the `lobstr` library
```
library(lobstr)
```
2. Use `obj_size()` to obtain `transactions`’s size in memory
```
obj_size(transactions)
```
```
## 23,019,560 B
```
3. Use `obj_size()` to obtain `dt_transactions`’s size in memory
```
obj_size(dt_transactions)
```
```
## 23,020,672 B
```
4. Use `obj_size()` to obtain `dt_transactions` and `transactions` size in memory together
```
obj_size(transactions, dt_transactions)
```
```
## 23,020,672 B
```
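`lobstr::ref()` can make the sharing explicit by printing object addresses; assuming it recurses into the list that backs `dt_transactions`, the wrapped `data.table` should show up at the same address as `transactions`. A quick sketch (addresses vary per session):
```
# The data.table inside dt_transactions should be flagged as a
# reference to the same object as transactions
ref(transactions, dt_transactions)
```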
2\.3 How `dtplyr` works
-----------------------
*An under-the-hood view of how `dtplyr` operates on `data.table` objects*
1. Use `dplyr` verbs on top of `dt_transactions` to obtain the total sales by month
```
dt_transactions %>%
group_by(date_month) %>%
summarise(total_sales = sum(price))
```
```
## Source: local data table [?? x 2]
## Call: `_DT1`[, .(total_sales = sum(price)), keyby = .(date_month)]
##
## date_month total_sales
## <int> <dbl>
## 1 1 1120628.
## 2 2 562719.
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
2. Load the above code into a variable called `by_month`
```
by_month <- dt_transactions %>%
group_by(date_month) %>%
summarise(total_sales = sum(price))
```
3. Use `show_query()` to see the `data.table` code that `by_month` actually runs
```
show_query(by_month)
```
```
## `_DT1`[, .(total_sales = sum(price)), keyby = .(date_month)]
```
4. Use `glimpse()` to see that `by_month`, rather than modifying the data, only records steps that `data.table` will execute later
```
glimpse(by_month)
```
```
## List of 6
## $ parent :List of 9
## ..$ parent :List of 6
## .. ..$ parent :List of 7
## .. .. ..- attr(*, "class")= chr [1:2] "dtplyr_step_first" "dtplyr_step"
## .. ..$ vars : chr [1:14] "order_id" "customer_id" "customer_name" "customer_phone" ...
## .. ..$ groups : chr "date_month"
## .. ..$ implicit_copy: logi FALSE
## .. ..$ needs_copy : logi FALSE
## .. ..$ env :<environment: R_GlobalEnv>
## .. ..- attr(*, "class")= chr [1:2] "dtplyr_step_group" "dtplyr_step"
## ..$ vars : chr [1:2] "date_month" "total_sales"
## ..$ groups : chr "date_month"
## ..$ implicit_copy: logi TRUE
## ..$ needs_copy : logi FALSE
## ..$ env :<environment: R_GlobalEnv>
## ..$ i : NULL
## ..$ j : language .(total_sales = sum(price))
## ..$ on : chr(0)
## ..- attr(*, "class")= chr [1:2] "dtplyr_step_subset" "dtplyr_step"
## $ vars : chr [1:2] "date_month" "total_sales"
## $ groups : chr(0)
## $ implicit_copy: logi TRUE
## $ needs_copy : logi FALSE
## $ env :<environment: R_GlobalEnv>
## - attr(*, "class")= chr [1:2] "dtplyr_step_group" "dtplyr_step"
```
5. Create a new column using `mutate()`
```
dt_transactions %>%
mutate(new_field = price / 2)
```
```
## Source: local data table [?? x 15]
## Call: copy(`_DT1`)[, `:=`(new_field = price/2)]
##
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <int> <int> <chr> <chr> <int64> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## # … with 9 more variables: customer_lat <dbl>, date <chr>, date_year <int>,
## # date_month <int>, date_month_name <chr>, date_day <chr>, product_id <int>,
## # price <dbl>, new_field <dbl>
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
6. Use `show_query()` to see the `copy()` command being used
```
dt_transactions %>%
mutate(new_field = price / 2) %>%
show_query()
```
```
## copy(`_DT1`)[, `:=`(new_field = price/2)]
```
7. Check to confirm that the new column *did not* persist in `dt_transactions`
```
dt_transactions
```
```
## Source: local data table [250,000 x 14]
## Call: `_DT1`
##
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <int> <int> <chr> <chr> <int64> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## # … with 8 more variables: customer_lat <dbl>, date <chr>, date_year <int>,
## # date_month <int>, date_month_name <chr>, date_day <chr>, product_id <int>,
## # price <dbl>
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
8. Use `lazy_dt()` with the `immutable` argument set to `FALSE` to avoid the copy
```
m_transactions <- lazy_dt(copy(transactions), immutable = FALSE)
```
```
m_transactions
```
```
## Source: local data table [250,000 x 14]
## Call: `_DT2`
##
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <int> <int> <chr> <chr> <int64> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## # … with 8 more variables: customer_lat <dbl>, date <chr>, date_year <int>,
## # date_month <int>, date_month_name <chr>, date_day <chr>, product_id <int>,
## # price <dbl>
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
9. Create a `new_field` column in `m_transactions` using `mutate()`
```
m_transactions %>%
mutate(new_field = price / 2)
```
```
## Source: local data table [?? x 15]
## Call: `_DT2`[, `:=`(new_field = price/2)]
##
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <int> <int> <chr> <chr> <int64> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## # … with 9 more variables: customer_lat <dbl>, date <chr>, date_year <int>,
## # date_month <int>, date_month_name <chr>, date_day <chr>, product_id <int>,
## # price <dbl>, new_field <dbl>
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
10. Use `show_query()` to see that `copy()` is no longer being used
```
m_transactions %>%
mutate(new_field = price / 2) %>%
show_query()
```
```
## `_DT2`[, `:=`(new_field = price/2)]
```
11. Inspect `m_transactions` to see that `new_field` has persisted
```
m_transactions
```
```
## Source: local data table [250,000 x 15]
## Call: `_DT2`
##
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <int> <int> <chr> <chr> <int64> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## # … with 9 more variables: customer_lat <dbl>, date <chr>, date_year <int>,
## # date_month <int>, date_month_name <chr>, date_day <chr>, product_id <int>,
## # price <dbl>, new_field <dbl>
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
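Because step 8 wrapped `copy(transactions)` rather than `transactions` itself, the in-place mutation only touched the copy. A quick sanity check (a sketch; the comment states the expected result):
```
# transactions itself should still have its original 14 columns
ncol(transactions)
```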
2\.4 Working with `dtplyr`
--------------------------
*Learn data conversion and basic visualization techniques*
1. Use `as_tibble()` to convert the results of `by_month` into a `tibble`
```
by_month %>%
as_tibble()
```
```
## # A tibble: 2 x 2
## date_month total_sales
## <int> <dbl>
## 1 1 1120628.
## 2 2 562719.
```
2. Load the `ggplot2` library
```
library(ggplot2)
```
3. Use `as_tibble()` to convert before creating a line plot
```
by_month %>%
as_tibble() %>%
ggplot() +
geom_line(aes(date_month, total_sales))
```
2\.5 Pivot data
---------------
*Review a simple way to aggregate data quickly, and then pivot the result as a tibble*
1. Load the `tidyr` library
```
library(tidyr)
```
2. Group `dt_transactions` by `date_month` and `date_day`, then aggregate `price` into `total_sales`
```
dt_transactions %>%
group_by(date_month, date_day) %>%
summarise(total_sales = sum(price))
```
```
## Source: local data table [?? x 3]
## Call: `_DT1`[, .(total_sales = sum(price)), keyby = .(date_month, date_day)]
##
## date_month date_day total_sales
## <int> <chr> <dbl>
## 1 1 Friday 173787.
## 2 1 Monday 139347.
## 3 1 Saturday 177207.
## 4 1 Sunday 177685.
## 5 1 Thursday 156396.
## 6 1 Tuesday 141127.
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
3. Copy the aggregation code above, **collect it into a `tibble`**, and then use `pivot_wider()` to make the `date_day` values the column headers.
```
dt_transactions %>%
group_by(date_month, date_day) %>%
summarise(total_sales = sum(price)) %>%
as_tibble() %>%
pivot_wider(names_from = date_day, values_from = total_sales)
```
```
## # A tibble: 2 x 8
## date_month Friday Monday Saturday Sunday Thursday Tuesday Wednesday
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 173787. 139347. 177207. 177685. 156396. 141127. 155081.
## 2 2 80580. 83118. 84947. 80768. 77853. 79288. 76166.
```
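As the printed hint suggests, the lazy result can also be collected with `as.data.table()` instead of `as_tibble()` when a `data.table` is the desired output; a quick sketch:
```
# Execute the lazy steps and return a data.table rather than a tibble
dt_transactions %>%
  group_by(date_month, date_day) %>%
  summarise(total_sales = sum(price)) %>%
  as.data.table()
```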
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/introduction-to-dtplyr.html |
2 Introduction to `dtplyr`
==========================
2\.1 `dtplyr` basics
--------------------
*Load data into R via `data.table`, and then wrap it with `dtplyr`*
1. Load the `data.table`, `dplyr`, `dtplyr`, `purrr` and `fs` libraries
```
library(data.table)
library(dplyr)
library(dtplyr)
library(purrr)
library(fs)
```
2. Read the **transactions.csv** file, from the **/usr/share/class/files** folder. Use the `fread()` function to load the data into a variable called `transactions`
```
transactions <- dir_ls("/usr/share/class/files", glob = "*.csv") %>%
map(fread) %>%
rbindlist()
```
3. Preview the data using `glimpse()`
```
glimpse(transactions)
```
```
## Observations: 250,000
## Variables: 14
## $ order_id <int> 1001, 1001, 1001, 1001, 1002, 1002, 1002, 1002, 1003, 1…
## $ customer_id <int> 22, 22, 22, 22, 6, 6, 6, 6, 80, 80, 80, 80, 80, 80, 55,…
## $ customer_name <chr> "Dr. Birdie Kessler", "Dr. Birdie Kessler", "Dr. Birdie…
## $ customer_phone <chr> "684.226.0455", "684.226.0455", "684.226.0455", "684.22…
## $ customer_cc <int64> 6011608753104063698, 6011608753104063698, 60116087531…
## $ customer_lon <dbl> -122.484, -122.484, -122.484, -122.484, -122.429, -122.…
## $ customer_lat <dbl> 37.7395, 37.7395, 37.7395, 37.7395, 37.7298, 37.7298, 3…
## $ date <chr> "2016-01-01", "2016-01-01", "2016-01-01", "2016-01-01",…
## $ date_year <int> 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2016, 2…
## $ date_month <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1…
## $ date_month_name <chr> "Jan", "Jan", "Jan", "Jan", "Jan", "Jan", "Jan", "Jan",…
## $ date_day <chr> "Friday", "Friday", "Friday", "Friday", "Friday", "Frid…
## $ product_id <int> 6, 27, 30, 2, 17, 1, 5, 4, 27, 16, 30, 6, 11, 30, 30, 1…
## $ price <dbl> 9.88, 7.53, 5.64, 4.89, 6.48, 6.70, 4.27, 7.38, 7.53, 5…
```
4. Use `lazy_dt()` to “wrap” the `transactions` variable into a new variable called `dt_transactions`
```
dt_transactions <- lazy_dt(transactions)
```
5. View `dt_transactions` structure with `glimpse()`
```
glimpse(dt_transactions)
```
```
## List of 7
## $ parent :Classes 'data.table' and 'data.frame': 250000 obs. of 14 variables:
## ..$ order_id : int [1:250000] 1001 1001 1001 1001 1002 1002 1002 1002 1003 1003 ...
## ..$ customer_id : int [1:250000] 22 22 22 22 6 6 6 6 80 80 ...
## ..$ customer_name : chr [1:250000] "Dr. Birdie Kessler" "Dr. Birdie Kessler" "Dr. Birdie Kessler" "Dr. Birdie Kessler" ...
## ..$ customer_phone : chr [1:250000] "684.226.0455" "684.226.0455" "684.226.0455" "684.226.0455" ...
## ..$ customer_cc :integer64 [1:250000] 6011608753104063698 6011608753104063698 6011608753104063698 6011608753104063698 4964180480255037 4964180480255037 4964180480255037 4964180480255037 ...
## ..$ customer_lon : num [1:250000] -122 -122 -122 -122 -122 ...
## ..$ customer_lat : num [1:250000] 37.7 37.7 37.7 37.7 37.7 ...
## ..$ date : chr [1:250000] "2016-01-01" "2016-01-01" "2016-01-01" "2016-01-01" ...
## ..$ date_year : int [1:250000] 2016 2016 2016 2016 2016 2016 2016 2016 2016 2016 ...
## ..$ date_month : int [1:250000] 1 1 1 1 1 1 1 1 1 1 ...
## ..$ date_month_name: chr [1:250000] "Jan" "Jan" "Jan" "Jan" ...
## ..$ date_day : chr [1:250000] "Friday" "Friday" "Friday" "Friday" ...
## ..$ product_id : int [1:250000] 6 27 30 2 17 1 5 4 27 16 ...
## ..$ price : num [1:250000] 9.88 7.53 5.64 4.89 6.48 6.7 4.27 7.38 7.53 5.21 ...
## ..- attr(*, ".internal.selfref")=<externalptr>
## $ vars : chr [1:14] "order_id" "customer_id" "customer_name" "customer_phone" ...
## $ groups : chr(0)
## $ implicit_copy: logi FALSE
## $ needs_copy : logi FALSE
## $ env :<environment: R_GlobalEnv>
## $ name : symbol _DT1
## - attr(*, "class")= chr [1:2] "dtplyr_step_first" "dtplyr_step"
```
2\.2 Object sizes
-----------------
*Confirm that `dtplyr` is not making copies of the original `data.table`*
1. Load the `lobstr` library
```
library(lobstr)
```
2. Use `obj_size()` to obtain `transactions`’s size in memory
```
obj_size(transactions)
```
```
## 23,019,560 B
```
3. Use `obj_size()` to obtain `dt_transactions`’s size in memory
```
obj_size(dt_transactions)
```
```
## 23,020,672 B
```
4. Use `obj_size()` to obtain `dt_transactions` and `transactions` size in memory together
```
obj_size(transactions, dt_transactions)
```
```
## 23,020,672 B
```
2\.3 How `dtplyr` works
-----------------------
*Under the hood view of how `dtplyr` operates `data.table` objects*
1. Use `dplyr` verbs on top of `dt_transactions` to obtain the total sales by month
```
dt_transactions %>%
group_by(date_month) %>%
summarise(total_sales = sum(price))
```
```
## Source: local data table [?? x 2]
## Call: `_DT1`[, .(total_sales = sum(price)), keyby = .(date_month)]
##
## date_month total_sales
## <int> <dbl>
## 1 1 1120628.
## 2 2 562719.
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
2. Load the above code into a variable called `by_month`
```
by_month <- dt_transactions %>%
group_by(date_month) %>%
summarise(total_sales = sum(price))
```
3. Use `show_query()` to see the `data.table` code that `by_month` actually runs
```
show_query(by_month)
```
```
## `_DT1`[, .(total_sales = sum(price)), keyby = .(date_month)]
```
4. Use `glimpse()` to view how `by_month`, instead of modifying the data, only adds steps that will later be executed by `data.table`
```
glimpse(by_month)
```
```
## List of 6
## $ parent :List of 9
## ..$ parent :List of 6
## .. ..$ parent :List of 7
## .. .. ..- attr(*, "class")= chr [1:2] "dtplyr_step_first" "dtplyr_step"
## .. ..$ vars : chr [1:14] "order_id" "customer_id" "customer_name" "customer_phone" ...
## .. ..$ groups : chr "date_month"
## .. ..$ implicit_copy: logi FALSE
## .. ..$ needs_copy : logi FALSE
## .. ..$ env :<environment: R_GlobalEnv>
## .. ..- attr(*, "class")= chr [1:2] "dtplyr_step_group" "dtplyr_step"
## ..$ vars : chr [1:2] "date_month" "total_sales"
## ..$ groups : chr "date_month"
## ..$ implicit_copy: logi TRUE
## ..$ needs_copy : logi FALSE
## ..$ env :<environment: R_GlobalEnv>
## ..$ i : NULL
## ..$ j : language .(total_sales = sum(price))
## ..$ on : chr(0)
## ..- attr(*, "class")= chr [1:2] "dtplyr_step_subset" "dtplyr_step"
## $ vars : chr [1:2] "date_month" "total_sales"
## $ groups : chr(0)
## $ implicit_copy: logi TRUE
## $ needs_copy : logi FALSE
## $ env :<environment: R_GlobalEnv>
## - attr(*, "class")= chr [1:2] "dtplyr_step_group" "dtplyr_step"
```
5. Create a new column using `mutate()`
```
dt_transactions %>%
mutate(new_field = price / 2)
```
```
## Source: local data table [?? x 15]
## Call: copy(`_DT1`)[, `:=`(new_field = price/2)]
##
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <int> <int> <chr> <chr> <int64> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## # … with 9 more variables: customer_lat <dbl>, date <chr>, date_year <int>,
## # date_month <int>, date_month_name <chr>, date_day <chr>, product_id <int>,
## # price <dbl>, new_field <dbl>
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
6. Use `show_query()` to see the `copy()` command being used
```
dt_transactions %>%
mutate(new_field = price / 2) %>%
show_query()
```
```
## copy(`_DT1`)[, `:=`(new_field = price/2)]
```
7. Check to confirm that the new column *did not* persist in `dt_transactions`
```
dt_transactions
```
```
## Source: local data table [250,000 x 14]
## Call: `_DT1`
##
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <int> <int> <chr> <chr> <int64> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## # … with 8 more variables: customer_lat <dbl>, date <chr>, date_year <int>,
## # date_month <int>, date_month_name <chr>, date_day <chr>, product_id <int>,
## # price <dbl>
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
8. Use `lazy_dt()` with the `immutable` argument set to `FALSE` to avoid the copy
```
m_transactions <- lazy_dt(copy(transactions), immutable = FALSE)
```
```
m_transactions
```
```
## Source: local data table [250,000 x 14]
## Call: `_DT2`
##
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <int> <int> <chr> <chr> <int64> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## # … with 8 more variables: customer_lat <dbl>, date <chr>, date_year <int>,
## # date_month <int>, date_month_name <chr>, date_day <chr>, product_id <int>,
## # price <dbl>
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
9. Create a `new_field` column in `m_transactions` using `mutate()`
```
m_transactions %>%
mutate(new_field = price / 2)
```
```
## Source: local data table [?? x 15]
## Call: `_DT2`[, `:=`(new_field = price/2)]
##
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <int> <int> <chr> <chr> <int64> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## # … with 9 more variables: customer_lat <dbl>, date <chr>, date_year <int>,
## # date_month <int>, date_month_name <chr>, date_day <chr>, product_id <int>,
## # price <dbl>, new_field <dbl>
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
10. Use `show_query()` to see that `copy()` is no longer being used
```
m_transactions %>%
mutate(new_field = price / 2) %>%
show_query()
```
```
## `_DT2`[, `:=`(new_field = price/2)]
```
11. Inspect `m_transactions` to see that `new_field` has persisted
```
m_transactions
```
```
## Source: local data table [250,000 x 15]
## Call: `_DT2`
##
## order_id customer_id customer_name customer_phone customer_cc customer_lon
## <int> <int> <chr> <chr> <int64> <dbl>
## 1 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 2 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 3 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 4 1001 22 Dr. Birdie K… 684.226.0455 6011608753… -122.
## 5 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## 6 1002 6 Meggan Bruen 326-151-4331 4964180… -122.
## # … with 9 more variables: customer_lat <dbl>, date <chr>, date_year <int>,
## # date_month <int>, date_month_name <chr>, date_day <chr>, product_id <int>,
## # price <dbl>, new_field <dbl>
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
2\.4 Working with `dtplyr`
--------------------------
*Learn data conversion and basic visualization techniques*
1. Use `as_tibble()` to convert the results of `by_month` into a `tibble`
```
by_month %>%
as_tibble()
```
```
## # A tibble: 2 x 2
## date_month total_sales
## <int> <dbl>
## 1 1 1120628.
## 2 2 562719.
```
2. Load the `ggplot2` library
```
library(ggplot2)
```
3. Use `as_tibble()` to convert before creating a line plot
```
by_month %>%
as_tibble() %>%
ggplot() +
geom_line(aes(date_month, total_sales))
```
2\.5 Pivot data
---------------
*Review a simple way to aggregate data faster, and then pivot it as a tibble*
1. Load the `tidyr` library
```
library(tidyr)
```
2. Group `db_transactions` by `date_month` and `date_day`, then aggregate `price` into `total_sales`
```
dt_transactions %>%
group_by(date_month, date_day) %>%
summarise(total_sales = sum(price))
```
```
## Source: local data table [?? x 3]
## Call: `_DT1`[, .(total_sales = sum(price)), keyby = .(date_month, date_day)]
##
## date_month date_day total_sales
## <int> <chr> <dbl>
## 1 1 Friday 173787.
## 2 1 Monday 139347.
## 3 1 Saturday 177207.
## 4 1 Sunday 177685.
## 5 1 Thursday 156396.
## 6 1 Tuesday 141127.
##
## # Use as.data.table()/as.data.frame()/as_tibble() to access results
```
3. Copy the aggregation code above, **collect it into a `tibble`**, and then use `pivot_wider()` to make the `date_day` values the column headers (a native `data.table` sketch follows this list).
```
dt_transactions %>%
group_by(date_month, date_day) %>%
summarise(total_sales = sum(price)) %>%
as_tibble() %>%
pivot_wider(names_from = date_day, values_from = total_sales)
```
```
## # A tibble: 2 x 8
## date_month Friday Monday Saturday Sunday Thursday Tuesday Wednesday
## <int> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 173787. 139347. 177207. 177685. 156396. 141127. 155081.
## 2 2 80580. 83118. 84947. 80768. 77853. 79288. 76166.
```
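For comparison, the pivot can also be expressed natively in `data.table`. This is only a sketch, assuming `data.table::dcast()` with its `value.var` argument; the intermediate object `sales_by_day` is hypothetical:
```
# Hypothetical data.table-native version of the pivot above
# (data.table is already attached from the earlier setup steps)
sales_by_day <- dt_transactions %>%
  group_by(date_month, date_day) %>%
  summarise(total_sales = sum(price)) %>%
  as.data.table()

dcast(sales_by_day, date_month ~ date_day, value.var = "total_sales")
```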
| Field Specific |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/introduction-to-database-connections.html |
3 Introduction to database connections
======================================
3\.1 Connect with the Connections pane
--------------------------------------
*Connect using the features of the RStudio IDE*
1. The connections pane (top right hand corner of the RStudio IDE) can guide you through establishing database connections; a sketch of the kind of snippet it generates follows below.
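For reference, the snippet that the pane's *New Connection* dialog inserts is typically along these lines; this is a sketch only, and the exact code depends on the driver or DSN chosen:
```
library(DBI)
library(odbc)

# Roughly what the Connections pane generates for an existing ODBC DSN (assumption)
con <- dbConnect(odbc::odbc(), "Postgres Dev")
```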
3\.2 Connecting via DSN
-----------------------
*Connect using defined Data Source Name (DSN). This requires an ODBC driver.*
1. Load the `DBI` and `odbc` packages
```
library(DBI)
library(odbc)
```
2. Use `odbcListDataSources()` to list available DSNs
```
odbcListDataSources()
```
3. Use `dbConnect` to connect to a database using the `odbc` function and a DSN (a sketch of how such a DSN might be defined follows this list)
```
con <- dbConnect(odbc(), "Postgres Dev")
```
4. Disconnect using `dbDisconnect`
```
dbDisconnect(con)
```
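What "Postgres Dev" refers to is defined outside of R, in the ODBC configuration. A sketch of what such a DSN entry might look like in an `odbc.ini` file; the exact keys depend on the driver, so treat this as an assumption rather than the workshop's actual setup:
```
[Postgres Dev]
Driver     = PostgreSQL
Servername = localhost
Port       = 5432
Database   = postgres
```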
3\.3 Connect with a connection string
-------------------------------------
*Connect by specifying all connection details in `dbConnect`*
1. Use `dbConnect` and `odbc` to connect to a database, but this time provide all connection details explicitly (a variant using a single connection string is sketched after this list)
```
con <- dbConnect(odbc(),
driver = "postgresql",
host = "localhost",
user = "rstudio_dev",
pwd = "dev_user",
port = 5432,
database = "postgres",
bigint = "integer")
```
2. Disconnect using `dbDisconnect`
```
dbDisconnect(con)
```
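The same details can also be passed as one string; a sketch, assuming the `odbc` package's `.connection_string` argument:
```
# Hedged sketch: a single connection string instead of individual arguments
con <- dbConnect(
  odbc(),
  .connection_string = paste0(
    "Driver=postgresql;Server=localhost;Port=5432;",
    "Database=postgres;Uid=rstudio_dev;Pwd=dev_user;"
  )
)
dbDisconnect(con)
```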
3\.4 Secure connection details
------------------------------
*Use `config`, `keyring`, or environment variables to secure connection credentials*
1. Load the `config` package
```
library(config)
```
2. Get the current config using the `get` function and store the results in an object called `config` (a sketch of a possible `config.yml` appears after this list)
```
config <- get()
```
3. Use `str` to investigate the contents of `config`
```
str(config)
```
4. Connect using details provided in `config`
```
con <- dbConnect(odbc(),
driver = config$driver,
host = config$host,
user = config$user,
pwd = config$pwd,
port = config$port,
database = config$dbname,
bigint = config$bigint)
```
5. Disconnect using `dbDisconnect`
```
dbDisconnect(con)
```
6. Load the `keyring` package
```
library(keyring)
```
7. Store the database username and password using `keyring`. The username is `rstudio_dev` and the password is `dev_user`
```
key_set("postgres", "username")
key_set("postgres", "password")
```
8. Use the stored credentials along with `dbConnect` to connect to the database
```
con <- dbConnect(odbc(),
driver = "postgresql",
host = "localhost",
user = key_get("postgres", "username"),
pwd = key_get("postgres", "password"),
port = 5432,
database = "postgres",
bigint = "integer")
```
9. Disconnect using `dbDisconnect`
```
dbDisconnect(con)
```
10. The `.Renviron` file contains entries to create environment variables for `PG_USER` and `PG_PWD`. These variables can be read using `Sys.getenv()`.
```
Sys.getenv("PG_USER")
```
11. Connect to the database using the credentials stored in `.Renviron` and `Sys.getenv()`
```
con <- dbConnect(odbc(),
driver = "postgresql",
host = "localhost",
user = Sys.getenv("PG_USER"),
pwd = Sys.getenv("PG_PWD"),
port = 5432,
database = "postgres",
bigint = "integer")
```
12. Disconnect using `dbDisconnect`
```
dbDisconnect(con)
```
13. Store connection details using `options()`
```
options(pg_user = "rstudio_dev", pg_pwd = "dev_user")
```
14. Connect using the credentials accessed via `getOption`
```
con <- dbConnect(odbc(),
driver = "postgresql",
host = "localhost",
user = getOption("pg_user"),
pwd = getOption("pg_pwd"),
port = 5432,
database = "postgres",
bigint = "integer")
```
15. Disconnect using `dbDisconnect`
```
dbDisconnect(con)
```
16. Interactively prompt users for input using `rstudioapi::askForPassword()`
```
con <- dbConnect(odbc(),
driver = "postgresql",
host = "localhost",
user = rstudioapi::askForPassword("DB User"),
pwd = rstudioapi::askForPassword("DB Password"),
port = 5432,
database = "postgres",
bigint = "integer")
```
17. Disconnect using `dbDisconnect`
```
dbDisconnect(con)
```
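For context, `config::get()` reads its values from a `config.yml` file in the project directory. A sketch of what such a file might contain for the connection used above; this is an assumption, with field names mirroring those accessed in step 4:
```
default:
  driver: "postgresql"
  host: "localhost"
  user: "rstudio_dev"
  pwd: "dev_user"
  port: 5432
  dbname: "postgres"
  bigint: "integer"
```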
```
knitr::opts_chunk$set(connection = "con", max.print = 5)
```
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/introduction-to-dbi.html |
4 Introduction to `DBI`
=======================
4\.1 Local database basics
--------------------------
*Connecting and adding data to a database*
1. Load the `DBI` package
```
library(DBI)
```
2. Use `dbConnect` to open a database connection
```
con <- dbConnect(RSQLite::SQLite(), "mydatabase.sqlite")
```
3. Use `dbListTables()` to view existing tables, there should be 0 tables
```
dbListTables(con)
```
```
## character(0)
```
4. Use `dbWriteTable()` to create a new table using `mtcars` data. Name it **db\_mtcars**
```
dbWriteTable(con, "db_mtcars", mtcars)
```
5. Use `dbListTables()` to view existing tables, it should return **db\_mtcars**
```
dbListTables(con)
```
```
## [1] "db_mtcars"
```
6. Use `dbGetQuery()` to pass a SQL query to the database (a parameterized variant is sketched after this list)
```
dbGetQuery(con, "select * from db_mtcars")
```
```
## mpg cyl disp hp drat wt qsec vs am gear carb
## 1 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
## 2 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
## 3 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
## 4 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1
## 5 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2
## 6 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
## 7 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4
## 8 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2
## 9 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2
## 10 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4
## 11 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4
## 12 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3
## 13 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3
## 14 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3
## 15 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4
## 16 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4
## 17 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4
## 18 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1
## 19 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2
## 20 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1
## 21 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1
## 22 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2
## 23 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2
## 24 13.3 8 350.0 245 3.73 3.840 15.41 0 0 3 4
## 25 19.2 8 400.0 175 3.08 3.845 17.05 0 0 3 2
## 26 27.3 4 79.0 66 4.08 1.935 18.90 1 1 4 1
## 27 26.0 4 120.3 91 4.43 2.140 16.70 0 1 5 2
## 28 30.4 4 95.1 113 3.77 1.513 16.90 1 1 5 2
## 29 15.8 8 351.0 264 4.22 3.170 14.50 0 1 5 4
## 30 19.7 6 145.0 175 3.62 2.770 15.50 0 1 5 6
## 31 15.0 8 301.0 335 3.54 3.570 14.60 0 1 5 8
## 32 21.4 4 121.0 109 4.11 2.780 18.60 1 1 4 2
```
7. Close the database connection using `dbDisconnect()`
```
dbDisconnect(con)
```
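Queries can also be parameterized instead of pasting values into the SQL text. A sketch, assuming `DBI`'s `params` argument and `RSQLite`'s `?` placeholders, to be run while the connection from step 2 is still open:
```
# Hedged sketch: fetch only the 6-cylinder cars using a bound parameter
dbGetQuery(
  con,
  "select * from db_mtcars where cyl = ?",
  params = list(6)
)
```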
4\.2 Options for writing tables
-------------------------------
*Understand how certain arguments in `dbWriteTable()` work*
1. Use `dbConnect()` to open a database connection again
```
con <- dbConnect(RSQLite::SQLite(), "mydatabase.sqlite")
```
2. Use `dbWriteTable()` to re\-create the **db\_mtcars** table using `mtcars` data
```
dbWriteTable(con, "db_mtcars", mtcars)
```
```
Error: Table db_mtcars exists in database, and both overwrite and append are FALSE
```
3. Use the `append` argument in `dbWriteTable()` to add to the data in the **db\_mtcars** table
```
dbWriteTable(con, "db_mtcars", mtcars, append = TRUE)
```
4. Using `dbGetQuery()`, check the current record count of **db\_mtcars** with the following query: “select count() from db\_mtcars”
```
dbGetQuery(con, "select count() from db_mtcars")
```
```
## count()
## 1 64
```
5. Use the `overwrite` argument to `dbWriteTable()` to replace the data in the **db\_mtcars** table (a sketch for choosing between appending and overwriting programmatically follows this list)
```
dbWriteTable(con, "db_mtcars", mtcars, overwrite = TRUE)
```
6. Check the record count of `db_mtcars` again
```
dbGetQuery(con, "select count() from db_mtcars")
```
```
## count()
## 1 32
```
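Whether to append or overwrite can also be decided programmatically. A sketch, assuming `DBI::dbExistsTable()`:
```
# Hedged sketch: only overwrite when the table is already there
if (dbExistsTable(con, "db_mtcars")) {
  dbWriteTable(con, "db_mtcars", mtcars, overwrite = TRUE)
} else {
  dbWriteTable(con, "db_mtcars", mtcars)
}
```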
4\.3 Database operations
------------------------
*Understand how to use `dbSendStatement()` and `dbExecute()` to modify the database*
1. Use `dbSendStatement()` to pass a SQL command that deletes the manual\-transmission cars (`am = 1`) from `db_mtcars`: “delete from db\_mtcars where am \= 1”. Load the command into a variable called `rs`
```
rs <- dbSendStatement(con, "delete from db_mtcars where am = 1")
```
2. Call the `rs` variable to view information about the results of the requested change
```
rs
```
```
## <SQLiteResult>
## SQL delete from db_mtcars where am = 1
## ROWS Fetched: 0 [complete]
## Changed: 13
```
3. Use `dbHasCompleted()` to confirm that the job is complete
```
dbHasCompleted(rs)
```
```
## [1] TRUE
```
4. Use `dbGetRowsAffected()` to see the number of rows that were affected by the request
```
dbGetRowsAffected(rs)
```
```
## [1] 13
```
5. Clear the results using `dbClearResult()`
```
dbClearResult(rs)
```
6. Confirm that the result set has been removed by calling the `rs` variable once more
```
rs
```
```
## <SQLiteResult>
## EXPIRED
```
7. Check the record count of **db\_mtcars** again, the new count should be 19 (32 original records \- 13 deleted records)
```
dbGetQuery(con, "select count() from db_mtcars")
```
```
## count()
## 1 19
```
8. Use `dbWriteTable()` to overwrite **db\_mtcars** with the value of `mtcars`
```
dbWriteTable(con, "db_mtcars", mtcars, overwrite = TRUE)
```
9. Use `dbExecute()` to delete rows where am \= 1 using the same query as before. Load the results into a variable called `rs` (a transactional variant is sketched after this list)
```
rs <- dbExecute(con, "delete from db_mtcars where am = 1")
```
10. `rs` contains the number of rows affected by the statement that was executed
```
rs
```
```
## [1] 13
```
11. Check the record count of **db\_mtcars** again.
```
dbGetQuery(con, "select count() from db_mtcars")
```
```
## count()
## 1 19
```
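Destructive statements like these can be wrapped in a transaction so that they can be undone. A sketch, assuming `DBI`'s `dbBegin()` and `dbRollback()`:
```
# Hedged sketch: run a delete inside a transaction, then undo it
dbBegin(con)
dbExecute(con, "delete from db_mtcars where cyl = 4")
dbGetQuery(con, "select count() from db_mtcars")  # fewer rows inside the transaction
dbRollback(con)
dbGetQuery(con, "select count() from db_mtcars")  # original count restored
```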
4\.4 `knitr` SQL engine
-----------------------
*See how to run SQL queries as code chunks*
1. Start a new code chunk, using `sql` instead of `r` as the first argument of the chunk, and add `connection=con` as another argument of the chunk.
`{sql, connection=con} select * from db_mtcars`
Table 4\.1: Displaying records 1 \- 5
| mpg | cyl | disp | hp | drat | wt | qsec | vs | am | gear | carb |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 21\.4 | 6 | 258\.0 | 110 | 3\.08 | 3\.215 | 19\.44 | 1 | 0 | 3 | 1 |
| 18\.7 | 8 | 360\.0 | 175 | 3\.15 | 3\.440 | 17\.02 | 0 | 0 | 3 | 2 |
| 18\.1 | 6 | 225\.0 | 105 | 2\.76 | 3\.460 | 20\.22 | 1 | 0 | 3 | 1 |
| 14\.3 | 8 | 360\.0 | 245 | 3\.21 | 3\.570 | 15\.84 | 0 | 0 | 3 | 4 |
| 24\.4 | 4 | 146\.7 | 62 | 3\.69 | 3\.190 | 20\.00 | 1 | 0 | 4 | 2 |
2. Add the `max.print` option to the chunk, and set it to 5
`{sql, connection=con, max.print = 5} select * from db_mtcars`
Table 4\.2: Displaying records 1 \- 5
| mpg | cyl | disp | hp | drat | wt | qsec | vs | am | gear | carb |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 21\.4 | 6 | 258\.0 | 110 | 3\.08 | 3\.215 | 19\.44 | 1 | 0 | 3 | 1 |
| 18\.7 | 8 | 360\.0 | 175 | 3\.15 | 3\.440 | 17\.02 | 0 | 0 | 3 | 2 |
| 18\.1 | 6 | 225\.0 | 105 | 2\.76 | 3\.460 | 20\.22 | 1 | 0 | 3 | 1 |
| 14\.3 | 8 | 360\.0 | 245 | 3\.21 | 3\.570 | 15\.84 | 0 | 0 | 3 | 4 |
| 24\.4 | 4 | 146\.7 | 62 | 3\.69 | 3\.190 | 20\.00 | 1 | 0 | 4 | 2 |
3. Set defaults for the `sql` chunks by using the `knitr::opts_chunk$set()` command in the `setup` chunk at the beginning of the document.
`{r setup} knitr::opts_chunk$set(connection = "con", max.print = 5)`
4. Run the same query in a new `sql` chunk, but without any other arguments
```
select * from db_mtcars
```
Table 4\.3: Displaying records 1 \- 5
| mpg | cyl | disp | hp | drat | wt | qsec | vs | am | gear | carb |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 21\.4 | 6 | 258\.0 | 110 | 3\.08 | 3\.215 | 19\.44 | 1 | 0 | 3 | 1 |
| 18\.7 | 8 | 360\.0 | 175 | 3\.15 | 3\.440 | 17\.02 | 0 | 0 | 3 | 2 |
| 18\.1 | 6 | 225\.0 | 105 | 2\.76 | 3\.460 | 20\.22 | 1 | 0 | 3 | 1 |
| 14\.3 | 8 | 360\.0 | 245 | 3\.21 | 3\.570 | 15\.84 | 0 | 0 | 3 | 4 |
| 24\.4 | 4 | 146\.7 | 62 | 3\.69 | 3\.190 | 20\.00 | 1 | 0 | 4 | 2 |
5. Store the results of the query into an R object called `local_mtcars` using the `output.var` option (the full chunk header is sketched after this list).
```
select * from db_mtcars
```
```
local_mtcars
```
```
## mpg cyl disp hp drat wt qsec vs am gear carb
## 1 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1
## 2 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2
## 3 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
## 4 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4
## 5 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2
## 6 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2
## 7 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4
## 8 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4
## 9 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3
## 10 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3
## 11 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3
## 12 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4
## 13 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4
## 14 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4
## 15 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1
## 16 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2
## 17 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2
## 18 13.3 8 350.0 245 3.73 3.840 15.41 0 0 3 4
## 19 19.2 8 400.0 175 3.08 3.845 17.05 0 0 3 2
```
6. Close the database connection using `dbDisconnect()`
```
dbDisconnect(con)
```
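For completeness, written in the inline style used above, the header of the chunk that creates `local_mtcars` would look roughly like this:
`{sql, connection=con, output.var="local_mtcars"} select * from db_mtcars`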
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/introduction-to-dbi.html |
4 Introduction to `DBI`
=======================
4\.1 Local database basics
--------------------------
*Connecting and adding data to a database*
1. Load the `DBI` package
```
library(DBI)
```
2. Use `dbConnect` to open a database connection
```
con <- dbConnect(RSQLite::SQLite(), "mydatabase.sqlite")
```
3. Use `dbListTables()` to view existing tables, there should be 0 tables
```
dbListTables(con)
```
```
## character(0)
```
4. Use `dbWriteTable()` to create a new table using `mtcars` data. Name it **db\_mtcars**
```
dbWriteTable(con, "db_mtcars", mtcars)
```
5. Use `dbListTables()` to view existing tables, it should return **db\_mtcars**
```
dbListTables(con)
```
```
## [1] "db_mtcars"
```
6. Use `dbGetQuery()` to pass a SQL query to the database
```
dbGetQuery(con, "select * from db_mtcars")
```
```
## mpg cyl disp hp drat wt qsec vs am gear carb
## 1 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
## 2 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
## 3 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
## 4 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1
## 5 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2
## 6 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
## 7 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4
## 8 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2
## 9 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2
## 10 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4
## 11 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4
## 12 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3
## 13 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3
## 14 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3
## 15 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4
## 16 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4
## 17 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4
## 18 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1
## 19 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2
## 20 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1
## 21 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1
## 22 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2
## 23 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2
## 24 13.3 8 350.0 245 3.73 3.840 15.41 0 0 3 4
## 25 19.2 8 400.0 175 3.08 3.845 17.05 0 0 3 2
## 26 27.3 4 79.0 66 4.08 1.935 18.90 1 1 4 1
## 27 26.0 4 120.3 91 4.43 2.140 16.70 0 1 5 2
## 28 30.4 4 95.1 113 3.77 1.513 16.90 1 1 5 2
## 29 15.8 8 351.0 264 4.22 3.170 14.50 0 1 5 4
## 30 19.7 6 145.0 175 3.62 2.770 15.50 0 1 5 6
## 31 15.0 8 301.0 335 3.54 3.570 14.60 0 1 5 8
## 32 21.4 4 121.0 109 4.11 2.780 18.60 1 1 4 2
```
7. Close the database connection using `dbDisconnect()`
```
dbDisconnect(con)
```
4\.2 Options for writing tables
-------------------------------
*Understand how certain arguments in `dbWriteTable()` work*
1. Use `dbConnect()` to open a Database connection again
```
con <- dbConnect(RSQLite::SQLite(), "mydatabase.sqlite")
```
2. Use `dbWriteTable()` to re\-create the **db\_mtcars** table using `mtcars` data
```
dbWriteTable(con, "db_mtcars", mtcars)
```
```
Error: Table db_mtcars exists in database, and both overwrite and append are FALSE
```
3. Use the `append` argument in `dbWriteTable()` to add to the data in the **db\_mtcars** table
```
dbWriteTable(con, "db_mtcars", mtcars, append = TRUE)
```
4. Using `dbGetQuery()`, check the current record count of **db\_mtcars** with the following query: “select count() from db\_mtcars”
```
dbGetQuery(con, "select count() from db_mtcars")
```
```
## count()
## 1 64
```
5. Use the `overwrite` argument to `dbWriteTable()` to replace the data in the **db\_mtcars** table
```
dbWriteTable(con, "db_mtcars", mtcars, overwrite = TRUE)
```
6. Check the record count of `db_mtcars` again
```
dbGetQuery(con, "select count() from db_mtcars")
```
```
## count()
## 1 32
```
4\.3 Database operations
------------------------
*Understand how to use `dbSendStatement()` and `dbExecute()` to modify the database*
1. Use `dbSendStatement()` to pass a SQL command that deletes the manual\-transmission cars (`am = 1`) from `db_mtcars`: “delete from db\_mtcars where am \= 1”. Load the command into a variable called `rs`
```
rs <- dbSendStatement(con, "delete from db_mtcars where am = 1")
```
2. Call the `rs` variable to view information about the results of the requested change
```
rs
```
```
## <SQLiteResult>
## SQL delete from db_mtcars where am = 1
## ROWS Fetched: 0 [complete]
## Changed: 13
```
3. Use `dbHasCompleted()` to confirm that the job is complete
```
dbHasCompleted(rs)
```
```
## [1] TRUE
```
4. Use `dbGetRowsAffected()` to see the number of rows that were affected by the request
```
dbGetRowsAffected(rs)
```
```
## [1] 13
```
5. Clear the results using `dbClearResult()`
```
dbClearResult(rs)
```
6. Confirm that the result set has been removed by calling the `rs` variable once more
```
rs
```
```
## <SQLiteResult>
## EXPIRED
```
7. Check the record count of **db\_mtcars** again, the new count should be 19 (32 original records \- 13 deleted records)
```
dbGetQuery(con, "select count() from db_mtcars")
```
```
## count()
## 1 19
```
8. Use `dbWriteTable()` to overwrite **db\_mtcars** with the value of `mtcars`
```
dbWriteTable(con, "db_mtcars", mtcars, overwrite = TRUE)
```
9. Use `dbExecute()` to delete rows where am \= 1 using the same query as before. Load the results into a variable called `rs`
```
rs <- dbExecute(con, "delete from db_mtcars where am = 1")
```
10. `rs` contains the number of rows affected by the statement that was executed
```
rs
```
```
## [1] 13
```
11. Check the record count of **db\_mtcars** again.
```
dbGetQuery(con, "select count() from db_mtcars")
```
```
## count()
## 1 19
```
4\.4 `knitr` SQL engine
-----------------------
*See how to run SQL queries as code chunks*
1. Start a new code chunk, but using `sql` instead of `r` as the first argument of the chunk. Also add `connection=con` as another argument of the chunk.
`{sql, connection=con} select * from db_mtcars`
Table 4\.1: Displaying records 1 \- 5
| mpg | cyl | disp | hp | drat | wt | qsec | vs | am | gear | carb |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 21\.4 | 6 | 258\.0 | 110 | 3\.08 | 3\.215 | 19\.44 | 1 | 0 | 3 | 1 |
| 18\.7 | 8 | 360\.0 | 175 | 3\.15 | 3\.440 | 17\.02 | 0 | 0 | 3 | 2 |
| 18\.1 | 6 | 225\.0 | 105 | 2\.76 | 3\.460 | 20\.22 | 1 | 0 | 3 | 1 |
| 14\.3 | 8 | 360\.0 | 245 | 3\.21 | 3\.570 | 15\.84 | 0 | 0 | 3 | 4 |
| 24\.4 | 4 | 146\.7 | 62 | 3\.69 | 3\.190 | 20\.00 | 1 | 0 | 4 | 2 |
1. Add the `max.print` options to the chunk, and set it to 5
`{sql, connection=con, max.print = 5} select * from db_mtcars`
Table 4\.2: Displaying records 1 \- 5
| mpg | cyl | disp | hp | drat | wt | qsec | vs | am | gear | carb |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 21\.4 | 6 | 258\.0 | 110 | 3\.08 | 3\.215 | 19\.44 | 1 | 0 | 3 | 1 |
| 18\.7 | 8 | 360\.0 | 175 | 3\.15 | 3\.440 | 17\.02 | 0 | 0 | 3 | 2 |
| 18\.1 | 6 | 225\.0 | 105 | 2\.76 | 3\.460 | 20\.22 | 1 | 0 | 3 | 1 |
| 14\.3 | 8 | 360\.0 | 245 | 3\.21 | 3\.570 | 15\.84 | 0 | 0 | 3 | 4 |
| 24\.4 | 4 | 146\.7 | 62 | 3\.69 | 3\.190 | 20\.00 | 1 | 0 | 4 | 2 |
1. Set defaults for the `sql` chunks by using the `knitr::opts_chunk$set()` command in the `setup` at the beginning of the document.
`{r setup} knitr::opts_chunk$set(connection = "con", max.print = 5)`
2. Run the same query in a new `sql` chunk, but without any other argument
```
select * from db_mtcars
```
Table 4\.3: Displaying records 1 \- 5
| mpg | cyl | disp | hp | drat | wt | qsec | vs | am | gear | carb |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 21\.4 | 6 | 258\.0 | 110 | 3\.08 | 3\.215 | 19\.44 | 1 | 0 | 3 | 1 |
| 18\.7 | 8 | 360\.0 | 175 | 3\.15 | 3\.440 | 17\.02 | 0 | 0 | 3 | 2 |
| 18\.1 | 6 | 225\.0 | 105 | 2\.76 | 3\.460 | 20\.22 | 1 | 0 | 3 | 1 |
| 14\.3 | 8 | 360\.0 | 245 | 3\.21 | 3\.570 | 15\.84 | 0 | 0 | 3 | 4 |
| 24\.4 | 4 | 146\.7 | 62 | 3\.69 | 3\.190 | 20\.00 | 1 | 0 | 4 | 2 |
1. Store the results of the query into an R object called `local_mtcars` using
the `output.var` option.
```
select * from db_mtcars
```
```
local_mtcars
```
```
## mpg cyl disp hp drat wt qsec vs am gear carb
## 1 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1
## 2 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2
## 3 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
## 4 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4
## 5 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2
## 6 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2
## 7 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4
## 8 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4
## 9 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3
## 10 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3
## 11 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3
## 12 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4
## 13 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4
## 14 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4
## 15 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1
## 16 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2
## 17 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2
## 18 13.3 8 350.0 245 3.73 3.840 15.41 0 0 3 4
## 19 19.2 8 400.0 175 3.08 3.845 17.05 0 0 3 2
```
6. Close the database connection using `dbDisconnect()`
```
dbDisconnect(con)
```
| Field Specific |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/databases-and-dplyr.html |
5 Databases and `dplyr`
=======================
5\.1 Intro to `connections`
---------------------------
*Use `connections` to open a database connection*
1. Load the `connections` package
```
library(connections)
library(config)
```
2. Use `connection_open()` to open a Database connection
```
con <- connection_open(
RPostgres::Postgres(),
host = "localhost",
user = get("user", config = "dev"),
password = get("pwd", config = "dev"),
port = 5432,
dbname = "postgres",
bigint = "integer"
)
```
3. The RStudio Connections pane should show the tables in the database
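If the Connections pane is not visible, one way to confirm the tables from code (a sketch, not part of the original exercise, that queries the Postgres information schema and assumes the `connections` connection also works with DBI's `dbGetQuery()`) is:
```
# List the tables that live in the "retail" schema
DBI::dbGetQuery(
  con,
  "select table_name from information_schema.tables where table_schema = 'retail'"
)
```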
5\.2 Table reference
--------------------
*Use `dplyr`’s `tbl()` command*
1. Load the `dplyr` package
```
library(dplyr)
```
2. Add `in_schema()` as an argument to `tbl()` to specify the schema
```
tbl(con, in_schema("retail", "customer"))
```
```
## # Source: table<retail.customer> [?? x 6]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## customer_id customer_name customer_phone customer_cc customer_lon customer_lat
## <int> <chr> <chr> <chr> <dbl> <dbl>
## 1 1 Marilou Donne… 046-995-9387x… 4054106117… -122. 37.8
## 2 2 Aubrey Gulgow… (020)136-2064 6759766520… -122. 37.7
## 3 3 Arlis Koss 145.574.8189 8699968904… -122. 37.8
## 4 4 Duwayne Walsh 737-897-1968x… 4091991124… -122. 37.7
## 5 5 Nehemiah Doyl… (035)642-3662… 3709535249… -122. 37.7
## 6 6 Meggan Bruen 326-151-4331 4964180480… -122. 37.7
## 7 7 Tracie Swift … 776.442.3270x… 4354911637… -122. 37.8
## 8 8 Karrie Donnel… 883.024.5322x… 4232403376… -122. 37.8
## 9 9 Kip Eichmann (619)169-8761… 5177848238… -122. 37.7
## 10 10 Ms. Ciarra Bo… 964-240-3124 4893126879… -122. 37.8
## # … with more rows
```
3. Load the results of the `tbl()` command that points to the table called **orders** into a variable called `orders`
```
orders <- tbl(con, in_schema("retail", "orders"))
```
4. Use the `class` function to determine the object type of `orders`
```
class(orders)
```
```
## [1] "tbl_conn" "tbl_PqConnection" "tbl_dbi" "tbl_sql"
## [5] "tbl_lazy" "tbl"
```
5\.3 Under the hood
-------------------
*Use `show_query()` to preview the SQL statement that will be sent to the database*
1. Use `show_query()` to preview the SQL statement that actually runs when `orders` is executed as a command
```
show_query(orders)
```
```
## <SQL>
## SELECT *
## FROM retail.orders
```
2. When executed, `orders` returns the first 1000 rows of the remote **orders** table
```
orders
```
```
## # Source: table<retail.orders> [?? x 3]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_id customer_id step_id
## <int> <int> <dbl>
## 1 801473 58 796
## 2 801474 19 796
## 3 801475 4 796
## 4 801476 10 796
## 5 801477 81 796
## 6 801478 89 796
## 7 801479 56 796
## 8 801480 53 796
## 9 801481 70 796
## 10 801482 37 796
## # … with more rows
```
3. The full results of a remote query can be brought into R with `collect()` (a quick local check follows the code below)
```
local_orders <- collect(orders)
```
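As a quick local check (not part of the original exercise), the collected copy behaves like any in-memory data frame:
```
# local_orders is now a regular tibble held in R memory
class(local_orders)
nrow(local_orders)
```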
4. Easily view the resulting query by adding `show_query()` in another piped command
```
orders %>%
show_query()
```
```
## <SQL>
## SELECT *
## FROM retail.orders
```
5. Insert `head()` in between the two statements to see how the SQL changes
```
orders %>%
head() %>%
show_query()
```
```
## <SQL>
## SELECT *
## FROM retail.orders
## LIMIT 6
```
6. Queries can be assigned to variables. Create a variable called `orders_head` that contains the previous query
```
orders_head <- orders %>%
head()
orders_head %>%
show_query()
```
```
## <SQL>
## SELECT *
## FROM retail.orders
## LIMIT 6
```
7. Use `sql_render()` and `simulate_mssql()` to see how the SQL statement changes from vendor to vendor
```
orders %>%
head() %>%
sql_render(con = simulate_mssql())
```
```
## <SQL> SELECT TOP(6) *
## FROM retail.orders
```
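The same idea extends to other backends; for example, a sketch using `simulate_postgres()`, assuming it is available in the installed `dbplyr` version:
```
orders %>%
  head() %>%
  sql_render(con = simulate_postgres())
```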
8. Use `explain()` to explore the query plan
```
orders %>%
head() %>%
explain()
```
```
## <SQL>
## SELECT *
## FROM retail.orders
## LIMIT 6
##
## <PLAN>
## Limit (cost=0.00..0.09 rows=6 width=12)
## -> Seq Scan on orders (cost=0.00..15406.47 rows=1000047 width=12)
```
5\.4 Un\-translated R commands
------------------------------
*Review of how `dbplyr` handles R commands that do not have a direct SQL translation*
1. Preview how `mean` is translated
```
orders %>%
mutate(avg_id = mean(order_id, na.rm = TRUE)) %>%
show_query()
```
```
## <SQL>
## SELECT "order_id", "customer_id", "step_id", AVG("order_id") OVER () AS "avg_id"
## FROM retail.orders
```
2. Preview how `Sys.Date()` is translated
```
orders %>%
mutate(today = Sys.Date()) %>%
show_query()
```
```
## <SQL>
## SELECT "order_id", "customer_id", "step_id", Sys.Date() AS "today"
## FROM retail.orders
```
3. Use PostgreSQL native commands, in this case `date`
```
orders %>%
mutate(today = date('now')) %>%
show_query()
```
```
## <SQL>
## SELECT "order_id", "customer_id", "step_id", date('now') AS "today"
## FROM retail.orders
```
4. Run the `dplyr` code to confirm it works
```
orders %>%
mutate(today = date('now')) %>%
head()
```
```
## # Source: lazy query [?? x 4]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_id customer_id step_id today
## <int> <int> <dbl> <date>
## 1 801473 58 796 2020-01-23
## 2 801474 19 796 2020-01-23
## 3 801475 4 796 2020-01-23
## 4 801476 10 796 2020-01-23
## 5 801477 81 796 2020-01-23
## 6 801478 89 796 2020-01-23
```
5\.5 Using bang\-bang
---------------------
*Intro to forcing local evaluation of R code before the query is sent to the database*
1. Preview how `Sys.Date()` is translated when prefixing `!!`
```
orders %>%
mutate(today = !!Sys.Date()) %>%
show_query()
```
```
## <SQL>
## SELECT "order_id", "customer_id", "step_id", '2020-01-23' AS "today"
## FROM retail.orders
```
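The same pattern applies to any locally computed value, not just `Sys.Date()`; a sketch with a hypothetical cutoff value:
```
# `!!` evaluates `cutoff` in R and inlines the result into the generated SQL
cutoff <- 800000
orders %>%
  filter(order_id > !!cutoff) %>%
  show_query()
```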
2. View resulting table when `Sys.Date()` is translated when prefixing `!!`
```
orders %>%
mutate(today = !!Sys.Date()) %>%
head()
```
```
## # Source: lazy query [?? x 4]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_id customer_id step_id today
## <int> <int> <dbl> <chr>
## 1 801473 58 796 2020-01-23
## 2 801474 19 796 2020-01-23
## 3 801475 4 796 2020-01-23
## 4 801476 10 796 2020-01-23
## 5 801477 81 796 2020-01-23
## 6 801478 89 796 2020-01-23
```
3. Disconnect from the database using `connection_close`
```
connection_close(con)
```
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/data-visualizations.html |
6 Data Visualizations
=====================
6\.1 Simple plot
----------------
*Practice pushing the calculations to the database*
1. Load the `connections`, `dplyr`, `dbplyr`, and `config` libraries
```
library(connections)
library(dplyr)
library(dbplyr)
library(config)
```
2. Use `connection_open()` to open a Database connection
```
con <- connection_open(
RPostgres::Postgres(),
host = "localhost",
user = get("user", config = "dev"),
password = get("pwd", config = "dev"),
port = 5432,
dbname = "postgres",
bigint = "integer"
)
```
3. Use `tbl()` to create a pointer to the **v\_orders** table
```
orders <- tbl(con, in_schema("retail", "v_orders"))
```
4. Use `collect()` to bring back the aggregated results into a “pass\-through” variable called `by_year`
```
by_year <- orders %>%
count(date_year) %>%
collect()
```
5. Preview the `by_year` variable
```
by_year
```
```
## # A tibble: 3 x 2
## date_year n
## <int> <int>
## 1 2017 364317
## 2 2016 366796
## 3 2018 268934
```
6. Load the `ggplot2` library
```
library(ggplot2)
```
7. Plot results using `ggplot2`
```
ggplot(by_year) +
geom_col(aes(date_year, n))
```
8. Using the code in this section, create a single piped code set which also creates the plot
```
orders %>%
count(date_year) %>%
collect() %>%
ggplot() + # < Don't forget to switch to `+`
geom_col(aes(date_year, n))
```
6\.2 Plot in one code segment
-----------------------------
*Practice going from `dplyr` to `ggplot2` without using a pass\-through variable, great for EDA*
1. Summarize the order totals in a new variable called `sales`
```
orders %>%
summarise(sales = sum(order_total))
```
```
## Warning: Missing values are always removed in SQL.
## Use `SUM(x, na.rm = TRUE)` to silence this warning
## This warning is displayed only once per session.
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## sales
## <dbl>
## 1 38014000
```
2. Summarize the order totals grouped by `date_year` in a new variable called `sales`
```
orders %>%
group_by(date_year) %>%
summarise(sales = sum(order_total))
```
```
## # Source: lazy query [?? x 2]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## date_year sales
## <int> <dbl>
## 1 2017 13911500
## 2 2016 13998800
## 3 2018 10103600
```
3. Summarize the order totals grouped by `date_year` in a new variable called `sales` and plot the results
```
orders %>%
group_by(date_year) %>%
summarise(sales = sum(order_total)) %>%
ggplot() +
geom_col(aes(date_year, sales))
```
4. Switch the calculation to reflect the average of the order sale total
```
orders %>%
group_by(date_year) %>%
summarise(sales = mean(order_total)) %>%
ggplot() +
geom_col(aes(date_year, sales))
```
```
## Warning: Missing values are always removed in SQL.
## Use `mean(x, na.rm = TRUE)` to silence this warning
## This warning is displayed only once per session.
```
6\.3 Create a histogram
-----------------------
*Use the `dbplot` package to easily create a histogram*
1. Load the `dbplot` package
```
library(dbplot)
```
2. Use the `dbplot_histogram()` to build the histogram
```
orders %>%
dbplot_histogram(order_total)
```
3. Adjust the `binwidth` to 10
```
orders %>%
dbplot_histogram(order_total, binwidth = 10)
```
6\.4 Raster plot
----------------
*Use `dbplot`’s raster graph*
1. Use `dbplot_raster()` to visualize `order_qty` versus `order_total`
```
orders %>%
dbplot_raster(order_qty, order_total)
```
2. Change the plot’s resolution to 10
```
orders %>%
dbplot_raster(order_qty, order_total, resolution = 10)
```
6\.5 Using the `compute` functions
----------------------------------
1. Use the `db_compute_raster2()` function to get the underlying results that feed the plot
```
locations <- orders %>%
db_compute_raster2(customer_lon, customer_lat, resolution = 10)
```
2. Preview the `locations` variable
```
locations
```
```
## # A tibble: 58 x 5
## customer_lon customer_lat `n()` customer_lon_2 customer_lat_2
## <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 -122. 37.8 10819 -122. 37.8
## 2 -122. 37.7 22034 -122. 37.8
## 3 -122. 37.8 10906 -122. 37.8
## 4 -122. 37.8 11574 -122. 37.8
## 5 -122. 37.8 33725 -122. 37.8
## 6 -122. 37.8 20083 -122. 37.8
## 7 -122. 37.7 11475 -122. 37.7
## 8 -122. 37.7 23571 -122. 37.7
## 9 -122. 37.8 11416 -122. 37.8
## 10 -122. 37.8 11089 -122. 37.8
## # … with 48 more rows
```
3. Load the `leaflet` library
```
library(leaflet)
```
4. Pipe `locations` into the `leaflet()` function, and then pipe that into the `addTiles()` function
```
locations %>%
leaflet() %>%
addTiles()
```
5. Add the `addRectangles()` function using the longitude and latitude variables
```
locations %>%
leaflet() %>%
addTiles() %>%
addRectangles(
~customer_lon,
~customer_lat,
~customer_lon_2,
~customer_lat_2
)
```
6. Add the `fillOpacity` argument to the `addRectangles()` step, use `n()` as the value for it
```
locations %>%
leaflet() %>%
addTiles() %>%
addRectangles(
~customer_lon,
~customer_lat,
~customer_lon_2,
~customer_lat_2,
fillOpacity = ~`n()`
)
```
7. Modify `fillOpacity` to be calculated as a percentage against the maximum number of orders
```
locations %>%
leaflet() %>%
addTiles() %>%
addRectangles(
~customer_lon,
~customer_lat,
~customer_lon_2,
~customer_lat_2,
fillOpacity = ~(`n()` / max(`n()`))
)
```
8. Add the `popup` argument with the following instruction as its value: `` ~paste0("<p>No of orders: ", `n()`, "</p>") ``
```
locations %>%
leaflet() %>%
addTiles() %>%
addRectangles(
~customer_lon,
~customer_lat,
~customer_lon_2,
~customer_lat_2,
fillOpacity = ~(`n()` / max(`n()`)),
popup = ~paste0("<p>No of orders: ", `n()`,"</p>")
)
```
9. Disconnect from the database using `connection_close`
```
connection_close(con)
```
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/modeling-with-databases.html |
7 Modeling with databases
=========================
7\.1 Single step sampling
-------------------------
*Use PostgreSQL TABLESAMPLE clause*
1. Use `connection_open()` to open a Database connection
```
con <- connection_open(
RPostgres::Postgres(),
host = "localhost",
user = get("user"),
password = get("pwd"),
port = 5432,
dbname = "postgres",
bigint = "integer"
)
```
2. Set the `orders` variable to point to the **orders** table
```
orders <- tbl(con, in_schema("retail", "orders"))
```
3. Set the `orders_view` variable to point to the **v\_orders** table
```
orders_view <- tbl(con, in_schema("retail", "v_orders"))
```
4. Pipe `orders` into the function `show_query()`
```
orders %>%
show_query()
```
```
## <SQL>
## SELECT *
## FROM retail.orders
```
5. Pipe the previous command into the `class()` function to see the kind of output `show_query()` returns
```
orders %>%
show_query() %>%
class()
```
```
## <SQL>
## SELECT *
## FROM retail.orders
```
```
## [1] "tbl_conn" "tbl_PqConnection" "tbl_dbi" "tbl_sql"
## [5] "tbl_lazy" "tbl"
```
6. Replace `show_query()` with `remote_query()` to compare the output types
```
orders %>%
remote_query() %>%
class()
```
```
## [1] "sql" "character"
```
7. Replace `class()` with `build_sql()`. Use `con` as the value for the `con` argument
```
orders %>%
remote_query() %>%
build_sql(con = con)
```
```
## <SQL> SELECT *
## FROM retail.orders
```
8. Add *" TABLESAMPLE BERNOULLI (0\.1\)"* to `build_sql()` as another `...` argument
```
orders %>%
remote_query() %>%
build_sql(con = con, " TABLESAMPLE BERNOULLI (0.1)")
```
```
## <SQL> SELECT *
## FROM retail.orders TABLESAMPLE BERNOULLI (0.1)
```
9. Pipe the code into `tbl()`. Use `con` for the `con` argument, and `.` as the second argument
```
orders %>%
remote_query() %>%
build_sql(con = con, " TABLESAMPLE BERNOULLI (0.1)") %>%
tbl(con, .)
```
```
## # Source: SQL [?? x 3]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_id customer_id step_id
## <int> <int> <dbl>
## 1 969600 39 965
## 2 970974 89 966
## 3 973046 86 967
## 4 973200 78 967
## 5 975219 75 968
## 6 975885 62 968
## 7 977001 21 969
## 8 979327 53 970
## 9 973304 26 972
## 10 973555 48 972
## # … with more rows
```
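An alternative sketch for the same sampled table, assuming the raw statement is acceptable as-is, is to pass it straight to `tbl()` through `sql()` (the variable name `orders_sample_alt` is just illustrative):
```
# sketch: hand-written TABLESAMPLE query wrapped as a lazy table
orders_sample_alt <- tbl(
  con,
  sql("SELECT * FROM retail.orders TABLESAMPLE BERNOULLI (0.1)")
)
```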
10. Use `inner_join()` to add the information from the `orders_view` pointer, use `order_id` as the matching field
```
orders %>%
remote_query() %>%
build_sql(con = con, " TABLESAMPLE BERNOULLI (0.1)") %>%
tbl(con, .) %>%
inner_join(orders_view, by = "order_id")
```
```
## # Source: lazy query [?? x 12]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_id customer_id.x step_id date date_year date_month customer_id.y
## <int> <int> <dbl> <chr> <int> <int> <int>
## 1 5693 33 3 2016… 2016 1 33
## 2 7095 69 4 2016… 2016 1 69
## 3 7280 78 4 2016… 2016 1 78
## 4 11260 20 6 2016… 2016 1 20
## 5 11260 20 6 2016… 2016 1 22
## 6 15317 22 13 2016… 2016 1 87
## 7 15317 22 13 2016… 2016 1 22
## 8 15841 41 8 2016… 2016 1 41
## 9 15841 41 8 2016… 2016 1 51
## 10 21315 54 16 2016… 2016 1 54
## # … with more rows, and 5 more variables: customer_name <chr>,
## # customer_lon <dbl>, customer_lat <dbl>, order_total <dbl>, order_qty <int>
```
11. Assign the resulting code to a variable `orders_sample_db`
```
orders_sample_db <- orders %>%
remote_query() %>%
build_sql(con = con, " TABLESAMPLE BERNOULLI (0.1)") %>%
tbl(con, .) %>%
inner_join(orders_view, by = "order_id")
```
12. Use `collect()` to load the results of `orders_sample_db` to a new variable called `orders_sample`
```
orders_sample <- collect(orders_sample_db)
```
13. Load the `dbplot` library
```
library(dbplot)
```
14. Use `dbplot_histogram()` to visualize the distribution of `order_total` from `orders_sample`
```
orders_sample %>%
dbplot_histogram(order_total, binwidth = 5)
```
15. Use `dbplot_histogram()` to visualize the distribution of `order_total` from `orders_view`
```
orders_view %>%
dbplot_histogram(order_total, binwidth = 5)
```
7\.2 Using `tidymodels` for modeling
------------------------------------
*Fit and measure the model’s performance using functions from `parsnip` and `yardstick`*
1. Load the `tidymodels` library
```
library(tidymodels)
```
2. Start with the `linear_reg()` command, pipe into `set_engine()`, and use *“lm”* as its sole argument
```
linear_reg() %>%
set_engine("lm")
```
```
## Linear Regression Model Specification (regression)
##
## Computational engine: lm
```
3. Pipe into the `fit()` command. Use the formula: `order_total ~ order_qty`, and `orders_sample` as the `data` argument
```
linear_reg() %>%
set_engine("lm") %>%
fit(order_total ~ order_qty, data = orders_sample)
```
```
## parsnip model object
##
## Fit in: 126ms
## Call:
## stats::lm(formula = formula, data = data)
##
## Coefficients:
## (Intercept) order_qty
## 0.09443 6.62037
```
4. Assign the previous code to a variable called `parsnip_model`
```
parsnip_model <- linear_reg() %>%
set_engine("lm") %>%
fit(order_total ~ order_qty, data = orders_sample)
```
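If the raw `lm` output is ever needed, the engine fit should be available inside the parsnip object; a small sketch, assuming the standard `$fit` slot:
```
# sketch: inspect the underlying lm model stored by parsnip
summary(parsnip_model$fit)
```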
5. Use `bind_cols()` to add the predictions to `orders_sample`. Calculate the predictions with `predict()`
```
orders_sample %>%
bind_cols(predict(parsnip_model, orders_sample))
```
```
## # A tibble: 1,964 x 13
## order_id customer_id.x step_id date date_year date_month customer_id.y
## <int> <int> <dbl> <chr> <int> <int> <int>
## 1 1864 35 1 2016… 2016 1 35
## 2 3617 13 2 2016… 2016 1 13
## 3 5279 81 3 2016… 2016 1 81
## 4 5426 42 3 2016… 2016 1 42
## 5 7472 1 4 2016… 2016 1 1
## 6 7967 50 4 2016… 2016 1 50
## 7 11174 74 6 2016… 2016 1 74
## 8 11174 74 6 2016… 2016 1 50
## 9 11532 22 11 2016… 2016 1 62
## 10 11532 22 11 2016… 2016 1 22
## # … with 1,954 more rows, and 6 more variables: customer_name <chr>,
## # customer_lon <dbl>, customer_lat <dbl>, order_total <dbl>, order_qty <int>,
## # .pred <dbl>
```
6. Pipe the code into the `metrics()` function. Use `order_total` as the `truth` argument, and `.pred` as the `estimate` argument
```
orders_sample %>%
bind_cols(predict(parsnip_model, orders_sample)) %>%
metrics(truth = order_total, estimate = .pred)
```
```
## # A tibble: 3 x 3
## .metric .estimator .estimate
## <chr> <chr> <dbl>
## 1 rmse standard 3.50
## 2 rsq standard 0.939
## 3 mae standard 2.78
```
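To see where the first row of that table comes from, the RMSE can be recomputed by hand with plain `dplyr`; a minimal sketch that should agree with the `yardstick` figure (about 3.50):
```
# sketch: recompute RMSE manually from the residuals
orders_sample %>%
  bind_cols(predict(parsnip_model, orders_sample)) %>%
  summarise(rmse = sqrt(mean((order_total - .pred)^2)))
```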
7\.3 Score with `tidypredict`
-----------------------------
1. Load the `tidypredict` library
```
library(tidypredict)
```
2. Use the `parse_model()` function to parse `parsnip_model`, and assign it to a variable called `parsed_parsnip`
```
parsed_parsnip <- parse_model(parsnip_model)
```
3. Use `str()` to see the `parsed_parsnip` object’s structure
```
str(parsed_parsnip)
```
```
## List of 2
## $ general:List of 6
## ..$ model : chr "lm"
## ..$ version : num 2
## ..$ type : chr "regression"
## ..$ residual: int 1962
## ..$ sigma2 : num 12.3
## ..$ is_glm : num 0
## $ terms :List of 2
## ..$ :List of 5
## .. ..$ label : chr "(Intercept)"
## .. ..$ coef : num 0.0944
## .. ..$ is_intercept: num 1
## .. ..$ fields :List of 1
## .. .. ..$ :List of 2
## .. .. .. ..$ type: chr "ordinary"
## .. .. .. ..$ col : chr "(Intercept)"
## .. ..$ qr :List of 2
## .. .. ..$ qr_1: num -0.0226
## .. .. ..$ qr_2: num -0.0639
## ..$ :List of 5
## .. ..$ label : chr "order_qty"
## .. ..$ coef : num 6.62
## .. ..$ is_intercept: num 0
## .. ..$ fields :List of 1
## .. .. ..$ :List of 2
## .. .. .. ..$ type: chr "ordinary"
## .. .. .. ..$ col : chr "order_qty"
## .. ..$ qr :List of 2
## .. .. ..$ qr_1: num 0
## .. .. ..$ qr_2: num 0.0109
## - attr(*, "class")= chr [1:3] "parsed_model" "pm_regression" "list"
```
4. Use `tidypredict_fit()` to view the `dplyr` formula that calculates the prediction
```
tidypredict_fit(parsed_parsnip)
```
```
## 0.0944307069267399 + (order_qty * 6.62036536915535)
```
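Reading the formula directly, an order of 3 items would be scored at roughly 0.094 + 3 * 6.620 ≈ 19.96; a one-line sketch of that arithmetic:
```
# sketch: score a hypothetical 3-item order with the fitted formula
0.0944307069267399 + (3 * 6.62036536915535)  # ~19.96
```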
5. Use `head()` to get the first 10 records from `orders_view`
```
orders_view %>%
head(10)
```
```
## # Source: lazy query [?? x 10]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_id date date_year date_month customer_id customer_name customer_lon
## <int> <chr> <int> <int> <int> <chr> <dbl>
## 1 1001 2016… 2016 1 22 Dr. Birdie K… -122.
## 2 1002 2016… 2016 1 6 Meggan Bruen -122.
## 3 1003 2016… 2016 1 80 Jessee Rodri… -122.
## 4 1004 2016… 2016 1 55 Kathryn Stehr -122.
## 5 1005 2016… 2016 1 73 Merlyn Runol… -122.
## 6 1006 2016… 2016 1 70 Reggie Mills -122.
## 7 1007 2016… 2016 1 55 Kathryn Stehr -122.
## 8 1008 2016… 2016 1 40 Dr. Trace Gl… -122.
## 9 1009 2016… 2016 1 78 Pricilla Goo… -122.
## 10 1010 2016… 2016 1 35 Mr. Commodor… -122.
## # … with 3 more variables: customer_lat <dbl>, order_total <dbl>, order_qty <int>
```
6. Pipe the code into `mutate()`. Assign to a new `my_pred` variable the results of `tidypredict_fit()`. Make sure to prefix `tidypredict_fit()` with the bang\-bang operator so that the formula is evaluated.
```
orders_view %>%
head(10) %>%
mutate(my_pred = !! tidypredict_fit(parsed_parsnip))
```
```
## # Source: lazy query [?? x 11]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_id date date_year date_month customer_id customer_name customer_lon
## <int> <chr> <int> <int> <int> <chr> <dbl>
## 1 1001 2016… 2016 1 22 Dr. Birdie K… -122.
## 2 1002 2016… 2016 1 6 Meggan Bruen -122.
## 3 1003 2016… 2016 1 80 Jessee Rodri… -122.
## 4 1004 2016… 2016 1 55 Kathryn Stehr -122.
## 5 1005 2016… 2016 1 73 Merlyn Runol… -122.
## 6 1006 2016… 2016 1 70 Reggie Mills -122.
## 7 1007 2016… 2016 1 55 Kathryn Stehr -122.
## 8 1008 2016… 2016 1 40 Dr. Trace Gl… -122.
## 9 1009 2016… 2016 1 78 Pricilla Goo… -122.
## 10 1010 2016… 2016 1 35 Mr. Commodor… -122.
## # … with 4 more variables: customer_lat <dbl>, order_total <dbl>,
## # order_qty <int>, my_pred <dbl>
```
7. Replace the `mutate()` command with `tidypredict_to_column()`
```
orders_view %>%
head(10) %>%
tidypredict_to_column(parsnip_model)
```
```
## # Source: lazy query [?? x 11]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_id date date_year date_month customer_id customer_name customer_lon
## <int> <chr> <int> <int> <int> <chr> <dbl>
## 1 1001 2016… 2016 1 22 Dr. Birdie K… -122.
## 2 1002 2016… 2016 1 6 Meggan Bruen -122.
## 3 1003 2016… 2016 1 80 Jessee Rodri… -122.
## 4 1004 2016… 2016 1 55 Kathryn Stehr -122.
## 5 1005 2016… 2016 1 73 Merlyn Runol… -122.
## 6 1006 2016… 2016 1 70 Reggie Mills -122.
## 7 1007 2016… 2016 1 55 Kathryn Stehr -122.
## 8 1008 2016… 2016 1 40 Dr. Trace Gl… -122.
## 9 1009 2016… 2016 1 78 Pricilla Goo… -122.
## 10 1010 2016… 2016 1 35 Mr. Commodor… -122.
## # … with 4 more variables: customer_lat <dbl>, order_total <dbl>,
## # order_qty <int>, fit <dbl>
```
8. Load the `yaml` library
```
library(yaml)
```
9. Use `write_yaml()` to save the contents of `parsed_parsnip` into a file called **model.yaml**
```
write_yaml(parsed_parsnip, "model.yaml")
```
10. Using `read_yaml()`, read the contents of the **model.yaml** file into a new variable called `loaded_model`
```
loaded_model <- read_yaml("model.yaml")
```
11. Use `as_parsed_model()` to convert the `loaded_model` variable into a `tidypredict` parsed model object, assign the results to `loaded_model_2`
```
loaded_model_2 <- as_parsed_model(loaded_model)
```
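One way to check that nothing was lost in the YAML round trip, assuming the conversion succeeded, is to ask the reloaded model for its fit formula and compare it to the one shown above:
```
# sketch: the round-tripped model should yield the same prediction formula
tidypredict_fit(loaded_model_2)
```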
7\.4 Run predictions in DB
--------------------------
1. Load the `modeldb` library
```
library(modeldb)
```
2. Use `select()` to pick the `order_total` and `order_qty` fields from the `orders_sample_db` table pointer
```
orders_sample_db %>%
select(order_total, order_qty)
```
```
## # Source: lazy query [?? x 2]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_total order_qty
## <dbl> <int>
## 1 23.2 3
## 2 7.53 1
## 3 13.7 2
## 4 17.6 3
## 5 12.6 2
## 6 31.7 5
## 7 31.7 5
## 8 21.2 3
## 9 21.2 3
## 10 37.1 6
## # … with more rows
```
3. Pipe the code into the `linear_regression_db()` function, pass `order_total` as the only argument
```
orders_sample_db %>%
select(order_total, order_qty) %>%
linear_regression_db(order_total)
```
```
## # A tibble: 1 x 2
## `(Intercept)` order_qty
## <dbl> <dbl>
## 1 0.105 6.64
```
4. Assign the model results to a new variable called `db_model`
```
db_model <- orders_sample_db %>%
select(order_total, order_qty) %>%
linear_regression_db(order_total)
```
5. Use `as_parsed_model()` to convert `db_model` to a parsed model object. Assign it to a new variable called `pm`
```
pm <- as_parsed_model(db_model)
```
6. Use `head()` to get the first 10 records of `orders_view`, and then pipe into `tidypredict_to_column()` to add the predictions from `pm`
```
orders_view %>%
head(10) %>%
tidypredict_to_column(pm)
```
```
## # Source: lazy query [?? x 11]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_id date date_year date_month customer_id customer_name customer_lon
## <int> <chr> <int> <int> <int> <chr> <dbl>
## 1 1001 2016… 2016 1 22 Dr. Birdie K… -122.
## 2 1002 2016… 2016 1 6 Meggan Bruen -122.
## 3 1003 2016… 2016 1 80 Jessee Rodri… -122.
## 4 1004 2016… 2016 1 55 Kathryn Stehr -122.
## 5 1005 2016… 2016 1 73 Merlyn Runol… -122.
## 6 1006 2016… 2016 1 70 Reggie Mills -122.
## 7 1007 2016… 2016 1 55 Kathryn Stehr -122.
## 8 1008 2016… 2016 1 40 Dr. Trace Gl… -122.
## 9 1009 2016… 2016 1 78 Pricilla Goo… -122.
## 10 1010 2016… 2016 1 35 Mr. Commodor… -122.
## # … with 4 more variables: customer_lat <dbl>, order_total <dbl>,
## # order_qty <int>, fit <dbl>
```
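The coefficients estimated inside the database can also be compared with the `lm` fit from section 7.2. Because `TABLESAMPLE` draws a fresh sample each time the lazy query runs, the two fits are likely based on slightly different rows, so the small gap between the estimates (intercept 0.105 vs. 0.094) is expected. A small sketch, assuming the earlier objects are still in memory:
```
# sketch: in-database vs. in-memory coefficient estimates
db_model                 # modeldb estimates (~0.105 intercept, ~6.64 slope)
coef(parsnip_model$fit)  # lm estimates      (~0.094 intercept, ~6.62 slope)
```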
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/advanced-operations.html |
8 Advanced Operations
=====================
8\.1 Simple wrapper function
----------------------------
1. Load the `connections` and `dplyr` libraries
```
library(connections)
library(dplyr)
library(dbplyr)
library(config)
```
2. Use `connection_open()` to open a Database connection
```
con <- connection_open(
RPostgres::Postgres(),
host = "localhost",
user = get("user"),
password = get("pwd"),
port = 5432,
dbname = "postgres",
bigint = "integer"
)
```
3. Create a variable that points to the **v\_orders** table
```
orders <- tbl(con, in_schema("retail", "v_orders"))
```
4. Create a simple `dplyr` call that gets the average of all order totals
```
orders %>%
summarise(mean = mean(order_total, na.rm = TRUE))
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.0
```
5. Load the `rlang` library
```
library(rlang)
```
6. Create a new function called `my_mean()` that takes an argument, `x`, and returns the result of `enquo(x)`
```
my_mean <- function(x){
enquo(x)
}
```
7. Test the new function. It should return the same variable name, but inside a quosure. Use `order_total` as the argument's value to test
```
my_mean(order_total)
```
```
## <quosure>
## expr: ^order_total
## env: global
```
8. In the function, re\-assign `x` to the result of `enquo(x)`, and then return `x`
```
my_mean <- function(x){
x <- enquo(x)
x
}
```
9. Test the same way again; the output should match what it was before
```
my_mean(order_total)
```
```
## <quosure>
## expr: ^order_total
## env: global
```
10. Remove the last line that returns `x`, and add the initial `dplyr` code from step 4 to the function body. Then replace `order_total` with `!! x`
```
my_mean <- function(x){
x <- enquo(x)
orders %>%
summarise(mean = mean(!! x, na.rm = TRUE))
}
```
11. Test the new function by passing `order_total` as `x`
```
my_mean(order_total)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.0
```
12. In the `summarise()` step, replace the name `mean` with `!! as_label(x)`; also replace the `=` sign with `:=`
```
my_mean <- function(x){
x <- enquo(x)
orders %>%
summarise(!! as_label(x) := mean(!! x, na.rm = TRUE))
}
```
13. Run the function again, the name of the column should match the argument value
```
my_mean(order_total)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_total
## <dbl>
## 1 38.0
```
14. Test the function by passing an expression, such as `order_total / order_qty`
```
my_mean(order_total / order_qty)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## `order_total/order_qty`
## <dbl>
## 1 6.65
```
15. Make the function generic: add a new argument called `.data`. Inside the function, replace `orders` with `.data`
```
my_mean <- function(.data, x){
x <- enquo(x)
.data %>%
summarise(!! as_label(x) := mean(!! x, na.rm = TRUE))
}
```
16. The function now behaves more like a `dplyr` verb. Start with `orders` and then pipe into the function
```
orders %>%
my_mean(order_total)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_total
## <dbl>
## 1 38.0
```
17. Clean up the code by removing the pipe inside the function
```
my_mean <- function(.data, x){
x <- enquo(x)
summarise(
.data,
!! as_label(x) := mean(!! x, na.rm = TRUE)
)
}
```
18. Confirm that there is no change in the behavior of the function
```
orders %>%
my_mean(order_total)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_total
## <dbl>
## 1 38.0
```
19. Add a `show_query()` step to preview the resulting SQL statement
```
orders %>%
my_mean(order_total) %>%
show_query()
```
```
## <SQL>
## SELECT AVG("order_total") AS "order_total"
## FROM retail.v_orders
```
20. Try the function with a non\-DB backed data set, such as `mtcars`. Use `mpg` as the aggregating variable
```
mtcars %>%
my_mean(mpg)
```
```
## mpg
## 1 20.09062
```
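As an aside on the pattern built up in this exercise: newer versions of `rlang` (0.4.0 or later is assumed here) offer the embrace operator `{{ }}`, which performs the `enquo()` and `!!` steps in one go. A minimal sketch of an equivalent wrapper, reusing the `orders` table defined above:
```
# Sketch of the same wrapper using the embrace operator {{ }} instead of
# enquo() + !! (assumes rlang >= 0.4.0; reuses the `orders` tbl from above)
my_mean_embraced <- function(.data, x) {
  summarise(.data, mean = mean({{ x }}, na.rm = TRUE))
}

orders %>%
  my_mean_embraced(order_total)
```
Naming the output column after the argument, as in the version above, still calls for `enquo()` plus `as_label()`, or the glue\-string support for `:=` names in more recent `rlang` releases.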
8\.2 Multiple variables
-----------------------
*Create functions that handle a variable number of arguments. The goal of the exercise is to create an `anti-select()` function.*
1. Load the `purrr` package
```
library(purrr)
```
2. Use *…* as the second argument of a function called `de_select()`. Inside the function, use `enquos()` to capture it
```
de_select <- function(.data, ...){
vars <- enquos(...)
vars
}
```
3. Test the function using *orders*
```
orders %>%
de_select(order_id, date)
```
```
## <list_of<quosure>>
##
## [[1]]
## <quosure>
## expr: ^order_id
## env: 0x56522573ace8
##
## [[2]]
## <quosure>
## expr: ^date
## env: 0x56522573ace8
```
4. Add a step to the function that iterates through each quosure and prefixes a minus sign to tell `select()` to drop that specific field. Use `map()` for the iteration, and `quo()` to create the prefixed expression.
```
de_select <- function(.data, ...){
vars <- enquos(...)
vars <- map(vars, ~ quo(- !! .x))
vars
}
```
5. Run the same test to view the new results
```
orders %>%
de_select(order_id, date)
```
```
## [[1]]
## <quosure>
## expr: ^-^order_id
## env: 0x565225b688f0
##
## [[2]]
## <quosure>
## expr: ^-^date
## env: 0x565225b6b610
```
6. Add the `select()` step. Use *!!!* to splice the *vars* list inside `select()`
```
de_select <- function(.data, ...){
vars <- enquos(...)
vars <- map(vars, ~ quo(- !! .x))
select(.data, !!! vars)
}
```
7. Run the test again; this time the operation will take place.
```
orders %>%
de_select(order_id, date)
```
```
## # Source: lazy query [?? x 8]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## date_year date_month customer_id customer_name customer_lon customer_lat
## <int> <int> <int> <chr> <dbl> <dbl>
## 1 2016 1 22 Dr. Birdie K… -122. 37.7
## 2 2016 1 6 Meggan Bruen -122. 37.7
## 3 2016 1 80 Jessee Rodri… -122. 37.7
## 4 2016 1 55 Kathryn Stehr -122. 37.8
## 5 2016 1 73 Merlyn Runol… -122. 37.7
## 6 2016 1 70 Reggie Mills -122. 37.7
## 7 2016 1 55 Kathryn Stehr -122. 37.8
## 8 2016 1 40 Dr. Trace Gl… -122. 37.8
## 9 2016 1 78 Pricilla Goo… -122. 37.8
## 10 2016 1 35 Mr. Commodor… -122. 37.7
## # … with more rows, and 2 more variables: order_total <dbl>, order_qty <int>
```
8. Add a `show_query()` step to see the resulting SQL
```
orders %>%
de_select(order_id, date) %>%
show_query()
```
```
## <SQL>
## SELECT "date_year", "date_month", "customer_id", "customer_name", "customer_lon", "customer_lat", "order_total", "order_qty"
## FROM retail.v_orders
```
9. Test the function with a different data set, such as `mtcars`
```
mtcars %>%
de_select(mpg, wt, am)
```
```
## cyl disp hp drat qsec vs gear carb
## Mazda RX4 6 160.0 110 3.90 16.46 0 4 4
## Mazda RX4 Wag 6 160.0 110 3.90 17.02 0 4 4
## Datsun 710 4 108.0 93 3.85 18.61 1 4 1
## Hornet 4 Drive 6 258.0 110 3.08 19.44 1 3 1
## Hornet Sportabout 8 360.0 175 3.15 17.02 0 3 2
## Valiant 6 225.0 105 2.76 20.22 1 3 1
## Duster 360 8 360.0 245 3.21 15.84 0 3 4
## Merc 240D 4 146.7 62 3.69 20.00 1 4 2
## Merc 230 4 140.8 95 3.92 22.90 1 4 2
## Merc 280 6 167.6 123 3.92 18.30 1 4 4
## Merc 280C 6 167.6 123 3.92 18.90 1 4 4
## Merc 450SE 8 275.8 180 3.07 17.40 0 3 3
## Merc 450SL 8 275.8 180 3.07 17.60 0 3 3
## Merc 450SLC 8 275.8 180 3.07 18.00 0 3 3
## Cadillac Fleetwood 8 472.0 205 2.93 17.98 0 3 4
## Lincoln Continental 8 460.0 215 3.00 17.82 0 3 4
## Chrysler Imperial 8 440.0 230 3.23 17.42 0 3 4
## Fiat 128 4 78.7 66 4.08 19.47 1 4 1
## Honda Civic 4 75.7 52 4.93 18.52 1 4 2
## Toyota Corolla 4 71.1 65 4.22 19.90 1 4 1
## Toyota Corona 4 120.1 97 3.70 20.01 1 3 1
## Dodge Challenger 8 318.0 150 2.76 16.87 0 3 2
## AMC Javelin 8 304.0 150 3.15 17.30 0 3 2
## Camaro Z28 8 350.0 245 3.73 15.41 0 3 4
## Pontiac Firebird 8 400.0 175 3.08 17.05 0 3 2
## Fiat X1-9 4 79.0 66 4.08 18.90 1 4 1
## Porsche 914-2 4 120.3 91 4.43 16.70 0 5 2
## Lotus Europa 4 95.1 113 3.77 16.90 1 5 2
## Ford Pantera L 8 351.0 264 4.22 14.50 0 5 4
## Ferrari Dino 6 145.0 175 3.62 15.50 0 5 6
## Maserati Bora 8 301.0 335 3.54 14.60 0 5 8
## Volvo 142E 4 121.0 109 4.11 18.60 1 4 2
```
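For comparison, when the arguments are plain column names, a shorter variant can lean on `tidyselect` and simply forward the dots into a negated selection; the quosure\-based version above remains the more instructive one for seeing how `!!!` splicing works. A sketch, with the caveat that it assumes a `dplyr`/`tidyselect` version that supports forwarding `...` inside `c()`:
```
# Sketch of a shorter anti-select that forwards the dots into a negated
# selection (assumes tidyselect support for forwarding ... inside c())
de_select_short <- function(.data, ...) {
  select(.data, -c(...))
}

mtcars %>%
  de_select_short(mpg, wt, am)
```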
8\.3 Multiple queries
---------------------
*A suggested approach to avoid passing multiple, similar queries to the database*
1. Create a simple `dplyr` piped operation that returns the mean of *order\_total* for the months of January, February and March as a group
```
orders %>%
filter(date_month %in% c(1,2,3)) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 37.9
```
2. Assign the first operation to a variable called *a*, and create a copy of the operation that changes the selected months to January, March and April. Assign the second one to a variable called *b*.
```
a <- orders %>%
filter(date_month %in% c(1,2,3)) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
b <- orders %>%
filter(date_month %in% c(1,3,4)) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
```
3. Use *union()* to pass *a* and *b* at the same time to the database
```
union(a, b)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 37.9
## 2 38.0
```
4. Pipe the previous instruction to `show_query()` to confirm that the resulting query is a single one
```
union(a, b) %>%
show_query()
```
```
## <SQL>
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (1.0, 2.0, 3.0)))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (1.0, 3.0, 4.0)))
```
5. Assign a list of overlapping month sets to a new variable called *months*
```
months <- list(
c(1,2,3),
c(1,3,4),
c(2,4,6)
)
```
6. Use `map()` to cycle through each set of overlapping months. Notice that it returns three separate results, meaning that it went to the database three times
```
months %>%
map(
~ orders %>%
filter(date_month %in% .x) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
)
```
```
## [[1]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 37.9
##
## [[2]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.0
##
## [[3]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.2
```
7. Add a `reduce()` operation and use the `union()` command to create a single query
```
months %>%
map(
~ orders %>%
filter(date_month %in% .x) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
) %>%
reduce(function(x, y) union(x, y))
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.2
## 2 38.0
## 3 37.9
```
8. Use `show_query()` to see the resulting single query sent to the database
```
months %>%
map(
~ orders %>%
filter(date_month %in% .x) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
) %>%
reduce(function(x, y) union(x, y)) %>%
show_query()
```
```
## <SQL>
## ((SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (1.0, 2.0, 3.0)))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (1.0, 3.0, 4.0))))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (2.0, 4.0, 6.0)))
```
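Since `union()` itself takes two arguments, the anonymous wrapper passed to `reduce()` can also be dropped and the function supplied directly; the result is the same single query. A sketch reusing the `months` list and `orders` table from above (note that `union()` removes duplicate rows, while `union_all()` would keep them):
```
# Same pipeline, passing union() to reduce() directly
# (reuses the `months` list and `orders` tbl defined above)
months %>%
  map(
    ~ orders %>%
      filter(date_month %in% .x) %>%
      summarise(mean = mean(order_total, na.rm = TRUE))
  ) %>%
  reduce(union)
```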
8\.4 Multiple queries with an overlapping range
-----------------------------------------------
1. Create a table with *from* and *to* ranges
```
ranges <- tribble(
~ from, ~to,
1, 4,
2, 5,
3, 7
)
```
2. See how `map2()` works by passing the two columns as the *.x* and *.y* arguments, and supplying a function that adds them
```
map2(ranges$from, ranges$to, ~.x + .y)
```
```
## [[1]]
## [1] 5
##
## [[2]]
## [1] 7
##
## [[3]]
## [1] 10
```
3. Replace *.x \+ .y* with the `dplyr` operation from the previous exercise. In it, re\-write the filter to use *.x* and *.y* as the month range limits
```
map2(
ranges$from,
ranges$to,
~ orders %>%
filter(date_month >= .x & date_month <= .y) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
)
```
```
## [[1]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.0
##
## [[2]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.2
##
## [[3]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.3
```
4. Add the `reduce()` operation
```
map2(
ranges$from,
ranges$to,
~ orders %>%
filter(date_month >= .x & date_month <= .y) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
) %>%
reduce(function(x, y) union(x, y))
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.3
## 2 38.0
## 3 38.2
```
5. Add a `show_query()` step to see how the final query was constructed.
```
map2(
ranges$from,
ranges$to,
~ orders %>%
filter(date_month >= .x & date_month <= .y) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
) %>%
reduce(function(x, y) union(x, y)) %>%
show_query()
```
```
## <SQL>
## ((SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" >= 1.0 AND "date_month" <= 4.0))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" >= 2.0 AND "date_month" <= 5.0)))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" >= 3.0 AND "date_month" <= 7.0))
```
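When the ranges already live in a data frame, `purrr::pmap()` is a handy alternative to `map2()`: it iterates over the rows and matches columns to argument names, which scales past two columns. A sketch reusing the `ranges` tibble and `orders` table from above:
```
# Sketch iterating over the rows of `ranges` by column name with pmap()
# (reuses the `ranges` tibble and `orders` tbl defined above)
pmap(
  ranges,
  function(from, to) {
    orders %>%
      filter(date_month >= from & date_month <= to) %>%
      summarise(mean = mean(order_total, na.rm = TRUE))
  }
) %>%
  reduce(function(x, y) union(x, y))
```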
8\.5 Characters to field names
------------------------------
1. Create two character variables: one with the name of a field in *orders*, and another with a new name to be given to that field
```
my_field <- "new"
orders_field <- "order_total"
```
2. Add a `mutate()` step that adds the new field, and then another step that selects just the new field. Note that both the column name and its contents are taken literally
```
orders %>%
mutate(my_field = !! orders_field) %>%
select(my_field)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## my_field
## <chr>
## 1 order_total
## 2 order_total
## 3 order_total
## 4 order_total
## 5 order_total
## 6 order_total
## 7 order_total
## 8 order_total
## 9 order_total
## 10 order_total
## # … with more rows
```
3. Prefix `my_field` with `!!` and replace the `=` sign with `:=`, so that the new column takes the name stored in `my_field`. Note that the column still contains the literal string
```
orders %>%
mutate(!! my_field := !! orders_field) %>%
select(my_field)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## new
## <chr>
## 1 order_total
## 2 order_total
## 3 order_total
## 4 order_total
## 5 order_total
## 6 order_total
## 7 order_total
## 8 order_total
## 9 order_total
## 10 order_total
## # … with more rows
```
4. Wrap `orders_field` inside a `sym()` function
```
orders %>%
mutate(!! my_field := !! sym(orders_field)) %>%
select(my_field)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## new
## <dbl>
## 1 27.9
## 2 24.8
## 3 42.2
## 4 16.6
## 5 32.6
## 6 6.7
## 7 46.8
## 8 15.8
## 9 9.69
## 10 21.6
## # … with more rows
```
5. Pipe the code into `show_query()`
```
orders %>%
mutate(!! my_field := !! sym(orders_field)) %>%
select(my_field) %>%
show_query()
```
```
## <SQL>
## SELECT "order_total" AS "new"
## FROM retail.v_orders
```
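Recent `dplyr` versions also provide the `.data` pronoun, which looks a column up by a character string without going through `sym()`; it is assumed here that the `dbplyr` version in use translates the pronoun for the database as well. A sketch with the same `my_field` and `orders_field` variables:
```
# Sketch using the .data pronoun instead of sym() to resolve the string
# (assumes a dplyr/dbplyr version that translates .data[[ ]] for the database)
orders %>%
  mutate(!! my_field := .data[[orders_field]]) %>%
  select(all_of(my_field)) %>%
  show_query()
```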
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/advanced-operations.html |
8 Advanced Operations
=====================
8\.1 Simple wrapper function
----------------------------
1. Load the `connections` and `dplyr` libraries
```
library(connections)
library(dplyr)
library(dbplyr)
library(config)
```
2. Use `connection_open()` to open a Database connection
```
con <- connection_open(
RPostgres::Postgres(),
host = "localhost",
user = get("user"),
password = get("pwd"),
port = 5432,
dbname = "postgres",
bigint = "integer"
)
```
3. Create a variable that points to the **v\_orders** table
```
orders <- tbl(con, in_schema("retail", "v_orders"))
```
4. Create a simple `dplyr` call that gets the average of all order totals
```
orders %>%
summarise(mean = mean(order_total, na.rm = TRUE))
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.0
```
5. Load the `rlang` library
```
library(rlang)
```
6. Create a new function call `my_mean()` that will take an argument, `x`, and then returns the results of `enquo(x)`
```
my_mean <- function(x){
enquo(x)
}
```
7. Test the new function. It should return the same variable name, but inside quosure. Use `order_total` as its argument’s value to test
```
my_mean(order_total)
```
```
## <quosure>
## expr: ^order_total
## env: global
```
8. In the function, re\-assign `x` to the result of `enquo(x)`, and then return `x`
```
my_mean <- function(x){
x <- enquo(x)
x
}
```
9. Test the same way again, the output should match to what it was as before
```
my_mean(order_total)
```
```
## <quosure>
## expr: ^order_total
## env: global
```
10. Remove the last line that has `x`, add the contents of the function with the initial `dplyr` code from step 3\. Then replace `order_total` with `!! x`
```
my_mean <- function(x){
x <- enquo(x)
orders %>%
summarise(mean = mean(!! x, na.rm = TRUE))
}
```
11. Test the new function by passing `order_total` as `x`
```
my_mean(order_total)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.0
```
12. In the `summarise()` step, replace the name `mean`, with `!! as_label(x)`, also replace the `=` sign, with `:=`
```
my_mean <- function(x){
x <- enquo(x)
orders %>%
summarise(!! as_label(x) := mean(!! x, na.rm = TRUE))
}
```
13. Run the function again, the name of the column should match the argument value
```
my_mean(order_total)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_total
## <dbl>
## 1 38.0
```
14. Test the function by passing a formula, such as `order_total / order_qty`
```
my_mean(order_total / order_qty)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## `order_total/order_qty`
## <dbl>
## 1 6.65
```
15. Make the function generic, add a new argument called: `.data`. Inisde the function, replace `orders` with `.data`
```
my_mean <- function(.data, x){
x <- enquo(x)
.data %>%
summarise(!! as_label(x) := mean(!! x, na.rm = TRUE))
}
```
16. The function now behaves more like a `dplyr` verb. Start with `orders` and then pipe into the function
```
orders %>%
my_mean(order_total)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_total
## <dbl>
## 1 38.0
```
17. Clean up the code by removing the pipe that inside the function
```
my_mean <- function(.data, x){
x <- enquo(x)
summarise(
.data,
!! as_label(x) := mean(!! x, na.rm = TRUE)
)
}
```
18. Confirm that there is no change in the behavior of the function
```
orders %>%
my_mean(order_total)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_total
## <dbl>
## 1 38.0
```
19. Add a `show_query()` step to preview the resulting SQL statement
```
orders %>%
my_mean(order_total) %>%
show_query()
```
```
## <SQL>
## SELECT AVG("order_total") AS "order_total"
## FROM retail.v_orders
```
20. Try the function with a non\-DB backed variable, such as `mtcars`. Use `mpg` as the aggregating variable
```
mtcars %>%
my_mean(mpg)
```
```
## mpg
## 1 20.09062
```
8\.2 Multiple variables
-----------------------
*Create functions that handle a variable number of arguments. The goal of the exercise is to create an `anti-select()` function.*
1. Load the `purrr` package
```
library(purrr)
```
2. Use *…* as the second argument of a function called `de_select()`. Inside the function use `enquos()` to parse it
```
de_select <- function(.data, ...){
vars <- enquos(...)
vars
}
```
3. Test the function using *airports*
```
orders %>%
de_select(order_id, date)
```
```
## <list_of<quosure>>
##
## [[1]]
## <quosure>
## expr: ^order_id
## env: 0x56522573ace8
##
## [[2]]
## <quosure>
## expr: ^date
## env: 0x56522573ace8
```
4. Add a step to the function that iterates through each quosure and prefixes a minus sign to tell `select()` to drop that specific field. Use `map()` for the iteration, and `quo()` to create the prefixed expression.
```
de_select <- function(.data, ...){
vars <- enquos(...)
vars <- map(vars, ~ quo(- !! .x))
vars
}
```
5. Run the same test to view the new results
```
orders %>%
de_select(order_id, date)
```
```
## [[1]]
## <quosure>
## expr: ^-^order_id
## env: 0x565225b688f0
##
## [[2]]
## <quosure>
## expr: ^-^date
## env: 0x565225b6b610
```
6. Add the `select()` step. Use *!!!* to parse the *vars* variable inside `select()`
```
de_select <- function(.data, ...){
vars <- enquos(...)
vars <- map(vars, ~ quo(- !! .x))
select(.data, !!! vars)
}
```
7. Run the test again, this time the operation will take place.
```
orders %>%
de_select(order_id, date)
```
```
## # Source: lazy query [?? x 8]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## date_year date_month customer_id customer_name customer_lon customer_lat
## <int> <int> <int> <chr> <dbl> <dbl>
## 1 2016 1 22 Dr. Birdie K… -122. 37.7
## 2 2016 1 6 Meggan Bruen -122. 37.7
## 3 2016 1 80 Jessee Rodri… -122. 37.7
## 4 2016 1 55 Kathryn Stehr -122. 37.8
## 5 2016 1 73 Merlyn Runol… -122. 37.7
## 6 2016 1 70 Reggie Mills -122. 37.7
## 7 2016 1 55 Kathryn Stehr -122. 37.8
## 8 2016 1 40 Dr. Trace Gl… -122. 37.8
## 9 2016 1 78 Pricilla Goo… -122. 37.8
## 10 2016 1 35 Mr. Commodor… -122. 37.7
## # … with more rows, and 2 more variables: order_total <dbl>, order_qty <int>
```
8. Add a `show_query()` step to see the resulting SQL
```
orders %>%
de_select(order_id, date) %>%
show_query()
```
```
## <SQL>
## SELECT "date_year", "date_month", "customer_id", "customer_name", "customer_lon", "customer_lat", "order_total", "order_qty"
## FROM retail.v_orders
```
9. Test the function with a different data set, such as `mtcars`
```
mtcars %>%
de_select(mpg, wt, am)
```
```
## cyl disp hp drat qsec vs gear carb
## Mazda RX4 6 160.0 110 3.90 16.46 0 4 4
## Mazda RX4 Wag 6 160.0 110 3.90 17.02 0 4 4
## Datsun 710 4 108.0 93 3.85 18.61 1 4 1
## Hornet 4 Drive 6 258.0 110 3.08 19.44 1 3 1
## Hornet Sportabout 8 360.0 175 3.15 17.02 0 3 2
## Valiant 6 225.0 105 2.76 20.22 1 3 1
## Duster 360 8 360.0 245 3.21 15.84 0 3 4
## Merc 240D 4 146.7 62 3.69 20.00 1 4 2
## Merc 230 4 140.8 95 3.92 22.90 1 4 2
## Merc 280 6 167.6 123 3.92 18.30 1 4 4
## Merc 280C 6 167.6 123 3.92 18.90 1 4 4
## Merc 450SE 8 275.8 180 3.07 17.40 0 3 3
## Merc 450SL 8 275.8 180 3.07 17.60 0 3 3
## Merc 450SLC 8 275.8 180 3.07 18.00 0 3 3
## Cadillac Fleetwood 8 472.0 205 2.93 17.98 0 3 4
## Lincoln Continental 8 460.0 215 3.00 17.82 0 3 4
## Chrysler Imperial 8 440.0 230 3.23 17.42 0 3 4
## Fiat 128 4 78.7 66 4.08 19.47 1 4 1
## Honda Civic 4 75.7 52 4.93 18.52 1 4 2
## Toyota Corolla 4 71.1 65 4.22 19.90 1 4 1
## Toyota Corona 4 120.1 97 3.70 20.01 1 3 1
## Dodge Challenger 8 318.0 150 2.76 16.87 0 3 2
## AMC Javelin 8 304.0 150 3.15 17.30 0 3 2
## Camaro Z28 8 350.0 245 3.73 15.41 0 3 4
## Pontiac Firebird 8 400.0 175 3.08 17.05 0 3 2
## Fiat X1-9 4 79.0 66 4.08 18.90 1 4 1
## Porsche 914-2 4 120.3 91 4.43 16.70 0 5 2
## Lotus Europa 4 95.1 113 3.77 16.90 1 5 2
## Ford Pantera L 8 351.0 264 4.22 14.50 0 5 4
## Ferrari Dino 6 145.0 175 3.62 15.50 0 5 6
## Maserati Bora 8 301.0 335 3.54 14.60 0 5 8
## Volvo 142E 4 121.0 109 4.11 18.60 1 4 2
```
8\.3 Multiple queries
---------------------
*Suggested approach to avoid passing multiple, and similar, queries to the database*
1. Create a simple `dplyr` piped operation that returns the mean of *order\_total* for the months of January, February and March as a group
```
orders %>%
filter(date_month %in% c(1,2,3)) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 37.9
```
2. Assign the first operation to a variable called *a*, and create copy of the operation but changing the selected months to January, March and April. Assign the second one to a variable called *b*.
```
a <- orders %>%
filter(date_month %in% c(1,2,3)) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
b <- orders %>%
filter(date_month %in% c(1,3,4)) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
```
3. Use *union()* to pass *a* and *b* at the same time to the database
```
union(a, b)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 37.9
## 2 38.0
```
4. Pipe the previous instruction to `show_query()` to confirm that the resulting query is a single one
```
union(a, b) %>%
show_query()
```
```
## <SQL>
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (1.0, 2.0, 3.0)))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (1.0, 3.0, 4.0)))
```
5. Assign to a new variable called *months* an overlapping set of months
```
months <- list(
c(1,2,3),
c(1,3,4),
c(2,4,6)
)
```
6. Use `map()` to cycle through each set of overlapping months. Notice that it returns three separate results, meaning that it went to the database three times
```
months %>%
map(
~ orders %>%
filter(date_month %in% .x) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
)
```
```
## [[1]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 37.9
##
## [[2]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.0
##
## [[3]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.2
```
7. Add a `reduce()` operation and use `union()` command to create a single query
```
months %>%
map(
~ orders %>%
filter(date_month %in% .x) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
) %>%
reduce(function(x, y) union(x, y))
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.2
## 2 38.0
## 3 37.9
```
8. Use `show_query()` to see the resulting single query sent to the database
```
months %>%
map(
~ orders %>%
filter(date_month %in% .x) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
) %>%
reduce(function(x, y) union(x, y)) %>%
show_query()
```
```
## <SQL>
## ((SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (1.0, 2.0, 3.0)))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (1.0, 3.0, 4.0))))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (2.0, 4.0, 6.0)))
```
8\.4 Multiple queries with an overlapping range
-----------------------------------------------
1. Create a table with a *from* and *to* ranges
```
ranges <- tribble(
~ from, ~to,
1, 4,
2, 5,
3, 7
)
```
2. See how `map2()` works by passing the two variables as the *x* and *y* arguments, and adding them as the function
```
map2(ranges$from, ranges$to, ~.x + .y)
```
```
## [[1]]
## [1] 5
##
## [[2]]
## [1] 7
##
## [[3]]
## [1] 10
```
3. Replace *x \+ y* with the `dplyr` operation from the previous exercise. In it, re\-write the filter to use *x* and *y* as the month ranges
```
map2(
ranges$from,
ranges$to,
~ orders %>%
filter(date_month >= .x & date_month <= .y) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
)
```
```
## [[1]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.0
##
## [[2]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.2
##
## [[3]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.3
```
4. Add the `reduce()` operation
```
map2(
ranges$from,
ranges$to,
~ orders %>%
filter(date_month >= .x & date_month <= .y) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
) %>%
reduce(function(x, y) union(x, y))
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.3
## 2 38.0
## 3 38.2
```
5. Add a `show_query()` step to see how the final query was constructed.
```
map2(
ranges$from,
ranges$to,
~ orders %>%
filter(date_month >= .x & date_month <= .y) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
) %>%
reduce(function(x, y) union(x, y)) %>%
show_query()
```
```
## <SQL>
## ((SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" >= 1.0 AND "date_month" <= 4.0))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" >= 2.0 AND "date_month" <= 5.0)))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" >= 3.0 AND "date_month" <= 7.0))
```
8\.5 Characters to field names
------------------------------
1. Create two character variables. One with the name of a field in *flights* and another with a new name to be given to the field
```
my_field <- "new"
orders_field <- "order_total"
```
2. Add a `mutate()` step that adds the new field. And then another step selecting just the new field
```
orders %>%
mutate(my_field = !! orders_field) %>%
select(my_field)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## my_field
## <chr>
## 1 order_total
## 2 order_total
## 3 order_total
## 4 order_total
## 5 order_total
## 6 order_total
## 7 order_total
## 8 order_total
## 9 order_total
## 10 order_total
## # … with more rows
```
3. Add a `mutate()` step that adds the new field. And then another step selecting just the new field
```
orders %>%
mutate(!! my_field := !! orders_field) %>%
select(my_field)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## new
## <chr>
## 1 order_total
## 2 order_total
## 3 order_total
## 4 order_total
## 5 order_total
## 6 order_total
## 7 order_total
## 8 order_total
## 9 order_total
## 10 order_total
## # … with more rows
```
4. Wrap `orders_field` inside a `sym()` function
```
orders %>%
mutate(!! my_field := !! sym(orders_field)) %>%
select(my_field)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## new
## <dbl>
## 1 27.9
## 2 24.8
## 3 42.2
## 4 16.6
## 5 32.6
## 6 6.7
## 7 46.8
## 8 15.8
## 9 9.69
## 10 21.6
## # … with more rows
```
5. Pipe the code into `show_query()`
```
orders %>%
mutate(!! my_field := !! sym(orders_field)) %>%
select(my_field) %>%
show_query()
```
```
## <SQL>
## SELECT "order_total" AS "new"
## FROM retail.v_orders
```
8\.1 Simple wrapper function
----------------------------
1. Load the `connections` and `dplyr` libraries
```
library(connections)
library(dplyr)
library(dbplyr)
library(config)
```
2. Use `connection_open()` to open a Database connection
```
con <- connection_open(
RPostgres::Postgres(),
host = "localhost",
user = get("user"),
password = get("pwd"),
port = 5432,
dbname = "postgres",
bigint = "integer"
)
```
3. Create a variable that points to the **v\_orders** table
```
orders <- tbl(con, in_schema("retail", "v_orders"))
```
4. Create a simple `dplyr` call that gets the average of all order totals
```
orders %>%
summarise(mean = mean(order_total, na.rm = TRUE))
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.0
```
5. Load the `rlang` library
```
library(rlang)
```
6. Create a new function call `my_mean()` that will take an argument, `x`, and then returns the results of `enquo(x)`
```
my_mean <- function(x){
enquo(x)
}
```
7. Test the new function. It should return the same variable name, but inside quosure. Use `order_total` as its argument’s value to test
```
my_mean(order_total)
```
```
## <quosure>
## expr: ^order_total
## env: global
```
8. In the function, re\-assign `x` to the result of `enquo(x)`, and then return `x`
```
my_mean <- function(x){
x <- enquo(x)
x
}
```
9. Test the same way again, the output should match to what it was as before
```
my_mean(order_total)
```
```
## <quosure>
## expr: ^order_total
## env: global
```
10. Remove the last line that has `x`, add the contents of the function with the initial `dplyr` code from step 3\. Then replace `order_total` with `!! x`
```
my_mean <- function(x){
x <- enquo(x)
orders %>%
summarise(mean = mean(!! x, na.rm = TRUE))
}
```
11. Test the new function by passing `order_total` as `x`
```
my_mean(order_total)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.0
```
12. In the `summarise()` step, replace the name `mean`, with `!! as_label(x)`, also replace the `=` sign, with `:=`
```
my_mean <- function(x){
x <- enquo(x)
orders %>%
summarise(!! as_label(x) := mean(!! x, na.rm = TRUE))
}
```
13. Run the function again, the name of the column should match the argument value
```
my_mean(order_total)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_total
## <dbl>
## 1 38.0
```
14. Test the function by passing a formula, such as `order_total / order_qty`
```
my_mean(order_total / order_qty)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## `order_total/order_qty`
## <dbl>
## 1 6.65
```
15. Make the function generic, add a new argument called: `.data`. Inisde the function, replace `orders` with `.data`
```
my_mean <- function(.data, x){
x <- enquo(x)
.data %>%
summarise(!! as_label(x) := mean(!! x, na.rm = TRUE))
}
```
16. The function now behaves more like a `dplyr` verb. Start with `orders` and then pipe into the function
```
orders %>%
my_mean(order_total)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_total
## <dbl>
## 1 38.0
```
17. Clean up the code by removing the pipe that inside the function
```
my_mean <- function(.data, x){
x <- enquo(x)
summarise(
.data,
!! as_label(x) := mean(!! x, na.rm = TRUE)
)
}
```
18. Confirm that there is no change in the behavior of the function
```
orders %>%
my_mean(order_total)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## order_total
## <dbl>
## 1 38.0
```
19. Add a `show_query()` step to preview the resulting SQL statement
```
orders %>%
my_mean(order_total) %>%
show_query()
```
```
## <SQL>
## SELECT AVG("order_total") AS "order_total"
## FROM retail.v_orders
```
20. Try the function with a non\-DB backed variable, such as `mtcars`. Use `mpg` as the aggregating variable
```
mtcars %>%
my_mean(mpg)
```
```
## mpg
## 1 20.09062
```
8\.2 Multiple variables
-----------------------
*Create functions that handle a variable number of arguments. The goal of the exercise is to create an `anti-select()` function.*
1. Load the `purrr` package
```
library(purrr)
```
2. Use *…* as the second argument of a function called `de_select()`. Inside the function use `enquos()` to parse it
```
de_select <- function(.data, ...){
vars <- enquos(...)
vars
}
```
3. Test the function using *airports*
```
orders %>%
de_select(order_id, date)
```
```
## <list_of<quosure>>
##
## [[1]]
## <quosure>
## expr: ^order_id
## env: 0x56522573ace8
##
## [[2]]
## <quosure>
## expr: ^date
## env: 0x56522573ace8
```
4. Add a step to the function that iterates through each quosure and prefixes a minus sign to tell `select()` to drop that specific field. Use `map()` for the iteration, and `quo()` to create the prefixed expression.
```
de_select <- function(.data, ...){
vars <- enquos(...)
vars <- map(vars, ~ quo(- !! .x))
vars
}
```
5. Run the same test to view the new results
```
orders %>%
de_select(order_id, date)
```
```
## [[1]]
## <quosure>
## expr: ^-^order_id
## env: 0x565225b688f0
##
## [[2]]
## <quosure>
## expr: ^-^date
## env: 0x565225b6b610
```
6. Add the `select()` step. Use *!!!* to parse the *vars* variable inside `select()`
```
de_select <- function(.data, ...){
vars <- enquos(...)
vars <- map(vars, ~ quo(- !! .x))
select(.data, !!! vars)
}
```
7. Run the test again, this time the operation will take place.
```
orders %>%
de_select(order_id, date)
```
```
## # Source: lazy query [?? x 8]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## date_year date_month customer_id customer_name customer_lon customer_lat
## <int> <int> <int> <chr> <dbl> <dbl>
## 1 2016 1 22 Dr. Birdie K… -122. 37.7
## 2 2016 1 6 Meggan Bruen -122. 37.7
## 3 2016 1 80 Jessee Rodri… -122. 37.7
## 4 2016 1 55 Kathryn Stehr -122. 37.8
## 5 2016 1 73 Merlyn Runol… -122. 37.7
## 6 2016 1 70 Reggie Mills -122. 37.7
## 7 2016 1 55 Kathryn Stehr -122. 37.8
## 8 2016 1 40 Dr. Trace Gl… -122. 37.8
## 9 2016 1 78 Pricilla Goo… -122. 37.8
## 10 2016 1 35 Mr. Commodor… -122. 37.7
## # … with more rows, and 2 more variables: order_total <dbl>, order_qty <int>
```
8. Add a `show_query()` step to see the resulting SQL
```
orders %>%
de_select(order_id, date) %>%
show_query()
```
```
## <SQL>
## SELECT "date_year", "date_month", "customer_id", "customer_name", "customer_lon", "customer_lat", "order_total", "order_qty"
## FROM retail.v_orders
```
9. Test the function with a different data set, such as `mtcars`
```
mtcars %>%
de_select(mpg, wt, am)
```
```
## cyl disp hp drat qsec vs gear carb
## Mazda RX4 6 160.0 110 3.90 16.46 0 4 4
## Mazda RX4 Wag 6 160.0 110 3.90 17.02 0 4 4
## Datsun 710 4 108.0 93 3.85 18.61 1 4 1
## Hornet 4 Drive 6 258.0 110 3.08 19.44 1 3 1
## Hornet Sportabout 8 360.0 175 3.15 17.02 0 3 2
## Valiant 6 225.0 105 2.76 20.22 1 3 1
## Duster 360 8 360.0 245 3.21 15.84 0 3 4
## Merc 240D 4 146.7 62 3.69 20.00 1 4 2
## Merc 230 4 140.8 95 3.92 22.90 1 4 2
## Merc 280 6 167.6 123 3.92 18.30 1 4 4
## Merc 280C 6 167.6 123 3.92 18.90 1 4 4
## Merc 450SE 8 275.8 180 3.07 17.40 0 3 3
## Merc 450SL 8 275.8 180 3.07 17.60 0 3 3
## Merc 450SLC 8 275.8 180 3.07 18.00 0 3 3
## Cadillac Fleetwood 8 472.0 205 2.93 17.98 0 3 4
## Lincoln Continental 8 460.0 215 3.00 17.82 0 3 4
## Chrysler Imperial 8 440.0 230 3.23 17.42 0 3 4
## Fiat 128 4 78.7 66 4.08 19.47 1 4 1
## Honda Civic 4 75.7 52 4.93 18.52 1 4 2
## Toyota Corolla 4 71.1 65 4.22 19.90 1 4 1
## Toyota Corona 4 120.1 97 3.70 20.01 1 3 1
## Dodge Challenger 8 318.0 150 2.76 16.87 0 3 2
## AMC Javelin 8 304.0 150 3.15 17.30 0 3 2
## Camaro Z28 8 350.0 245 3.73 15.41 0 3 4
## Pontiac Firebird 8 400.0 175 3.08 17.05 0 3 2
## Fiat X1-9 4 79.0 66 4.08 18.90 1 4 1
## Porsche 914-2 4 120.3 91 4.43 16.70 0 5 2
## Lotus Europa 4 95.1 113 3.77 16.90 1 5 2
## Ford Pantera L 8 351.0 264 4.22 14.50 0 5 4
## Ferrari Dino 6 145.0 175 3.62 15.50 0 5 6
## Maserati Bora 8 301.0 335 3.54 14.60 0 5 8
## Volvo 142E 4 121.0 109 4.11 18.60 1 4 2
```
8\.3 Multiple queries
---------------------
*Suggested approach to avoid passing multiple, and similar, queries to the database*
1. Create a simple `dplyr` piped operation that returns the mean of *order\_total* for the months of January, February and March as a group
```
orders %>%
filter(date_month %in% c(1,2,3)) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 37.9
```
2. Assign the first operation to a variable called *a*, and create copy of the operation but changing the selected months to January, March and April. Assign the second one to a variable called *b*.
```
a <- orders %>%
filter(date_month %in% c(1,2,3)) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
b <- orders %>%
filter(date_month %in% c(1,3,4)) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
```
3. Use *union()* to pass *a* and *b* at the same time to the database
```
union(a, b)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 37.9
## 2 38.0
```
4. Pipe the previous instruction to `show_query()` to confirm that the resulting query is a single one
```
union(a, b) %>%
show_query()
```
```
## <SQL>
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (1.0, 2.0, 3.0)))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (1.0, 3.0, 4.0)))
```
5. Assign to a new variable called *months* an overlapping set of months
```
months <- list(
c(1,2,3),
c(1,3,4),
c(2,4,6)
)
```
6. Use `map()` to cycle through each set of overlapping months. Notice that it returns three separate results, meaning that it went to the database three times
```
months %>%
map(
~ orders %>%
filter(date_month %in% .x) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
)
```
```
## [[1]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 37.9
##
## [[2]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.0
##
## [[3]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.2
```
7. Add a `reduce()` operation and use `union()` command to create a single query
```
months %>%
map(
~ orders %>%
filter(date_month %in% .x) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
) %>%
reduce(function(x, y) union(x, y))
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.2
## 2 38.0
## 3 37.9
```
8. Use `show_query()` to see the resulting single query sent to the database
```
months %>%
map(
~ orders %>%
filter(date_month %in% .x) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
) %>%
reduce(function(x, y) union(x, y)) %>%
show_query()
```
```
## <SQL>
## ((SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (1.0, 2.0, 3.0)))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (1.0, 3.0, 4.0))))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" IN (2.0, 4.0, 6.0)))
```
8\.4 Multiple queries with an overlapping range
-----------------------------------------------
1. Create a table with a *from* and *to* ranges
```
ranges <- tribble(
~ from, ~to,
1, 4,
2, 5,
3, 7
)
```
2. See how `map2()` works by passing the two variables as the *x* and *y* arguments, and adding them as the function
```
map2(ranges$from, ranges$to, ~.x + .y)
```
```
## [[1]]
## [1] 5
##
## [[2]]
## [1] 7
##
## [[3]]
## [1] 10
```
3. Replace *x \+ y* with the `dplyr` operation from the previous exercise. In it, re\-write the filter to use *x* and *y* as the month ranges
```
map2(
ranges$from,
ranges$to,
~ orders %>%
filter(date_month >= .x & date_month <= .y) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
)
```
```
## [[1]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.0
##
## [[2]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.2
##
## [[3]]
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.3
```
4. Add the `reduce()` operation
```
map2(
ranges$from,
ranges$to,
~ orders %>%
filter(date_month >= .x & date_month <= .y) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
) %>%
reduce(function(x, y) union(x, y))
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## mean
## <dbl>
## 1 38.3
## 2 38.0
## 3 38.2
```
5. Add a `show_query()` step to see how the final query was constructed.
```
map2(
ranges$from,
ranges$to,
~ orders %>%
filter(date_month >= .x & date_month <= .y) %>%
summarise(mean = mean(order_total, na.rm = TRUE))
) %>%
reduce(function(x, y) union(x, y)) %>%
show_query()
```
```
## <SQL>
## ((SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" >= 1.0 AND "date_month" <= 4.0))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" >= 2.0 AND "date_month" <= 5.0)))
## UNION
## (SELECT AVG("order_total") AS "mean"
## FROM retail.v_orders
## WHERE ("date_month" >= 3.0 AND "date_month" <= 7.0))
```
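If the `ranges` table ever gains more than two columns, `pmap()` generalizes the `map2()` pattern above. A minimal sketch under that assumption, reusing the `ranges` and `orders` objects from before:
```
# pmap() passes the element at each position to the function, here from/to
pmap(
  list(ranges$from, ranges$to),
  function(from, to) {
    orders %>%
      filter(date_month >= from & date_month <= to) %>%
      summarise(mean = mean(order_total, na.rm = TRUE))
  }
) %>%
  reduce(union)
```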
8\.5 Characters to field names
------------------------------
1. Create two character variables: one with the name of a field in *orders* and another with a new name to be given to that field
```
my_field <- "new"
orders_field <- "order_total"
```
2. Add a `mutate()` step that adds the new field, and then another step selecting just the new field. Notice that unquoting the character variable inserts the literal string “order\_total” rather than the column’s values
```
orders %>%
mutate(my_field = !! orders_field) %>%
select(my_field)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## my_field
## <chr>
## 1 order_total
## 2 order_total
## 3 order_total
## 4 order_total
## 5 order_total
## 6 order_total
## 7 order_total
## 8 order_total
## 9 order_total
## 10 order_total
## # … with more rows
```
3. Use `!!` together with the `:=` operator so that the new field’s *name* also comes from the `my_field` character variable
```
orders %>%
mutate(!! my_field := !! orders_field) %>%
select(my_field)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## new
## <chr>
## 1 order_total
## 2 order_total
## 3 order_total
## 4 order_total
## 5 order_total
## 6 order_total
## 7 order_total
## 8 order_total
## 9 order_total
## 10 order_total
## # … with more rows
```
4. Wrap `orders_field` inside the `sym()` function, so that it is treated as a column name instead of a string and the column’s values come through
```
orders %>%
mutate(!! my_field := !! sym(orders_field)) %>%
select(my_field)
```
```
## # Source: lazy query [?? x 1]
## # Database: postgres [rstudio_admin@localhost:5432/postgres]
## new
## <dbl>
## 1 27.9
## 2 24.8
## 3 42.2
## 4 16.6
## 5 32.6
## 6 6.7
## 7 46.8
## 8 15.8
## 9 9.69
## 10 21.6
## # … with more rows
```
5. Pipe the code into `show_query()`
```
orders %>%
mutate(!! my_field := !! sym(orders_field)) %>%
select(my_field) %>%
show_query()
```
```
## <SQL>
## SELECT "order_total" AS "new"
## FROM retail.v_orders
```
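The string\-to\-name translation can be wrapped in a small helper so it lives in one place. A minimal sketch; the function name `select_as()` is made up for illustration:
```
# select_as() copies the column named in `field_name` into a new column
# named `new_name`, then keeps only that column
select_as <- function(tbl, new_name, field_name) {
  tbl %>%
    mutate(!! new_name := !! sym(field_name)) %>%
    select(all_of(new_name))
}

select_as(orders, "new", "order_total")
```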
| Field Specific |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/intro-to-sparklyr.html |
9 Intro to `sparklyr`
=====================
9\.1 New Spark session
----------------------
*Learn to open a new Spark session*
1. Load the `sparklyr` library
```
library(sparklyr)
```
2. Use `spark_connect()` to create a new local Spark session
```
sc <- spark_connect(master = "local")
```
```
## * Using Spark: 2.4.0
```
3. Click on the `Spark` button to view the current Spark session’s UI
4. Click on the `Log` button to see the message history
9\.2 Data transfer
------------------
*Practice uploading data to Spark*
1. Load the `dplyr` library
```
library(dplyr)
```
2. Copy the `mtcars` dataset into the session
```
spark_mtcars <- copy_to(sc, mtcars, "my_mtcars")
```
3. In the **Connections** pane, expand the `my_mtcars` table
4. Go to the Spark UI, note the new jobs
5. In the UI, click the Storage button, note the new table
6. Click on the **In\-memory table my\_mtcars** link
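Back in R, the copied data stays registered with the Spark session under the name passed to `copy_to()`. A minimal sketch of two assumed ways to confirm this from the console:
```
# lists the tables the Spark session currently knows about, e.g. "my_mtcars"
src_tbls(sc)

# creates another reference to the same in-memory Spark table by name
tbl(sc, "my_mtcars")
```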
9\.3 Spark and `dplyr`
----------------------
*See how Spark handles `dplyr` commands*
1. Run the following code snippet
```
spark_mtcars %>%
group_by(am) %>%
summarise(mpg_mean = mean(mpg, na.rm = TRUE))
```
```
## # Source: spark<?> [?? x 2]
## am mpg_mean
## <dbl> <dbl>
## 1 0 17.1
## 2 1 24.4
```
2. Go to the Spark UI and click the **SQL** button (an R\-side alternative using `show_query()` is sketched after this list)
3. Click on the top item inside the **Completed Queries** table
4. At the bottom of the diagram, expand **Details**
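The SQL shown in the Spark UI can also be inspected directly from R, since `sparklyr` translates `dplyr` verbs through `dbplyr`. A minimal sketch, assuming `show_query()` works here as it does for database\-backed tables:
```
# prints the Spark SQL generated for the aggregation above
spark_mtcars %>%
  group_by(am) %>%
  summarise(mpg_mean = mean(mpg, na.rm = TRUE)) %>%
  show_query()
```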
9\.4 Feature transformers
-------------------------
*Introduction to how Spark Feature Transformers can be called from R*
1. Use `ft_binarizer()` to create a new column, called `over_20`, that indicates if that row’s `mpg` value is over or under 20 MPG
```
spark_mtcars %>%
ft_binarizer("mpg", "over_20", 20)
```
```
## # Source: spark<?> [?? x 12]
## mpg cyl disp hp drat wt qsec vs am gear carb over_20
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 21 6 160 110 3.9 2.62 16.5 0 1 4 4 1
## 2 21 6 160 110 3.9 2.88 17.0 0 1 4 4 1
## 3 22.8 4 108 93 3.85 2.32 18.6 1 1 4 1 1
## 4 21.4 6 258 110 3.08 3.22 19.4 1 0 3 1 1
## 5 18.7 8 360 175 3.15 3.44 17.0 0 0 3 2 0
## 6 18.1 6 225 105 2.76 3.46 20.2 1 0 3 1 0
## 7 14.3 8 360 245 3.21 3.57 15.8 0 0 3 4 0
## 8 24.4 4 147. 62 3.69 3.19 20 1 0 4 2 1
## 9 22.8 4 141. 95 3.92 3.15 22.9 1 0 4 2 1
## 10 19.2 6 168. 123 3.92 3.44 18.3 1 0 4 4 0
## # … with more rows
```
2. Pipe the code into `count()` to see how the data splits between the two values
```
spark_mtcars %>%
ft_binarizer("mpg", "over_20", 20) %>%
count(over_20)
```
```
## # Source: spark<?> [?? x 2]
## over_20 n
## <dbl> <dbl>
## 1 0 18
## 2 1 14
```
3. Start a new code chunk. This time use `ft_quantile_discretizer()` to create a new column called `mpg_quantile`
```
spark_mtcars %>%
ft_quantile_discretizer("mpg", "mpg_quantile")
```
```
## # Source: spark<?> [?? x 12]
## mpg cyl disp hp drat wt qsec vs am gear carb mpg_quantile
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 21 6 160 110 3.9 2.62 16.5 0 1 4 4 1
## 2 21 6 160 110 3.9 2.88 17.0 0 1 4 4 1
## 3 22.8 4 108 93 3.85 2.32 18.6 1 1 4 1 1
## 4 21.4 6 258 110 3.08 3.22 19.4 1 0 3 1 1
## 5 18.7 8 360 175 3.15 3.44 17.0 0 0 3 2 0
## 6 18.1 6 225 105 2.76 3.46 20.2 1 0 3 1 0
## 7 14.3 8 360 245 3.21 3.57 15.8 0 0 3 4 0
## 8 24.4 4 147. 62 3.69 3.19 20 1 0 4 2 1
## 9 22.8 4 141. 95 3.92 3.15 22.9 1 0 4 2 1
## 10 19.2 6 168. 123 3.92 3.44 18.3 1 0 4 4 1
## # … with more rows
```
4. Add the `num_buckets` argument to `ft_quantile_discretizer()`, set its value to 5
```
spark_mtcars %>%
ft_quantile_discretizer("mpg", "mpg_quantile", num_buckets = 5)
```
```
## # Source: spark<?> [?? x 12]
## mpg cyl disp hp drat wt qsec vs am gear carb mpg_quantile
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 21 6 160 110 3.9 2.62 16.5 0 1 4 4 3
## 2 21 6 160 110 3.9 2.88 17.0 0 1 4 4 3
## 3 22.8 4 108 93 3.85 2.32 18.6 1 1 4 1 3
## 4 21.4 6 258 110 3.08 3.22 19.4 1 0 3 1 3
## 5 18.7 8 360 175 3.15 3.44 17.0 0 0 3 2 2
## 6 18.1 6 225 105 2.76 3.46 20.2 1 0 3 1 2
## 7 14.3 8 360 245 3.21 3.57 15.8 0 0 3 4 0
## 8 24.4 4 147. 62 3.69 3.19 20 1 0 4 2 4
## 9 22.8 4 141. 95 3.92 3.15 22.9 1 0 4 2 3
## 10 19.2 6 168. 123 3.92 3.44 18.3 1 0 4 4 2
## # … with more rows
```
5. Pipe the code into `count()` to see how the data splits between the quantiles (a related `ft_bucketizer()` sketch follows this list)
```
spark_mtcars %>%
ft_quantile_discretizer("mpg", "mpg_quantile", num_buckets = 5) %>%
count(mpg_quantile)
```
```
## # Source: spark<?> [?? x 2]
## mpg_quantile n
## <dbl> <dbl>
## 1 0 6
## 2 3 7
## 3 4 7
## 4 1 6
## 5 2 6
```
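A related transformer, shown here only as an assumed illustration, is `ft_bucketizer()`, which splits on explicit cut points instead of estimated quantiles. A minimal sketch:
```
# buckets mpg at fixed cut points rather than data-driven quantiles
spark_mtcars %>%
  ft_bucketizer("mpg", "mpg_bucket", splits = c(-Inf, 15, 20, 25, Inf)) %>%
  count(mpg_bucket)
```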
9\.5 Models
-----------
*Introduce Spark ML models by running a couple of them in R*
1. Use `ml_kmeans()` to run a model based on the following formula: `wt ~ mpg`. Assign the results to a variable called `k_mtcars`
```
k_mtcars <- spark_mtcars %>%
ml_kmeans(wt ~ mpg)
```
2. Use `k_mtcars$summary` to view the results of the model. Pull the cluster sizes by using `...$cluster_sizes()`
```
k_mtcars$summary$cluster_sizes()
```
```
## [1] 14 18
```
3. Start a new code chunk. This time use `ml_linear_regression()` to produce a Linear Regression model of the same formula used in the previous model. Assign the results to a variable called `lr_mtcars`
```
lr_mtcars <- spark_mtcars %>%
ml_linear_regression(wt ~ mpg)
```
4. Use `summary()` to view the results of the model
```
summary(lr_mtcars)
```
```
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -0.6516 -0.3490 -0.1381 0.3190 1.3684
##
## Coefficients:
## (Intercept) mpg
## 6.047255 -0.140862
##
## R-Squared: 0.7528
## Root Mean Squared Error: 0.4788
```
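The fitted model can also score data. A minimal sketch using `ml_predict()` on the same table; the `prediction` column name is the Spark ML default and may vary by version:
```
# adds a prediction column with the fitted wt value for each row
lr_mtcars %>%
  ml_predict(spark_mtcars) %>%
  select(mpg, wt, prediction) %>%
  head(5)
```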
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/text-mining-with-sparklyr.html |
10 Text mining with `sparklyr`
==============================
For this example, there are two files that will be analyzed: one containing the full works of Sir Arthur Conan Doyle, the other the full works of Mark Twain. The files were downloaded from the [Gutenberg Project](https://www.gutenberg.org/) site via the `gutenbergr` package. Intentionally, no data cleanup was done to the files prior to this analysis. See the appendix below for how the data was downloaded and prepared.
```
readLines("/usr/share/class/books/arthur_doyle.txt", 30)
```
```
## [1] "THE RETURN OF SHERLOCK HOLMES,"
## [2] ""
## [3] "A Collection of Holmes Adventures"
## [4] ""
## [5] ""
## [6] "by Sir Arthur Conan Doyle"
## [7] ""
## [8] ""
## [9] ""
## [10] ""
## [11] "CONTENTS:"
## [12] ""
## [13] " The Adventure Of The Empty House"
## [14] ""
## [15] " The Adventure Of The Norwood Builder"
## [16] ""
## [17] " The Adventure Of The Dancing Men"
## [18] ""
## [19] " The Adventure Of The Solitary Cyclist"
## [20] ""
## [21] " The Adventure Of The Priory School"
## [22] ""
## [23] " The Adventure Of Black Peter"
## [24] ""
## [25] " The Adventure Of Charles Augustus Milverton"
## [26] ""
## [27] " The Adventure Of The Six Napoleons"
## [28] ""
## [29] " The Adventure Of The Three Students"
## [30] ""
```
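As context for the preview above, a purely illustrative sketch of how a file like this could be assembled with `gutenbergr`; the author filter, IDs, and output path are assumptions, and the real preparation steps are in the appendix:
```
# hypothetical preparation code -- see the appendix for the actual version
library(gutenbergr)
library(dplyr)

doyle_ids <- gutenberg_works(author == "Doyle, Arthur Conan") %>%
  pull(gutenberg_id)

doyle_text <- gutenberg_download(doyle_ids)

writeLines(doyle_text$text, "arthur_doyle.txt")
```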
10\.1 Data Import
-----------------
*Read the book data into Spark*
1. Load the `sparklyr` library
```
library(sparklyr)
```
2. Open a Spark session
```
sc <- spark_connect(master = "local")
```
```
## Re-using existing Spark connection to local
```
3. Use the `spark_read_text()` function to read the **mark\_twain.txt** file, assign it to a variable called `twain`
```
twain <- spark_read_text(sc, "twain", "/usr/share/class/books/mark_twain.txt")
```
4. Use the `spark_read_text()` function to read the **arthur\_doyle.txt** file, assign it to a variable called `doyle`
```
doyle <- spark_read_text(sc, "doyle", "/usr/share/class/books/arthur_doyle.txt")
```
10\.2 Tidying data
------------------
*Prepare the data for analysis*
1. Load the `dplyr` library
```
library(dplyr)
```
2. Add a column to `twain` named `author` with a value of “twain”. Assign it to a new variable called `twain_id`
```
twain_id <- twain %>%
mutate(author = "twain")
```
3. Add a column to `doyle` named `author` with a value of “doyle”. Assign it to a new variable called `doyle_id`
```
doyle_id <- doyle %>%
mutate(author = "doyle")
```
4. Use `sdf_bind_rows()` to append the two files together in a variable called `both`
```
both <- doyle_id %>%
sdf_bind_rows(twain_id)
```
5. Preview `both`
```
both
```
```
## # Source: spark<?> [?? x 2]
## line author
## <chr> <chr>
## 1 "THE RETURN OF SHERLOCK HOLMES," doyle
## 2 "" doyle
## 3 "A Collection of Holmes Adventures" doyle
## 4 "" doyle
## 5 "" doyle
## 6 "by Sir Arthur Conan Doyle" doyle
## 7 "" doyle
## 8 "" doyle
## 9 "" doyle
## 10 "" doyle
## # … with more rows
```
6. Filter out empty lines into a variable called `all_lines`
```
all_lines <- both %>%
filter(nchar(line) > 0)
```
7. Use Hive’s *regexp\_replace* to remove punctuation, assign it to the same `all_lines` variable
```
all_lines <- all_lines %>%
mutate(line = regexp_replace(line, "[_\"\'():;,.!?\\-]", " "))
```
10\.3 Transform the data
------------------------
*Use feature transformers to make additional preparations*
1. Use `ft_tokenizer()` to separate each word in the line. Set the `output_col` to “word\_list”. Assign to a variable called `word_list`
```
word_list <- all_lines %>%
ft_tokenizer(
input_col = "line",
output_col = "word_list"
)
```
2. Remove “stop words” with the `ft_stop_words_remover()` transformer. Set the `output_col` to “wo\_stop\_words”. Assign to a variable called `wo_stop`
```
wo_stop <- word_list %>%
ft_stop_words_remover(
input_col = "word_list",
output_col = "wo_stop_words"
)
```
3. Un\-nest the tokens inside *wo\_stop\_words* using `explode()`. Assign to a variable called `exploded`
```
exploded <- wo_stop %>%
mutate(word = explode(wo_stop_words))
```
4. Select the *word* and *author* columns, and remove any word with fewer than 3 characters. Assign to `all_words`
```
all_words <- exploded %>%
select(word, author) %>%
filter(nchar(word) > 2)
```
5. Cache the `all_words` variable using `compute()`
```
all_words <- all_words %>%
compute("all_words")
```
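Once cached, `all_words` can be summarised repeatedly without re\-running the upstream transformations. A minimal sanity check, assuming the steps above ran as shown:
```
# counts the word occurrences attributed to each author in the cached table
all_words %>%
  count(author)
```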
10\.4 Data Exploration
----------------------
*Use word clouds to explore the data*
1. Create a variable with the word count by author, name it `word_count`
```
word_count <- all_words %>%
count(author, word, sort = TRUE)
```
2. Filter `word_count` to only retain “twain”, assign it to `twain_most`
```
twain_most <- word_count %>%
filter(author == "twain")
```
3. Use `wordcloud` to visualize the top 50 words used by Twain
```
twain_most %>%
head(50) %>%
collect() %>%
with(wordcloud::wordcloud(
word,
n,
colors = c("#999999", "#E69F00", "#56B4E9","#56B4E9"))
)
```
4. Filter `word_count` to only retain “doyle”, assign it to `doyle_most`
```
doyle_most <- word_count %>%
filter(author == "doyle")
```
5. Use `wordcloud` to visualize the top 50 words used by Doyle that have more than 5 characters
```
doyle_most %>%
filter(nchar(word) > 5) %>%
head(50) %>%
collect() %>%
with(wordcloud::wordcloud(
word,
n,
colors = c("#999999", "#E69F00", "#56B4E9","#56B4E9")
))
```
6. Use `anti_join()` to figure out which words are used by Doyle but not Twain. Order the results by how often each word is used.
```
doyle_unique <- doyle_most %>%
anti_join(twain_most, by = "word") %>%
arrange(desc(n))
```
7. Use `wordcloud` to visualize top 50 records in the previous step
```
doyle_unique %>%
head(50) %>%
collect() %>%
with(wordcloud::wordcloud(
word,
n,
colors = c("#999999", "#E69F00", "#56B4E9","#56B4E9"))
)
```
8. Find out how many times Twain used the word “sherlock”
```
all_words %>%
filter(author == "twain", word == "sherlock") %>%
tally()
```
```
## # Source: spark<?> [?? x 1]
## n
## <dbl>
## 1 47
```
9. Against the `twain` variable, use Hive’s *instr* and *lower* to convert each line to lowercase, and then look for “sherlock” in the line
```
twain %>%
mutate(line = lower(line)) %>%
filter(instr(line, "sherlock") > 0) %>%
pull(line)
```
```
## [1] "late sherlock holmes, and yet discernible by a member of a race charged"
## [2] "sherlock holmes."
## [3] "\"uncle sherlock! the mean luck of it!--that he should come just"
## [4] "another trouble presented itself. \"uncle sherlock 'll be wanting to talk"
## [5] "flint buckner's cabin in the frosty gloom. they were sherlock holmes and"
## [6] "\"uncle sherlock's got some work to do, gentlemen, that 'll keep him till"
## [7] "\"by george, he's just a duke, boys! three cheers for sherlock holmes,"
## [8] "he brought sherlock holmes to the billiard-room, which was jammed with"
## [9] "of interest was there--sherlock holmes. the miners stood silent and"
## [10] "the room; the chair was on it; sherlock holmes, stately, imposing,"
## [11] "\"you have hunted me around the world, sherlock holmes, yet god is my"
## [12] "\"if it's only sherlock holmes that's troubling you, you needn't worry"
## [13] "they sighed; then one said: \"we must bring sherlock holmes. he can be"
## [14] "i had small desire that sherlock holmes should hang for my deeds, as you"
## [15] "\"my name is sherlock holmes, and i have not been doing anything.\""
## [16] "late sherlock holmes, and yet discernible by a member of a race charged"
## [17] "plus fort que sherlock holmes"
## [18] "sherlock holmes entre en scene"
## [19] "sherlock holmes"
## [20] "--l'oncle sherlock! quelle guigne!"
## [21] "bien! cette fois sherlock sera tres embarrasse; il manquera de preuve et"
## [22] "--l'oncle sherlock va vouloir, ce soir, causer avec moi de notre"
## [23] "passage etroit sur la chambre de sherlock holmes; ils s'y embusquerent"
## [24] "d'archy, il ne peut etre nullement compare au genie de sherlock holmes,"
## [25] "flint buckner. c'etait sherlock holmes et son neveu."
## [26] "--messieurs, mon oncle sherlock a un travail pressant a faire qui le"
## [27] "--mes amis! trois vivats a sherlock holmes, le plus grand homme qui ait"
## [28] "mettaient de coeur a leur reception. arrive dans sa chambre, sherlock"
## [29] "il introduisit sherlock holmes dans la salle de billard qui etait comble"
## [30] "de mineurs, tous impatients de le voir arriver. sherlock commanda les"
## [31] "sherlock holmes. les mineurs se tenaient en demi-cercle en observant un"
## [32] "sherlock au milieu de nous? dit ferguson."
## [33] "arracher; quand sherlock y met la main, il faut qu'ils parlent, qu'ils"
## [34] "plus complexe; sherlock va pouvoir etaler devant nous son art et sa"
## [35] "regardant comment sherlock procede. mais non, au lieu de cela, il a"
## [36] "sherlock holmes etait assis sur cette chaise, l'air grave, imposant et"
## [37] "sherlock holmes leva la main pour concentrer sur lui l'attention du"
## [38] "pas baisse pavillon devant sherlock holmes.\" la serenite de ce dernier"
## [39] "objets, il y a une heure a peine pendant que maitre sherlock holmes se"
## [40] "sherlock regardait avec la volonte bien arretee de conserver son"
## [41] "silence complet qui suivit, maitre sherlock prit la parole, disant avec"
## [42] "--vous m'avez pourchasse dans tout l'univers, sherlock holmes, et"
## [43] "--si c'est uniquement sherlock holmes qui vous inquiete, inutile de vous"
## [44] "\"elles soupirerent, puis l'une dit: \"il faut que nous amenions sherlock"
## [45] "d'assister de sang-froid a la pendaison de sherlock holmes. j'avais"
## [46] "--je m'appelle sherlock holmes; je n'ai rien a me reprocher."
## [47] "plus fort que sherlock holmes"
```
10. Close Spark session
```
spark_disconnect(sc)
```
Most of these lines are in a short story by Mark Twain called [A Double Barrelled Detective Story](https://www.gutenberg.org/files/3180/3180-h/3180-h.htm#link2H_4_0008). As per the [Wikipedia](https://en.wikipedia.org/wiki/A_Double_Barrelled_Detective_Story) page about this story, this is a satire by Twain on the mystery novel genre, published in 1902\.
| Big Data |
rstudio-conf-2020.github.io | https://rstudio-conf-2020.github.io/big-data/text-mining-with-sparklyr.html |
10 Text mining with `sparklyr`
==============================
For this example, there are two files that will be analyzed. They are both the full works of Sir Arthur Conan Doyle and Mark Twain. The files were downloaded from the [Gutenberg Project](https://www.gutenberg.org/) site via the `gutenbergr` package. Intentionally, no data cleanup was done to the files prior to this analysis. See the appendix below to see how the data was downloaded and prepared.
```
readLines("/usr/share/class/books/arthur_doyle.txt", 30)
```
```
## [1] "THE RETURN OF SHERLOCK HOLMES,"
## [2] ""
## [3] "A Collection of Holmes Adventures"
## [4] ""
## [5] ""
## [6] "by Sir Arthur Conan Doyle"
## [7] ""
## [8] ""
## [9] ""
## [10] ""
## [11] "CONTENTS:"
## [12] ""
## [13] " The Adventure Of The Empty House"
## [14] ""
## [15] " The Adventure Of The Norwood Builder"
## [16] ""
## [17] " The Adventure Of The Dancing Men"
## [18] ""
## [19] " The Adventure Of The Solitary Cyclist"
## [20] ""
## [21] " The Adventure Of The Priory School"
## [22] ""
## [23] " The Adventure Of Black Peter"
## [24] ""
## [25] " The Adventure Of Charles Augustus Milverton"
## [26] ""
## [27] " The Adventure Of The Six Napoleons"
## [28] ""
## [29] " The Adventure Of The Three Students"
## [30] ""
```
10\.1 Data Import
-----------------
*Read the book data into Spark*
1. Load the `sparklyr` library
```
library(sparklyr)
```
2. Open a Spark session
```
sc <- spark_connect(master = "local")
```
```
## Re-using existing Spark connection to local
```
3. Use the `spark_read_text()` function to read the **mark\_twain.txt** file, assign it to a variable called `twain`
```
twain <- spark_read_text(sc, "twain", "/usr/share/class/books/mark_twain.txt")
```
4. Use the `spark_read_text()` function to read the **arthur\_doyle.txt** file, assign it to a variable called `doyle`
```
doyle <- spark_read_text(sc, "doyle", "/usr/share/class/books/arthur_doyle.txt")
```
10\.2 Tidying data
------------------
*Prepare the data for analysis*
1. Load the `dplyr` library
```
library(dplyr)
```
2. Add a column to `twain` named `author` with a value of “twain”. Assign it to a new variable called `twain_id`
```
twain_id <- twain %>%
mutate(author = "twain")
```
3. Add a column to `doyle` named `author` with a value of “doyle”. Assign it to a new variable called `doyle_id`
```
doyle_id <- doyle %>%
mutate(author = "doyle")
```
4. Use `sdf_bind_rows()` to append the two files together in a variable called `both`
```
both <- doyle_id %>%
sdf_bind_rows(twain_id)
```
5. Preview `both`
```
both
```
```
## # Source: spark<?> [?? x 2]
## line author
## <chr> <chr>
## 1 "THE RETURN OF SHERLOCK HOLMES," doyle
## 2 "" doyle
## 3 "A Collection of Holmes Adventures" doyle
## 4 "" doyle
## 5 "" doyle
## 6 "by Sir Arthur Conan Doyle" doyle
## 7 "" doyle
## 8 "" doyle
## 9 "" doyle
## 10 "" doyle
## # … with more rows
```
6. Filter out empty lines into a variable called `all_lines`
```
all_lines <- both %>%
filter(nchar(line) > 0)
```
7. Use Hive’s *regexp\_replace* to remove punctuation, assign it to the same `all_lines` variable
```
all_lines <- all_lines %>%
mutate(line = regexp_replace(line, "[_\"\'():;,.!?\\-]", " "))
```
10\.3 Transform the data
------------------------
*Use feature transformers to make additional preparations*
1. Use `ft_tokenizer()` to separate each word. in the line. Set the `output_col` to “word\_list”. Assign to a variable called `word_list`
```
word_list <- all_lines %>%
ft_tokenizer(
input_col = "line",
output_col = "word_list"
)
```
2. Remove “stop words” with the `ft_stop_words_remover()` transformer. Set the `output_col` to “wo\_stop\_words”. Assign to a variable called `wo_stop`
```
wo_stop <- word_list %>%
ft_stop_words_remover(
input_col = "word_list",
output_col = "wo_stop_words"
)
```
3. Un\-nest the tokens inside *wo\_stop\_words* using `explode()`. Assign to a variable called `exploded`
```
exploded <- wo_stop %>%
mutate(word = explode(wo_stop_words))
```
4. Select the *word* and *author* columns, and remove any word with less than 3 characters. Assign to `all_words`
```
all_words <- exploded %>%
select(word, author) %>%
filter(nchar(word) > 2)
```
5. Cache the `all_words` variable using `compute()`
```
all_words <- all_words %>%
compute("all_words")
```
10\.4 Data Exploration
----------------------
*Used word clouds to explore the data*
1. Create a variable with the word count by author, name it `word_count`
```
word_count <- all_words %>%
count(author, word, sort = TRUE)
```
2. Filter `word_cout` to only retain “twain”, assign it to `twain_most`
```
twain_most <- word_count %>%
filter(author == "twain")
```
3. Use `wordcloud` to visualize the top 50 words used by Twain
```
twain_most %>%
head(50) %>%
collect() %>%
with(wordcloud::wordcloud(
word,
n,
colors = c("#999999", "#E69F00", "#56B4E9","#56B4E9"))
)
```
4. Filter `word_cout` to only retain “doyle”, assign it to `doyle_most`
```
doyle_most <- word_count %>%
filter(author == "doyle")
```
5. Used `wordcloud` to visualize the top 50 words used by Doyle that have more than 5 characters
```
doyle_most %>%
filter(nchar(word) > 5) %>%
head(50) %>%
collect() %>%
with(wordcloud::wordcloud(
word,
n,
colors = c("#999999", "#E69F00", "#56B4E9","#56B4E9")
))
```
6. Use `anti_join()` to figure out which words are used by Doyle but not Twain. Order the results by number of words.
```
doyle_unique <- doyle_most %>%
anti_join(twain_most, by = "word") %>%
arrange(desc(n))
```
7. Use `wordcloud` to visualize top 50 records in the previous step
```
doyle_unique %>%
head(50) %>%
collect() %>%
with(wordcloud::wordcloud(
word,
n,
colors = c("#999999", "#E69F00", "#56B4E9","#56B4E9"))
)
```
8. Find out how many times Twain used the word “sherlock”
```
all_words %>%
filter(author == "twain", word == "sherlock") %>%
tally()
```
```
## # Source: spark<?> [?? x 1]
## n
## <dbl>
## 1 47
```
9. Against the `twain` variable, use Hive’s *instr* and *lower* to make all ever word lower cap, and then look for “sherlock” in the line
```
twain %>%
mutate(line = lower(line)) %>%
filter(instr(line, "sherlock") > 0) %>%
pull(line)
```
```
## [1] "late sherlock holmes, and yet discernible by a member of a race charged"
## [2] "sherlock holmes."
## [3] "\"uncle sherlock! the mean luck of it!--that he should come just"
## [4] "another trouble presented itself. \"uncle sherlock 'll be wanting to talk"
## [5] "flint buckner's cabin in the frosty gloom. they were sherlock holmes and"
## [6] "\"uncle sherlock's got some work to do, gentlemen, that 'll keep him till"
## [7] "\"by george, he's just a duke, boys! three cheers for sherlock holmes,"
## [8] "he brought sherlock holmes to the billiard-room, which was jammed with"
## [9] "of interest was there--sherlock holmes. the miners stood silent and"
## [10] "the room; the chair was on it; sherlock holmes, stately, imposing,"
## [11] "\"you have hunted me around the world, sherlock holmes, yet god is my"
## [12] "\"if it's only sherlock holmes that's troubling you, you needn't worry"
## [13] "they sighed; then one said: \"we must bring sherlock holmes. he can be"
## [14] "i had small desire that sherlock holmes should hang for my deeds, as you"
## [15] "\"my name is sherlock holmes, and i have not been doing anything.\""
## [16] "late sherlock holmes, and yet discernible by a member of a race charged"
## [17] "plus fort que sherlock holmes"
## [18] "sherlock holmes entre en scene"
## [19] "sherlock holmes"
## [20] "--l'oncle sherlock! quelle guigne!"
## [21] "bien! cette fois sherlock sera tres embarrasse; il manquera de preuve et"
## [22] "--l'oncle sherlock va vouloir, ce soir, causer avec moi de notre"
## [23] "passage etroit sur la chambre de sherlock holmes; ils s'y embusquerent"
## [24] "d'archy, il ne peut etre nullement compare au genie de sherlock holmes,"
## [25] "flint buckner. c'etait sherlock holmes et son neveu."
## [26] "--messieurs, mon oncle sherlock a un travail pressant a faire qui le"
## [27] "--mes amis! trois vivats a sherlock holmes, le plus grand homme qui ait"
## [28] "mettaient de coeur a leur reception. arrive dans sa chambre, sherlock"
## [29] "il introduisit sherlock holmes dans la salle de billard qui etait comble"
## [30] "de mineurs, tous impatients de le voir arriver. sherlock commanda les"
## [31] "sherlock holmes. les mineurs se tenaient en demi-cercle en observant un"
## [32] "sherlock au milieu de nous? dit ferguson."
## [33] "arracher; quand sherlock y met la main, il faut qu'ils parlent, qu'ils"
## [34] "plus complexe; sherlock va pouvoir etaler devant nous son art et sa"
## [35] "regardant comment sherlock procede. mais non, au lieu de cela, il a"
## [36] "sherlock holmes etait assis sur cette chaise, l'air grave, imposant et"
## [37] "sherlock holmes leva la main pour concentrer sur lui l'attention du"
## [38] "pas baisse pavillon devant sherlock holmes.\" la serenite de ce dernier"
## [39] "objets, il y a une heure a peine pendant que maitre sherlock holmes se"
## [40] "sherlock regardait avec la volonte bien arretee de conserver son"
## [41] "silence complet qui suivit, maitre sherlock prit la parole, disant avec"
## [42] "--vous m'avez pourchasse dans tout l'univers, sherlock holmes, et"
## [43] "--si c'est uniquement sherlock holmes qui vous inquiete, inutile de vous"
## [44] "\"elles soupirerent, puis l'une dit: \"il faut que nous amenions sherlock"
## [45] "d'assister de sang-froid a la pendaison de sherlock holmes. j'avais"
## [46] "--je m'appelle sherlock holmes; je n'ai rien a me reprocher."
## [47] "plus fort que sherlock holmes"
```
10. Close Spark session
```
spark_disconnect(sc)
```
Most of these lines are in a short story by Mark Twain called [A Double Barrelled Detective Story](https://www.gutenberg.org/files/3180/3180-h/3180-h.htm#link2H_4_0008). As per the [Wikipedia](https://en.wikipedia.org/wiki/A_Double_Barrelled_Detective_Story) page about this story, this is a satire by Twain on the mystery novel genre, published in 1902\.
10\.1 Data Import
-----------------
*Read the book data into Spark*
1. Load the `sparklyr` library
```
library(sparklyr)
```
2. Open a Spark session
```
sc <- spark_connect(master = "local")
```
```
## Re-using existing Spark connection to local
```
3. Use the `spark_read_text()` function to read the **mark\_twain.txt** file, assign it to a variable called `twain`
```
twain <- spark_read_text(sc, "twain", "/usr/share/class/books/mark_twain.txt")
```
4. Use the `spark_read_text()` function to read the **arthur\_doyle.txt** file, assign it to a variable called `doyle`
```
doyle <- spark_read_text(sc, "doyle", "/usr/share/class/books/arthur_doyle.txt")
```
10\.2 Tidying data
------------------
*Prepare the data for analysis*
1. Load the `dplyr` library
```
library(dplyr)
```
2. Add a column to `twain` named `author` with a value of “twain”. Assign it to a new variable called `twain_id`
```
twain_id <- twain %>%
mutate(author = "twain")
```
3. Add a column to `doyle` named `author` with a value of “doyle”. Assign it to a new variable called `doyle_id`
```
doyle_id <- doyle %>%
mutate(author = "doyle")
```
4. Use `sdf_bind_rows()` to append the two files together in a variable called `both`
```
both <- doyle_id %>%
sdf_bind_rows(twain_id)
```
5. Preview `both`
```
both
```
```
## # Source: spark<?> [?? x 2]
## line author
## <chr> <chr>
## 1 "THE RETURN OF SHERLOCK HOLMES," doyle
## 2 "" doyle
## 3 "A Collection of Holmes Adventures" doyle
## 4 "" doyle
## 5 "" doyle
## 6 "by Sir Arthur Conan Doyle" doyle
## 7 "" doyle
## 8 "" doyle
## 9 "" doyle
## 10 "" doyle
## # … with more rows
```
6. Filter out empty lines into a variable called `all_lines`
```
all_lines <- both %>%
filter(nchar(line) > 0)
```
7. Use Hive’s *regexp\_replace* to remove punctuation, assign it to the same `all_lines` variable
```
all_lines <- all_lines %>%
mutate(line = regexp_replace(line, "[_\"\'():;,.!?\\-]", " "))
```
10\.3 Transform the data
------------------------
*Use feature transformers to make additional preparations*
1. Use `ft_tokenizer()` to separate each word. in the line. Set the `output_col` to “word\_list”. Assign to a variable called `word_list`
```
word_list <- all_lines %>%
ft_tokenizer(
input_col = "line",
output_col = "word_list"
)
```
2. Remove “stop words” with the `ft_stop_words_remover()` transformer. Set the `output_col` to “wo\_stop\_words”. Assign to a variable called `wo_stop`
```
wo_stop <- word_list %>%
ft_stop_words_remover(
input_col = "word_list",
output_col = "wo_stop_words"
)
```
3. Un\-nest the tokens inside *wo\_stop\_words* using `explode()`. Assign to a variable called `exploded`
```
exploded <- wo_stop %>%
mutate(word = explode(wo_stop_words))
```
4. Select the *word* and *author* columns, and remove any word with less than 3 characters. Assign to `all_words`
```
all_words <- exploded %>%
select(word, author) %>%
filter(nchar(word) > 2)
```
5. Cache the `all_words` variable using `compute()`
```
all_words <- all_words %>%
compute("all_words")
```
10\.4 Data Exploration
----------------------
*Used word clouds to explore the data*
1. Create a variable with the word count by author, name it `word_count`
```
word_count <- all_words %>%
count(author, word, sort = TRUE)
```
2. Filter `word_cout` to only retain “twain”, assign it to `twain_most`
```
twain_most <- word_count %>%
filter(author == "twain")
```
3. Use `wordcloud` to visualize the top 50 words used by Twain
```
twain_most %>%
head(50) %>%
collect() %>%
with(wordcloud::wordcloud(
word,
n,
colors = c("#999999", "#E69F00", "#56B4E9","#56B4E9"))
)
```
4. Filter `word_cout` to only retain “doyle”, assign it to `doyle_most`
```
doyle_most <- word_count %>%
filter(author == "doyle")
```
5. Use `wordcloud` to visualize the top 50 words used by Doyle that have more than 5 characters
```
doyle_most %>%
filter(nchar(word) > 5) %>%
head(50) %>%
collect() %>%
with(wordcloud::wordcloud(
word,
n,
colors = c("#999999", "#E69F00", "#56B4E9","#56B4E9")
))
```
6. Use `anti_join()` to figure out which words are used by Doyle but not by Twain. Order the results by word count, in descending order.
```
doyle_unique <- doyle_most %>%
anti_join(twain_most, by = "word") %>%
arrange(desc(n))
```
7. Use `wordcloud` to visualize top 50 records in the previous step
```
doyle_unique %>%
head(50) %>%
collect() %>%
with(wordcloud::wordcloud(
word,
n,
colors = c("#999999", "#E69F00", "#56B4E9","#56B4E9"))
)
```
8. Find out how many times Twain used the word “sherlock”
```
all_words %>%
filter(author == "twain", word == "sherlock") %>%
tally()
```
```
## # Source: spark<?> [?? x 1]
## n
## <dbl>
## 1 47
```
9. Against the `twain` variable, use Hive’s *lower* and *instr* to make every word lower case, and then look for “sherlock” in the line
```
twain %>%
mutate(line = lower(line)) %>%
filter(instr(line, "sherlock") > 0) %>%
pull(line)
```
```
## [1] "late sherlock holmes, and yet discernible by a member of a race charged"
## [2] "sherlock holmes."
## [3] "\"uncle sherlock! the mean luck of it!--that he should come just"
## [4] "another trouble presented itself. \"uncle sherlock 'll be wanting to talk"
## [5] "flint buckner's cabin in the frosty gloom. they were sherlock holmes and"
## [6] "\"uncle sherlock's got some work to do, gentlemen, that 'll keep him till"
## [7] "\"by george, he's just a duke, boys! three cheers for sherlock holmes,"
## [8] "he brought sherlock holmes to the billiard-room, which was jammed with"
## [9] "of interest was there--sherlock holmes. the miners stood silent and"
## [10] "the room; the chair was on it; sherlock holmes, stately, imposing,"
## [11] "\"you have hunted me around the world, sherlock holmes, yet god is my"
## [12] "\"if it's only sherlock holmes that's troubling you, you needn't worry"
## [13] "they sighed; then one said: \"we must bring sherlock holmes. he can be"
## [14] "i had small desire that sherlock holmes should hang for my deeds, as you"
## [15] "\"my name is sherlock holmes, and i have not been doing anything.\""
## [16] "late sherlock holmes, and yet discernible by a member of a race charged"
## [17] "plus fort que sherlock holmes"
## [18] "sherlock holmes entre en scene"
## [19] "sherlock holmes"
## [20] "--l'oncle sherlock! quelle guigne!"
## [21] "bien! cette fois sherlock sera tres embarrasse; il manquera de preuve et"
## [22] "--l'oncle sherlock va vouloir, ce soir, causer avec moi de notre"
## [23] "passage etroit sur la chambre de sherlock holmes; ils s'y embusquerent"
## [24] "d'archy, il ne peut etre nullement compare au genie de sherlock holmes,"
## [25] "flint buckner. c'etait sherlock holmes et son neveu."
## [26] "--messieurs, mon oncle sherlock a un travail pressant a faire qui le"
## [27] "--mes amis! trois vivats a sherlock holmes, le plus grand homme qui ait"
## [28] "mettaient de coeur a leur reception. arrive dans sa chambre, sherlock"
## [29] "il introduisit sherlock holmes dans la salle de billard qui etait comble"
## [30] "de mineurs, tous impatients de le voir arriver. sherlock commanda les"
## [31] "sherlock holmes. les mineurs se tenaient en demi-cercle en observant un"
## [32] "sherlock au milieu de nous? dit ferguson."
## [33] "arracher; quand sherlock y met la main, il faut qu'ils parlent, qu'ils"
## [34] "plus complexe; sherlock va pouvoir etaler devant nous son art et sa"
## [35] "regardant comment sherlock procede. mais non, au lieu de cela, il a"
## [36] "sherlock holmes etait assis sur cette chaise, l'air grave, imposant et"
## [37] "sherlock holmes leva la main pour concentrer sur lui l'attention du"
## [38] "pas baisse pavillon devant sherlock holmes.\" la serenite de ce dernier"
## [39] "objets, il y a une heure a peine pendant que maitre sherlock holmes se"
## [40] "sherlock regardait avec la volonte bien arretee de conserver son"
## [41] "silence complet qui suivit, maitre sherlock prit la parole, disant avec"
## [42] "--vous m'avez pourchasse dans tout l'univers, sherlock holmes, et"
## [43] "--si c'est uniquement sherlock holmes qui vous inquiete, inutile de vous"
## [44] "\"elles soupirerent, puis l'une dit: \"il faut que nous amenions sherlock"
## [45] "d'assister de sang-froid a la pendaison de sherlock holmes. j'avais"
## [46] "--je m'appelle sherlock holmes; je n'ai rien a me reprocher."
## [47] "plus fort que sherlock holmes"
```
10. Close Spark session
```
spark_disconnect(sc)
```
Most of these lines are in a short story by Mark Twain called [A Double Barrelled Detective Story](https://www.gutenberg.org/files/3180/3180-h/3180-h.htm#link2H_4_0008). As per the [Wikipedia](https://en.wikipedia.org/wiki/A_Double_Barrelled_Detective_Story) page about this story, this is a satire by Twain on the mystery novel genre, published in 1902\.
11 Spark data caching
=====================
11\.1 Map data
--------------
*See the mechanics of how Spark is able to use files as a data source*
1. Examine the contents of the **/usr/share/class/files** folder
2. Load the `sparklyr` library
```
library(sparklyr)
```
3. Use `spark_connect()` to create a new local Spark session
```
sc <- spark_connect(master = "local")
```
```
## * Using Spark: 2.4.0
```
4. Load the `readr` and `purrr` libraries
```
library(readr)
library(purrr)
```
5. Read the top 5 rows of the **transactions\_1** CSV file
```
top_rows <- read_csv("/usr/share/class/files/transactions_1.csv", n_max = 5)
```
```
## Parsed with column specification:
## cols(
## order_id = col_double(),
## customer_id = col_double(),
## customer_name = col_character(),
## customer_phone = col_character(),
## customer_cc = col_double(),
## customer_lon = col_double(),
## customer_lat = col_double(),
## date = col_date(format = ""),
## date_year = col_double(),
## date_month = col_double(),
## date_month_name = col_character(),
## date_day = col_character(),
## product_id = col_double(),
## price = col_double()
## )
```
6. Create a list based on the column names, with each list item set to “character” as its value. Name the variable `file_columns`
```
file_columns <- top_rows %>%
rename_all(tolower) %>%
map(function(x) "character")
```
7. Preview the contents of the `file_columns` variable
```
head(file_columns)
```
```
## $order_id
## [1] "character"
##
## $customer_id
## [1] "character"
##
## $customer_name
## [1] "character"
##
## $customer_phone
## [1] "character"
##
## $customer_cc
## [1] "character"
##
## $customer_lon
## [1] "character"
```
8. Use `spark_read_csv()` to “map” the file’s structure and location to the Spark context. Assign it to the `spark_lineitems` variable
```
spark_lineitems <- spark_read_csv(
sc,
name = "orders",
path = "/usr/share/class/files",
memory = FALSE,
columns = file_columns,
infer_schema = FALSE
)
```
9. In the Connections pane, click on the table icon next to the `orders` table
10. Verify that the new variable pointer works by using `tally()`
```
spark_lineitems %>%
tally()
```
```
## # Source: spark<?> [?? x 1]
## n
## <dbl>
## 1 250000
```
11\.2 Caching data
------------------
*Learn how to cache a subset of the data in Spark*
1. Create a subset of the *orders* table object. Summarize by **date**, creating a total price and number of items sold.
```
daily_orders <- spark_lineitems %>%
mutate(price = as.double(price)) %>%
group_by(date) %>%
summarise(total_sales = sum(price, na.rm = TRUE), no_items = n())
```
2. Use `compute()` to extract the data into Spark memory
```
cached_orders <- compute(daily_orders, "daily")
```
3. Confirm new variable pointer works
```
head(cached_orders)
```
```
## # Source: spark<?> [?? x 3]
## date total_sales no_items
## <chr> <dbl> <dbl>
## 1 2016-01-27 39311. 5866
## 2 2016-01-28 38424. 5771
## 3 2016-02-03 37666. 5659
## 4 2016-01-29 37582. 5652
## 5 2016-02-04 38193. 5719
## 6 2016-02-10 38500. 5686
```
4. Go to the Spark UI
5. Click the **Storage** button
6. Notice that “orders” is now cached into Spark memory
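Note that `compute()` is only one way to control what Spark keeps in memory. As a minimal sketch, assuming the same `sc` connection and the “daily” table registered above, sparklyr also exposes `tbl_cache()` and `tbl_uncache()` for caching and releasing a registered table explicitly:
```
# Minimal sketch: assumes the `sc` connection and the "daily" table
# registered by compute() above are still available
library(sparklyr)
library(dplyr)

src_tbls(sc)                  # list the tables registered in this Spark session

tbl_cache(sc, "daily")        # explicitly cache the "daily" table in Spark memory
tbl(sc, "daily") %>% head()   # queries now read from the in-memory copy

tbl_uncache(sc, "daily")      # release the cached copy when no longer needed
```
The Storage tab of the Spark UI should reflect these changes in the same way as the steps above.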
12 Spark Pipelines
==================
12\.1 Build an Estimator (plan)
-------------------------------
*Create a simple estimator that transforms data and fits a model*
1. Use the `spark_lineitems` variable to create a new aggregation by `order_id`. Summarize the total sales and number of items
```
spark_lineitems %>%
mutate(price = as.double(price)) %>%
group_by(order_id) %>%
summarise(total_sales = sum(price, na.rm = TRUE), no_items = n())
```
```
## # Source: spark<?> [?? x 3]
## order_id total_sales no_items
## <chr> <dbl> <dbl>
## 1 33531 45.4 6
## 2 33538 79.9 12
## 3 33541 70.0 10
## 4 33550 92.8 16
## 5 33561 46.6 8
## 6 33565 55.6 8
## 7 33568 76.6 10
## 8 33571 74 12
## 9 33572 54.4 8
## 10 33582 82.5 12
## # … with more rows
```
2. Assign the code to a new variable called `orders`
```
orders <- spark_lineitems %>%
mutate(price = as.double(price)) %>%
group_by(order_id) %>%
summarise(total_sales = sum(price, na.rm = TRUE), no_items = n())
```
3. Start a new code chunk that calls `ml_pipeline(sc)`
```
ml_pipeline(sc)
```
```
## Pipeline (Estimator) with no stages
## <pipeline_55c15862a53>
```
4. Pipe the `ml_pipeline()` code into a `ft_dplyr_transformer()` call. Use the `orders` variable for its argument
```
ml_pipeline(sc) %>%
ft_dplyr_transformer(orders)
```
```
## Pipeline (Estimator) with 1 stage
## <pipeline_55c68ca32c1>
## Stages
## |--1 SQLTransformer (Transformer)
## | <dplyr_transformer_55c795af321>
## | (Parameters -- Column Names)
```
5. Add an `ft_binarizer()` step that determines if the total sale is above $50\. Name the new variable `above_50`
```
ml_pipeline(sc) %>%
ft_dplyr_transformer(orders) %>%
ft_binarizer("total_sales", "above_50", 50)
```
```
## Pipeline (Estimator) with 2 stages
## <pipeline_55c2f66e520>
## Stages
## |--1 SQLTransformer (Transformer)
## | <dplyr_transformer_55c74b98194>
## | (Parameters -- Column Names)
## |--2 Binarizer (Transformer)
## | <binarizer_55cbe5bb73>
## | (Parameters -- Column Names)
## | input_col: total_sales
## | output_col: above_50
```
6. Using `ft_r_formula()`, add a step that sets the model’s formula to: `above_50 ~ no_items`
```
ml_pipeline(sc) %>%
ft_dplyr_transformer(orders) %>%
ft_binarizer("total_sales", "above_50", 50) %>%
ft_r_formula(above_50 ~ no_items)
```
```
## Pipeline (Estimator) with 3 stages
## <pipeline_55c7014263c>
## Stages
## |--1 SQLTransformer (Transformer)
## | <dplyr_transformer_55c640e84e2>
## | (Parameters -- Column Names)
## |--2 Binarizer (Transformer)
## | <binarizer_55c6073c45b>
## | (Parameters -- Column Names)
## | input_col: total_sales
## | output_col: above_50
## |--3 RFormula (Estimator)
## | <r_formula_55c3c7df218>
## | (Parameters -- Column Names)
## | features_col: features
## | label_col: label
## | (Parameters)
## | force_index_label: FALSE
## | formula: above_50 ~ no_items
## | handle_invalid: error
## | stringIndexerOrderType: frequencyDesc
```
7. Finalize the pipeline by adding an `ml_logistic_regression()` step; no arguments are needed
```
ml_pipeline(sc) %>%
ft_dplyr_transformer(orders) %>%
ft_binarizer("total_sales", "above_50", 50) %>%
ft_r_formula(above_50 ~ no_items) %>%
ml_logistic_regression()
```
```
## Pipeline (Estimator) with 4 stages
## <pipeline_55cc32e40c>
## Stages
## |--1 SQLTransformer (Transformer)
## | <dplyr_transformer_55c183307c4>
## | (Parameters -- Column Names)
## |--2 Binarizer (Transformer)
## | <binarizer_55c10313fcc>
## | (Parameters -- Column Names)
## | input_col: total_sales
## | output_col: above_50
## |--3 RFormula (Estimator)
## | <r_formula_55c69641535>
## | (Parameters -- Column Names)
## | features_col: features
## | label_col: label
## | (Parameters)
## | force_index_label: FALSE
## | formula: above_50 ~ no_items
## | handle_invalid: error
## | stringIndexerOrderType: frequencyDesc
## |--4 LogisticRegression (Estimator)
## | <logistic_regression_55c412af3f9>
## | (Parameters -- Column Names)
## | features_col: features
## | label_col: label
## | prediction_col: prediction
## | probability_col: probability
## | raw_prediction_col: rawPrediction
## | (Parameters)
## | aggregation_depth: 2
## | elastic_net_param: 0
## | family: auto
## | fit_intercept: TRUE
## | max_iter: 100
## | reg_param: 0
## | standardization: TRUE
## | threshold: 0.5
## | tol: 1e-06
```
8. Assign the code to a new variable called `orders_plan`
```
orders_plan <- ml_pipeline(sc) %>%
ft_dplyr_transformer(orders) %>%
ft_binarizer("total_sales", "above_50", 50) %>%
ft_r_formula(above_50 ~ no_items) %>%
ml_logistic_regression()
```
9. Call `orders_plan` to confirm that all of the steps are present
```
orders_plan
```
```
## Pipeline (Estimator) with 4 stages
## <pipeline_55c13a42573>
## Stages
## |--1 SQLTransformer (Transformer)
## | <dplyr_transformer_55c59d87735>
## | (Parameters -- Column Names)
## |--2 Binarizer (Transformer)
## | <binarizer_55c52b9fd9e>
## | (Parameters -- Column Names)
## | input_col: total_sales
## | output_col: above_50
## |--3 RFormula (Estimator)
## | <r_formula_55c30166c14>
## | (Parameters -- Column Names)
## | features_col: features
## | label_col: label
## | (Parameters)
## | force_index_label: FALSE
## | formula: above_50 ~ no_items
## | handle_invalid: error
## | stringIndexerOrderType: frequencyDesc
## |--4 LogisticRegression (Estimator)
## | <logistic_regression_55c5655d997>
## | (Parameters -- Column Names)
## | features_col: features
## | label_col: label
## | prediction_col: prediction
## | probability_col: probability
## | raw_prediction_col: rawPrediction
## | (Parameters)
## | aggregation_depth: 2
## | elastic_net_param: 0
## | family: auto
## | fit_intercept: TRUE
## | max_iter: 100
## | reg_param: 0
## | standardization: TRUE
## | threshold: 0.5
## | tol: 1e-06
```
12\.2 Build a Transformer (fit)
-------------------------------
*Execute the planned changes to obtain a new model*
1. Use `ml_fit()` to execute the changes in `orders_plan` using the `spark_lineitems` data. Assign to a new variable called `orders_fit`
```
orders_fit <- ml_fit(orders_plan, spark_lineitems)
```
2. Call `orders_fit` to see the print\-out of the newly fitted model
```
orders_fit
```
```
## PipelineModel (Transformer) with 4 stages
## <pipeline_55c13a42573>
## Stages
## |--1 SQLTransformer (Transformer)
## | <dplyr_transformer_55c59d87735>
## | (Parameters -- Column Names)
## |--2 Binarizer (Transformer)
## | <binarizer_55c52b9fd9e>
## | (Parameters -- Column Names)
## | input_col: total_sales
## | output_col: above_50
## |--3 RFormulaModel (Transformer)
## | <r_formula_55c30166c14>
## | (Parameters -- Column Names)
## | features_col: features
## | label_col: label
## | (Transformer Info)
## | formula: chr "above_50 ~ no_items"
## |--4 LogisticRegressionModel (Transformer)
## | <logistic_regression_55c5655d997>
## | (Parameters -- Column Names)
## | features_col: features
## | label_col: label
## | prediction_col: prediction
## | probability_col: probability
## | raw_prediction_col: rawPrediction
## | (Transformer Info)
## | coefficient_matrix: num [1, 1] 2.21
## | coefficients: num 2.21
## | intercept: num -16.7
## | intercept_vector: num -16.7
## | num_classes: int 2
## | num_features: int 1
## | threshold: num 0.5
## | thresholds: num [1:2] 0.5 0.5
```
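Before moving on, it can be handy to pull the fitted logistic regression out of the pipeline model and inspect it directly. The following is a hedged sketch, assuming `orders_fit` from the previous step; `ml_stage()` takes the pipeline model and a stage index (or name), and the field names below are taken from the print\-out above, so the exact accessors may vary across sparklyr versions.
```
# Hedged sketch: extract the fourth stage (the fitted LogisticRegressionModel)
lr_model <- ml_stage(orders_fit, 4)

# Field names follow the print-out above; accessors may differ by sparklyr version
lr_model$coefficients
lr_model$intercept
```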
12\.3 Predictions using Spark Pipelines
---------------------------------------
*Overview of how to use a fitted pipeline to run predictions*
1. Use `ml_transform()` in order to use the `orders_fit` model to run predictions over `spark_lineitems`
```
orders_preds <- ml_transform(orders_fit, spark_lineitems)
```
2. With `count()`, compare the results from `above_50` against the predictions; the variable created by `ml_transform()` is called `prediction`
```
orders_preds %>%
count(above_50, prediction)
```
```
## # Source: spark<?> [?? x 3]
## # Groups: above_50
## above_50 prediction n
## <dbl> <dbl> <dbl>
## 1 0 1 783
## 2 0 0 9282
## 3 1 1 16387
## 4 1 0 92
```
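The confusion counts above can also be rolled up into a single accuracy figure without collecting the data into R. A minimal sketch, assuming the `orders_preds` table from step 1 and that `dplyr` is loaded:
```
# Hedged sketch: compute overall accuracy directly in Spark
orders_preds %>%
  mutate(correct = as.numeric(above_50 == prediction)) %>%
  summarise(accuracy = mean(correct, na.rm = TRUE))
```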
12\.4 Save the pipeline objects
-------------------------------
*Overview of how to save the Estimator and the Transformer*
1. Use `ml_save()` to save `orders_plan` in a new folder called “saved\_model”
```
ml_save(orders_plan, "saved_model", overwrite = TRUE)
```
```
## Model successfully saved.
```
2. Navigate to the “saved\_model” folder to inspect its contents
3. Use `ml_save()` to save `orders_fit` in a new folder called “saved\_pipeline”
```
ml_save(orders_fit, "saved_pipeline", overwrite = TRUE)
```
```
## Model successfully saved.
```
4. Navigate to the “saved\_pipeline” folder to inspect its contents
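A saved Estimator or Transformer can later be read back into a Spark session with `ml_load()`. The sketch below is a hedged example that assumes the folders created above still exist and that `sc` is an open connection:
```
# Hedged sketch: reload the saved objects in a later session
reloaded_plan  <- ml_load(sc, "saved_model")     # the Estimator (un-fitted pipeline)
reloaded_model <- ml_load(sc, "saved_pipeline")  # the Transformer (fitted pipeline)

# The reloaded Transformer can score new data right away
new_preds <- ml_transform(reloaded_model, spark_lineitems)
```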
2 Introduction
==============
2\.1 What is big data?
----------------------
When we handle a data\-related problem, how do we know that we are actually dealing with “big data?” What is “big data?” What characteristics make a dataset big? The following three characteristics (three Vs of big data, source: [TechTarget](https://whatis.techtarget.com/definition/3Vs)) can help us define the size of data:
1. **Volume**: The number of rows or cases (e.g., students) and the number of columns or variables (e.g., age, gender, student responses, response times)
2. **Variety**: Whether there are secondary sources or data that expand the existing data even further
3. **Velocity**: Whether real\-time data are being used
Figure 2\.1: Three Vs of big data
2\.2 Why is big data important?
-------------------------------
Nowadays nearly every private and public sector, from industry and commerce to health and education, is talking about big data. Data is a strategic and valuable asset when we know which questions we want to answer (see Bernard Marr’s article titled [Big Data: Too Many Answers, Not Enough Questions](https://www.forbes.com/sites/bernardmarr/2015/08/25/big-data-too-many-answers-not-enough-questions/#527635fb1361)). Therefore, it is very important to identify the right questions at the beginning of data collection. More data with appropriate questions can yield quality answers that we can use for better decision\-making. However, too much data without any purpose may obfuscate the truth.
Currently big data is used to better understand customers and their behaviors and preferences. Consider [Netflix](https://www.netflix.com) – one of the world’s leading subscription services for watching movies and TV shows online. They use big data – such as customers’ ratings for each movie and TV show and when customers subscribe/unsubscribe – to make better recommendations for existing customers and to convince more customers to subscribe. Target, a big retailer in the US, implements data mining techniques to predict the pregnancies of its shoppers and to send them a sale booklet for baby clothes, cribs, and diapers (see this interesting [article](https://www.driveresearch.com/single-post/2016/12/06/How-Target-Used-Data-Analytics-to-Predict-Pregnancies)). Car insurance companies analyze big data from their customers to understand how well those customers actually drive and how much they need to charge each customer to make a profit.
In education, there is no shortage of big data. Student records, teacher observations, assessment results, and other student\-related databases make tons of information available to researchers and practitioners. With the advent of new technologies such as [facial recognition software](https://www.edweek.org/ew/articles/2016/01/13/the-future-of-big-data-and-analytics.html) and [biometric signals](https://www.smartdatacollective.com/jay-z-kanye-west-used-biometrics-beat-album-leaks/), now we get access to a variety of visual and audio data on students. In the context of educational testing and psychometrics, big data can help us to assess students more accurately, while continuously monitoring their progress via [learning analytics](https://isit.arts.ubc.ca/learning-analytics-examples/). We can use [log data](https://link.springer.com/content/pdf/10.1007%2Fs41237-018-0063-y.pdf) and response times to understand students’ engagement with the test, whether they were cheating, and whether they had pre\-knowledge of the items presented on the test.
2\.3 How do we analyze big data?
--------------------------------
Big data analysis often begins with reading and then extracting the data. First, we need to read the data into a software program – such as R – and then manage it properly. Second, we need to extract a subset, sample, or summary from the big data. Due to its size, even a subset of the big data might itself be quite large. Third, we need to repeat computation (e.g., fitting a model) for many subgroups of the data (e.g., for each individual or by larger groups that combine individuals based on a particular characteristic). Therefore, we need to use the right tools for our data operations. For example, we may need to store big data in a data warehouse (either a local database or a cloud system) and then pass subsets of data from the warehouse to the local machine where we are analyzing the data.
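To make this read, extract, and summarize workflow concrete, here is a minimal sketch in R. The file name (`big_data.csv`) and the column names (`group` and `score`) are purely hypothetical placeholders rather than part of any dataset used in this session:
```
library(readr)
library(dplyr)

# Step 1: read only a manageable chunk of the (hypothetical) big file
big_subset <- read_csv("big_data.csv", n_max = 100000)  # hypothetical file name

# Step 2: keep only the columns we need and summarize by subgroup
group_summary <- big_subset %>%
  select(group, score) %>%                               # hypothetical columns
  group_by(group) %>%
  summarise(mean_score = mean(score, na.rm = TRUE), n = n())

# Step 3: work with the small summary instead of the raw data
group_summary
```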
R, maintained by the [R Core Team](https://www.r-project.org/contributors.html), has its packages (collections of R functions) available on [The Comprehensive R Archive Network (CRAN)](https://cran.r-project.org/). It used to be considered an *inadequate* programming language for big data (see Douglas Merrill’s [article](https://www.forbes.com/sites/douglasmerrill/2012/05/01/r-is-not-enough-for-big-data/#59c7ad9b5924) from 2012\). Fortunately, today’s R, with the help of [RStudio](https://www.rstudio.com/) and many data scientists, is capable of running most analytic tasks for big data either alone or with the help of other programs and programming languages, such as [Spark](https://spark.apache.org/docs/latest/sparkr.html), [Hadoop](https://hadoop.apache.org/), [SQL](https://en.wikipedia.org/wiki/SQL), and [C\+\+](http://www.cplusplus.com/) (see Figure [2\.2](intro.html#fig:fig1-2)). R is an amazing data science programming tool: it has a myriad of statistical techniques available and can readily translate the results of our analyses into colourful graphics. There is no doubt that R is one of the most preferred programming tools for statisticians, data scientists, and data analysts who deal with big data on a daily basis.
Figure 2\.2: Other big data programs integrated with R
Some general suggestions on big data analysis include:
1. Obtain a strong computer (multiple and faster CPUs, more memory)
2. If memory is a problem, access the data differently or split up the data
3. Preview a subset of big data using a program, **not** the entire raw data.
4. Visualize either a subset of data or a summary of the big data, **not** the entire raw data.
5\. Calculate summary statistics only for the variables you need, **not** for all variables in the big data.
6. Delay computationally expensive operations (e.g., those that require large memory) until you actually need them.
7\. Consider using parallel computing – the [parallel](https://stat.ethz.ch/R-manual/R-devel/library/parallel/doc/parallel.pdf) and [foreach](https://cran.r-project.org/web/packages/foreach/vignettes/foreach.pdf) packages and [cloud computing](https://rstudio.cloud/) (see the sketch after the timing example below)
8. Profile big tasks (in R) to cut down on computational time
```
start_time <- proc.time()
# Do all of your coding here
end_time <- proc.time()
end_time - start_time
# Alternatively,
system.time({
# Do all of your coding here
})
```
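Relatedly, suggestion 7 above mentioned parallel computing. Here is a minimal sketch of splitting a computation across cores with the `parallel` package (which ships with R); the per\-group task is a generic toy example, not one of the workshop analyses.

```
library(parallel)

# a toy per-group task: fit a regression within each chunk of rows
groups <- split(mtcars, mtcars$cyl)
fit_one <- function(d) coef(lm(mpg ~ wt, data = d))

# use one fewer core than available so the machine stays responsive
n_cores <- max(1, detectCores() - 1)

cl <- makeCluster(n_cores)
results <- parLapply(cl, groups, fit_one)
stopCluster(cl)

results
```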
During this training session, we will follow these steps and demonstrate how each one helps us explore, visualize, and model big data in R.
2\.4 Additional resources
-------------------------
There are dozens of online resources and books on big data analysis. Here are a few of them that we recommend you check out:
* James, G., Witten, D., Hastie, T., \& Tibshirani, R. (2017\). An introduction to statistical learning with applications in R. New York, NY: Springer. (Freely available from the authors’ website: [http://www\-bcf.usc.edu/\~gareth/ISL/index.html](http://www-bcf.usc.edu/~gareth/ISL/index.html))
* Grolemund, G., \& Wickham, H. (2016\). R for data science. Sebastopol, CA: O’Reilly Media, Inc. (Freely available from the authors’ website: <http://r4ds.had.co.nz/>)
* Baumer, B. S., Kaplan, D. T., \& Horton, N. J. (2017\). [Modern data science with R](https://mdsr-book.github.io/). Boca Raton, FL: CRC Press.
* Romero, C., Ventura, S., Pechenizkiy, M., \& Baker, de, R. S. J. (Eds.) (2011\). Handbook of educational data mining. (Chapman and Hall/CRC data mining and knowledge discovery series). Boca Raton: CRC Press.
* DataCamp: [https://www.datacamp.com/tracks/big\-data\-with\-r](https://www.datacamp.com/tracks/big-data-with-r)
* RStudio: [https://www.rstudio.com/resources/webinars/working\-with\-big\-data\-in\-r/](https://www.rstudio.com/resources/webinars/working-with-big-data-in-r/)
---
2\.5 PISA dataset
-----------------
In this training session, we will use the 2015 administration of the OECD’s [Programme for International Student Assessment (PISA)](http://www.oecd.org/pisa/). PISA is a large\-scale, international assessment that involves students, parents, teachers, and school principals from all over the world as participants. Every three years, PISA tests 15\-year\-old students in reading, mathematics, and science. The tests are designed to gauge how well the students master key subjects in order to be prepared for real\-life situations in the adult world.
In addition to assessing students’ competencies, PISA also aims to inform educational policies and practices for the participating countries and economies by providing additional information obtained from students, parents, teachers, and school principals through the questionnaires. Students complete a background questionnaire with questions about themselves, their family and home, and their school and learning experiences. School principals complete a questionnaire about the system and learning environment in schools. In some countries, teachers and parents also complete optional questionnaires to provide more information on their perceptions and expectations regarding students. In this training session, we specifically focus on the assessment data and the background questionnaire that all participating students are required to complete.
The 2015 administration of PISA involves approximately 540,000 15\-year\-old students from 72 participating countries and economies. During this training session, we will sometimes use the entire dataset or take a subset of the PISA dataset to demonstrate the methods used for exploring, visualizing, and modeling big data. For more details about the PISA dataset and its codebooks, please see [the PISA website](http://www.oecd.org/pisa/data/2015database/).
The three data files that we will use in this training session can be downloaded using the following links. Please download and unzip the files to follow the examples that we will demonstrate in this training session.
* <http://bit.ly/2VleDPZ> (all PISA records – 331\.65 MB)
* <http://bit.ly/2Uf2mQA> (only 6 regions with 17 countries – 103\.76 MB)
* <http://bit.ly/2YNzei0> (randomly selected cases from 6 regions – 22\.92 MB)
| Big Data |
okanbulut.github.io | https://okanbulut.github.io/bigdata/wrangling-big-data.html |
4 Wrangling big data
====================
Data wrangling is a general term that refers to transforming data. Wrangling could involve subsetting, recoding, and transforming variables. For the workshop, we’ll also treat summarizing data as wrangling because it fits within our discussion of the `data.table` and `sparklyr` packages, although summarizing might more appropriately occur during data exploration/initial data analysis.
4\.1 What is `data.table`?
--------------------------
From the `data.table` wiki
> It is a high\-performance version of base R’s `data.frame` with syntax and feature enhancements for ease of use, convenience and programming speed.
Its syntax is designed to be concise and consistent. It’s somewhat similar to base R, but arguably less intuitive than `tidyverse`. We, and many others, would say that `data.table` is one of the most underrated packages out there.
If you’re familiar with SQL, then working with a `data.table` (DT) is conceptually similar to querying.
```
DT[i, j, by]
R: i j by
SQL: where | order by select | update group by
```
This should be read as: take `DT`, subset (or order) rows using `i`, then calculate `j`, grouped by `by`. A graphical depiction of this “grammar,” created by one of the developers of `data.table`, is shown in Figure [4\.1](wrangling-big-data.html#fig:dtvis).
Figure 4\.1: Source: <https://tinyurl.com/yyepwjpt>.
The `data.table` package needs to be installed and loaded throughout the workshop.
```
install.packages("data.table")
library(data.table)
```
Throughout the workshop, we will write DT code as:
```
DT[i,
j,
by]
```
That is, we will write the i, j, and by statements of a DT call on separate lines.
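To make the grammar concrete before we turn to the **pisa** data, here is a toy sketch using the built\-in `mtcars` data set, written in that multi\-line style:

```
library(data.table)   # already loaded above

cars <- as.data.table(mtcars)

cars[mpg > 20,                # i:  keep rows where mpg exceeds 20
     .(mean_hp = mean(hp),    # j:  summarize horsepower and weight
       mean_wt = mean(wt)),
     by = cyl]                # by: one output row per cylinder count
```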
### 4\.1\.1 Why use `data.table` over `tidyverse`?
If you’re familiar with R, then you might wonder why we are using DT and not the `tidyverse`. This has to do with memory management and speed.
```
#
# Benchmark #1 - Reading in data
#
system.time({read.csv("data/pisa2015.csv")})
system.time({fread("data/pisa2015.csv", na.strings = "")})
system.time({read_csv("data/pisa2015.csv")})
#
# Benchmark #2 - Calculating a conditional mean
#
#' Calculate proportion that strongly agreed to an item
#' @param x likert-type item as a numeric vector
getSA <- function(x, ...) mean(x == "Strongly agree", ...)
# read in data using fread()
pisa <- fread("data/pisa2015.csv", na.strings = "")
# calculate conditional means
# This is the proportion of students in each country that
# strongly agree that
# "I want top grades in most or all of my courses."
benchmark(
"baseR" = {
X <- aggregate(ST119Q01NA ~ CNTRYID, data = pisa, getSA, na.rm = TRUE)
},
"data.table" = {
X <- pisa[,
getSA(ST119Q01NA, na.rm = TRUE),
by = CNTRYID]
},
"tidyverse" = {
X <- pisa %>%
group_by(CNTRYID) %>%
summarize(getSA(ST119Q01NA, na.rm = TRUE))
},
replications = 1000)
```
Table [4\.1](wrangling-big-data.html#tab:benchresults) shows the results of this (relatively) unscientific minibenchmark. The first column is the method, the second column is elapsed time (in seconds) to read in the **pisa** data set (only once, though similar results/pattern is found if repeated), and the third column is the elapsed time (in seconds) to calculate the conditional mean 1000 times. We see that `data.table` is substantially faster than base R and the `tidyverse`.
Table 4\.1: Comparing base R, data.table, and tidyverse.
| Method | Reading in data | Conditional mean (1000 times) |
| --- | --- | --- |
| base R | 225\.5 | 196\.59 |
| data.table | 46\.8 | 27\.73 |
| tidyverse | 233\.7 | 159\.22 |
This extends to other data wrangling procedures (e.g., reshaping, recoding). Importantly, `tidyverse` is not designed for big data but for data science more generally. From Grolemund \& Wickham (2017\):
> “This book (R for Data Science) proudly focuses on small, in\-memory datasets. This is the right place to start because you cannot tackle big data unless you have experience with small data. The tools you learn in this book will easily handle hundreds of megabytes of data, and with a little care you can typically use them to work with 1\-2 Gb of data. If you are routinely working with larger data (10\-100 Gb, say), you should learn more about data.table. This book does not teach data.table because it has a very concise interface which makes it harder to learn since it offers fewer linguistic cues. But if you are working with large data, the performance payoff is worth the extra effort required to learn it.”
4\.2 Reading/writing data with `data.table`
-------------------------------------------
The `fread` function should always be used when reading in large data sets, and arguably whenever you read in a CSV file. As shown above, `read.csv` and `readr::read_csv` are painfully slow with big data.
Throughout the workshop we’ll be using the **pisa** data set. Therefore, we begin by reading in (or importing) the data set
```
pisa <- fread("data/pisa2015.csv", na.strings = "")
```
To see the **class** of the `pisa` object and how big it is in R:
```
class(pisa)
```
```
## [1] "data.table" "data.frame"
```
```
print(object.size(pisa), unit = "GB")
```
```
## 3.5 Gb
```
We see that objects read in with `fread` are of class `data.table` and `data.frame`, which means that methods for data.tables and data.frames will work on these objects. We also see that this data set takes up 3\.5 Gb, all of it held in memory (RAM) rather than stored on disk and allocated to memory dynamically (which is what SAS does).
If we wanted to write `pisa` back to a CSV to share with a colleague or to use in another program after some wrangling, then we should use the `fwrite` function instead of `write.csv`:
```
fwrite(pisa, file = "pisa2015.csv")
```
The following image (Figure [4\.2](wrangling-big-data.html#fig:dtcomp)), taken from Matt Dowle’s blog, shows the speed difference using common ways to save R objects and the differences in sizes of these files.
Figure 4\.2: Time to write an R object to a file. Source: <https://tinyurl.com/y366kvfx>.
In the event that you **just** wanted to read the data in using the `fread()` function but then wanted to work with a tibble (tidyverse) or a data.frame, you can convert the data set after it’s been read in:
```
pisa.tib <- tibble::as_tibble(pisa)
pisa.df <- as.data.frame(pisa)
```
However, I strongly recommend against this approach unless you have done some amount of subsetting. If your data set is large enough to benefit appreciably from `fread`, then you should try to use the `data.table` package.
For the workshop, we have created two smaller versions of the **pisa** data set for those of you with less beefy computers. The first is a file called `region6.csv` and it was created by
```
region6 <- subset(pisa, CNT %in% c("United States", "Canada", "Mexico",
"B-S-J-G (China)", "Japan", "Korea",
"Germany", "Italy", "France", "Brazil",
"Colombia", "Uruguay", "Australia",
"New Zealand", "Jordan", "Israel", "Lebanon"))
fwrite(region6, file = "region6.csv")
```
These are the 6 regions that will be covered during data visualization and can be used for the exercises and labs. The other file, intended for even less powerful computers, is a random sample of one country from each region and can also be used.
```
random6 <- subset(pisa, CNT %in% c("Mexico", "Uruguay", "Japan",
"Germany", "New Zealand", "Lebanon"))
fwrite(random6, file = "random6.csv")
```
### 4\.2\.1 Exercises
1. Read in the **pisa** data set. Either the full data set (recommended to have \> 8 Gb of RAM) or one of the smaller data sets.
4\.3 Using the i in `data.table`
--------------------------------
One of the first things we need to do when wrangling data is subsetting. Subsetting with `data.table` is very similar to base R but not identical. For example, if we wanted to subset all the students from Mexico who are currently taking Physics, i.e., they checked the item “Which course did you attend? Physics: This year” (ST063Q01NA), we would do the following:
```
pisa[CNTRYID == "Mexico" & ST063Q01NA == "Checked"]
# or (identical to base R)
subset(pisa, CNTRYID == "Mexico" & ST063Q01NA == "Checked")
```
Note that with `data.table` we do not need to use the `$` operator to access a variable in a `data.table` object. This is one improvement to the syntax of a `data.frame`.
Typing the name of a `data.table` won’t print all the rows by default like a `data.frame`. Instead it prints just the first and last 5 rows.
```
pisa
```
This is extremely helpful because R usually defaults to printing an entire object, which has the negative consequence of endless output when we type the name of a very large object.
Because we have 921 variables, `data.table` will still truncate this output. If we want to view just rows 10 through 25:
```
pisa[10:25]
```
However, with this many columns it is useless to print all of them. Instead, we should focus on examining just the columns we’re interested in; we will see how to do this when we examine the `j` operator.
Often when data wrangling we would like to perform multiple steps without needing to create intermediate variables. This is known as **chaining**. Chaining can be done in `data.table` via
```
DT[ ...
][ ...
][ ...
]
```
For example, if we wanted to just see rows 17 through 20 after we’ve done previous subset, we can chain together these commands:
```
pisa[CNTRYID == "Mexico" & ST063Q01NA == "Checked"
][17:20]
```
When we’re wrangling data, it’s common and quite helpful to reorder rows. This can be done using the `order()` function. First, we print the first six elements of CNTRYID using the default ordering in the **pisa** data. Then we reorder the data by country name in descending order and print the first six elements again using chaining.
```
head(pisa$CNTRYID)
```
```
## [1] "Albania" "Albania" "Albania" "Albania" "Albania" "Albania"
```
```
pisa[order(CNTRYID, decreasing = TRUE)
][,
head(CNTRYID)]
```
```
## [1] "Vietnam" "Vietnam" "Vietnam" "Vietnam" "Vietnam" "Vietnam"
```
### 4\.3\.1 Exercises
1. Subset all the Female students (ST004D01T) in Germany
2. How many female students are there in Germany?
3. The `.N` function returns the length of a vector/number of rows. Use chaining with the `.N` function to answer Exercise 2\.
4\.4 Using the j in `data.table`
--------------------------------
Using j we can select columns, summarize variables by performing actions on the variables, and create new variables. If we wanted to just select the country identifier:
```
pisa[,
CNTRYID]
```
However, this returns a vector not a `data.table`. If we wanted instead to return a `data.table`:
```
pisa[,
list(CNTRYID)]
```
```
## CNTRYID
## 1: Albania
## 2: Albania
## 3: Albania
## 4: Albania
## 5: Albania
## ---
## 519330: Argentina (Ciudad Autónoma de Buenos)
## 519331: Argentina (Ciudad Autónoma de Buenos)
## 519332: Argentina (Ciudad Autónoma de Buenos)
## 519333: Argentina (Ciudad Autónoma de Buenos)
## 519334: Argentina (Ciudad Autónoma de Buenos)
```
```
pisa[,
.(CNTRYID)]
```
```
## CNTRYID
## 1: Albania
## 2: Albania
## 3: Albania
## 4: Albania
## 5: Albania
## ---
## 519330: Argentina (Ciudad Autónoma de Buenos)
## 519331: Argentina (Ciudad Autónoma de Buenos)
## 519332: Argentina (Ciudad Autónoma de Buenos)
## 519333: Argentina (Ciudad Autónoma de Buenos)
## 519334: Argentina (Ciudad Autónoma de Buenos)
```
The `.()` is `data.table` shorthand for `list()`. To subset more than one variable, we can just add another variable within the `.()`. For example, if we also wanted to select the science self\-efficacy scale (SCIEEFF) as well, we do the following:
```
pisa[,
.(CNTRYID, SCIEEFF)]
```
```
## CNTRYID SCIEEFF
## 1: Albania NA
## 2: Albania NA
## 3: Albania NA
## 4: Albania NA
## 5: Albania NA
## ---
## 519330: Argentina (Ciudad Autónoma de Buenos) -0.8799
## 519331: Argentina (Ciudad Autónoma de Buenos) 0.9802
## 519332: Argentina (Ciudad Autónoma de Buenos) -0.5696
## 519333: Argentina (Ciudad Autónoma de Buenos) -0.7065
## 519334: Argentina (Ciudad Autónoma de Buenos) -0.3609
```
If we wanted to see how many students took physics in Japan and Mexico, we would do the following:
```
pisa[CNTRYID %in% c("Mexico", "Japan"),
table(ST063Q01NA)]
```
```
## ST063Q01NA
## Checked Not checked
## 4283 9762
```
Because `data.table` treats string variables as character variables by default, their values are printed in alphabetical order, which in this case is fine but is often unhelpful. For example, consider how students in Mexico and Japan responded to “I get very tense when I study for a test.”
```
pisa[CNTRYID %in% c("Mexico", "Japan"),
table(ST118Q04NA)]
```
```
## ST118Q04NA
## Agree Disagree Strongly agree Strongly disagree
## 4074 5313 1760 2904
```
We see that the output is unhelpful because the response categories are out of order. Instead, we should convert the character vector into a factor. Using chaining, we create an intermediate variable called `tense`, which we won’t add to our data set.
```
pisa[CNTRYID %in% c("Mexico", "Japan"),
.(tense = factor(ST118Q04NA, levels = c("Strongly disagree", "Disagree", "Agree", "Strongly agree")))
][,
table(tense)
]
```
```
## tense
## Strongly disagree Disagree Agree Strongly agree
## 2904 5313 4074 1760
```
A quick digression, in case you were wondering why base R reads strings in as factors rather than characters by default (whereas `data.table` and `readr::read_csv` read them in as characters): factors can take up considerably less memory.
```
pisa[, .(tense.as.char = ST118Q04NA,
tense.as.fac = factor(ST118Q04NA, levels = c("Strongly disagree", "Disagree", "Agree", "Strongly agree")))
][,
.(character = object.size(tense.as.char),
factor = object.size(tense.as.fac))
]
```
```
## character factor
## 1: 4154984 bytes 2078064 bytes
```
Returning to the science self\-efficacy scale, we can request summary information for just these two countries:
```
pisa[CNTRYID %in% c("Mexico","Japan"),
.(xbar = mean(SCIEEFF, na.rm = T),
sigma = sd(SCIEEFF, na.rm = T),
minimum = min(SCIEEFF, na.rm = T),
med = median(SCIEEFF, na.rm = T),
maximum = max(SCIEEFF, na.rm = T))]
```
```
## xbar sigma minimum med maximum
## 1: -0.08694 1.216 -3.756 -0.0541 3.277
```
We can create a quick plot this way, too. For example, if we wanted to create a scatter plot of the science self\-efficacy scale against the enjoyment of science scale (JOYSCIE) for just these two countries and print the mean of the enjoyment of science scale, we can do the following:
```
pisa[CNTRYID %in% c("Mexico","Japan"),
.(plot(y = SCIEEFF, x = JOYSCIE,
col = rgb(red = 0, green = 0, blue = 0, alpha = 0.3)),
xbar.joyscie = mean(JOYSCIE, na.rm = T))]
```
```
## xbar.joyscie
## 1: 0.0614
```
This example is kind of silly but it shows that j is incredibly flexible and that we can string together a bunch of commands using j without even needing to do chaining.
Let’s say we need to recode “After leaving school did you: Eat dinner” from a character variable to a numeric variable. We can do this with a series of if else statements
```
table(pisa$ST078Q01NA)
```
```
##
## No Yes
## 23617 373131
```
```
pisa[,
"eat.dinner" := sapply(ST078Q01NA,
function(x) {
if (is.na(x)) NA
else if (x == "No") 0L
else if (x == "Yes") 1L
})
][,
table(eat.dinner)
]
```
```
## eat.dinner
## 0 1
## 23617 373131
```
In this example, we created a new variable called `eat.dinner` using the `:=` operator. The `:=` syntax adds the variable directly to the DT, i.e., by reference. We also specified the `L` suffix to ensure the variable was treated as an integer and not a double, which uses less memory.
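One behavior worth noting (a small sketch with a toy table, not the **pisa** data): `:=` modifies a data.table *by reference*, so assigning a data.table to a new name does not create a copy; use `copy()` when you need an independent copy.

```
library(data.table)

dt1 <- data.table(x = 1:3)
dt2 <- dt1          # dt2 is another name for the same table, not a copy
dt2[, y := x * 2]   # adding y by reference ...
names(dt1)          # ... means dt1 now has y as well: "x" "y"

dt3 <- copy(dt1)    # an explicit, independent copy
dt3[, z := 0L]
names(dt1)          # unchanged by dt3: still "x" "y"
```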
We should create a function to do this recoding as there are lots of dichotomous items in the **pisa** data set.
```
#' Convert a dichotomous item (yes/no) to numeric scoring
#' @param x a character vector containing "Yes" and "No" responses.
bin.to.num <- function(x){
if (is.na(x)) NA
else if (x == "Yes") 1L
else if (x == "No") 0L
}
```
Then use this function to create some variables as well as recoding gender to give it a more intuitive variable name.
```
pisa[, `:=`
(female = ifelse(ST004D01T == "Female", 1, 0),
sex = ST004D01T,
# At my house we have ...
desk = sapply(ST011Q01TA, bin.to.num),
own.room = sapply(ST011Q02TA, bin.to.num),
quiet.study = sapply(ST011Q03TA, bin.to.num),
computer = sapply(ST011Q04TA, bin.to.num),
software = sapply(ST011Q05TA, bin.to.num),
internet = sapply(ST011Q06TA, bin.to.num),
lit = sapply(ST011Q07TA, bin.to.num),
poetry = sapply(ST011Q08TA, bin.to.num),
art = sapply(ST011Q09TA, bin.to.num),
book.sch = sapply(ST011Q10TA, bin.to.num),
tech.book = sapply(ST011Q11TA, bin.to.num),
dict = sapply(ST011Q12TA, bin.to.num),
art.book = sapply(ST011Q16NA, bin.to.num))]
```
Similarly, we can create new variables by combining pre\-existing ones. In the later data visualization section, we will use the following variables, so we will create them now. The `rowMeans` function needs a data.frame (or matrix), so we select the ten plausible\-value columns from the **pisa** data set and pass them to it; this is what the inner bracket subsetting is doing.
```
pisa[, `:=`
(math = rowMeans(pisa[, c(paste0("PV", 1:10, "MATH"))], na.rm = TRUE),
reading = rowMeans(pisa[, c(paste0("PV", 1:10, "READ"))], na.rm = TRUE),
science = rowMeans(pisa[, c(paste0("PV", 1:10, "SCIE"))], na.rm = TRUE))]
```
### 4\.4\.1 Exercises
1. The computer and software variables that were created above ask a student whether they had a computer in their home that they can use for school work (computer) and whether they had educational software in their home (software). Find the proportion of students in Germany and Uruguay that have a computer in their home or have educational software.
2. For just female students, find the proportion of students who have their own room (own.room) or a quiet place to study (quiet.study).
4\.5 Summarizing using the by in `data.table`
---------------------------------------------
With the by argument, we can now get conditional summaries without the need to subset. Suppose we want to know the proportion of students in each country that have their own room at home:
```
pisa[,
.(mean(own.room, na.rm = TRUE)),
by = .(CNTRYID)
][1:6,
]
```
```
## CNTRYID V1
## 1: Albania NaN
## 2: Algeria 0.5188
## 3: Australia 0.9216
## 4: Austria 0.9054
## 5: Belgium 0.9154
## 6: Brazil 0.7498
```
Again, we can reorder this using chaining:
```
pisa[,
.(own.room = mean(own.room, na.rm = TRUE)),
by = .(country = CNTRYID)
][order(own.room, decreasing = TRUE)
][1:6
]
```
```
## country own.room
## 1: Iceland 0.9863
## 2: Netherlands 0.9750
## 3: Norway 0.9738
## 4: Sweden 0.9559
## 5: Finland 0.9441
## 6: Germany 0.9379
```
What if we want to compare just Canada and Iceland on the proportion of students that have books of poetry at home (poetry) and their mean on the enjoyment of science scale, broken down by student’s biological sex?
```
pisa[CNTRYID %in% c("Canada", "Iceland"),
.(poetry = mean(poetry, na.rm = TRUE),
enjoy = mean(JOYSCIE, na.rm = TRUE)),
by = .(country = CNTRYID, sex = sex)]
```
```
## country sex poetry enjoy
## 1: Canada Female 0.3632 0.29636
## 2: Canada Male 0.3124 0.40950
## 3: Iceland Female 0.7281 0.03584
## 4: Iceland Male 0.7011 0.30316
```
We see a strong country effect on poetry at home: more than 70% of Icelandic students report poetry books at home, compared with just above 30% of Canadian students. We also see that Canadian students enjoy science more than Icelandic students and that, overall, male students enjoy science more than female students.
Let’s examine books of poetry at home by countries and sort it in descending order.
```
pisa[,
.(poetry = mean(poetry, na.rm = TRUE)),
by = .(country = CNTRYID)
][order(poetry, decreasing = TRUE)
][1:6
]
```
```
## country poetry
## 1: Kosovo 0.8353
## 2: Russian Federation 0.8046
## 3: Romania 0.8019
## 4: Georgia 0.7496
## 5: B-S-J-G (China) 0.7442
## 6: Estonia 0.7423
```
Iceland is in the top 10, while Canada ranks 59th.
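One way to check those rankings yourself (a sketch that chains the same summary as above, orders it, and numbers the rows with `.I`):

```
poetry_rank <- pisa[,
  .(poetry = mean(poetry, na.rm = TRUE)),
  by = .(country = CNTRYID)
][order(poetry, decreasing = TRUE)
][,
  .(rank = .I, country, poetry)
]

poetry_rank[country %in% c("Iceland", "Canada")]
```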
We can also write more complex functions and provide these to `data.table`. For example, if we wanted to predict a student’s score on the science self\-efficacy scale from their score on the enjoyment of science scale and their sex for just the G7 countries (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States), we can fit a multiple regression model within each country and return the intercept and slope terms.
```
get.params <- function(cntry){
mod <- lm(SCIEEFF ~ JOYSCIE + sex, cntry)
est.params <- list(int = coef(mod)[[1]], enjoy.slope = coef(mod)[[2]], sex.slope = coef(mod)[[3]])
return(est.params)
}
g7.params <- pisa[CNTRYID %in% c("Canada", "France", "Germany", "Italy",
"Japan", "United Kingdom", "United States"),
get.params(.SD),
by = .(CNTRYID)]
g7.params
```
```
## CNTRYID int enjoy.slope sex.slope
## 1: Canada 0.009803357 0.4370945 0.21489577
## 2: France -0.208698984 0.4760903 0.17743126
## 3: Germany -0.019150031 0.4316565 0.17971821
## 4: Italy -0.030880063 0.3309990 0.18831666
## 5: Japan -0.353806055 0.3914385 0.04912039
## 6: United Kingdom 0.009711647 0.5182592 0.18981965
## 7: United States 0.096920721 0.3907848 0.15022008
```
We see a fair bit of variability in these estimated parameters.
### 4\.5\.1 Exercises
1. Calculate the proportion of students who have art in their home (art) and the average age (AGE) of the students by gender.
2. Within a by argument you can discretize a variable to create a grouping variable. Perform a median split for age within the by argument and assess whether there are age differences associated with having your own room (own.room) or a desk (desk).
4\.6 Reshaping data
-------------------
The `data.table` package provides some very fast methods to reshape data from wide (the current format) to long format. In long format, a single test taker will correspond to multiple rows of data. Some software and `R` packages require data to be in long format (e.g., `lme4` and `nlme`).
Let’s begin by creating a student ID and then subsetting this ID and the at\-home variables:
```
pisa$id <- 1:nrow(pisa)
athome <- subset(pisa, select = c(id, desk:art.book))
```
To transform the data to long format we *melt* the data.
```
athome.l <- melt(athome,
id.vars = "id",
measure.vars = c("desk", "own.room", "quiet.study", "lit",
"poetry", "art", "book.sch", "tech.book",
"dict", "art.book"))
athome.l
```
```
## id variable value
## 1: 1 desk NA
## 2: 2 desk NA
## 3: 3 desk NA
## 4: 4 desk NA
## 5: 5 desk NA
## ---
## 5193336: 519330 art.book 1
## 5193337: 519331 art.book 0
## 5193338: 519332 art.book 1
## 5193339: 519333 art.book 0
## 5193340: 519334 art.book 0
```
We could have also allowed `melt()` to guess the format:
```
athome.guess <- melt(athome)
```
```
## Warning in melt.data.table(athome): id.vars and measure.vars are internally
## guessed when both are 'NULL'. All non-numeric/integer/logical type columns are
## considered id.vars, which in this case are columns []. Consider providing at
## least one of 'id' or 'measure' vars in future.
```
```
athome.guess
```
```
## variable value
## 1: id 1
## 2: id 2
## 3: id 3
## 4: id 4
## 5: id 5
## ---
## 7270672: art.book 1
## 7270673: art.book 0
## 7270674: art.book 1
## 7270675: art.book 0
## 7270676: art.book 0
```
It guessed incorrectly. If id had been stored as a character vector, then it would have guessed correctly this time. In general, however, you should not allow it to guess the id and measure variables.
To go back to wide format we use the `dcast()` function.
```
athome.w <- dcast(athome.l,
id ~ variable)
```
Unlike other reshaping packages, `data.table` can also handle reshaping multiple outcome variables at once, as shown in the sketch below. More about reshaping with `data.table` is available [here](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-reshape.html).
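For instance, the ten math and ten reading plausible values used earlier can be melted in one call (a sketch; `patterns()` is the `data.table` helper for selecting sets of measure columns by regular expression):

```
# one row per student per plausible-value draw, with separate
# value columns for the math and reading plausible values
pv.long <- melt(pisa,
                id.vars = "id",
                measure.vars = patterns("^PV[0-9]+MATH$", "^PV[0-9]+READ$"),
                value.name = c("math_pv", "read_pv"))
pv.long[1:5]
```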
4\.7 The `sparklyr` package
---------------------------
The `sparklyr` package provides an R interface to Apache Spark and a complete `dplyr` backend. Apache Spark “is a unified analytics engine for big data processing, with built\-in modules for streaming, SQL, machine learning and graph processing.” Apache Spark can also be interfaced using the `sparkR` package provided by Apache. See [here](https://spark.apache.org/docs/2.4.0/) and [here](https://spark.apache.org/docs/2.4.0/api/R/index.html) for more details.
To use Apache Spark, you will need the Java 8 JDK installed; it can be downloaded [here](https://www.oracle.com/technetwork/java/jdk8-downloads-2133151.html). To begin, you need to install `sparklyr` and `dplyr`.
```
install.packages("sparklyr")
install.packages("dplyr")
library("sparklyr")
library("dplyr")
```
We then need to install Spark, which we can do from R.
```
spark_install()
```
Next, we need to setup a connection with Spark and we’ll be connecting to a local install of Spark.
```
sc <- spark_connect(master = "local")
```
Then we need to copy the **pisa** data set to the Spark cluster. However, with this large of a data set, this is a bad idea. We will run into memory issues during the copying process. So, we’ll first subset the data before we do this.
```
pisa_sub <- subset(pisa, CNTRYID %in% c("Canada", "France", "Germany",
"Italy", "Japan", "United Kingdom",
"United States"),
select = c("DISCLISCI", "TEACHSUP", "IBTEACH", "TDTEACH",
"ENVAWARE", "JOYSCIE", "INTBRSCI", "INSTSCIE",
"SCIEEFF", "EPIST", "SCIEACT", "BSMJ", "MISCED",
"FISCED", "OUTHOURS", "SMINS", "TMINS",
"BELONG", "ANXTEST", "MOTIVAT", "COOPERATE",
"PERFEED", "unfairteacher", "HEDRES", "HOMEPOS",
"ICTRES", "WEALTH", "ESCS", "math", "reading",
"CNTRYID", "sex"))
```
We will use the selected variables in the labs and a description of these variables can be seen below.
Now the data are ready to be copied into Spark.
```
pisa_tbl <- copy_to(sc, pisa_sub, overwrite = TRUE)
```
In tidyverse, you can use the `%>%` to chain together commands or to pass data to functions. With `sparklyr`, we can use the `filter` function instead of subset. For example, if we just want to see the female students’ scores on these scales for Germany, we would do the following:
```
pisa_tbl %>%
filter(CNTRYID == "Germany" & sex == "Female")
```
```
## # Source: spark<?> [?? x 32]
## DISCLISCI TEACHSUP IBTEACH TDTEACH ENVAWARE JOYSCIE INTBRSCI INSTSCIE SCIEEFF
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 -0.234 -0.804 -0.608 -0.867 -0.536 -0.821 -0.550 NA NA
## 2 0.283 0.488 -0.157 -0.685 -0.805 -2.12 -1.13 -1.93 -2.83
## 3 0.700 1.45 0.988 0.525 0.171 -1.72 -0.225 -0.718 -2.09
## 4 0.0039 0.568 0.209 -0.0742 -0.234 -0.821 -0.0831 -0.826 -0.426
## 5 0.763 -0.450 0.535 -0.0057 -0.479 0.613 0.198 -0.304 -0.713
## 6 0.660 -0.461 0.647 0.450 -0.706 -0.631 -0.551 -1.93 -0.596
## 7 0.288 -1.82 0.430 -1.32 -0.0217 -0.576 -0.566 -0.670 0.748
## 8 0.835 -1.07 0.89 -0.610 0.256 2.16 0.341 1.33 -0.0804
## 9 1.32 -0.246 0.257 -0.867 -0.685 -0.152 -0.509 -0.778 -1.42
## 10 1.32 -1.29 0.308 -0.790 -0.385 -1.61 -0.399 -1.93 -1.05
## # ... with more rows, and 23 more variables: EPIST <dbl>, SCIEACT <dbl>,
## # BSMJ <int>, MISCED <chr>, FISCED <chr>, OUTHOURS <int>, SMINS <int>,
## # TMINS <int>, BELONG <dbl>, ANXTEST <dbl>, MOTIVAT <dbl>, COOPERATE <dbl>,
## # PERFEED <dbl>, unfairteacher <int>, HEDRES <dbl>, HOMEPOS <dbl>,
## # ICTRES <dbl>, WEALTH <dbl>, ESCS <dbl>, math <dbl>, reading <dbl>,
## # CNTRYID <chr>, sex <chr>
```
You’ll notice that at the top it says `# Source: spark<?>`, which indicates that the result is still a Spark table rather than a local data frame.
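Because the source is Spark, results stay in the cluster until you explicitly pull them into R. The `collect()` function (from `dplyr`) does that, and it is typically the last step once a result has been reduced to a manageable size. A minimal sketch:

```
german_females <- pisa_tbl %>%
  filter(CNTRYID == "Germany" & sex == "Female") %>%
  select(CNTRYID, sex, math, reading) %>%
  collect()            # now a regular in-memory tibble

class(german_females)
```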
If we wanted to calculate the average disciplinary climate in science classes (DISCLISCI) by country and by sex, and have the result ordered by country and then sex, we can do the following:
```
pisa_tbl %>%
group_by(CNTRYID, sex) %>%
summarize(ave_disclip = mean(DISCLISCI, na.rm = TRUE)) %>%
arrange(CNTRYID, sex)
```
```
## # Source: spark<?> [?? x 3]
## # Groups: CNTRYID
## # Ordered by: CNTRYID, sex
## CNTRYID sex ave_disclip
## <chr> <chr> <dbl>
## 1 Canada Female 0.0110
## 2 Canada Male -0.0205
## 3 France Female -0.236
## 4 France Male -0.297
## 5 Germany Female 0.0915
## 6 Germany Male 0.0162
## 7 Italy Female 0.0708
## 8 Italy Male -0.137
## 9 Japan Female 0.916
## 10 Japan Male 0.788
## # ... with more rows
```
We can also create new variables using the `mutate` function. If we want to get a measure of home affluence, we could add home educational resources (HEDRES) and home possessions (HOMEPOS)
```
pisa_tbl %>%
mutate(totl_home = HEDRES + HOMEPOS) %>%
group_by(CNTRYID) %>%
summarize(xbar = mean(totl_home, na.rm = TRUE))
```
```
## # Source: spark<?> [?? x 2]
## CNTRYID xbar
## <chr> <dbl>
## 1 United Kingdom 0.370
## 2 United States 0.113
## 3 Italy 0.324
## 4 Japan -1.30
## 5 France -0.332
## 6 Germany 0.279
## 7 Canada 0.430
```
On my computer, the Spark code is slightly faster than `data.table`, but not by much. The real power of using Spark is that we can use its machine learning functions. However, if you’re familiar with `tidyverse` (`dplyr`) syntax, then `sparklyr` is a package that is worth investigating for data wrangling with big data sets.
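As a small taste of those machine learning functions (a sketch, not part of the workshop labs), `ml_linear_regression()` fits a regression model inside Spark on the copied table; here we predict students’ math scores from their index of economic, social and cultural status (ESCS), dropping missing values first.

```
fit_spark <- pisa_tbl %>%
  filter(!is.na(math), !is.na(ESCS)) %>%   # drop rows with missing values before fitting
  ml_linear_regression(math ~ ESCS)

summary(fit_spark)
```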
4\.8 Lab
--------
This afternoon when we discuss supervised learning, we’ll ask you to develop some models to predict the response to the question “Do you expect your child will go into a science\-related career?” (PA032Q03TA).
1. Recode this variable so that a “Yes” is 1 and a “No” is a \-1 and save the variable as `sci_car`.
2. Calculate descriptives for this variable by sex and country. Specifically, the proportion of test takers whose parents said “Yes” or 1\.
After you’ve done this, spend some time investigating the following variables
| Label | Description |
| --- | --- |
| DISCLISCI | Disciplinary climate in science classes (WLE) |
| TEACHSUP | Teacher support in a science classes of students choice (WLE) |
| IBTEACH | Inquiry\-based science teaching and learning practices (WLE) |
| TDTEACH | Teacher\-directed science instruction (WLE) |
| ENVAWARE | Environmental Awareness (WLE) |
| JOYSCIE | Enjoyment of science (WLE) |
| INTBRSCI | Interest in broad science topics (WLE) |
| INSTSCIE | Instrumental motivation (WLE) |
| SCIEEFF | Science self\-efficacy (WLE) |
| EPIST | Epistemological beliefs (WLE) |
| SCIEACT | Index science activities (WLE) |
| BSMJ | Student’s expected occupational status (SEI) |
| MISCED | Mother’s Education (ISCED) |
| FISCED | Father’s Education (ISCED) |
| OUTHOURS | Out\-of\-School Study Time per week (Sum) |
| SMINS | Learning time (minutes per week) \- science |
| TMINS | Learning time (minutes per week) \- in total |
| BELONG | Subjective well\-being: Sense of Belonging to School (WLE) |
| ANXTEST | Personality: Test Anxiety (WLE) |
| MOTIVAT | Student Attitudes, Preferences and Self\-related beliefs: Achieving motivation (WLE) |
| COOPERATE | Collaboration and teamwork dispositions: Enjoy cooperation (WLE) |
| PERFEED | Perceived Feedback (WLE) |
| unfairteacher | Teacher Fairness (Sum) |
| HEDRES | Home educational resources (WLE) |
| HOMEPOS | Home possessions (WLE) |
| ICTRES | ICT Resources (WLE) |
| WEALTH | Family wealth (WLE) |
| ESCS | Index of economic, social and cultural status (WLE) |
| math | Students’ math score in PISA 2015 |
| reading | Students’ reading score in PISA 2015 |
and then do the following using `data.table` and/or `sparklyr`:
3. Means and standard deviations (`sd`) for the variables that you think will be most predictive of `sci_car`.
4. Calculate these same descriptives by groups (by `sci_car` and by `sex`).
5. Calculate correlations between these variables and `sci_car`.
6. Create new variables
* Discretize the math and reading variables using the OECD means (490 for math and 493 for reading\) and code them as 1 (at or above the mean) and \-1 (below the mean), but do it in the `data.table` way without using the `$` operator.
* Calculate the correlation between these variables and the list of variables above.
7. Chain together a set of operations
* For example, create an intermediate variable that is the average of JOYSCIE and INTBRSCI, and then calculate its mean by country and by `sci_car` through chaining.
8. Transform variables, specifically recode MISCED and FISCED from characters to numeric variables.
9. Examine other variables in the **pisa** data set that you think might be predictive of `PA032Q03TA`.
4\.1 What is `data.table`?
--------------------------
From the `data.table` wiki
> It is a high\-performance version of base R’s `data.frame` with syntax and feature enhancements for ease of use, convenience and programming speed.
Its syntax is designed to be concise and consistent. It’s somewhat similar to base R, but arguably less intuitive than `tidyverse`. We, and many others, would say that `data.table` is one of the most underrated package out there.
If you’re familiar with SQL, then working with a `data.table` (DT) is conceptually similar to querying.
```
DT[i, j, by]
R: i j by
SQL: where | order by select | update group by
```
This should be read as take `DT`, subset ( or order) rows using `i`, then calculate `j`, and group by `by`. A graphical depiction of this “grammar,” created by one of the developers of `data.table`, is shown in Figure [4\.1](wrangling-big-data.html#fig:dtvis).
Figure 4\.1: Source: <https://tinyurl.com/yyepwjpt>.
The `data.table` package needs to be installed and loaded throughout the workshop.
```
install.packages("data.table")
library(data.table)
```
Throughout the workshop, we will write DT code as:
```
DT[i,
j,
by]
```
That is, we will use write separate lines for the i, j, and by DT statements.
### 4\.1\.1 Why use `data.table` over `tidyverse`?
If you’re familiar with R, then you might wonder why we are using DT and not `tidyverse`? This has to do with memory management and speed.
```
#
# Benchmark #1 - Reading in data
#
system.time({read.csv("data/pisa2015.csv")})
system.time({fread("data/pisa2015.csv", na.strings = "")})
system.time({read_csv("data/pisa2015.csv")})
#
# Benchmark #2 - Calculating a conditional mean
#
#' Calculate proportion that strongly agreed to an item
#' @param x likert-type item as a numeric vector
getSA <- function(x, ...) mean(x == "Strongly agree", ...)
# read in data using fread()
pisa <- fread("data/pisa2015.csv", na.strings = "")
# calculate conditional means
# This is the proportion of students in each country that
# strongly agree that
# "I want top grades in most or all of my courses."
benchmark(
"baseR" = {
X <- aggregate(ST119Q01NA ~ CNTRYID, data = pisa, getSA, na.rm = TRUE)
},
"data.table" = {
X <- pisa[,
getSA(ST119Q01NA, na.rm = TRUE),
by = CNTRYID]
},
"tidyverse" = {
X <- pisa %>%
group_by(CNTRYID) %>%
summarize(getSA(ST119Q01NA, na.rm = TRUE))
},
replications = 1000)
```
Table [4\.1](wrangling-big-data.html#tab:benchresults) shows the results of this (relatively) unscientific minibenchmark. The first column is the method, the second column is elapsed time (in seconds) to read in the **pisa** data set (only once, though similar results/pattern is found if repeated), and the third column is the elapsed time (in seconds) to calculate the conditional mean 1000 times. We see that `data.table` is substantially faster than base R and the `tidyverse`.
Table 4\.1: Comparing base R, data.table, and tidyverse.
| Method | Reading in data | Conditional mean (1000 times) |
| --- | --- | --- |
| base R | 225\.5 | 196\.59 |
| data.table | 46\.8 | 27\.73 |
| tidyverse | 233\.7 | 159\.22 |
This extends to other data wrangling procedures (e.g., reshaping, recoding). Importantly, `tidyverse` is not designed for big data but instead for data science, more generally. From Grolemund \& Wickham (2017\)
> “This book (R for Data Science) proudly focuses on small, in\-memory datasets. This is the right place to start because you cannot tackle big data unless you have experience with small data. The tools you learn in this book will easily handle hundreds of megabytes of data, and with a little care you can typically use them to work with 1\-2 Gb of data. If you are routinely working with larger data (10\-100 Gb, say), you should learn more about data.table. This book does not teach data.table because it has a very concise interface which makes it harder to learn since it offers fewer linguistic cues. But if you are working with large data, the performance payoff is worth the extra effort required to learn it.”
### 4\.1\.1 Why use `data.table` over `tidyverse`?
If you’re familiar with R, then you might wonder why we are using DT and not `tidyverse`? This has to do with memory management and speed.
```
#
# Benchmark #1 - Reading in data
#
system.time({read.csv("data/pisa2015.csv")})
system.time({fread("data/pisa2015.csv", na.strings = "")})
system.time({read_csv("data/pisa2015.csv")})
#
# Benchmark #2 - Calculating a conditional mean
#
#' Calculate proportion that strongly agreed to an item
#' @param x likert-type item as a numeric vector
getSA <- function(x, ...) mean(x == "Strongly agree", ...)
# read in data using fread()
pisa <- fread("data/pisa2015.csv", na.strings = "")
# calculate conditional means
# This is the proportion of students in each country that
# strongly agree that
# "I want top grades in most or all of my courses."
benchmark(
"baseR" = {
X <- aggregate(ST119Q01NA ~ CNTRYID, data = pisa, getSA, na.rm = TRUE)
},
"data.table" = {
X <- pisa[,
getSA(ST119Q01NA, na.rm = TRUE),
by = CNTRYID]
},
"tidyverse" = {
X <- pisa %>%
group_by(CNTRYID) %>%
summarize(getSA(ST119Q01NA, na.rm = TRUE))
},
replications = 1000)
```
Table [4\.1](wrangling-big-data.html#tab:benchresults) shows the results of this (relatively) unscientific minibenchmark. The first column is the method, the second column is elapsed time (in seconds) to read in the **pisa** data set (only once, though similar results/pattern is found if repeated), and the third column is the elapsed time (in seconds) to calculate the conditional mean 1000 times. We see that `data.table` is substantially faster than base R and the `tidyverse`.
Table 4\.1: Comparing base R, data.table, and tidyverse.
| Method | Reading in data | Conditional mean (1000 times) |
| --- | --- | --- |
| base R | 225\.5 | 196\.59 |
| data.table | 46\.8 | 27\.73 |
| tidyverse | 233\.7 | 159\.22 |
This extends to other data wrangling procedures (e.g., reshaping, recoding). Importantly, `tidyverse` is not designed for big data but instead for data science, more generally. From Grolemund \& Wickham (2017\)
> “This book (R for Data Science) proudly focuses on small, in\-memory datasets. This is the right place to start because you cannot tackle big data unless you have experience with small data. The tools you learn in this book will easily handle hundreds of megabytes of data, and with a little care you can typically use them to work with 1\-2 Gb of data. If you are routinely working with larger data (10\-100 Gb, say), you should learn more about data.table. This book does not teach data.table because it has a very concise interface which makes it harder to learn since it offers fewer linguistic cues. But if you are working with large data, the performance payoff is worth the extra effort required to learn it.”
4\.2 Reading/writing data with `data.table`
-------------------------------------------
The `fread` function should always be used when reading in large data sets and arguably when ever you read in a CSV file. As shown above, `read.csv` and `readr::read_csv` are painfully slow with big data.
Throughout the workshop we’ll be using the **pisa** data set. Therefore, we begin by reading in (or importing) the data set
```
pisa <- fread("data/pisa2015.csv", na.strings = "")
```
To see the **class** the object `pisa` is and how big it is in R
```
class(pisa)
```
```
## [1] "data.table" "data.frame"
```
```
print(object.size(pisa), unit = "GB")
```
```
## 3.5 Gb
```
We see that objects that are read in with `fread` are of class `data.table` and `data.frame`. That means that methods for data.tables and data.frames will work on these objects. We also see this data set uses up 3\.5 Gb of memory and this is all in the memory (RAM) not on the disk and allocated to memory dynamically (this is what SAS does).
If we wanted to write `pisa` back to a CSV to share with a colleague or to use in another program after some wrangling, then we should use the `fwrite` function instead of `write.csv`:
```
fwrite(pisa, file = "pisa2015.csv")
```
The following image (Figure [4\.2](wrangling-big-data.html#fig:dtcomp)), taken from Matt Dowle’s blog, shows the speed difference using common ways to save R objects and the differences in sizes of these files.
Figure 4\.2: Time to write an R object to a file. Source: <https://tinyurl.com/y366kvfx>.
In the event that you did **just** want to read the data in using the `fread()` function but then wanted to work with a tibble (tidyverse) or a data.frame, you can convert the data set after its been read in:
```
pisa.tib <- tibble::as_tibble(pisa)
pisa.df <- as.data.frame(pisa)
```
However, I strongly recommend against this approach unless you have done some amount of subsetting. If your data set is large enough to benefit appreciably by `fread` then you should try and use the `data.table` package.
For the workshop, we have created two smaller versions of the **pisa** data set for those of you with less beefy computers. The first is a file called `region6.csv` and it was created by
```
region6 <- subset(pisa, CNT %in% c("United States", "Canada", "Mexico",
"B-S-J-G (China)", "Japan", "Korea",
"Germany", "Italy", "France", "Brazil",
"Colombia", "Uruguay", "Australia",
"New Zealand", "Jordan", "Israel", "Lebanon"))
fwrite(region6, file = "region6.csv")
```
These are the 6 regions that will be covered during data visualization and can be used for the exercises and labs. The other file is a random sample of one country from each regions for even less powerful computers, which can also be used.
```
random6 <- subset(pisa, CNT %in% c("Mexico", "Uruguay", "Japan",
"Germany", "New Zealand", "Lebanon"))
fwrite(random6, file = "random6.csv")
```
### 4\.2\.1 Exercises
1. Read in the **pisa** data set. Either the full data set (recommended to have \> 8 Gb of RAM) or one of the smaller data sets.
4\.3 Using the i in `data.table`
--------------------------------
One of the first things we need to do when data wrangling is subsetting. Subsetting with `data.table` is very similar to base R but not identical. For example, if we wanted to subset all the students from Mexico who are currently taking Physics, i.e., they checked the item “Which course did you attend? Physics: This year” (ST063Q01NA) we would do the following:
```
pisa[CNTRYID == "Mexico" & ST063Q01NA == "Checked"]
# or (identical to base R)
subset(pisa, CNTRYID == "Mexico" & ST063Q01NA == "Checked")
```
Note that with `data.table` we do not need to use the `$` operator to access a variable in a `data.table` object. This is one improvement over `data.frame` syntax.
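For comparison, the equivalent base R subset (a sketch, assuming the `pisa.df` data.frame copy created above) needs the `$` operator for every column reference:
```
# Base R data.frame subsetting repeats the object name with $
pisa.df[pisa.df$CNTRYID == "Mexico" & pisa.df$ST063Q01NA == "Checked", ]
```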
Unlike a `data.frame`, typing the name of a `data.table` won’t print all the rows by default; instead, it prints just the first and last 5 rows.
```
pisa
```
This is extremely helpful because R often defaults to printing an entire object, which leads to endless output if we type just the name of a very large object.
Because we have 921 variables, `data.table` will still truncate this output. If we want to view just rows 10 through 25:
```
pisa[10:25]
```
However, with this many columns it is useless to print all of them; instead, we should focus on examining just the columns we’re interested in. We will see how to do this when we examine the `j` operator.
Often when data wrangling we would like to perform multiple steps without needing to create intermediate variables. This is known as **chaining**. Chaining can be done in `data.table` via
```
DT[ ...
][ ...
][ ...
]
```
For example, if we wanted to just see rows 17 through 20 after we’ve done the previous subset, we can chain together these commands:
```
pisa[CNTRYID == "Mexico" & ST063Q01NA == "Checked"
][17:20]
```
When we’re wrangling data, it’s common and quite helpful to reorder rows. This can be done using the `order()` function. First, we print the first six elements of CNTRYID using the default ordering in the **pisa** data. Then we reorder the data by country name in descending order and print the first six elements again using chaining.
```
head(pisa$CNTRYID)
```
```
## [1] "Albania" "Albania" "Albania" "Albania" "Albania" "Albania"
```
```
pisa[order(CNTRYID, decreasing = TRUE)
][,
head(CNTRYID)]
```
```
## [1] "Vietnam" "Vietnam" "Vietnam" "Vietnam" "Vietnam" "Vietnam"
```
### 4\.3\.1 Exercises
1. Subset all the Female students (ST004D01T) in Germany
2. How many female students are there in Germany?
3. The `.N` function returns the length of a vector/number of rows. Use chaining with the `.N` function to answer Exercise 2\.
4\.4 Using the j in `data.table`
--------------------------------
Using j we can select columns, summarize variables by performing actions on the variables, and create new variables. If we wanted to just select the country identifier:
```
pisa[,
CNTRYID]
```
However, this returns a vector not a `data.table`. If we wanted instead to return a `data.table`:
```
pisa[,
list(CNTRYID)]
```
```
## CNTRYID
## 1: Albania
## 2: Albania
## 3: Albania
## 4: Albania
## 5: Albania
## ---
## 519330: Argentina (Ciudad Autónoma de Buenos)
## 519331: Argentina (Ciudad Autónoma de Buenos)
## 519332: Argentina (Ciudad Autónoma de Buenos)
## 519333: Argentina (Ciudad Autónoma de Buenos)
## 519334: Argentina (Ciudad Autónoma de Buenos)
```
```
pisa[,
.(CNTRYID)]
```
```
## CNTRYID
## 1: Albania
## 2: Albania
## 3: Albania
## 4: Albania
## 5: Albania
## ---
## 519330: Argentina (Ciudad Autónoma de Buenos)
## 519331: Argentina (Ciudad Autónoma de Buenos)
## 519332: Argentina (Ciudad Autónoma de Buenos)
## 519333: Argentina (Ciudad Autónoma de Buenos)
## 519334: Argentina (Ciudad Autónoma de Buenos)
```
The `.()` is `data.table` shorthand for `list()`. To subset more than one variable, we can just add another variable within the `.()`. For example, if we also wanted to select the science self\-efficacy scale (SCIEEFF) as well, we do the following:
```
pisa[,
.(CNTRYID, SCIEEFF)]
```
```
## CNTRYID SCIEEFF
## 1: Albania NA
## 2: Albania NA
## 3: Albania NA
## 4: Albania NA
## 5: Albania NA
## ---
## 519330: Argentina (Ciudad Autónoma de Buenos) -0.8799
## 519331: Argentina (Ciudad Autónoma de Buenos) 0.9802
## 519332: Argentina (Ciudad Autónoma de Buenos) -0.5696
## 519333: Argentina (Ciudad Autónoma de Buenos) -0.7065
## 519334: Argentina (Ciudad Autónoma de Buenos) -0.3609
```
If we wanted to see how many students took physics in Japan and Mexico, we would do the following:
```
pisa[CNTRYID %in% c("Mexico", "Japan"),
table(ST063Q01NA)]
```
```
## ST063Q01NA
## Checked Not checked
## 4283 9762
```
Because `data.table` treats string variables as character variables by default, their values are printed alphabetically, which in the case above is fine but is often unhelpful. For example, suppose we want to know how students in Mexico and Japan responded to “I get very tense when I study for a test.”
```
pisa[CNTRYID %in% c("Mexico", "Japan"),
table(ST118Q04NA)]
```
```
## ST118Q04NA
## Agree Disagree Strongly agree Strongly disagree
## 4074 5313 1760 2904
```
We see that the output is unhelpful. Instead, we should convert the character vector into a factor and we will create an intermediate variable called `tense`, which we won’t add to our data set.
```
pisa[CNTRYID %in% c("Mexico", "Japan"),
.(tense = factor(ST118Q04NA, levels = c("Strongly disagree", "Disagree", "Agree", "Strongly agree")))
][,
table(tense)
]
```
```
## tense
## Strongly disagree Disagree Agree Strongly agree
## 2904 5313 4074 1760
```
A quick digression, in case you were wondering why base R reads strings in as factors rather than as characters by default (characters being what `data.table` and `readr::read_csv` use): factors can take up much less memory.
```
pisa[, .(tense.as.char = ST118Q04NA,
tense.as.fac = factor(ST118Q04NA, levels = c("Strongly disagree", "Disagree", "Agree", "Strongly agree")))
][,
.(character = object.size(tense.as.char),
factor = object.size(tense.as.fac))
]
```
```
## character factor
## 1: 4154984 bytes 2078064 bytes
```
Returning to the science self\-efficacy scale, we can request summary information for just these two countries:
```
pisa[CNTRYID %in% c("Mexico","Japan"),
.(xbar = mean(SCIEEFF, na.rm = T),
sigma = sd(SCIEEFF, na.rm = T),
minimum = min(SCIEEFF, na.rm = T),
med = median(SCIEEFF, na.rm = T),
maximum = max(SCIEEFF, na.rm = T))]
```
```
## xbar sigma minimum med maximum
## 1: -0.08694 1.216 -3.756 -0.0541 3.277
```
We can create a quick plot this way, too. For example, if we wanted to create a scatter plot of the science self\-efficacy scale against the enjoyment of science scale (JOYSCIE) for just these two countries and print the mean of the enjoyment of science scale, we can do the following:
```
pisa[CNTRYID %in% c("Mexico","Japan"),
.(plot(y = SCIEEFF, x = JOYSCIE,
col = rgb(red = 0, green = 0, blue = 0, alpha = 0.3)),
xbar.joyscie = mean(JOYSCIE, na.rm = T))]
```
```
## xbar.joyscie
## 1: 0.0614
```
This example is kind of silly but it shows that j is incredibly flexible and that we can string together a bunch of commands using j without even needing to do chaining.
Let’s say we need to recode “After leaving school did you: Eat dinner” from a character variable to a numeric variable. We can do this with a series of if else statements:
```
table(pisa$ST078Q01NA)
```
```
##
## No Yes
## 23617 373131
```
```
pisa[,
"eat.dinner" := sapply(ST078Q01NA,
function(x) {
if (is.na(x)) NA
else if (x == "No") 0L
else if (x == "Yes") 1L
})
][,
table(eat.dinner)
]
```
```
## eat.dinner
## 0 1
## 23617 373131
```
In this example we created a new variable called `eat.dinner` using the `:=` operator. The `:=` syntax adds the variable directly to the data.table by reference. We also used the `L` suffix to ensure the variable was treated as an integer and not a double, which uses less memory.
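As a quick check of why the `L` suffix matters, the sketch below (not from the workshop code) compares the size of an integer vector and a double vector holding the same values:
```
# Integers use 4 bytes per element, doubles use 8
x_int <- rep(1L, 1e6)
x_dbl <- rep(1, 1e6)
object.size(x_int) # roughly 4 MB
object.size(x_dbl) # roughly 8 MB
```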
We should create a function to do this recoding as there are lots of dichotomous items in the **pisa** data set.
```
#' Convert a dichotomous item (yes/no) to numeric scoring
#' @param x a character vector containing "Yes" and "No" responses.
bin.to.num <- function(x){
if (is.na(x)) NA
else if (x == "Yes") 1L
else if (x == "No") 0L
}
```
Then we use this function to create several variables, as well as recoding gender to give it a more intuitive variable name.
```
pisa[, `:=`
(female = ifelse(ST004D01T == "Female", 1, 0),
sex = ST004D01T,
# At my house we have ...
desk = sapply(ST011Q01TA, bin.to.num),
own.room = sapply(ST011Q02TA, bin.to.num),
quiet.study = sapply(ST011Q03TA, bin.to.num),
computer = sapply(ST011Q04TA, bin.to.num),
software = sapply(ST011Q05TA, bin.to.num),
internet = sapply(ST011Q06TA, bin.to.num),
lit = sapply(ST011Q07TA, bin.to.num),
poetry = sapply(ST011Q08TA, bin.to.num),
art = sapply(ST011Q09TA, bin.to.num),
book.sch = sapply(ST011Q10TA, bin.to.num),
tech.book = sapply(ST011Q11TA, bin.to.num),
dict = sapply(ST011Q12TA, bin.to.num),
art.book = sapply(ST011Q16NA, bin.to.num))]
```
Similarly, we can create new variables by combining pre\-existing ones. In the later data visualization section, we will use the following variables, so we will create them now. The `rowMeans` function expects a data.frame or matrix, so we need to subset the relevant plausible\-value columns from the **pisa** data set (a data.table is also a data.frame, so the subset can be passed directly). This is what the inner brackets are doing.
```
pisa[, `:=`
(math = rowMeans(pisa[, c(paste0("PV", 1:10, "MATH"))], na.rm = TRUE),
reading = rowMeans(pisa[, c(paste0("PV", 1:10, "READ"))], na.rm = TRUE),
science = rowMeans(pisa[, c(paste0("PV", 1:10, "SCIE"))], na.rm = TRUE))]
```
### 4\.4\.1 Exercises
1. The computer and software variables that were created above ask a student whether they had a computer in their home that they can use for school work (computer) and whether they had educational software in their home (software). Find the proportion of students in the Germany and Uruguay that have a computer in their home or have educational software.
2. For just female students, find the proportion of students who have their own room (own.room) or a quiet place to study (quiet.study).
4\.5 Summarizing using the by in `data.table`
---------------------------------------------
With the by argument, we can now get conditional responses without the need to subset. If we want to know the proportion of students in each country that have their own room at home.
```
pisa[,
.(mean(own.room, na.rm = TRUE)),
by = .(CNTRYID)
][1:6,
]
```
```
## CNTRYID V1
## 1: Albania NaN
## 2: Algeria 0.5188
## 3: Australia 0.9216
## 4: Austria 0.9054
## 5: Belgium 0.9154
## 6: Brazil 0.7498
```
Again, we can reorder this using chaining:
```
pisa[,
.(own.room = mean(own.room, na.rm = TRUE)),
by = .(country = CNTRYID)
][order(own.room, decreasing = TRUE)
][1:6
]
```
```
## country own.room
## 1: Iceland 0.9863
## 2: Netherlands 0.9750
## 3: Norway 0.9738
## 4: Sweden 0.9559
## 5: Finland 0.9441
## 6: Germany 0.9379
```
What if we want to compare just Canada and Iceland on the proportion of students that have books of poetry at home (poetry) and their mean on the enjoyment of science scale, by student’s biological sex?
```
pisa[CNTRYID %in% c("Canada", "Iceland"),
.(poetry = mean(poetry, na.rm = TRUE),
enjoy = mean(JOYSCIE, na.rm = TRUE)),
by = .(country = CNTRYID, sex = sex)]
```
```
## country sex poetry enjoy
## 1: Canada Female 0.3632 0.29636
## 2: Canada Male 0.3124 0.40950
## 3: Iceland Female 0.7281 0.03584
## 4: Iceland Male 0.7011 0.30316
```
We see a strong country effect on poetry at home, with \> 70% of Icelandic students reporting poetry books at home compared to just above 30% of Canadian students. We also see that Canadian students enjoy science more than Icelandic students, and that male students, overall, enjoy science more than female students.
Let’s examine books of poetry at home by countries and sort it in descending order.
```
pisa[,
.(poetry = mean(poetry, na.rm = TRUE)),
by = .(country = CNTRYID)
][order(poetry, decreasing = TRUE)
][1:6
]
```
```
## country poetry
## 1: Kosovo 0.8353
## 2: Russian Federation 0.8046
## 3: Romania 0.8019
## 4: Georgia 0.7496
## 5: B-S-J-G (China) 0.7442
## 6: Estonia 0.7423
```
Iceland is in the top 10, while Canada ranks 59th.
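Rather than counting rows by eye, a quick sketch (not part of the workshop code) using `.I` (row numbers) and chaining can report each country’s rank directly:
```
# Rank countries by the proportion of students with poetry books at home
pisa[,
     .(poetry = mean(poetry, na.rm = TRUE)),
     by = .(country = CNTRYID)
     ][order(poetry, decreasing = TRUE)
     ][,
       rank := .I
     ][country %in% c("Iceland", "Canada")
     ]
```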
We can also write more complex functions and provide these to `data.table`. For example, if we wanted to fit a regression model to predict a student’s score on the science self\-efficacy scale given their score on the enjoyment of science scale and their sex for just the G7 countries (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States), we can fit a multiple regression model and return the intercept and slope terms.
```
get.params <- function(cntry){
mod <- lm(SCIEEFF ~ JOYSCIE + sex, cntry)
est.params <- list(int = coef(mod)[[1]], enjoy.slope = coef(mod)[[2]], sex.slope = coef(mod)[[3]])
return(est.params)
}
g7.params <- pisa[CNTRYID %in% c("Canada", "France", "Germany", "Italy",
"Japan", "United Kingdom", "United States"),
get.params(.SD),
by = .(CNTRYID)]
g7.params
```
```
## CNTRYID int enjoy.slope sex.slope
## 1: Canada 0.009803357 0.4370945 0.21489577
## 2: France -0.208698984 0.4760903 0.17743126
## 3: Germany -0.019150031 0.4316565 0.17971821
## 4: Italy -0.030880063 0.3309990 0.18831666
## 5: Japan -0.353806055 0.3914385 0.04912039
## 6: United Kingdom 0.009711647 0.5182592 0.18981965
## 7: United States 0.096920721 0.3907848 0.15022008
```
We see a fair bit of variability in these estimated parameters.
### 4\.5\.1 Exercises
1. Calculate the proportion of students who have art in their home (art) and the average age (AGE) of the students by gender.
2. Within a by argument you can discretize a variable to create a grouping variable. Perform a median split for age within the by argument and assess whether there are age difference associated with having your own room (own.room) or a desk (desk).
4\.6 Reshaping data
-------------------
The `data.table` package provides some very fast methods to reshape data from wide (the current format) to long format. In long format, a single test taker will correspond to multiple rows of data. Some software and `R` packages require data to be in long format (e.g., `lme4` and `nlme`).
Let’s begin by creating a student ID and then subsetting this ID and the at\-home variables:
```
pisa$id <- 1:nrow(pisa)
athome <- subset(pisa, select = c(id, desk:art.book))
```
To transform the data to long format we *melt* the data.
```
athome.l <- melt(athome,
id.vars = "id",
measure.vars = c("desk", "own.room", "quiet.study", "lit",
"poetry", "art", "book.sch", "tech.book",
"dict", "art.book"))
athome.l
```
```
## id variable value
## 1: 1 desk NA
## 2: 2 desk NA
## 3: 3 desk NA
## 4: 4 desk NA
## 5: 5 desk NA
## ---
## 5193336: 519330 art.book 1
## 5193337: 519331 art.book 0
## 5193338: 519332 art.book 1
## 5193339: 519333 art.book 0
## 5193340: 519334 art.book 0
```
We could have also allowed `melt()` to guess the format:
```
athome.guess <- melt(athome)
```
```
## Warning in melt.data.table(athome): id.vars and measure.vars are internally
## guessed when both are 'NULL'. All non-numeric/integer/logical type columns are
## considered id.vars, which in this case are columns []. Consider providing at
## least one of 'id' or 'measure' vars in future.
```
```
athome.guess
```
```
## variable value
## 1: id 1
## 2: id 2
## 3: id 3
## 4: id 4
## 5: id 5
## ---
## 7270672: art.book 1
## 7270673: art.book 0
## 7270674: art.book 1
## 7270675: art.book 0
## 7270676: art.book 0
```
It guessed incorrectly. If `id` had been stored as a character vector, it would have guessed correctly this time. In general, though, you should not allow `melt()` to guess the names of the variables.
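A quick check (hypothetical code, not part of the workshop data prep) shows that the guess works once `id` is stored as a character vector:
```
# Copy the data, store id as character, and let melt() guess again
athome.chr <- copy(athome)
athome.chr[, id := as.character(id)]
athome.guess2 <- melt(athome.chr) # id is now guessed as the id variable
athome.guess2
```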
To go back to wide format we use the `dcast()` function.
```
athome.w <- dcast(athome.l,
id ~ variable)
```
Unlike other reshaping packages, `data.table` can also handle reshaping multiple outcome variables at once. More about reshaping with `data.table` is available [here](https://cran.r-project.org/web/packages/data.table/vignettes/datatable-reshape.html).
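For example, here is a minimal sketch (assuming the plausible\-value columns follow the `PV1MATH` ... `PV10MATH` naming used earlier and the `id` column created above) that melts the math, reading, and science plausible values in one call using `patterns()`:
```
# Melt three groups of columns at once; each pattern becomes its own value column
pv.long <- melt(pisa,
                id.vars = "id",
                measure.vars = patterns("^PV.*MATH$", "^PV.*READ$", "^PV.*SCIE$"),
                value.name = c("math.pv", "read.pv", "scie.pv"))
pv.long
```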
4\.7 The `sparklyr` package
---------------------------
The `sparklyr` package provides an R interface to Apache Spark and a complete `dplyr` backend. Apache Spark “is a unified analytics engine for big data processing, with built\-in modules for streaming, SQL, machine learning and graph processing.” Apache Spark can also be interfaced using the `sparkR` package provided by Apache. See [here](https://spark.apache.org/docs/2.4.0/) and [here](https://spark.apache.org/docs/2.4.0/api/R/index.html) for more details.
To use Apache Spark, you will need the Java 8 JDK installed; it can be downloaded [here](https://www.oracle.com/technetwork/java/jdk8-downloads-2133151.html). To begin, you need to install `sparklyr` and `dplyr`.
```
install.packages("sparklyr")
install.packages("dplyr")
library("sparklyr")
library("dplyr")
```
We then need to install Spark, which we can do from R.
```
spark_install()
```
Next, we need to set up a connection with Spark; here we’ll connect to a local install of Spark.
```
sc <- spark_connect(master = "local")
```
Then we need to copy the **pisa** data set to the Spark cluster. However, with a data set this large, copying everything is a bad idea because we will run into memory issues during the copying process. So, we’ll subset the data first.
```
pisa_sub <- subset(pisa, CNTRYID %in% c("Canada", "France", "Germany",
"Italy", "Japan", "United Kingdom",
"United States"),
select = c("DISCLISCI", "TEACHSUP", "IBTEACH", "TDTEACH",
"ENVAWARE", "JOYSCIE", "INTBRSCI", "INSTSCIE",
"SCIEEFF", "EPIST", "SCIEACT", "BSMJ", "MISCED",
"FISCED", "OUTHOURS", "SMINS", "TMINS",
"BELONG", "ANXTEST", "MOTIVAT", "COOPERATE",
"PERFEED", "unfairteacher", "HEDRES", "HOMEPOS",
"ICTRES", "WEALTH", "ESCS", "math", "reading",
"CNTRYID", "sex"))
```
We will use the selected variables in the labs and a description of these variables can be seen below.
Now the data are ready to be copied into Spark.
```
pisa_tbl <- copy_to(sc, pisa_sub, overwrite = TRUE)
```
In the tidyverse, you can use the `%>%` operator to chain together commands or to pass data to functions. With `sparklyr`, we can use the `filter` function instead of `subset`. For example, if we just want to see the female students’ scores on these scales for Germany, we would do the following:
```
pisa_tbl %>%
filter(CNTRYID == "Germany" & sex == "Female")
```
```
## # Source: spark<?> [?? x 32]
## DISCLISCI TEACHSUP IBTEACH TDTEACH ENVAWARE JOYSCIE INTBRSCI INSTSCIE SCIEEFF
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 -0.234 -0.804 -0.608 -0.867 -0.536 -0.821 -0.550 NA NA
## 2 0.283 0.488 -0.157 -0.685 -0.805 -2.12 -1.13 -1.93 -2.83
## 3 0.700 1.45 0.988 0.525 0.171 -1.72 -0.225 -0.718 -2.09
## 4 0.0039 0.568 0.209 -0.0742 -0.234 -0.821 -0.0831 -0.826 -0.426
## 5 0.763 -0.450 0.535 -0.0057 -0.479 0.613 0.198 -0.304 -0.713
## 6 0.660 -0.461 0.647 0.450 -0.706 -0.631 -0.551 -1.93 -0.596
## 7 0.288 -1.82 0.430 -1.32 -0.0217 -0.576 -0.566 -0.670 0.748
## 8 0.835 -1.07 0.89 -0.610 0.256 2.16 0.341 1.33 -0.0804
## 9 1.32 -0.246 0.257 -0.867 -0.685 -0.152 -0.509 -0.778 -1.42
## 10 1.32 -1.29 0.308 -0.790 -0.385 -1.61 -0.399 -1.93 -1.05
## # ... with more rows, and 23 more variables: EPIST <dbl>, SCIEACT <dbl>,
## # BSMJ <int>, MISCED <chr>, FISCED <chr>, OUTHOURS <int>, SMINS <int>,
## # TMINS <int>, BELONG <dbl>, ANXTEST <dbl>, MOTIVAT <dbl>, COOPERATE <dbl>,
## # PERFEED <dbl>, unfairteacher <int>, HEDRES <dbl>, HOMEPOS <dbl>,
## # ICTRES <dbl>, WEALTH <dbl>, ESCS <dbl>, math <dbl>, reading <dbl>,
## # CNTRYID <chr>, sex <chr>
```
You’ll notice that at the top it says `# Source: spark<?>`, which indicates that the result is a Spark table rather than an object in R’s memory.
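If you do want a query result back in R as a regular in\-memory tibble, the standard `dplyr` verb `collect()` brings it over (shown here as a sketch):
```
germany_female <- pisa_tbl %>%
  filter(CNTRYID == "Germany" & sex == "Female") %>%
  collect() # pulls the result from Spark into R
```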
If we wanted to calculate the average disciplinary climate in science classes (DISCLISCI) by country and by sex, and have it reordered by country and then by sex, we can do the following:
```
pisa_tbl %>%
group_by(CNTRYID, sex) %>%
summarize(ave_disclip = mean(DISCLISCI, na.rm = TRUE)) %>%
arrange(CNTRYID, sex)
```
```
## # Source: spark<?> [?? x 3]
## # Groups: CNTRYID
## # Ordered by: CNTRYID, sex
## CNTRYID sex ave_disclip
## <chr> <chr> <dbl>
## 1 Canada Female 0.0110
## 2 Canada Male -0.0205
## 3 France Female -0.236
## 4 France Male -0.297
## 5 Germany Female 0.0915
## 6 Germany Male 0.0162
## 7 Italy Female 0.0708
## 8 Italy Male -0.137
## 9 Japan Female 0.916
## 10 Japan Male 0.788
## # ... with more rows
```
We can also create new variables using the `mutate` function. If we want to get a measure of home affluence, we could add home educational resources (HEDRES) and home possessions (HOMEPOS):
```
pisa_tbl %>%
mutate(totl_home = HEDRES + HOMEPOS) %>%
group_by(CNTRYID) %>%
summarize(xbar = mean(totl_home, na.rm = TRUE))
```
```
## # Source: spark<?> [?? x 2]
## CNTRYID xbar
## <chr> <dbl>
## 1 United Kingdom 0.370
## 2 United States 0.113
## 3 Italy 0.324
## 4 Japan -1.30
## 5 France -0.332
## 6 Germany 0.279
## 7 Canada 0.430
```
On my computer, the Spark code is slightly faster than `data.table`, but not by much. The real power of using Spark is that we can use its machine learning functions. However, if you’re familiar with `tidyverse` (`dplyr`) syntax, then `sparklyr` is a package worth investigating for data wrangling with big data sets.
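As a small taste of those machine learning functions, the sketch below (not part of the workshop output; the choice of predictors is only illustrative) uses `ml_linear_regression()` from `sparklyr` to predict math scores from a few of the variables we copied to Spark:
```
fit_spark <- pisa_tbl %>%
  # drop rows with missing values in the variables used by the model
  filter(!is.na(math), !is.na(ESCS), !is.na(WEALTH), !is.na(HEDRES)) %>%
  ml_linear_regression(math ~ ESCS + WEALTH + HEDRES)
summary(fit_spark)
```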
4\.8 Lab
--------
This afternoon when we discuss supervised learning, we’ll ask you to develop some models to predict the response to the question “Do you expect your child will go into a science\-related career?” (PA032Q03TA).
1. Recode this variable so that a “Yes” is 1 and a “No” is a \-1 and save the variable as `sci_car`.
2. Calculate descriptives for this variable by sex and country. Specifically, the proportion of test takers whose parents said “Yes” or 1\.
After you’ve done this, spend some time investigating the following variables
| Label | Description |
| --- | --- |
| DISCLISCI | Disciplinary climate in science classes (WLE) |
| TEACHSUP | Teacher support in a science classes of students choice (WLE) |
| IBTEACH | Inquiry\-based science teaching and learning practices (WLE) |
| TDTEACH | Teacher\-directed science instruction (WLE) |
| ENVAWARE | Environmental Awareness (WLE) |
| JOYSCIE | Enjoyment of science (WLE) |
| INTBRSCI | Interest in broad science topics (WLE) |
| INSTSCIE | Instrumental motivation (WLE) |
| SCIEEFF | Science self\-efficacy (WLE) |
| EPIST | Epistemological beliefs (WLE) |
| SCIEACT | Index science activities (WLE) |
| BSMJ | Student’s expected occupational status (SEI) |
| MISCED | Mother’s Education (ISCED) |
| FISCED | Father’s Education (ISCED) |
| OUTHOURS | Out\-of\-School Study Time per week (Sum) |
| SMINS | Learning time (minutes per week) \- in science |
| TMINS | Learning time (minutes per week) \- in total |
| BELONG | Subjective well\-being: Sense of Belonging to School (WLE) |
| ANXTEST | Personality: Test Anxiety (WLE) |
| MOTIVAT | Student Attitudes, Preferences and Self\-related beliefs: Achieving motivation (WLE) |
| COOPERATE | Collaboration and teamwork dispositions: Enjoy cooperation (WLE) |
| PERFEED | Perceived Feedback (WLE) |
| unfairteacher | Teacher Fairness (Sum) |
| HEDRES | Home educational resources (WLE) |
| HOMEPOS | Home possessions (WLE) |
| ICTRES | ICT Resources (WLE) |
| WEALTH | Family wealth (WLE) |
| ESCS | Index of economic, social and cultural status (WLE) |
| math | Students’ math score in PISA 2015 |
| reading | Students’ reading score in PISA 2015 |
and then do the following using `data.table` and/or `sparklyr`:
3. Means and standard deviations (`sd`) for the variables that you think will be most predictive of `sci_car`.
4. Calculate these same descriptives by groups (by `sci_car` and by `sex`).
5. Calculate correlations between these variables and `sci_car`,
6. Create new variables
* Discretize the math and reading variables using the OECD means (490 for math and 493 for reading) and code them as 1 (at or above the mean) and \-1 (below the mean), but do it in the `data.table` way without using the `$` operator.
* Calculate the correlation between these variables and the list of variables above.
7. Chain together a set of operations
* For example, create an intermediate variable that is the average of JOYSCIE and INTBRSCI, and then calculate the mean by country by `sci_car` through chaining.
8. Transform variables, specifically recode MISCED and FISCED from characters to numeric variables.
9. Examine other variables in the **pisa** data set that you think might be predictive of `PA032Q03TA`.
5 Visualizing big data
======================
One of the most effective ways to explore big data, interpret variables, and communicate results obtained from big data analyses to varied audiences is through **data visualization**. When we deal with big data, we can benefit from data visualizations in many ways, such as:
* understanding the distributional characteristics of variables,
* detecting data entry issues,
* identifying outliers in the data,
* understanding relationships among variables,
* selecting suitable variables for data analysis (a.k.a., feature extraction),
* examining the outcomes of predictive models (e.g., accuracy and overfit), and
* communicating the results to various audiences.
Developing effective visualizations requires identifying the goals and design of data analysis clearly. Sometimes we may already know the answers for some questions about the data; in other cases, we may want to explore further and understand the data in order to generate better insights into the next steps of data analysis. In this process, we need to consider many elements, such as types of variables to be used, axes, labels, legends, colors, and so on. Furthermore, if we aim to present the visualization to a particular audience, then we also need to consider the usability and interpretability of the visualization for the target audience.
The development of an effective data visualization typically includes the following steps:
1. Determine the goal of data visualization (e.g., exploring data, relationships, model outcomes)
2. Prepare the data (e.g., clean, organize, and transform data)
3. Identify the ideal visualization tool based on the goal of data visualization
4. Produce the visualization
5. Interpret the information in the visualization and present it to your target audience
Figure 5\.1: Chart suggestions (Source: <https://extremepresentation.com/>)
Figure [5\.1](visualizing-big-data.html#fig:fig1) shows some suggestions for visualizing data based on the type of variables and the purpose of the visualization. In R, almost all of these visualizations can be created very easily, although preparing the data for these visualizations is sometimes quite tedious.
In this section of our session, we will review data visualization tools in R that can help us organize big data, interpret variables, and identify potential variables for predictive models. The first part will focus on data visualizations using the [ggplot2](https://ggplot2.tidyverse.org/) package. Furthermore, we will use other R packages (e.g., [GGally](http://ggobi.github.io/ggally/#ggally), [ggExtra](https://www.ggplot2-exts.org/ggExtra.html), and [ggalluvial](http://corybrunson.github.io/ggalluvial/)) that expand the capabilities of `ggplot2` even further (also see [https://exts.ggplot2\.tidyverse.org/gallery/](https://exts.ggplot2.tidyverse.org/gallery/) for more extensions of `ggplot2`). In the second part, we will discuss web\-based, interactive visualizations and dashboards using [plotly](https://plot.ly/r/).
As we review data visualization tools, we will also demonstrate how to use each visualization tool in R and produce sample plots and graphics using the **pisa** dataset. Furthermore, we will ask you to work on short exercises where you will need to use the functions and packages presented in this section in order to generate your own plots and visualizations using the **pisa** dataset.
Before we begin, let’s install and load all of the R packages that we will use in this section:
```
# Install and load the packages one by one.
install.packages("ggplot2")
install.packages("GGally")
install.packages("ggExtra")
install.packages("ggalluvial")
install.packages("plotly")
library("ggplot2")
library("GGally")
library("ggExtra")
library("ggalluvial")
library("plotly")
# Or, just simply run the following to install and load all packages:
dataviz_packages <- c("ggplot2", "GGally", "ggExtra", "ggalluvial", "plotly")
install.packages(dataviz_packages)
lapply(dataviz_packages, require, character.only = TRUE)
# Load already installed packages
library("data.table")
# we will also use cowplot later in this session.
# Please install it but do not load it for now.
install.packages("cowplot")
```
---
5\.1 Introduction to `ggplot2`
------------------------------
This section will demonstrate how to visualize your big data using `ggplot2` and other R packages that rely on `ggplot2`. We use `ggplot2` because it is the most elegant and versatile visualization package in R. Also, it implements a simple grammar of graphics for building a variety of visualizations for either small or large data. This enables creating high\-quality plots for publications and presentations easily, with minimal amounts of adjustments and tweaking.
A typical `ggplot2` template ranges from a few layers to many layers, depending on the complexity of the visualization of interest. Each layer adds elements or transformations to the plot. We can combine multiple layers using the \+ operator, so plots are built step by step by adding new elements in each layer. A simple `ggplot2` template is shown below:
```
ggplot(data = my_data,
mapping = aes(x = var1, y = var2)) +
geom_function()
```
where the `ggplot` function uses the two variables (**var1** and **var2**) from a dataset (**my\_data**), and draws a new plot based on a particular geom function (**geom\_function**). Selecting the variables to be plotted is done through the aesthetic mapping (via the `aes` function). Depending on the aesthetic mapping of interest, we can split the plot, add colors by a group variable, change the labels for each axis, change the font size, and so on. The `ggplot2` package offers many geom functions to draw different types of plots:
* `geom_point` for scatter plots, dot plots, etc.
* `geom_boxplot` for boxplots
* `geom_line` for trend lines, time series, etc.
In addition, functions such as `theme_bw()` and `theme()` enable adjusting the theme elements (e.g., font size, font type, background colors) for a given plot. As we create plots in our examples, we will use some of these theme elements to make our plots look nicer.
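For instance, here is a minimal sketch of combining a geom with a few of these theme adjustments; it uses the built\-in `mtcars` data (not the **pisa** data) so that it runs on its own:
```
library("ggplot2")
ggplot(data = mtcars,
       mapping = aes(x = wt, y = mpg, colour = factor(cyl))) +
  geom_point() + # scatter plot layer
  theme_bw() + # dark-on-light theme
  theme(axis.title = element_text(size = 14), # larger axis titles
        legend.position = "bottom") # move the legend below the plot
```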
An important caveat in visualizing big data is that the size of the dataset (*especially the number of rows*) and the complexity of the plot (e.g., additional lines, colors, facets) will influence how quickly and successfully `ggplot2` can render the desired plot. Nobody can absorb the meaning of thousands of data points presented in a single visualization. Therefore, in some cases we will need to find a way to cluster or reduce the number of items to visualize before we render the visualization. Typically we can achieve this by:
* taking smaller, sometimes random, samples from our big data, or
* summarizing our big data using categorical, group variables (e.g., gender, grade, year).
---
5\.2 Marginal plots
-------------------
We can use marginal plots to examine the distributions of individual variables in a large dataset. A typical marginal plot is a scatter plot that also has histograms or boxplots in the margins of the x\- and y\-axes. In this section, first we will create histograms and boxplots for the variables in the **pisa** dataset. Then, we will review other options where we will combine multiple variables and different types of plots in a single visualization.
To demonstrate data visualizations, we will first take a subset of the **pisa** dataset by selecting some countries and some variables of interest. The selected variables are shown below.
Table 5\.1: Variables to be used in the data visualizations
| Variable | Description | Variable | Description |
| --- | --- | --- | --- |
| CNT | Country | BELONG | Sense of belonging to school |
| OECD | OECD membership | EMOSUPS | Parents emotional support |
| CNTSTUID | Student ID | HOMESCH | ICT use outside of school for schoolwork |
| W\_FSTUWT | Student weight in the PISA database | ENTUSE | ICT use outside of school leisure |
| ST001D01T | Grade level | ICTHOME | ICT available at home |
| ST004D01T | Gender (female/male) | ICTSCH | ICT availability at school |
| ST011Q04TA | Possessing a computer at home | WEALTH | Family wealth |
| ST011Q05TA | Possessing educational software at home | PARED | Highest parental education in years of schooling |
| ST011Q06TA | Having internet access at home | TMINS | Total learning time per week |
| ST071Q02NA | Additional time spent for learning math | ESCS | Index of economic, social and cultural status |
| ST071Q01NA | Additional time spent for learning science | TDTEACH | Teacher\-directed science instruction |
| ST123Q02NA | Whether parents support educational efforts and achievements | IBTEACH | Inquiry based science instruction |
| ST082Q01NA | Preferring working as part of a team to working alone | TEACHSUP | Teacher support in science classes |
| ST119Q01NA | Wanting top grades in most or all courses | SCIEEFF | Science self\-efficacy |
| ST119Q05NA | Wanting to be the best student in class | math | Students’ math scores in PISA 2015 |
| ANXTEST | Test anxiety | reading | Students’ reading scores in PISA 2015 |
| COOPERATE | Enjoying cooperation | science | Students’ science scores in PISA 2015 |
Here we filter our big data based on a list of countries (which we call `country`), and select the variables identified in Table [5\.1](visualizing-big-data.html#tab:tab1) as well as the reading, math, and science scales we created earlier.
```
country <- c("United States", "Canada", "Mexico", "B-S-J-G (China)", "Japan",
"Korea", "Germany", "Italy", "France", "Brazil", "Colombia", "Uruguay",
"Australia", "New Zealand", "Jordan", "Israel", "Lebanon")
dat <- pisa[CNT %in% country,
.(CNT, OECD, CNTSTUID, W_FSTUWT, sex, female,
ST001D01T, computer, software, internet,
ST011Q05TA, ST071Q02NA, ST071Q01NA, ST123Q02NA,
ST082Q01NA, ST119Q01NA, ST119Q05NA, ANXTEST,
COOPERATE, BELONG, EMOSUPS, HOMESCH, ENTUSE,
ICTHOME, ICTSCH, WEALTH, PARED, TMINS, ESCS,
TEACHSUP, TDTEACH, IBTEACH, SCIEEFF,
math, reading, science)
]
```
Next, we create additional variables by recoding some of the existing variables. The goal is to create some numerical variables out of the character variables in case we want to use them in the modeling stage.
```
# Let's create additional variables that we will use for visualizations
dat <- dat[, `:=` (
# New grade variable
grade = (as.numeric(sapply(ST001D01T, function(x) {
if(x=="Grade 7") "7"
else if (x=="Grade 8") "8"
else if (x=="Grade 9") "9"
else if (x=="Grade 10") "10"
else if (x=="Grade 11") "11"
else if (x=="Grade 12") "12"
else if (x=="Grade 13") NA_character_
else if (x=="Ungraded") NA_character_}))),
# Total learning time as hours
learning = round(TMINS/60, 0),
# Regions for selected countries
Region = (sapply(CNT, function(x) {
if(x %in% c("Canada", "United States", "Mexico")) "N. America"
else if (x %in% c("Colombia", "Brazil", "Uruguay")) "S. America"
else if (x %in% c("Japan", "B-S-J-G (China)", "Korea")) "Asia"
else if (x %in% c("Germany", "Italy", "France")) "Europe"
else if (x %in% c("Australia", "New Zealand")) "Australia"
else if (x %in% c("Israel", "Jordan", "Lebanon")) "Middle-East"
}))
)]
```
Now, let’s see the number of rows in the final dataset and print the first few rows of the selected variables.
```
# N count for the final dataset
dat[,.N] # 158,061 rows
```
```
## [1] 158061
```
```
# Let's preview the final data
head(dat)
```
```
## CNT OECD CNTSTUID W_FSTUWT sex female ST001D01T computer software
## 1: Australia Yes 3610676 28.20 Female 1 Grade 10 1 1
## 2: Australia Yes 3611874 28.20 Female 1 Grade 10 1 1
## 3: Australia Yes 3601769 28.20 Female 1 Grade 10 1 1
## 4: Australia Yes 3605996 28.20 Female 1 Grade 10 1 1
## 5: Australia Yes 3608147 33.45 Male 0 Grade 10 1 1
## 6: Australia Yes 3610012 33.45 Male 0 Grade 10 1 1
## internet ST011Q05TA ST071Q02NA ST071Q01NA ST123Q02NA ST082Q01NA
## 1: 1 Yes 0 1 Disagree Disagree
## 2: 1 Yes 1 1 Agree Agree
## 3: 1 Yes NA NA Agree Strongly disagree
## 4: 1 Yes 5 7 Strongly agree Strongly disagree
## 5: 1 Yes 1 1 Agree Agree
## 6: 1 Yes 2 2 Agree Agree
## ST119Q01NA ST119Q05NA ANXTEST COOPERATE BELONG EMOSUPS HOMESCH
## 1: Agree Strongly agree -0.1522 0.2085 0.5073 -2.2547 -0.1686
## 2: Agree Disagree 0.2594 -0.2882 -0.8021 -0.2511 0.0302
## 3: Strongly agree Disagree 2.5493 -1.2109 -2.4078 -1.9895 1.2836
## 4: Strongly agree Strongly agree 0.2563 0.3950 -0.3381 1.0991 -0.0498
## 5: Agree Disagree 0.4517 -1.3606 -0.5050 -1.3298 -0.3355
## 6: Agree Agree 0.5175 0.4252 -0.0099 -0.4263 0.1567
## ENTUSE ICTHOME ICTSCH WEALTH PARED TMINS ESCS TEACHSUP TDTEACH IBTEACH
## 1: -0.7369 4 5 0.0592 12 1400 0.4078 NA NA NA
## 2: -0.1047 9 6 0.7605 12 1100 0.4500 0.3574 0.0615 0.2208
## 3: -1.5403 11 10 -0.1220 11 1960 -0.5889 -1.0718 -0.6102 -0.2198
## 4: 0.0342 10 7 0.9314 15 2450 0.6498 0.6375 0.7979 -0.0282
## 5: 0.2309 NA 7 0.7905 15 1400 0.7675 0.8213 0.1990 1.1477
## 6: 0.6896 10 5 0.7054 15 1400 1.1151 NA NA NA
## SCIEEFF math reading science grade learning Region
## 1: NA 545.9 586.5 589.6 10 23 Australia
## 2: -0.4041 511.6 570.8 557.2 10 18 Australia
## 3: -0.9003 478.6 570.0 569.5 10 33 Australia
## 4: 1.2395 506.1 531.1 529.0 10 41 Australia
## 5: -0.0746 481.9 506.5 504.2 10 23 Australia
## 6: NA 455.0 456.5 472.6 10 23 Australia
```
We want to see the distributions of the science scores across the 17 countries in our final dataset. The first line with `ggplot` creates a layout for our figure, the second line draws a box plot using `geom_boxplot`, the third line with `labs` creates the axis labels, and the last line with `theme_bw` removes the default theme with a grey background and activates the dark\-on\-light `ggplot2` theme – which is much better for publications and presentations (see [https://ggplot2\.tidyverse.org/reference/ggtheme.html](https://ggplot2.tidyverse.org/reference/ggtheme.html) for a complete list of themes available in `ggplot2`).
```
ggplot(data = dat, mapping = aes(x = CNT, y = science)) +
geom_boxplot() +
labs(x=NULL, y="Science Scores") +
theme_bw()
```
The resulting plot is not very readable because the country names are squeezed together on the x\-axis and some of them are not visible. To correct this, we can flip the coordinates of the plot and put the country names on the y\-axis instead. The `coord_flip()` function allows us to achieve that very easily.
```
ggplot(data = dat,
mapping = aes(x = CNT, y = science)) +
geom_boxplot() +
labs(x=NULL, y="Science Scores") +
coord_flip() +
theme_bw()
```
Next, we want to show the mean values in the boxplots, since the line in the middle represents the median, not the mean. To achieve this, we first calculate the means by country.
```
means <- dat[,
.(science = mean(science)),
by = CNT]
```
Now we can add a point to each boxplot to show the mean score by country (the `means` table above gives these values). In the plot, we use `stat_summary()` along with the options `colour = "blue", geom = "point"` to create a blue point for the mean. In addition, given that the average science score in PISA 2015 was 493 across all participating countries (see [PISA 2015 Results in Focus](https://www.oecd.org/pisa/pisa-2015-results-in-focus.pdf) for more details), we can add a reference line into our plot to identify the average score, which then allows us to visually examine which countries are above or below the average. To achieve this, we use the `geom_hline` function and specify where it should intersect the plot (i.e., `yintercept = 493`). We also want the reference line to be a red, dashed line with a thickness level of 1 – to make it more visible in the plot. Finally, to facilitate the interpretation of the plot, we want the boxplots to be ordered based on the average score for each country, and thus we add `reorder(CNT, science)` into the mapping.
```
ggplot(data = dat,
mapping = aes(x = reorder(CNT, science), y = science)) +
geom_boxplot() +
stat_summary(fun.y = mean, colour = "blue", geom = "point",
shape = 18, size = 3) +
labs(x=NULL, y="Science Scores") +
coord_flip() +
geom_hline(yintercept = 493, linetype="dashed", color = "red", size = 1) +
theme_bw()
```
Now let’s add some colors to our figure based on the region where each country is located. In order to do this, we use the region variable to fill the boxplots with color, using `fill = Region`.
```
ggplot(data = dat,
mapping = aes(x = reorder(CNT, science), y = science, fill = Region)) +
geom_boxplot() +
labs(x=NULL, y="Science Scores") +
coord_flip() +
geom_hline(yintercept = 493, linetype="dashed", color = "red", size = 1) +
theme_bw()
```
### 5\.2\.1 Exercise
Create a plot of **math** scores over countries with different colors based on region. You need to modify the R code below by replacing `geom_boxplot` with:
* `geom_point(aes(color = Region))`, and then
* `geom_violin(aes(color = Region))`.
How long did it take to create both plots? Which one is a better way to visualize this type of data?
```
ggplot(data = dat,
mapping = aes(x = reorder(CNT, math), y = math, fill = Region)) +
geom_boxplot() +
labs(x=NULL, y="Math Scores") +
coord_flip() +
geom_hline(yintercept = 490, linetype="dashed", color = "red", size = 1) +
theme_bw()
```
We can also create histograms (or density plots) for a particular variable and split the plot into multiple plots using a categorical, group variable. In the following example, we use `fill = Region` in the mapping in order to identify the different regions in the distribution of the science scores. In addition, we use `facet_grid(. ~ sex)` to generate separate histograms by gender. Note that we also added `title = "Science Scores by Gender and Region"` as a title in the `labs` function.
```
ggplot(data = dat,
mapping = aes(x = science, fill = Region)) +
geom_histogram(alpha = 0.5, bins = 50) +
labs(x = "Science Scores", y = "Count",
title = "Science Scores by Gender and Region") +
facet_grid(. ~ sex) +
theme_bw()
```
If we are interested in visualizing multiple variables, plotting each variable individually can be time consuming. Therefore, we can use the `ggpairs` function from the `GGally` package to build a more complex, diagnostic plot for multiple variables.
In the following example, we plot reading, science, and math scores as well as gender (i.e., sex) in the same plot. Because our dataset is quite large, plotting all the data points would result in a highly complex plot where most data points would overlap on each other. Therefore, we will take a random sample of 500 cases from each region defined in the data, save this smaller dataset as `dat_small`, and use this dataset inside the `ggpairs` function. We colorize each variable by region (using `mapping = aes(color = Region)`). The resulting plot shows density plots for the continuous variables (by region), a stacked bar chart for gender, and box plots for the continuous variables by region and gender.
```
# Random sample of 500 students from each region
dat_small <- dat[,.SD[sample(.N, min(500,.N))], by = Region]
ggpairs(data = dat_small,
mapping = aes(color = Region),
columns = c("reading", "science", "math", "sex"),
upper = list(continuous = wrap("cor", size = 2.5))
)
```
**Interpretation:**
* What can we say about the regions based on the plots above?
* Do you see any major gender differences for reading, science, or math?
* What is the relationship among reading, science, or math?
---
5\.3 Conditional plots
----------------------
When we deal with continuous variables, an effective way to understand the relationship between the variables is to produce conditional plots, such as scatterplots, dotplots, and bubble charts. Simple scatterplots in R can be created using `plot(var2 ~ var1, data = name_of_dataset)`. Using the extended capabilities of `ggplot2` via the `ggExtra` package, we can combine histograms and density plots with scatterplots and visualize them together.
In the following example, we first create a scatterplot of learning time per week and science scores using `ggplot`. We use `geom_point` to draw a plot with points and `geom_smooth(method = "loess")` to add a regression line with loess smoothing (i.e., **Lo**cally **E**stimated **S**catterplot **S**moothing). We save this plot as `p1` and then pass it to `ggMarginal` to transform the plot into a marginal scatterplot. Inside `ggMarginal`, we use `type = "histogram"` to create histograms for learning time per week and science scores on the x and y axes of the plot. Note that as the plot is created, you may see some warning messages, such as “Removed 750 rows containing missing values,” because some variables have missing rows in the dataset.
```
p1 <- ggplot(data = dat_small,
mapping = aes(x = learning, y = science)) +
geom_point() +
geom_smooth(method = "loess") +
labs(x = "Weekly Learning Time", y = "Science Scores") +
theme_bw()
# Replace "histogram" with "boxplot" or "density" for other types
ggMarginal(p1, type = "histogram")
```
We can also distinguish male and female students in the plot and create a scatterplot of learning time and science scores with densities by gender. To achieve this, we add `colour = sex` into the mapping of `ggplot` and change the type of plot to `type = "density"` in `ggMarginal`. In addition, we use `groupColour = TRUE, groupFill = TRUE` inside `ggMarginal` to use separate colors for each gender in the density plots.
```
p2 <- ggplot(data = dat_small,
mapping = aes(x = learning, y = science,
colour = sex)) +
geom_point() +
geom_smooth(method = "loess") +
labs(x = "Weekly Learning Time", y = "Science Scores") +
theme_bw() +
theme(legend.position = "bottom",
legend.title = element_blank())
ggMarginal(p2, type = "density", groupColour = TRUE, groupFill = TRUE)
```
**Interpretation:**
* What can we say about the relationship between weekly learning time and science scores?
* Do you see any gender differences?
Now let’s incorporate more variables into the plot. This time we are not going to use marginal plots. Instead, we will create a regular scatterplot but add other layers to represent additional variables. In the following example, we examine the relationship between students’ weekly learning time (learning) and science scores (science) across regions (region) and gender (sex). Adding `fill = Region` into the mapping will allow us to draw regression lines by regions, while adding `aes(colour = sex)` into `geom_point` will allow us to use different colors for male and female students in the plot.
```
ggplot(data = dat_small,
mapping = aes(x = learning, y = science, fill = Region)) +
geom_point(aes(colour = sex)) +
geom_smooth(method = "loess") +
labs(x = "Weekly Learning Time", y = "Science Scores") +
theme_bw()
```
The resulting scatterplot is nice but it is hard to compare the results clearly between gender groups and regions. To improve the interpretability of the plot, we will use the faceting option. This will allow us to split the scatterplot into multiple plots based on gender and region. In the following example, we examine the relationship between students’ learning time and science scores across regions and gender. We use `facet_grid(sex ~ Region)` to split the plots into multiple rows based on gender and multiple columns based on region.
```
ggplot(data = dat_small,
mapping = aes(x = learning, y = science)) +
geom_point() +
geom_smooth(method = "loess") +
labs(x = "Weekly Learning Time", y = "Science Scores") +
theme_bw() +
theme(legend.title = element_blank()) +
facet_grid(sex ~ Region)
```
**Interpretation:**
* Do you see any regional differences?
* Is there any interaction between gender and region?
### 5\.3\.1 Exercise
Create a scatterplot of socio\-economic status (`ESCS`) and math scores (`math`) across regions (`region`) and gender (`sex`). Use `geom_smooth(method = "lm")` to draw linear regression lines (instead of loess smoothing). Do you think that the relationship between ESCS and math changes across gender and regions?
5\.4 Plots for examining correlations
-------------------------------------
For a simple examination of the correlation between two continuous variables, we could just create a scatterplot. In the following example, we will create scatterplots of family wealth (`WEALTH`) and science scores (`science`) by gender (`sex`) and region (`Region`), using region for facetting and gender for coloring the data points.
```
ggplot(data = dat_small,
mapping = aes(x = WEALTH, y = science)) +
geom_point(aes(color = sex)) +
facet_wrap( ~ Region) +
labs(x = "Family Wealth", y = "Science Scores") +
theme_bw() +
theme(legend.title = element_blank())
```
A more effective way for identifying correlated variables in a dataset for further statistical analyses (also known as feature extraction) is to create a correlation matrix plot. The `ggcorr()` function from the `GGally` package provides a quick way to make a correlation matrix plot. In the following example, we will create a correlation matrix plot for science, math, reading, ICT possession at home, socio\-economic status, family wealth, highest parental education, science self\-efficacy, sense of belonging to school, and grade level.
```
ggcorr(data = dat[,.(science, math, reading, ICTHOME, ESCS,
WEALTH, PARED, SCIEEFF, BELONG, grade)],
method = c("pairwise.complete.obs", "pearson"),
label = TRUE, label_size = 4)
```
5\.5 Plots for examining means by group
---------------------------------------
Let’s assume that we want to see average science scores by gender and country. First, we need to find the average science scores by country and gender and save them in a new dataset. Below we calculate average science scores and N counts by both gender and country and save the dataset as `science_summary`.
```
science_summary <- dat[,
.(Science = mean(science, na.rm = TRUE),
Freq = .N),
by = c("sex", "CNT")]
head(science_summary)
```
```
## sex CNT Science Freq
## 1: Female Australia 498.0 7163
## 2: Male Australia 499.4 7367
## 3: Male Brazil 400.8 11068
## 4: Female Brazil 396.3 12073
## 5: Female Canada 515.3 10022
## 6: Male Canada 517.3 10036
```
Now, we can create a simple bar graph summarizing the average science performance by gender and country, using our new dataset.
```
ggplot(data = science_summary,
mapping = aes(x = CNT, y = Science, fill = sex)) +
geom_bar(stat = "identity", position = "dodge") +
coord_flip() +
labs(x = "", y = "Science Scores", fill = "Gender") +
theme_bw()
```
Despite their ease and simplicity, bar graphs are not necessarily visually appealing. Thus, we will create a bubble chart to visualize the same information in a different way. A bubble chart is essentially a weighted scatterplot where a third variable determines the size of the dots in the plot. In the following bubble chart, we use *Freq* (i.e., the number of students from each country) to determine the size of the dots, using `size = Freq`.
```
ggplot(data = science_summary,
mapping = aes(x = CNT, y = Science, size = Freq, fill = sex)) +
geom_point(shape = 21) +
coord_flip() +
theme_bw() +
labs(x = NULL, y = "Science Scores", fill = "Gender",
size = "Frequency")
```
**Interpretation:**
* Which countries seem to have the highest numbers of students?
* Which countries seem to have the larger achievement gap in science between male and female students?
We can also create a dot plot, which is very similar to the bubble chart when one of the variables is categorical, to convey the same information even more effectively. As you will see, this is a more polished version of the bubble chart with additional titles and subtitles.
```
ggplot(data = science_summary,
       mapping = aes(x = CNT, y = Science, fill = sex)) +
  geom_line(aes(group = CNT)) +
  geom_point(aes(size = Freq), shape = 21) +
  geom_hline(yintercept = 493, linetype = "dashed", color = "red", size = 1) +
  labs(x = NULL, y = "PISA Science Scores", fill = "Gender", size = "Frequency",
       title = "Science Performance by Country and Gender") +
  coord_flip() +
  theme_bw() +
  theme(plot.title = element_text(size = 18, margin = ggplot2::margin(b = 10)),
        plot.subtitle = element_text(size = 10, color = "darkslategrey"))
```
5\.6 Plots for ordinal/categorical variables
--------------------------------------------
An *alluvial plot* can be used to summarize relationships between multiple categorical variables. In the following example, we use region (Region), gender (sex), and a survey item regarding whether parents support educational efforts and achievements (ST123Q02NA). We first create a new dataset called `dat_alluvial` with frequency counts by region, gender, and the survey item. Because the survey item includes missing values, we label them as “Missing” and then recode this variable as a factor with re\-ordered levels.
```
dat_alluvial <- dat[,
.(Freq = .N),
by = c("Region", "sex", "ST123Q02NA")
][,
ST123Q02NA := as.factor(ifelse(ST123Q02NA == "", "Missing", ST123Q02NA))
]
levels(dat_alluvial$ST123Q02NA) <- c("Strongly disagree", "Disagree", "Agree",
"Strongly agree", "Missing")
head(dat_alluvial)
```
```
## Region sex ST123Q02NA Freq
## 1: Australia Female Disagree 232
## 2: Australia Female Strongly disagree 2773
## 3: Australia Female Strongly agree 5981
## 4: Australia Male Strongly disagree 3209
## 5: Australia Male Strongly agree 5626
## 6: Australia Male Missing 186
```
Unlike the previous visualizations, there is a new layer called `geom_alluvium`, which allows creating an alluvial plot using the `ggplot` function. We use `aes(fill = sex)` inside `geom_alluvium` to differentiate the frequencies by gender.
```
# StatStratum <- StatStratum
ggplot(data = dat_alluvial,
aes(axis1 = Region, axis2 = ST123Q02NA, y = Freq)) +
scale_x_discrete(limits = c("Region", "Parents supporting\nachievement"),
expand = c(.1, .05)) +
geom_alluvium(aes(fill = sex)) +
geom_stratum() +
geom_text(stat = "stratum", label.strata = TRUE) +
labs(x = "Demographics", y = "Frequency", fill = "Gender") +
theme_bw()
```
**Interpretation:**
* Does parents’ support for educational efforts and achievement vary by region and gender?
### 5\.6\.1 Exercise
Create an alluvial plot for the survey item (ST119Q01NA) of whether students want top grades in most or all courses, by region (Region) and gender (sex). Below we create the summary dataset (dat\_alluvial2\) for this plot. Use this dataset to draw the alluvial plot. How should we interpret the plot (e.g., for each region)?
```
dat_alluvial2 <- dat[,
.(Freq = .N),
by = c("Region", "sex", "ST119Q01NA")
][,
ST119Q01NA := as.factor(ifelse(ST119Q01NA == "", "Missing", ST119Q01NA))]
levels(dat_alluvial2$ST119Q01NA) <- c("Strongly disagree", "Disagree", "Agree",
"Strongly agree", "Missing")
```
---
5\.7 Interactive plots with `plotly`
------------------------------------
Using the `plotly` package, we can make more interactive visualizations. The `ggplotly` function from the `plotly` package transforms a `ggplot2` plot into an interactive plot in the HTML format. In the following example, we first save a boxplot as `p3` and then insert this plot into the `plotly` function in order to generate an interactive plot. As we hover the pointer over the plot area, the plot shows the min, max, q1, q3, and median values.
```
p3 <- ggplot(data = dat,
mapping = aes(x = CNT, y = science, fill = Region))+
geom_boxplot() +
facet_grid(. ~ sex)+
labs(x = NULL, y = "Science Scores", fill = "Region") +
coord_flip() +
theme_bw()
ggplotly(p3)
```
Similarly, we can transform our bubble chart into an interactive plot using `ggplotly()`.
```
p4 <- ggplot(data = science_summary,
mapping = aes(x = CNT, y = Science, size = Freq, fill = sex)) +
geom_point(shape = 21) +
coord_flip() +
theme_bw() +
labs(x = NULL, y = "Science Scores", fill = "Gender",
size = "Frequency")
ggplotly(p4)
```
We can also use the `plot_ly` function to create interactive visualizations, without using \``ggplot2`. In the following example, we create a scatterplot of reading scores and science scores where the color of the dots will be based on region and the size of the dots will be based on student weight in the PISA database. Because the resulting figure is interactive, we can click on the legend and hide some regions as we review the plot. In addition, we add a hover text (`text = ~paste("Reading: ", reading, '<br>Science:', science)`) into the plot. As we hover on the plot, it will show us a label with reading and science scores.
```
plot_ly(data = dat_small,
x = ~reading, y = ~science, color = ~Region,
size = ~W_FSTUWT,
type = "scatter",
text = ~paste("Reading: ", reading, '<br>Science:', science))
```
Lastly, we create a bar chart showing average science scores by region and gender. We will also include error bars in the plot. First we will create a new dataset `science_region` with the mean and standard deviation values by gender and region. Then, we will use this summary dataset in `plot_ly()` to draw a bar chart for females and save it as `p5`. Finally, we will add a new layer for males using `add_trace`.
```
science_region <- dat[, .(Science = mean(science, na.rm = TRUE),
SD = sd(science, na.rm = TRUE)),
by = c("sex", "Region")]
p5 <- plot_ly(data = science_region[which(science_region$sex == 'Female'),],
x = ~Region,
y = ~Science,
type = 'bar',
name = 'Female',
error_y = ~list(array = SD, color = 'black'))
add_trace(p5, data = science_region[which(science_region$sex == 'Male'),],
name = 'Male')
```
Check out the [plotly](https://plot.ly/r/) website to see more interesting examples of interactive visualizations and dashboards.
### 5\.7\.1 Exercise
Replicate the science\-by\-region histogram below as a density plot and use`plotly` to make it interactive. You will need to replace `geom_histogram(alpha = 0.5, bins = 50)` with `geom_density(alpha = 0.5)`. Repeat the same process by changing `alpha = 0.5` to `alpha = 0.8`. Which version is better for examining the science score distribution?
```
ggplot(data = dat,
mapping = aes(x = science, fill = Region)) +
geom_histogram(alpha = 0.5, bins = 50) +
labs(x = "Science Scores", y = "Count",
title = "Science Scores by Gender and Region") +
facet_grid(. ~ sex) +
theme_bw()
```
5\.8 Customizing visualizations
-------------------------------
Although `ggplot2` has many ways to customize visualizations, sometimes making a plot ready for a publication or a presentation becomes quite tedious. Therefore, we recommend the [cowplot](https://cran.r-project.org/web/packages/cowplot/index.html) package – which is capable of quickly transforming plots created with `ggplot2` into publication\-ready plots. The `cowplot` package provides a nice theme that requires a minimum amount of editing for changing sizes of axis labels, plot backgrounds, etc. In addition, we can add custom annotations to `ggplot2` plots using `cowplot` (see the [cowplot vignette](https://cran.r-project.org/web/packages/cowplot/vignettes/introduction.html) for more details).
On of the plots that we created earlier was a bubble chart by gender and frequency.
```
ggplot(data = science_summary,
mapping = aes(x = CNT, y = Science, size = Freq, fill = sex)) +
geom_point(shape = 21) +
coord_flip() +
theme_bw() +
labs(x = NULL, y = "Science Scores", fill = "Gender",
size = "Frequency")
```
After we load the `cowplot` package and remove `theme_bw` from the plot, it will change as follows:
```
library("cowplot")
plot1 <-
ggplot(data = science_summary,
mapping = aes(x = CNT, y = Science, size = Freq, fill = sex)) +
geom_point(shape = 21) +
coord_flip() +
labs(x = NULL, y = "Science Scores", fill = "Gender",
size = "Frequency")
plot1
```
The `cowplot` package removes the gray background, gridlines, and make the axes more visible. If we want to save the plot, we can export it using `save_plot`.
```
save_plot("plot1.png", plot1,
base_aspect_ratio = 1.6)
```
Also, `cowplot` enables combining two or more plots into one graph via the function `plot_grid`:
```
plot2 <-
ggplot(data = science_summary,
mapping = aes(x = CNT, y = Science, fill = sex)) +
geom_bar(stat = "identity", position = "dodge") +
coord_flip() +
labs(x = "", y = "Science Scores", fill = "Gender")
plot_grid(plot1, plot2, labels = c("A", "B"))
```
If you decide not to use the `cowplot` theme, you can just simply unload the package as follows:
```
detach("package:cowplot", unload=TRUE)
```
5\.9 Lab
--------
We want to examine the relationships between reading scores and technology\-related variables in the `dat` dataset that we created earlier. Create at least two visualizations (either static or interactive) using some of the variables shown below:
* Region
* sex
* grade
* HOMESCH
* ENTUSE
* ICTHOME
* ICTSCH
You can focus on a particular country or region or use the entire dataset for your visualizations.
5\.1 Introduction to `ggplot2`
------------------------------
This section will demonstrate how to visualize your big data using `ggplot2` and other R packages that rely on `ggplot2`. We use `ggplot2` because it is one of the most elegant and versatile visualization packages in R. Also, it implements a simple grammar of graphics for building a variety of visualizations for either small or large data. This enables creating high\-quality plots for publications and presentations easily, with a minimal amount of adjustment and tweaking.
A typical `ggplot2` template ranges from a few layers to many layers, depending on the complexity of the visualization of interest. Layers generate a plot and plot transformations within the plot. We can combine multiple layers using the \+ operator. Therefore, plots are built step by step by adding new elements in each layer. A simple `ggplot2` template is shown below:
```
ggplot(data = my_data,
mapping = aes(x = var1, y = var2)) +
geom_function()
```
where the `ggplot` function uses the two variables (**var1** and **var2**) from a dataset (**my\_data**), and draws a new plot based on a particular geom function (**geom\_function**). Selecting the variables to be plotted is done through the aesthetic mapping (via the `aes` function). Depending on the aesthetic mapping of interest, we can split the plot, add colors by a group variable, change the labels for each axis, change the font size, and so on. The `ggplot2` package offers many geom functions to draw different types of plots:
* `geom_point` for scatter plots, dot plots, etc.
* `geom_boxplot` for boxplots
* `geom_line` for trend lines, time series, etc.
In addition, functions such as `theme_bw()` and `theme()` enable adjusting the theme elements (e.g., font size, font type, background colors) for a given plot. As we create plots in our examples, we will use some of these theme elements to make our plots look nicer.
An important caveat in visualizing big data is that the size of the dataset (*especially the number of rows*) and the complexity of the plot (e.g., additional lines, colors, facets) influence how quickly and successfully `ggplot2` can render the desired plot. In addition, no reader can absorb the meaning of thousands of individual data points shown in a single visualization. Therefore, in some cases we need to reduce the amount of data before we render the visualization (both strategies are sketched right after the list below). Typically we can achieve this by:
* taking smaller, sometimes random, samples from our big data, or
* summarizing our big data using categorical, group variables (e.g., gender, grade, year).
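As a minimal sketch of both strategies, assuming a large data.table named `dat` with a numeric `science` column and categorical `CNT` and `sex` columns (the dataset we build later in this chapter has this structure):

```
library("data.table")
library("ggplot2")

# Strategy 1: plot a random subsample instead of every row
dat_sample <- dat[sample(nrow(dat), min(nrow(dat), 10000))]
ggplot(data = dat_sample, mapping = aes(x = CNT, y = science)) +
  geom_boxplot() +
  coord_flip() +
  theme_bw()

# Strategy 2: summarize first, then plot the much smaller summary table
dat_summary <- dat[, .(science = mean(science, na.rm = TRUE)), by = .(CNT, sex)]
ggplot(data = dat_summary, mapping = aes(x = CNT, y = science, fill = sex)) +
  geom_bar(stat = "identity", position = "dodge") +
  coord_flip() +
  theme_bw()
```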
---
5\.2 Marginal plots
-------------------
We can use marginal plots to examine the distributions of individual variables in a large dataset. A typical marginal plot is a scatter plot that also has histograms or boxplots in the margins of the x\- and y\-axes. In this section, first we will create histograms and boxplots for the variables in the **pisa** dataset. Then, we will review other options where we will combine multiple variables and different types of plots in a single visualization.
To demonstrate data visualizations, we will first take a subset of the **pisa** dataset by selecting some countries and some variables of interest. The selected variables are shown below.
Table 5\.1: Variables to be used in the data visualizations
| Variable | Description | Variable | Description |
| --- | --- | --- | --- |
| CNT | Country | BELONG | Sense of belonging to school |
| OECD | OECD membership | EMOSUPS | Parents emotional support |
| CNTSTUID | Student ID | HOMESCH | ICT use outside of school for schoolwork |
| W\_FSTUWT | Student weight in the PISA database | ENTUSE | ICT use outside of school leisure |
| ST001D01T | Grade level | ICTHOME | ICT available at home |
| ST004D01T | Gender (female/male) | ICTSCH | ICT availability at school |
| ST011Q04TA | Possessing a computer at home | WEALTH | Family wealth |
| ST011Q05TA | Possessing educational software at home | PARED | Highest parental education in years of schooling |
| ST011Q06TA | Having internet access at home | TMINS | Total learning time per week |
| ST071Q02NA | Additional time spent for learning math | ESCS | Index of economic, social and cultural status |
| ST071Q01NA | Additional time spent for learning science | TDTEACH | Teacher\-directed science instruction |
| ST123Q02NA | Whether parents support educational efforts and achievements | IBTEACH | Inquiry based science instruction |
| ST082Q01NA | Preferring working as part of a team to working alone | TEACHSUP | Teacher support in science classes |
| ST119Q01NA | Wanting top grades in most or all courses | SCIEEFF | Science self\-efficacy |
| ST119Q05NA | Wanting to be the best student in class | math | Students math scores in PISA 2015 |
| ANXTEST | Test anxiety | reading | Students reading scores in PISA 2015 |
| COOPERATE | Enjoying cooperation | science | Students science scores in PISA 2015 |
Here we filter our big data based on a list of countries (which we call `country`), selecting the variables identified in Table [5\.1](visualizing-big-data.html#tab:tab1) as well as the reading, math, and science scales we created earlier.
```
country <- c("United States", "Canada", "Mexico", "B-S-J-G (China)", "Japan",
"Korea", "Germany", "Italy", "France", "Brazil", "Colombia", "Uruguay",
"Australia", "New Zealand", "Jordan", "Israel", "Lebanon")
dat <- pisa[CNT %in% country,
.(CNT, OECD, CNTSTUID, W_FSTUWT, sex, female,
ST001D01T, computer, software, internet,
ST011Q05TA, ST071Q02NA, ST071Q01NA, ST123Q02NA,
ST082Q01NA, ST119Q01NA, ST119Q05NA, ANXTEST,
COOPERATE, BELONG, EMOSUPS, HOMESCH, ENTUSE,
ICTHOME, ICTSCH, WEALTH, PARED, TMINS, ESCS,
TEACHSUP, TDTEACH, IBTEACH, SCIEEFF,
math, reading, science)
]
```
Next, we create additional variables by recoding some of the existing variables. The goal is to create some numerical variables out of the character variables in case we want to use them in the modeling stage.
```
# Let's create additional variables that we will use for visualizations
dat <- dat[, `:=` (
# New grade variable
grade = (as.numeric(sapply(ST001D01T, function(x) {
if(x=="Grade 7") "7"
else if (x=="Grade 8") "8"
else if (x=="Grade 9") "9"
else if (x=="Grade 10") "10"
else if (x=="Grade 11") "11"
else if (x=="Grade 12") "12"
else if (x=="Grade 13") NA_character_
else if (x=="Ungraded") NA_character_}))),
# Total learning time as hours
learning = round(TMINS/60, 0),
# Regions for selected countries
Region = (sapply(CNT, function(x) {
if(x %in% c("Canada", "United States", "Mexico")) "N. America"
else if (x %in% c("Colombia", "Brazil", "Uruguay")) "S. America"
else if (x %in% c("Japan", "B-S-J-G (China)", "Korea")) "Asia"
else if (x %in% c("Germany", "Italy", "France")) "Europe"
else if (x %in% c("Australia", "New Zealand")) "Australia"
else if (x %in% c("Israel", "Jordan", "Lebanon")) "Middle-East"
}))
)]
```
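The nested `ifelse()` calls inside `sapply()` above work, but they loop over every row. As a hedged alternative, a named lookup vector does the same grade recoding in a vectorized way; `grade2` is a hypothetical new column name used here so the original `grade` variable is left untouched:

```
# Sketch: vectorized recoding with a named lookup vector
grade_map <- c("Grade 7" = 7, "Grade 8" = 8, "Grade 9" = 9,
               "Grade 10" = 10, "Grade 11" = 11, "Grade 12" = 12)
# "Grade 13" and "Ungraded" are not in the lookup table, so they become NA
dat[, grade2 := unname(grade_map[as.character(ST001D01T)])]
```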
Now, let’s see the number of rows in the final dataset and print the first few rows of the selected variables.
```
# N count for the final dataset
dat[,.N] # 158,061 rows
```
```
## [1] 158061
```
```
# Let's preview the final data
head(dat)
```
```
## CNT OECD CNTSTUID W_FSTUWT sex female ST001D01T computer software
## 1: Australia Yes 3610676 28.20 Female 1 Grade 10 1 1
## 2: Australia Yes 3611874 28.20 Female 1 Grade 10 1 1
## 3: Australia Yes 3601769 28.20 Female 1 Grade 10 1 1
## 4: Australia Yes 3605996 28.20 Female 1 Grade 10 1 1
## 5: Australia Yes 3608147 33.45 Male 0 Grade 10 1 1
## 6: Australia Yes 3610012 33.45 Male 0 Grade 10 1 1
## internet ST011Q05TA ST071Q02NA ST071Q01NA ST123Q02NA ST082Q01NA
## 1: 1 Yes 0 1 Disagree Disagree
## 2: 1 Yes 1 1 Agree Agree
## 3: 1 Yes NA NA Agree Strongly disagree
## 4: 1 Yes 5 7 Strongly agree Strongly disagree
## 5: 1 Yes 1 1 Agree Agree
## 6: 1 Yes 2 2 Agree Agree
## ST119Q01NA ST119Q05NA ANXTEST COOPERATE BELONG EMOSUPS HOMESCH
## 1: Agree Strongly agree -0.1522 0.2085 0.5073 -2.2547 -0.1686
## 2: Agree Disagree 0.2594 -0.2882 -0.8021 -0.2511 0.0302
## 3: Strongly agree Disagree 2.5493 -1.2109 -2.4078 -1.9895 1.2836
## 4: Strongly agree Strongly agree 0.2563 0.3950 -0.3381 1.0991 -0.0498
## 5: Agree Disagree 0.4517 -1.3606 -0.5050 -1.3298 -0.3355
## 6: Agree Agree 0.5175 0.4252 -0.0099 -0.4263 0.1567
## ENTUSE ICTHOME ICTSCH WEALTH PARED TMINS ESCS TEACHSUP TDTEACH IBTEACH
## 1: -0.7369 4 5 0.0592 12 1400 0.4078 NA NA NA
## 2: -0.1047 9 6 0.7605 12 1100 0.4500 0.3574 0.0615 0.2208
## 3: -1.5403 11 10 -0.1220 11 1960 -0.5889 -1.0718 -0.6102 -0.2198
## 4: 0.0342 10 7 0.9314 15 2450 0.6498 0.6375 0.7979 -0.0282
## 5: 0.2309 NA 7 0.7905 15 1400 0.7675 0.8213 0.1990 1.1477
## 6: 0.6896 10 5 0.7054 15 1400 1.1151 NA NA NA
## SCIEEFF math reading science grade learning Region
## 1: NA 545.9 586.5 589.6 10 23 Australia
## 2: -0.4041 511.6 570.8 557.2 10 18 Australia
## 3: -0.9003 478.6 570.0 569.5 10 33 Australia
## 4: 1.2395 506.1 531.1 529.0 10 41 Australia
## 5: -0.0746 481.9 506.5 504.2 10 23 Australia
## 6: NA 455.0 456.5 472.6 10 23 Australia
```
We want to see the distributions of the science scores across the 17 countries in our final dataset. The first line with `ggplot` creates a layout for our figure, the second line draws a box plot using `geom_boxplot`, the third line with `labs` creates the axis labels, and the last line with `theme_bw` replaces the default theme with a grey background by the dark\-on\-light `ggplot2` theme – which is much better for publications and presentations (see [https://ggplot2\.tidyverse.org/reference/ggtheme.html](https://ggplot2.tidyverse.org/reference/ggtheme.html) for a complete list of themes available in `ggplot2`).
```
ggplot(data = dat, mapping = aes(x = CNT, y = science)) +
geom_boxplot() +
labs(x=NULL, y="Science Scores") +
theme_bw()
```
The resulting plot is hard to read because all the country names on the x\-axis are squeezed together and some of them are not visible. To correct this, we can flip the coordinates of the plot and put the country names on the y\-axis instead. The `coord_flip()` function allows us to achieve that very easily.
```
ggplot(data = dat,
mapping = aes(x = CNT, y = science)) +
geom_boxplot() +
labs(x=NULL, y="Science Scores") +
coord_flip() +
theme_bw()
```
Next, we want to show the mean values in the boxplots, since the line in the middle of each box represents the median, not the mean. As a first step, we calculate the mean science score by country.
```
means <- dat[,
.(science = mean(science)),
by = CNT]
```
Having computed `means` as a reference, we can add a point to each boxplot to show the mean score by country; `stat_summary()` computes and draws these means directly, so we do not even need to join `means` back to the data. We will use `stat_summary()` with the options `colour = "blue", geom = "point"` to create a blue point for the mean. In addition, given that the average science score in PISA 2015 was 493 across all participating countries (see [PISA 2015 Results in Focus](https://www.oecd.org/pisa/pisa-2015-results-in-focus.pdf) for more details), we can add a reference line into our plot to identify the average score, which allows us to visually examine which countries are above or below it. To achieve this, we use the `geom_hline` function and specify where it should intersect the plot (i.e., `yintercept = 493`). We also want the reference line to be a red, dashed line with a thickness of 1 – to make it more visible in the plot. Finally, to facilitate the interpretation of the plot, we want the boxplots to be ordered based on the average score for each country, and thus we add `reorder(CNT, science)` into the mapping.
```
ggplot(data = dat,
mapping = aes(x = reorder(CNT, science), y = science)) +
geom_boxplot() +
stat_summary(fun.y = mean, colour = "blue", geom = "point",
shape = 18, size = 3) +
labs(x=NULL, y="Science Scores") +
coord_flip() +
geom_hline(yintercept = 493, linetype="dashed", color = "red", size = 1) +
theme_bw()
```
Now let’s add some colors to our figure based on the region where each country is located. In order to do this, we use the region variable to fill the boxplots with color, using `fill = Region`.
```
ggplot(data = dat,
mapping = aes(x = reorder(CNT, science), y = science, fill = Region)) +
geom_boxplot() +
labs(x=NULL, y="Science Scores") +
coord_flip() +
geom_hline(yintercept = 493, linetype="dashed", color = "red", size = 1) +
theme_bw()
```
### 5\.2\.1 Exercise
Create a plot of **math** scores over countries with different colors based on region. You need to modify the R code below by replacing `geom_boxplot` with:
* `geom_point(aes(color = Region))`, and then
* `geom_violin(aes(color = Region))`.
How long did it take to create both plots? Which one is a better way to visualize this type of data?
```
ggplot(data = dat,
mapping = aes(x = reorder(CNT, math), y = math, fill = Region)) +
geom_boxplot() +
labs(x=NULL, y="Math Scores") +
coord_flip() +
geom_hline(yintercept = 490, linetype="dashed", color = "red", size = 1) +
theme_bw()
```
We can also create histograms (or density plots) for a particular variable and split the plot into multiple panels using a categorical grouping variable. In the following example, we use `fill = Region` in the mapping in order to distinguish the different regions in the distribution of the science scores. In addition, we use `facet_grid(. ~ sex)` to generate separate histograms by gender. Note that we also added `title = "Science Scores by Gender and Region"` as a title in the `labs` function.
```
ggplot(data = dat,
mapping = aes(x = science, fill = Region)) +
geom_histogram(alpha = 0.5, bins = 50) +
labs(x = "Science Scores", y = "Count",
title = "Science Scores by Gender and Region") +
facet_grid(. ~ sex) +
theme_bw()
```
If we are interested in visualizing multiple variables, plotting each variable individually can be time consuming. Therefore, we can use the `ggpairs` function from the `GGally` package to build a more complex, diagnostic plot for multiple variables.
In the following example, we plot reading, science, and math scores as well as gender (i.e., sex) in the same plot. Because our dataset is quite large, plotting all the data points would result in a highly complex plot where most data points would overlap on each other. Therefore, we will take a random sample of 500 cases from each region defined in the data, save this smaller dataset as `dat_small`, and use this dataset inside the `ggpairs` function. We colorize each variable by region (using `mapping = aes(color = Region)`). The resulting plot shows density plots for the continuous variables (by region), a stacked bar chart for gender, and box plots for the continuous variables by region and gender.
```
# Random sample of 500 students from each region
dat_small <- dat[,.SD[sample(.N, min(500,.N))], by = Region]
ggpairs(data = dat_small,
mapping = aes(color = Region),
columns = c("reading", "science", "math", "sex"),
upper = list(continuous = wrap("cor", size = 2.5))
)
```
**Interpretation:**
* What can we say about the regions based on the plots above?
* Do you see any major gender differences for reading, science, or math?
* What is the relationship among reading, science, or math?
---
5\.3 Conditional plots
----------------------
When we deal with continuous variables, an effective way to understand the relationship between the variables is to produce conditional plots, such as scatterplots, dotplots, and bubble charts. Simple scatterplots in base R can be created using `plot(var2 ~ var1, data = name_of_dataset)`. Using the extended capabilities of `ggplot2` via the `ggExtra` package, we can combine histograms and density plots with scatterplots and visualize them together.
In the following example, we first create a scatterplot of learning time per week and science scores using `ggplot`. We use `geom_point` to draw a plot with points and `geom_smooth(method = "loess")` to add a regression line with loess smoothing (i.e., **Lo**cally **E**stimated **S**catterplot **S**moothing). We save this plot as `p1` and then pass it to `ggMarginal` to transform the plot into a marginal scatterplot. Inside `ggMarginal`, we use `type = "histogram"` to create histograms for learning time per week and science scores on the x and y axes of the plot. Note that as the plot is created, you may see some warning messages, such as “Removed 750 rows containing missing values,” because some variables have missing rows in the dataset.
```
p1 <- ggplot(data = dat_small,
mapping = aes(x = learning, y = science)) +
geom_point() +
geom_smooth(method = "loess") +
labs(x = "Weekly Learning Time", y = "Science Scores") +
theme_bw()
# Replace "histogram" with "boxplot" or "density" for other types
ggMarginal(p1, type = "histogram")
```
We can also distinguish male and female students in the plot and create a scatterplot of learning time and science scores with densities by gender. To achieve this, we add `colour = sex` into the mapping of `ggplot` and change the type of plot to `type = "density"` in `ggMarginal`. In addition, we use `groupColour = TRUE, groupFill = TRUE` inside `ggMarginal` to use separate colors for each gender in the density plots.
```
p2 <- ggplot(data = dat_small,
mapping = aes(x = learning, y = science,
colour = sex)) +
geom_point() +
geom_smooth(method = "loess") +
labs(x = "Weekly Learning Time", y = "Science Scores") +
theme_bw() +
theme(legend.position = "bottom",
legend.title = element_blank())
ggMarginal(p2, type = "density", groupColour = TRUE, groupFill = TRUE)
```
**Interpretation:**
* What can we say about the relationship between weekly learning time and science scores?
* Do you see any gender differences?
Now let’s incorporate more variables into the plot. This time we are not going to use marginal plots. Instead, we will create a regular scatterplot but add other layers to represent additional variables. In the following example, we examine the relationship between students’ weekly learning time (learning) and science scores (science) across regions (region) and gender (sex). Adding `fill = Region` into the mapping will allow us to draw regression lines by regions, while adding `aes(colour = sex)` into `geom_point` will allow us to use different colors for male and female students in the plot.
```
ggplot(data = dat_small,
mapping = aes(x = learning, y = science, fill = Region)) +
geom_point(aes(colour = sex)) +
geom_smooth(method = "loess") +
labs(x = "Weekly Learning Time", y = "Science Scores") +
theme_bw()
```
The resulting scatterplot is nice but it is hard to compare the results clearly between gender groups and regions. To improve the interpretability of the plot, we will use the faceting option. This will allow us to split the scatterplot into multiple plots based on gender and region. In the following example, we examine the relationship between students’ learning time and science scores across regions and gender. We use `facet_grid(sex ~ Region)` to split the plots into multiple rows based on gender and multiple columns based on region.
```
ggplot(data = dat_small,
mapping = aes(x = learning, y = science)) +
geom_point() +
geom_smooth(method = "loess") +
labs(x = "Weekly Learning Time", y = "Science Scores") +
theme_bw() +
theme(legend.title = element_blank()) +
facet_grid(sex ~ Region)
```
**Interpretation:**
* Do you see any regional differences?
* Is there any interaction between gender and region?
### 5\.3\.1 Exercise
Create a scatterplot of socio\-economic status (`ESCS`) and math scores (`math`) across regions (`Region`) and gender (`sex`). Use `geom_smooth(method = "lm")` to draw linear regression lines (instead of loess smoothing). Do you think that the relationship between ESCS and math changes across gender and regions?
5\.4 Plots for examining correlations
-------------------------------------
For a simple examination of the correlation between two continuous variables, we could just create a scatterplot. In the following plot, we create scatterplots of family wealth (`WEALTH`) and science scores (`science`) by gender (`sex`) and region (`Region`). We use region for facetting and gender for coloring the data points.
```
ggplot(data = dat_small,
mapping = aes(x = WEALTH, y = science)) +
geom_point(aes(color = sex)) +
facet_wrap( ~ Region) +
labs(x = "Family Wealth", y = "Science Scores") +
theme_bw() +
theme(legend.title = element_blank())
```
A more effective way of identifying correlated variables in a dataset for further statistical analyses (a step closely related to feature selection) is to create a correlation matrix plot. The `ggcorr()` function from the `GGally` package provides a quick way to make a correlation matrix plot. In the following example, we will create a correlation matrix plot for science, math, reading, ICT possession at home, socio\-economic status, family wealth, highest parental education, science self\-efficacy, sense of belonging to school, and grade level.
```
ggcorr(data = dat[,.(science, math, reading, ICTHOME, ESCS,
WEALTH, PARED, SCIEEFF, BELONG, grade)],
method = c("pairwise.complete.obs", "pearson"),
label = TRUE, label_size = 4)
```
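If the exact coefficients behind the color\-coded plot are needed, the same pairwise Pearson correlations can be computed directly with base R's `cor()`; a small sketch using the same columns:

```
# Numeric counterpart of the ggcorr() plot above
cor_vars <- dat[, .(science, math, reading, ICTHOME, ESCS,
                    WEALTH, PARED, SCIEEFF, BELONG, grade)]
round(cor(cor_vars, use = "pairwise.complete.obs", method = "pearson"), 2)
```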
5\.5 Plots for examining means by group
---------------------------------------
Let’s assume that we want to see average science scores by gender and country. First, we need to find the average science scores by country and gender and save them in a new dataset. Below we calculate average science scores and N counts by both gender and country and save the dataset as `science_summary`.
```
science_summary <- dat[,
.(Science = mean(science, na.rm = TRUE),
Freq = .N),
by = c("sex", "CNT")]
head(science_summary)
```
```
## sex CNT Science Freq
## 1: Female Australia 498.0 7163
## 2: Male Australia 499.4 7367
## 3: Male Brazil 400.8 11068
## 4: Female Brazil 396.3 12073
## 5: Female Canada 515.3 10022
## 6: Male Canada 517.3 10036
```
Now, we can create a simple bar graph summarizing the average science performance by gender and country, using our new dataset.
```
ggplot(data = science_summary,
mapping = aes(x = CNT, y = Science, fill = sex)) +
geom_bar(stat = "identity", position = "dodge") +
coord_flip() +
labs(x = "", y = "Science Scores", fill = "Gender") +
theme_bw()
```
Despite being easy to create and simple to read, bar graphs are not necessarily visually appealing. Thus, we will create a bubble chart to visualize the same information in a different way. A bubble chart is essentially a weighted scatterplot where a third variable determines the size of the dots in the plot. In the following bubble chart, we use *Freq* (i.e., the number of students from each country) to determine the size of the dots, using `size = Freq`.
```
ggplot(data = science_summary,
mapping = aes(x = CNT, y = Science, size = Freq, fill = sex)) +
geom_point(shape = 21) +
coord_flip() +
theme_bw() +
labs(x = NULL, y = "Science Scores", fill = "Gender",
size = "Frequency")
```
**Interpretation:**
* Which countries seem to have the highest numbers of students?
* Which countries seem to have the larger achievement gap in science between male and female students?
We can also create a dot plot, which is very similar to the bubble chart when one of the variables is categorical, to convey the same information even more effectively. As you will see, this is a more polished version of the bubble chart with additional titles and subtitles.
```
ggplot(data = science_summary, mapping = aes(x = CNT, y = Science, fill = sex)) +
  geom_line(aes(group = CNT)) +
  geom_point(aes(size = Freq), shape = 21) +
  geom_hline(yintercept = 493, linetype = "dashed", color = "red", size = 1) +
  labs(x = NULL, y = "PISA Science Scores", fill = "Gender", size = "Frequency",
       title = "Science Performance by Country and Gender") +
  coord_flip() +
  theme_bw() +
  theme(plot.title = element_text(size = 18, margin = ggplot2::margin(b = 10)),
        plot.subtitle = element_text(size = 10, color = "darkslategrey"))
```
5\.6 Plots for ordinal/categorical variables
--------------------------------------------
An *alluvial plot* can be used to summarize relationships between multiple categorical variables. In the following example, we use region (Region), gender (sex), and a survey item regarding whether parents support educational efforts and achievements (ST123Q02NA). We first create a new dataset called `dat_alluvial` with frequency counts by region, gender, and the survey item. Because the survey item includes missing values, we label them as “Missing” and then recode the variable as a factor with re\-ordered levels.
```
dat_alluvial <- dat[,
.(Freq = .N),
by = c("Region", "sex", "ST123Q02NA")
][,
ST123Q02NA := as.factor(ifelse(ST123Q02NA == "", "Missing", ST123Q02NA))
]
levels(dat_alluvial$ST123Q02NA) <- c("Strongly disagree", "Disagree", "Agree",
"Strongly agree", "Missing")
head(dat_alluvial)
```
```
## Region sex ST123Q02NA Freq
## 1: Australia Female Disagree 232
## 2: Australia Female Strongly disagree 2773
## 3: Australia Female Strongly agree 5981
## 4: Australia Male Strongly disagree 3209
## 5: Australia Male Strongly agree 5626
## 6: Australia Male Missing 186
```
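One caveat about the `levels()<-` assignment above: it renames the existing levels in their current (alphabetical) order rather than reordering them, so labels and categories can end up mismatched. If the goal is only to put the existing response categories into a specific order, a safer pattern is to set the level order when the factor is created. A sketch, meant as a replacement for the `levels()<-` line and assuming the responses are stored as the full label strings:

```
# Safer alternative: reorder (rather than rename) the existing categories
dat_alluvial[, ST123Q02NA := factor(ST123Q02NA,
                                    levels = c("Strongly disagree", "Disagree",
                                               "Agree", "Strongly agree", "Missing"))]
```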
Unlike the previous visualizations, this plot relies on a new layer, `geom_alluvium` (provided by the `ggalluvial` package), which allows creating an alluvial plot within the `ggplot` function. We use `aes(fill = sex)` inside `geom_alluvium` to differentiate the frequencies by gender.
```
# StatStratum <- StatStratum
ggplot(data = dat_alluvial,
aes(axis1 = Region, axis2 = ST123Q02NA, y = Freq)) +
scale_x_discrete(limits = c("Region", "Parents supporting\nachievement"),
expand = c(.1, .05)) +
geom_alluvium(aes(fill = sex)) +
geom_stratum() +
geom_text(stat = "stratum", label.strata = TRUE) +
labs(x = "Demographics", y = "Frequency", fill = "Gender") +
theme_bw()
```
**Interpretation:**
* Does parents’ support for educational efforts and achievement vary by region and gender?
### 5\.6\.1 Exercise
Create an alluvial plot for the survey item (ST119Q01NA) of whether students want top grades in most or all courses by region (Region) and gender (sex). Below we create the summary dataset (dat\_alluvial2\) for this plot. Use this dataset to draw the alluvial plot. How should we interpret the plot (e.g., for each region)?
```
dat_alluvial2 <- dat[,
.(Freq = .N),
by = c("Region", "sex", "ST119Q01NA")
][,
ST119Q01NA := as.factor(ifelse(ST119Q01NA == "", "Missing", ST119Q01NA))]
levels(dat_alluvial2$ST119Q01NA) <- c("Strongly disagree", "Disagree", "Agree",
"Strongly agree", "Missing")
```
---
5\.7 Interactive plots with `plotly`
------------------------------------
Using the `plotly` package, we can make more interactive visualizations. The `ggplotly` function from the `plotly` package transforms a `ggplot2` plot into an interactive plot in HTML format. In the following example, we first save a boxplot as `p3` and then pass this plot to the `ggplotly` function in order to generate an interactive plot. As we hover the pointer over the plot area, the plot shows the minimum, maximum, first quartile (q1), third quartile (q3), and median values.
```
p3 <- ggplot(data = dat,
mapping = aes(x = CNT, y = science, fill = Region))+
geom_boxplot() +
facet_grid(. ~ sex)+
labs(x = NULL, y = "Science Scores", fill = "Region") +
coord_flip() +
theme_bw()
ggplotly(p3)
```
Similarly, we can transform our bubble chart into an interactive plot using `ggplotly()`.
```
p4 <- ggplot(data = science_summary,
mapping = aes(x = CNT, y = Science, size = Freq, fill = sex)) +
geom_point(shape = 21) +
coord_flip() +
theme_bw() +
labs(x = NULL, y = "Science Scores", fill = "Gender",
size = "Frequency")
ggplotly(p4)
```
We can also use the `plot_ly` function to create interactive visualizations without using `ggplot2`. In the following example, we create a scatterplot of reading scores and science scores where the color of the dots is based on region and the size of the dots is based on student weight in the PISA database. Because the resulting figure is interactive, we can click on the legend and hide some regions as we review the plot. In addition, we add hover text (`text = ~paste("Reading: ", reading, '<br>Science:', science)`) to the plot. As we hover over the plot, it shows a label with the reading and science scores.
```
plot_ly(data = dat_small,
x = ~reading, y = ~science, color = ~Region,
size = ~W_FSTUWT,
type = "scatter",
text = ~paste("Reading: ", reading, '<br>Science:', science))
```
Lastly, we create a bar chart showing average science scores by region and gender. We will also include error bars in the plot. First we will create a new dataset `science_region` with the mean and standard deviation values by gender and region. Then, we will use this summary dataset in `plot_ly()` to draw a bar chart for females and save it as `p5`. Finally, we will add a new layer for males using `add_trace`.
```
science_region <- dat[, .(Science = mean(science, na.rm = TRUE),
SD = sd(science, na.rm = TRUE)),
by = c("sex", "Region")]
p5 <- plot_ly(data = science_region[which(science_region$sex == 'Female'),],
x = ~Region,
y = ~Science,
type = 'bar',
name = 'Female',
error_y = ~list(array = SD, color = 'black'))
add_trace(p5, data = science_region[which(science_region$sex == 'Male'),],
name = 'Male')
```
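The same chart can often be produced with a single `plot_ly()` call by mapping gender to `color`, which makes plotly create one trace per group; a hedged sketch:

```
# Sketch: a single plot_ly() call with one bar trace per gender
plot_ly(data = science_region,
        x = ~Region, y = ~Science, color = ~sex,
        type = "bar",
        error_y = ~list(array = SD, color = "black"))
```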
Check out the [plotly](https://plot.ly/r/) website to see more interesting examples of interactive visualizations and dashboards.
### 5\.7\.1 Exercise
Replicate the science\-by\-region histogram below as a density plot and use `plotly` to make it interactive. You will need to replace `geom_histogram(alpha = 0.5, bins = 50)` with `geom_density(alpha = 0.5)`. Repeat the same process by changing `alpha = 0.5` to `alpha = 0.8`. Which version is better for examining the science score distribution?
```
ggplot(data = dat,
mapping = aes(x = science, fill = Region)) +
geom_histogram(alpha = 0.5, bins = 50) +
labs(x = "Science Scores", y = "Count",
title = "Science Scores by Gender and Region") +
facet_grid(. ~ sex) +
theme_bw()
```
5\.8 Customizing visualizations
-------------------------------
Although `ggplot2` has many ways to customize visualizations, sometimes making a plot ready for a publication or a presentation becomes quite tedious. Therefore, we recommend the [cowplot](https://cran.r-project.org/web/packages/cowplot/index.html) package – which is capable of quickly transforming plots created with `ggplot2` into publication\-ready plots. The `cowplot` package provides a nice theme that requires a minimum amount of editing for changing sizes of axis labels, plot backgrounds, etc. In addition, we can add custom annotations to `ggplot2` plots using `cowplot` (see the [cowplot vignette](https://cran.r-project.org/web/packages/cowplot/vignettes/introduction.html) for more details).
One of the plots that we created earlier was a bubble chart by gender and frequency.
```
ggplot(data = science_summary,
mapping = aes(x = CNT, y = Science, size = Freq, fill = sex)) +
geom_point(shape = 21) +
coord_flip() +
theme_bw() +
labs(x = NULL, y = "Science Scores", fill = "Gender",
size = "Frequency")
```
After we load the `cowplot` package and remove `theme_bw` from the plot, it will change as follows:
```
library("cowplot")
plot1 <-
ggplot(data = science_summary,
mapping = aes(x = CNT, y = Science, size = Freq, fill = sex)) +
geom_point(shape = 21) +
coord_flip() +
labs(x = NULL, y = "Science Scores", fill = "Gender",
size = "Frequency")
plot1
```
The `cowplot` package removes the gray background and gridlines, and makes the axes more visible. If we want to save the plot, we can export it using `save_plot`.
```
save_plot("plot1.png", plot1,
base_aspect_ratio = 1.6)
```
Also, `cowplot` enables combining two or more plots into one graph via the function `plot_grid`:
```
plot2 <-
ggplot(data = science_summary,
mapping = aes(x = CNT, y = Science, fill = sex)) +
geom_bar(stat = "identity", position = "dodge") +
coord_flip() +
labs(x = "", y = "Science Scores", fill = "Gender")
plot_grid(plot1, plot2, labels = c("A", "B"))
```
If you decide not to use the `cowplot` theme, you can simply unload the package as follows:
```
detach("package:cowplot", unload=TRUE)
```
5\.9 Lab
--------
We want to examine the relationships between reading scores and technology\-related variables in the `dat` dataset that we created earlier. Create at least two visualizations (either static or interactive) using some of the variables shown below:
* Region
* sex
* grade
* HOMESCH
* ENTUSE
* ICTHOME
* ICTSCH
You can focus on a particular country or region or use the entire dataset for your visualizations.
| Big Data |
okanbulut.github.io | https://okanbulut.github.io/bigdata/supervised-machine-learning---part-i.html |
7 Supervised Machine Learning \- Part I
=======================================
7\.1 Decision Trees
-------------------
Decision trees (also known as classification and regression trees – CART) are an important type of algorithm for predictive modeling and machine learning. In general, the CART approach relies on *stratifying* or *segmenting* the prediction space into a number of simple regions. In order to make regression\-based or classification\-based predictions, we use the mean or the mode of the training observations in the region to which they belong.
A typical decision tree model has the layout of a binary tree. The tree has a root node that represents the starting point of the prediction. There are also decision nodes where we split the data into smaller subsets, and leaf nodes where we make a decision. Each node represents a single input variable (i.e., predictor) and a split point on that variable. The leaf nodes of the tree contain an output variable (i.e., dependent variable) for which we make a prediction. Predictions are made by walking the splits of the tree until arriving at a leaf node and outputting the class value at that leaf node. Figure [7\.1](supervised-machine-learning---part-i.html#fig:fig6-1) shows an example of a decision tree model in the context of a binary dependent variable (accepting or not accepting a new job offer).
Figure 7\.1: An example of the decision tree approach
Although decision trees are not highly competitive with more advanced supervised learning approaches, they are still quite popular in ML applications because they:
* are fast to learn and very fast for making predictions.
* are often accurate for a broad range of problems.
* do not require any special preparation for the data.
* are highly interpretable compared to more complex ML methods (e.g., neural networks).
* are very easy to explain to people as the logic of decision trees closely mirrors human decision\-making.
* can be displayed graphically, and thus are easily interpreted even by a non\-expert.
In a decision tree model, either categorical or continuous variables can be used as the outcome variable, depending on whether we want classification trees (categorical outcomes) or regression trees (continuous outcomes). Decision trees are particularly useful when the predictors interact with each other in how they relate to the outcome variable.
### 7\.1\.1 Regression trees
In regression trees, the following two steps will allow us to create a decision tree model:
1. We divide the prediction space (with several predictors) into distinct and non\-overlapping regions, using a *top\-down*, *greedy* approach – which is also known as *recursive binary splitting*. We begin splitting at the top of the tree and then go down by successively splitting the prediction space into two new branches. This step is completed by dividing the prediction space into high\-dimensional rectangles and minimizing the following equation:
\\\[
RSS\=\\sum\_{i: x\_i \\in R\_1(j,s)}(y\_i\-\\hat{y}\_{R\_1})^2 \+ \\sum\_{i: x\_i \\in R\_2(j,s)}(y\_i\-\\hat{y}\_{R\_2})^2
\\]
where \\(RSS\\) is the residual sum of squares, \\(y\_i\\) is the observed outcome variable for the observations \\(i\=(1,2,3, \\dots, N)\\) in the training data, \\(j\\) indexes the predictor \\(X\_j\\) used for the split, \\(s\\) is the cutpoint on that predictor, \\(\\hat{y}\_{R\_1}\\) is the mean response for the observations in the \\(R\_1(j,s)\\) region of the training data, and \\(\\hat{y}\_{R\_2}\\) is the mean response for the observations in the \\(R\_2(j,s)\\) region of the training data.
2. Once all the regions \\(R\_1, \\dots, R\_J\\) have been created, we predict the response for a given observation using the mean of the observations in the region of the training data to which that observation belongs. (A minimal `rpart` sketch of this procedure follows below.)
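This is essentially what `rpart()` does when `method = "anova"` is selected. A minimal, hedged sketch (`pisa_reg` is a hypothetical dataset with a continuous outcome and a few numeric predictors, mirroring the PISA example used later in this chapter):

```
library("rpart")

# Regression tree: predict a continuous science score from numeric predictors
# (pisa_reg is a hypothetical data set containing these columns)
reg_tree <- rpart(formula = science ~ math + reading + ESCS,
                  data = pisa_reg,
                  method = "anova")  # "anova" = regression tree; splits minimize RSS
printcp(reg_tree)
```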
### 7\.1\.2 Classification trees
A classification tree is very similar to a regression tree, except that the decision tree predicts a qualitative (i.e., categorical) variable rather than a quantitative (i.e., continuous and numerical) variable. The procedure for splitting the data into multiple branches is the same as the one we described for the regression tree above. The only difference is that, instead of using the mean of the observations in a region of the training data, we predict that each observation belongs to the *modal* class (i.e., the most commonly occurring class) of the observations in that region. Also, rather than minimizing \\(RSS\\), we try to minimize the *classification error rate*, which is the fraction of the training observations in a given region that do not belong to the most common class:
\\\[
E \= 1 \- max\_k(\\hat{p}\_{mk})
\\]
where \\(\\hat{p}\_{mk}\\) is the proportion of training observations in the \\(m^{th}\\) region that are from the \\(k^{th}\\) class. However, the classification error rate alone is often not sensitive enough for growing decision trees. Therefore, two other indices are commonly used for the same purpose:
1. **The Gini index**:
\\\[
G \= \\sum\_{k\=1}^{K}\\hat{p}\_{mk}(1\-\\hat{p}\_{mk})
\\]
where \\(K\\) represents the number of classes. This is essentially a measure of total variance across the \\(K\\) classes. A small Gini index indicates that a node contains predominantly observations from a single class.
2. **Entropy**:
\\\[
Entropy \= \-\\sum\_{k\=1}^{K}\\hat{p}\_{mk}\\text{log}\\hat{p}\_{mk}
\\]
Like the Gini index, the entropy will also take on a small value if the \\(m^{th}\\) node is pure.
When building a classification tree, either the Gini index or the entropy is typically used to evaluate the quality of a particular split, as they are more sensitive to the changes in the splits than the classification error rate. Typically, the Gini index is better for minimizing misclassification, while the Entropy is better for exploratory analysis.
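As a small numerical illustration, the sketch below computes the Gini index and the entropy of a node directly from its class proportions (the proportions are made\-up values):

```
# Gini index and entropy for a single node, given its class proportions p_mk
gini    <- function(p) sum(p * (1 - p))
entropy <- function(p) -sum(ifelse(p > 0, p * log(p), 0))

p_pure  <- c(0.95, 0.05)  # an almost pure node
p_mixed <- c(0.50, 0.50)  # a maximally mixed node with two classes

gini(p_pure)      # 0.095
entropy(p_pure)   # 0.199
gini(p_mixed)     # 0.5
entropy(p_mixed)  # 0.693
```

Both measures are close to zero for the nearly pure node and much larger for the mixed node, which is exactly the behavior the splitting algorithm exploits.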
### 7\.1\.3 Pruning decision trees
Sometimes decision trees end up having many branches and nodes, yielding a model that overfits the training data and poorly fits the validation or test data. To eliminate this overfitting problem, we may prefer to have a smaller and more interpretable tree with fewer splits at the cost of a little bias. One strategy to achieve this is to grow a very large tree and then prune it back in order to obtain a *subtree*.
Given a subtree, we can estimate its error in the test or validation data. However, estimating the error for every possible subtree would be computationally too expensive. A more feasible way is to use *cost complexity pruning* by obtaining a sequence of trees indexed by a nonnegative tuning parameter \\(\\alpha\\) – which is also known as the complexity parameter (cp). The cp parameter controls a trade\-off between the subtree’s complexity and its fit to the training data. As the cp parameter increases from zero, branches in the decision tree get pruned in a nested and predictable fashion. To determine the ideal value for the cp parameter, we can try different values of cp in a validation set or use cross\-validation (e.g., the *K*\-fold approach). By checking the error (using RSS, the Gini index, or the entropy, depending on the prediction problem) for different sizes of decision trees, we can determine the ideal point to prune the tree.
7\.2 Decision trees in R
------------------------
In the following example, we will build a classification tree model, using the science scores from PISA 2015\. Using a set of predictors in the **pisa** dataset, we will predict whether students are above or below the mean scale score for science. The average science score in PISA 2015 was 493 across all participating countries (see [PISA 2015 Results in Focus](https://www.oecd.org/pisa/pisa-2015-results-in-focus.pdf) for more details). Using this score as a cut\-off value, we will first create a binary variable called `science_perf` where `science_perf`\= High if a student’s science score is equal to or larger than 493; otherwise `science_perf`\= Low.
```
pisa <- pisa[, science_perf := as.factor(ifelse(science >= 493, "High", "Low"))]
```
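As a quick sanity check of the new variable, we can look at how balanced the two categories are (a sketch; the exact counts depend on the full PISA data loaded earlier):

```
# How balanced are the High and Low categories?
table(pisa$science_perf)
round(prop.table(table(pisa$science_perf)), 3)
```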
In addition, we will subset the students from the United States and Canada and choose some variables (rather than the entire set of variables) to make our example relatively simple and manageable in terms of time. We will use the following variables in our model:
| Label | Description |
| --- | --- |
| WEALTH | Family wealth (WLE) |
| HEDRES | Home educational resources (WLE) |
| ENVAWARE | Environmental Awareness (WLE) |
| ICTRES | ICT Resources (WLE) |
| EPIST | Epistemological beliefs (WLE) |
| HOMEPOS | Home possessions (WLE) |
| ESCS | Index of economic, social and cultural status (WLE) |
| reading | Students’ reading score in PISA 2015 |
| math | Students’ math score in PISA 2015 |
We call this new dataset `pisa_small`.
```
pisa_small <- subset(pisa, CNT %in% c("Canada", "United States"),
select = c(science_perf, WEALTH, HEDRES, ENVAWARE, ICTRES,
EPIST, HOMEPOS, ESCS, reading, math))
```
Before we begin the analysis, we need to install and load all the required packages.
```
decision_packages <- c("caret", "rpart", "rpart.plot", "randomForest", "modelr")
install.packages(decision_packages)
library("caret")
library("rpart")
library("rpart.plot")
library("randomForest")
library("modelr")
# Already installed packages that we will use
library("data.table")
library("dplyr")
library("ggplot2")
```
Next, we will split our dataset into a training dataset and a test dataset. We will train the decision tree on the training data and check its accuracy using the test data. In order to replicate the results later on, we need to set the seed – which will allow us to fix the randomization. Next, we remove the missing cases, save the result as a new dataset, and then use `createDataPartition()` from the `caret` package to create an index for splitting the dataset as 70% to 30%, using `p = 0.7`.
```
# Set the seed before splitting the data
set.seed(442019)
# We need to remove missing cases
pisa_nm <- na.omit(pisa_small)
# Split the data into training and test
index <- createDataPartition(pisa_nm$science_perf, p = 0.7, list = FALSE)
train_dat <- pisa_nm[index, ]
test_dat <- pisa_nm[-index, ]
nrow(train_dat)
```
```
## [1] 16561
```
```
nrow(test_dat)
```
```
## [1] 7097
```
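Because `createDataPartition()` samples within each level of the outcome, the class proportions should be nearly identical in the training and test sets; a quick check (sketch):

```
# Class proportions should match closely between the training and test sets
round(prop.table(table(train_dat$science_perf)), 3)
round(prop.table(table(test_dat$science_perf)), 3)
```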
Alternatively, we could simply create the index using random number generation with `sample.int()`.
```
n <- nrow(pisa_nm)
index <- sample.int(n, size = round(0.7 * n))
```
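Note that, unlike `createDataPartition()`, this simple random split does not stratify by the outcome variable. The corresponding training and test sets would then be created in the same way; the object names below are hypothetical so that the earlier `train_dat` and `test_dat` are not overwritten:

```
# Split using the randomly sampled row indices
train_dat2 <- pisa_nm[index, ]
test_dat2  <- pisa_nm[-index, ]
```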
To build a decision tree model, we will use the `rpart` function from the `rpart` package. In the function, there are several elements:
* `formula = science_perf ~ .` defines the dependent variable (i.e., science\_perf) and the predictors (and `~` is the separator). Because we use `science_perf ~ .`, we use all variables in the dataset (except for science\_perf) as our predictors. We could also write the same formula as `science_perf ~ math + reading + ESCS + ... + WEALTH` by specifying each variable individually.
* `data = train_dat` defines the dataset we are using for the analysis.
* `method = "class"` defines what type of decision tree we are building. `method = "class"` defines a classification tree and `method = "anova"` defines a regression tree.
* `control` is a list of control (i.e., tuning) elements for the decision tree algorithm. `minsplit` defines the minimum number of observations that must exist in a node (default \= 20\); `cp` is the complexity parameter to prune the subtrees that don’t improve the model fit (default \= 0\.01, if `cp` \= 0, then no pruning); `xval` is the number of cross\-validations (default \= 10, if `xval` \= 0, then no cross validation).
* `parms` is a list of optional parameters for the splitting function. `anova` splitting (i.e., regression trees) has no parameters. For `class` splitting (i.e., classification trees), the most important option is the split index – which is either `"gini"` for the Gini index or `"information"` for the entropy. Splitting based on `information` can be slightly slower compared to the Gini index (see the [vignette](https://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf) for more information).
We will start building our decision tree model `dt_fit1` (standing for decision tree fit for model 1\) with no pruning (i.e., `cp = 0`) and no cross\-validation, as we already have a test dataset (i.e., `xval = 0`). We will use the Gini index for the splitting.
```
dt_fit1 <- rpart(formula = science_perf ~ .,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0,
xval = 0),
parms = list(split = "gini"))
```
The estimated model is very likely to have too many nodes because we set `cp = 0`. Because of the many nodes, we will first examine the results graphically before we attempt to print the output. Although the `rpart` package can draw decision tree plots, they are very basic. Therefore, we will use the `rpart.plot` function from the `rpart.plot` package to draw a nicer decision tree plot. Let’s see the results graphically using the default settings of the `rpart.plot` function.
```
rpart.plot(dt_fit1)
```
How does the model look? It is **NOT** very interpretable, is it? We definitely need to prune the trees; otherwise, the model yields a very complex solution with many nodes – which is very likely to overfit the data. In the following model, we use `cp = 0.005`. Remember that as we increase `cp`, the pruning of the model will also increase. The higher the cp value, the shorter the tree, with possibly fewer predictors.
```
dt_fit2 <- rpart(formula = science_perf ~ .,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0.005,
xval = 0),
parms = list(split = "gini"))
rpart.plot(dt_fit2)
```
We could also estimate the same model with the Entropy as the split criterion, `split = "information"`, and the results would be similar (not necessarily the tree itself, but its classification performance).
```
dt_fit2 <- rpart(formula = science_perf ~ .,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0.005,
xval = 0),
parms = list(split = "information"))
```
Now our model is less complex compared to the previous model. In the above decision tree plot, each node shows:
* the predicted class (High or low)
* the predicted probability of the second class (i.e., “Low”)
* the percentage of observations in the node
Let’s play with the colors to make the trees even more distinct. Also, we will adjust which values should be shown in the nodes, using `extra = 8` (see other possible options [HERE](http://www.milbo.org/doc/prp.pdf)). Each node in the new plot shows:
* the predicted class (High or Low)
* the predicted probability of the fitted class
```
rpart.plot(dt_fit2, extra = 8, box.palette = "RdBu", shadow.col = "gray")
```
An alternative way to prune the model is to use the `prune()` function from the `rpart` package. In the following example, we will use our initial complex model `dt_fit1` and prune it.
```
dt_fit1_prune <- prune(dt_fit1, cp = 0.005)
rpart.plot(dt_fit1_prune, extra = 8, box.palette = "RdBu", shadow.col = "gray")
```
which would yield the same model that we estimated. Now let’s print the output of our model using `printcp()`:
```
printcp(dt_fit2)
```
```
##
## Classification tree:
## rpart(formula = science_perf ~ ., data = train_dat, method = "class",
## parms = list(split = "gini"), control = rpart.control(minsplit = 20,
## cp = 0.005, xval = 0))
##
## Variables actually used in tree construction:
## [1] math reading
##
## Root node error: 6461/16561 = 0.39
##
## n= 16561
##
## CP nsplit rel error
## 1 0.746 0 1.00
## 2 0.015 1 0.25
## 3 0.015 3 0.22
## 4 0.008 4 0.21
## 5 0.005 6 0.19
```
In the output, `CP` refers to the complexity parameter, `nsplit` is the number of splits in the decision tree based on the complexity parameter, and `rel error` is the relative error (i.e., \\(1 \- R^2\\)) of the solution. This is the error for predictions of the data that were used to estimate the model. The section of `Variables actually used in tree construction` shows which variables have been used in the final model. In our example, only math and reading have been used. What happened to the other variables?
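To connect these relative errors to an absolute error rate, note that the training misclassification rate equals the final `rel error` times the root node error, roughly \\(0\.19 \\times 0\.39 \\approx 0\.074\\) here. The following is only a quick sanity check, reusing the `dt_fit2` and `train_dat` objects created above; it is not part of the original analysis.
```
# Training misclassification rate of the pruned tree; it should be close to
# the final rel error (0.19) times the root node error (0.39), i.e., ~0.074.
train_pred <- predict(dt_fit2, train_dat, type = "class")
mean(train_pred != train_dat$science_perf)
```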
In addition to `printcp()`, we can use `summary()` to print out more detailed results with all splits.
```
summary(dt_fit2)
```
We don’t print the entire summary output here. Instead, we want to focus on a specific section in the output:
```
Variable importance
math reading ENVAWARE ESCS EPIST HOMEPOS
46 37 5 4 4 4
```
Similarly, `varImp()` from the `caret` package also gives us a similar output:
```
varImp(dt_fit2)
```
```
## Overall
## ENVAWARE 582.079
## EPIST 791.499
## ESCS 427.819
## HEDRES 5.288
## HOMEPOS 17.914
## math 5529.902
## reading 5752.549
## WEALTH 7.573
## ICTRES 0.000
```
Both of these show the importance of the variables for our estimated decision tree model. The larger the values are, the more crucial they are for the model. In our example, math and reading seem to be highly important for the decision tree model, whereas ICTRES is the least important variable. The variables that were not very important for the model are those that were not included in the final model. These variables possibly have very low correlations with our outcome variable, `science_perf`.
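One way to probe that intuition is to correlate each predictor with the outcome. The snippet below is just a rough sketch (not from the original analysis): it recodes `science_perf` as 0/1, which is a simplification, and it reuses the columns of `train_dat` created earlier.
```
# Point-biserial correlations between selected predictors and a 0/1 coding of
# science_perf (1 = High); weak correlations help explain why some variables
# never enter the tree.
science01 <- as.numeric(train_dat$science_perf == "High")
sapply(c("math", "reading", "ESCS", "WEALTH", "HEDRES", "ICTRES"),
       function(v) round(cor(train_dat[[v]], science01), 2))
```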
We can use `rpart.rules` to print out the decision rules from the trees. By default, the output from this function shows, for each decision rule, the predicted probability of the **second** class (i.e., the category “Low” in our example); with `cover = TRUE`, it also shows what percentage of the observations each rule covers.
```
rpart.rules(dt_fit2, cover = TRUE)
```
```
## science_perf cover
## 0.03 when math >= 478 & reading >= 499 53%
## 0.21 when math < 478 & reading >= 539 1%
## 0.34 when math >= 478 & reading is 468 to 499 6%
## 0.40 when math is 458 to 478 & reading is 507 to 539 2%
## 0.68 when math < 458 & reading is 507 to 539 2%
## 0.70 when math >= 478 & reading < 468 3%
## 0.95 when math < 478 & reading < 507 33%
```
Furthermore, we need to check the classification accuracy of the estimated decision tree with the **test** data. Otherwise, it is hard to justify whether or not the estimated decision tree would work accurately for prediction. Below we estimate the predicted classes (either high or low) from the test data by applying the estimated model. First, we obtain model predictions using `predict()` and then turn the results into a data frame called `dt_pred`.
```
dt_pred <- predict(dt_fit2, test_dat) %>%
as.data.frame()
head(dt_pred)
```
```
## High Low
## 1 0.97465 0.02535
## 2 0.05406 0.94594
## 3 0.05406 0.94594
## 4 0.66243 0.33757
## 5 0.97465 0.02535
## 6 0.05406 0.94594
```
This dataset shows each observation’s (i.e., students from the test data) probability of falling into either *high* or *low* categories based on the decision rules that we estimated. We will turn these probabilities into binary classifications, depending on whether or not the probability of “High” is at least \\(50\\%\\). Then, we will compare these estimates with the actual classes in the test data (i.e., `test_dat$science_perf`) in order to create a confusion matrix.
```
dt_pred <- mutate(dt_pred,
science_perf = as.factor(ifelse(High >= 0.5, "High", "Low"))
) %>%
select(science_perf)
confusionMatrix(dt_pred$science_perf, test_dat$science_perf)
```
```
## Confusion Matrix and Statistics
##
## Reference
## Prediction High Low
## High 4076 316
## Low 252 2453
##
## Accuracy : 0.92
## 95% CI : (0.913, 0.926)
## No Information Rate : 0.61
## P-Value [Acc > NIR] : < 2e-16
##
## Kappa : 0.831
##
## Mcnemar's Test P-Value : 0.00821
##
## Sensitivity : 0.942
## Specificity : 0.886
## Pos Pred Value : 0.928
## Neg Pred Value : 0.907
## Prevalence : 0.610
## Detection Rate : 0.574
## Detection Prevalence : 0.619
## Balanced Accuracy : 0.914
##
## 'Positive' Class : High
##
```
The output shows that the overall accuracy is around \\(92\\%\\), sensitivity is \\(94\\%\\), and specificity is \\(89\\%\\). Given that the final tree used only two predictors, this is very good. However, sometimes we do not have predictors that are highly correlated with the outcome variables. In such cases, the model tuning might take much longer.
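As a sanity check, these rates can be reproduced by hand from the confusion matrix printed above, treating “High” as the positive class (the true classes are in the columns):
```
# Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP).
4076 / (4076 + 252)                        # sensitivity, ~0.942
2453 / (2453 + 316)                        # specificity, ~0.886
(4076 + 2453) / (4076 + 316 + 252 + 2453)  # overall accuracy, ~0.92
```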
Let’s assume that we did **NOT** have reading and math in our dataset. We still want to predict `science_perf` using the remaining variables.
```
dt_fit3a <- rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES + EPIST +
HOMEPOS +ESCS,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0.001,
xval = 0),
parms = list(split = "gini"))
rpart.plot(dt_fit3a, extra = 8, box.palette = "RdBu", shadow.col = "gray")
```
Now, let’s change cp to 0\.005\.
```
dt_fit3b <- rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES + EPIST +
HOMEPOS + ESCS,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0.005,
xval = 0),
parms = list(split = "gini"))
rpart.plot(dt_fit3b, extra = 8, box.palette = "RdBu", shadow.col = "gray")
```
Since we also care about the accuracy, sensitivity, and specificity of these models, we can turn this experiment into a small function.
```
decision_check <- function(cp) {
require("rpart")
require("dplyr")
dt <- rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES + EPIST +
HOMEPOS + ESCS,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = cp,
xval = 0),
parms = list(split = "gini"))
dt_pred <- predict(dt, test_dat) %>%
as.data.frame() %>%
mutate(science_perf = as.factor(ifelse(High >= 0.5, "High", "Low"))) %>%
select(science_perf)
cm <- confusionMatrix(dt_pred$science_perf, test_dat$science_perf)
results <- data.frame(cp = cp,
Accuracy = round(cm$overall[1], 3),
Sensitivity = round(cm$byClass[1], 3),
Specificity = round(cm$byClass[2], 3))
return(results)
}
result <- NULL
for(i in seq(from=0.001, to=0.08, by = 0.005)) {
result <- rbind(result, decision_check(cp = i))
}
result <- result[order(result$Accuracy, result$Sensitivity, result$Specificity),]
result
```
```
## cp Accuracy Sensitivity Specificity
## Accuracy9 0.046 0.675 0.947 0.250
## Accuracy10 0.051 0.675 0.947 0.250
## Accuracy11 0.056 0.675 0.947 0.250
## Accuracy12 0.061 0.675 0.947 0.250
## Accuracy13 0.066 0.675 0.947 0.250
## Accuracy14 0.071 0.675 0.947 0.250
## Accuracy15 0.076 0.675 0.947 0.250
## Accuracy3 0.016 0.686 0.757 0.574
## Accuracy4 0.021 0.686 0.757 0.574
## Accuracy5 0.026 0.686 0.757 0.574
## Accuracy6 0.031 0.686 0.757 0.574
## Accuracy7 0.036 0.686 0.757 0.574
## Accuracy8 0.041 0.686 0.757 0.574
## Accuracy1 0.006 0.694 0.850 0.449
## Accuracy2 0.011 0.694 0.850 0.449
## Accuracy 0.001 0.705 0.835 0.502
```
We can also visualize the results using `ggplot2`. First, we will transform the `result` dataset into a long format and then use this new dataset (called `result_long`) in `ggplot()`.
```
result_long <- melt(as.data.table(result),
id.vars = c("cp"),
measure = c("Accuracy", "Sensitivity", "Specificity"),
variable.name = "Index",
value.name = "Value")
ggplot(data = result_long,
mapping = aes(x = cp, y = Value)) +
geom_point(aes(color = Index), size = 3) +
labs(x = "Complexity Parameter", y = "Value") +
theme_bw()
```
In the plot, we see that there is a trade\-off between sensitivity and specificity. Depending on the situation, we may prefer higher sensitivity (e.g., correctly identifying those who have “high” science scores) or higher specificity (e.g., correctly identifying those who have “low” science scores). For example, if we want to know who is performing poorly in science (so that we can design additional instructional materials), we may want the model to identify “low” performers more accurately.
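If we wanted to encode such a preference programmatically, one option is to filter the `result` data frame built above on a minimum accuracy and then rank by the index we care about. This is only a sketch; the 0\.68 accuracy floor is an arbitrary value chosen for illustration.
```
# Among cp values that keep overall accuracy at or above 0.68, pick the one
# with the highest specificity (i.e., best at flagging "Low" performers).
result %>%
  filter(Accuracy >= 0.68) %>%
  arrange(desc(Specificity)) %>%
  head(1)
```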
### 7\.2\.1 Cross\-validation
As you may remember, we set `xval = 0` in our decision tree models because we did not want to run any cross\-validation samples. However, cross\-validations (e.g., *K*\-fold approach) are highly useful when we do not have a test or validation dataset, or our dataset is too small to split into training and test data. A typical way to use cross\-validation in decision trees is to not restrict the complexity parameter (cp) in advance and let cross\-validation suggest where to prune. In the following example, we will assume that our dataset is not too big and thus we want to run 10 cross\-validation samples (i.e., splits) as we build our decision tree model. Note that we use `cp = 0` this time.
```
dt_fit4 <- rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES +
EPIST + HOMEPOS + ESCS,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0,
xval = 10),
parms = list(split = "gini"))
```
In the results, we can evaluate the cross\-validated error (i.e., X\-val Relative Error) and choose the complexity parameter that would give us an acceptable value. Then, we can use this cp value and prune the trees. We use the `plotcp()` function to visualize the cross\-validation results.
```
printcp(dt_fit4)
```
```
##
## Classification tree:
## rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES +
## EPIST + HOMEPOS + ESCS, data = train_dat, method = "class",
## parms = list(split = "gini"), control = rpart.control(minsplit = 20,
## cp = 0, xval = 10))
##
## Variables actually used in tree construction:
## [1] ENVAWARE EPIST ESCS HEDRES HOMEPOS ICTRES WEALTH
##
## Root node error: 6461/16561 = 0.39
##
## n= 16561
##
## CP nsplit rel error xerror xstd
## 1 7.7e-02 0 1.00 1.00 0.0097
## 2 4.4e-02 2 0.85 0.85 0.0094
## 3 1.2e-02 3 0.80 0.80 0.0092
## 4 5.6e-03 5 0.78 0.79 0.0092
## 5 2.6e-03 7 0.77 0.78 0.0092
## 6 2.3e-03 9 0.76 0.78 0.0092
## 7 1.9e-03 11 0.76 0.78 0.0092
## 8 1.9e-03 13 0.75 0.78 0.0092
## 9 1.7e-03 16 0.75 0.78 0.0092
## 10 1.5e-03 18 0.74 0.78 0.0092
## 11 1.5e-03 24 0.73 0.78 0.0092
## 12 1.4e-03 30 0.73 0.78 0.0092
## 13 1.2e-03 34 0.72 0.77 0.0091
## 14 1.1e-03 35 0.72 0.77 0.0091
## 15 1.1e-03 47 0.70 0.77 0.0091
## 16 8.5e-04 52 0.70 0.77 0.0091
## 17 8.3e-04 54 0.70 0.78 0.0092
## 18 7.7e-04 57 0.69 0.78 0.0092
## 19 7.2e-04 69 0.68 0.78 0.0092
## 20 6.7e-04 72 0.68 0.78 0.0092
## 21 6.2e-04 84 0.67 0.78 0.0092
## 22 5.8e-04 101 0.66 0.79 0.0092
## 23 5.7e-04 106 0.66 0.79 0.0092
## 24 5.4e-04 123 0.65 0.79 0.0092
## 25 5.2e-04 148 0.64 0.79 0.0092
## 26 4.9e-04 153 0.63 0.80 0.0092
## 27 4.6e-04 159 0.63 0.80 0.0092
## 28 4.3e-04 198 0.61 0.80 0.0092
## 29 4.1e-04 211 0.60 0.81 0.0092
## 30 4.0e-04 231 0.59 0.81 0.0093
## 31 3.9e-04 254 0.58 0.81 0.0093
## 32 3.6e-04 275 0.57 0.81 0.0093
## 33 3.3e-04 298 0.56 0.81 0.0093
## 34 3.1e-04 310 0.56 0.82 0.0093
## 35 2.7e-04 360 0.54 0.82 0.0093
## 36 2.6e-04 380 0.54 0.83 0.0093
## 37 2.5e-04 399 0.53 0.84 0.0093
## 38 2.3e-04 411 0.53 0.84 0.0094
## 39 2.2e-04 456 0.52 0.84 0.0094
## 40 2.2e-04 467 0.51 0.84 0.0094
## 41 2.1e-04 495 0.50 0.85 0.0094
## 42 1.9e-04 507 0.50 0.85 0.0094
## 43 1.9e-04 521 0.50 0.85 0.0094
## 44 1.7e-04 529 0.50 0.85 0.0094
## 45 1.5e-04 538 0.49 0.87 0.0094
## 46 1.3e-04 632 0.48 0.87 0.0094
## 47 1.2e-04 638 0.48 0.88 0.0095
## 48 1.0e-04 646 0.48 0.88 0.0095
## 49 9.3e-05 667 0.47 0.88 0.0095
## 50 7.7e-05 672 0.47 0.89 0.0095
## 51 6.9e-05 716 0.47 0.89 0.0095
## 52 5.8e-05 725 0.47 0.90 0.0095
## 53 5.2e-05 740 0.47 0.90 0.0095
## 54 3.9e-05 770 0.47 0.90 0.0095
## 55 2.2e-05 782 0.47 0.90 0.0095
## 56 0.0e+00 796 0.47 0.91 0.0095
```
```
plotcp(dt_fit4)
```
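Instead of reading a value off the plot, we could also pick the complexity parameter programmatically. The sketch below applies the common “1\-SE rule” to the `cptable` stored in `dt_fit4`; this is an alternative heuristic rather than the approach taken in the text.
```
# Choose the simplest tree whose cross-validated error (xerror) is within one
# standard error (xstd) of the minimum xerror.
cpt <- dt_fit4$cptable
best <- which.min(cpt[, "xerror"])
threshold <- cpt[best, "xerror"] + cpt[best, "xstd"]
cp_1se <- cpt[min(which(cpt[, "xerror"] <= threshold)), "CP"]
cp_1se
```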
Next, based on the cross\-validation results, we can refit the model with `cp = 0.0039`:
```
dt_fit5 <- rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES +
EPIST + HOMEPOS + ESCS,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0.0039,
xval = 0),
parms = list(split = "gini"))
printcp(dt_fit5)
```
```
##
## Classification tree:
## rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES +
## EPIST + HOMEPOS + ESCS, data = train_dat, method = "class",
## parms = list(split = "gini"), control = rpart.control(minsplit = 20,
## cp = 0.0039, xval = 0))
##
## Variables actually used in tree construction:
## [1] ENVAWARE EPIST ESCS HOMEPOS
##
## Root node error: 6461/16561 = 0.39
##
## n= 16561
##
## CP nsplit rel error
## 1 0.0772 0 1.00
## 2 0.0436 2 0.85
## 3 0.0119 3 0.80
## 4 0.0056 5 0.78
## 5 0.0039 7 0.77
```
```
rpart.plot(dt_fit5, extra = 8, box.palette = "RdBu", shadow.col = "gray")
```
Lastly, for the sake of brevity, we demonstrate a short regression tree example below where we predict math scores (a continuous variable) using the same set of variables. This time we use `method = "anova"` in the `rpart()` function to estimate a regression tree.
Let’s begin with cross\-validation and check how \\(R^2\\) changes depending on the number of splits.
```
rt_fit1 <- rpart(formula = math ~ WEALTH + HEDRES + ENVAWARE +
ICTRES + EPIST + HOMEPOS + ESCS,
data = train_dat,
method = "anova",
control = rpart.control(minsplit = 20,
cp = 0.001,
xval = 10),
parms = list(split = "gini"))
printcp(rt_fit1)
```
```
##
## Regression tree:
## rpart(formula = math ~ WEALTH + HEDRES + ENVAWARE + ICTRES +
## EPIST + HOMEPOS + ESCS, data = train_dat, method = "anova",
## parms = list(split = "gini"), control = rpart.control(minsplit = 20,
## cp = 0.001, xval = 10))
##
## Variables actually used in tree construction:
## [1] ENVAWARE EPIST ESCS HEDRES WEALTH
##
## Root node error: 1e+08/16561 = 6318
##
## n= 16561
##
## CP nsplit rel error xerror xstd
## 1 0.1122 0 1.00 1.00 0.0101
## 2 0.0382 1 0.89 0.89 0.0092
## 3 0.0353 2 0.85 0.86 0.0090
## 4 0.0167 3 0.81 0.82 0.0086
## 5 0.0078 4 0.80 0.80 0.0084
## 6 0.0070 5 0.79 0.79 0.0084
## 7 0.0064 6 0.78 0.79 0.0083
## 8 0.0041 7 0.78 0.78 0.0083
## 9 0.0033 8 0.77 0.78 0.0083
## 10 0.0030 9 0.77 0.78 0.0083
## 11 0.0029 10 0.77 0.78 0.0083
## 12 0.0025 11 0.76 0.77 0.0082
## 13 0.0021 12 0.76 0.77 0.0082
## 14 0.0021 13 0.76 0.77 0.0082
## 15 0.0020 14 0.76 0.77 0.0082
## 16 0.0018 15 0.75 0.77 0.0082
## 17 0.0017 17 0.75 0.77 0.0082
## 18 0.0017 18 0.75 0.77 0.0082
## 19 0.0017 19 0.75 0.77 0.0082
## 20 0.0016 20 0.75 0.77 0.0082
## 21 0.0015 21 0.74 0.77 0.0082
## 22 0.0013 22 0.74 0.77 0.0082
## 23 0.0012 23 0.74 0.76 0.0082
## 24 0.0012 25 0.74 0.76 0.0082
## 25 0.0011 27 0.74 0.76 0.0081
## 26 0.0011 28 0.74 0.76 0.0081
## 27 0.0011 29 0.73 0.76 0.0081
## 28 0.0011 30 0.73 0.76 0.0081
## 29 0.0010 31 0.73 0.76 0.0081
## 30 0.0010 32 0.73 0.76 0.0081
```
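Because `rel error` for a regression tree is \\(1 \- R^2\\) (and `xerror` is its cross\-validated counterpart), we can read the proportion of explained variance straight off this table. A quick back\-of\-the\-envelope calculation with the rounded values printed above:
```
# For the largest tree in the table (32 splits):
1 - 0.73   # apparent R^2, about 0.27
1 - 0.76   # cross-validated R^2, about 0.24
```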
Then, we can adjust our model based on the suggestions from the previous output. Note that we use `extra = 100` in the `rpart.plot()` function to show percentages (*Note*: `rpart.plot` has different *extra* options depending on whether it is a classification or regression tree).
```
rt_fit2 <- rpart(formula = math ~ WEALTH + HEDRES + ENVAWARE +
ICTRES + EPIST + HOMEPOS + ESCS,
data = train_dat,
method = "anova",
control = rpart.control(minsplit = 20,
cp = 0.007,
xval = 0),
parms = list(split = "gini"))
printcp(rt_fit2)
```
```
##
## Regression tree:
## rpart(formula = math ~ WEALTH + HEDRES + ENVAWARE + ICTRES +
## EPIST + HOMEPOS + ESCS, data = train_dat, method = "anova",
## parms = list(split = "gini"), control = rpart.control(minsplit = 20,
## cp = 0.007, xval = 0))
##
## Variables actually used in tree construction:
## [1] ENVAWARE EPIST ESCS
##
## Root node error: 1e+08/16561 = 6318
##
## n= 16561
##
## CP nsplit rel error
## 1 0.1122 0 1.00
## 2 0.0382 1 0.89
## 3 0.0353 2 0.85
## 4 0.0167 3 0.81
## 5 0.0078 4 0.80
## 6 0.0070 5 0.79
## 7 0.0070 6 0.78
```
```
rpart.plot(rt_fit2, extra = 100, box.palette = "RdBu", shadow.col = "gray")
```
To evaluate the model accuracy, we cannot use the classification\-based indices anymore because we built a regression tree, not a classification tree. Two useful measures that we can use for evaluating regression trees are the mean absolute error (MAE) and the root mean square error (RMSE). The `modelr` package has several functions – such as `mae()` and `rmse()` – to evaluate regression\-based models. Using the training and (more importantly) test data, we can evaluate the accuracy of the decision tree model that we estimated above.
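For reference, the two measures are defined as follows, where \\(y\_i\\) is the observed and \\(\\hat{y}\_i\\) the predicted math score for observation \\(i\\), and \\(n\\) is the number of observations:
\\\[
MAE \= \\frac{1}{n}\\sum\_{i\=1}^{n}|y\_i \- \\hat{y}\_i|, \\qquad RMSE \= \\sqrt{\\frac{1}{n}\\sum\_{i\=1}^{n}(y\_i \- \\hat{y}\_i)^2}
\\\]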
```
# Training data
mae(model = rt_fit2, data = train_dat)
```
```
## [1] 56.49
```
```
rmse(model = rt_fit2, data = train_dat)
```
```
## [1] 70.32
```
```
# Test data
mae(model = rt_fit2, data = test_dat)
```
```
## [1] 56.66
```
```
rmse(model = rt_fit2, data = test_dat)
```
```
## [1] 70.42
```
We seem to have slightly less error with the training data than with the test data. Is this finding surprising to you?
---
7\.3 Random Forests
-------------------
Decision trees can sometimes be non\-robust because a small change in the data may cause a significant change in the final estimated tree. Therefore, whenever a decision tree approach is not completely stable, an alternative method – such as **random forests** – can be more suitable for supervised ML applications. Unlike the decision tree approach, where there is a single solution from the same sample, a random forest builds multiple decision trees on bootstrapped sub\-samples of the data and merges them together to get a more accurate and stable prediction.
The underlying mechanism of random forests is very similar to that of decision trees. However, random forests first build lots of bushy trees and then average them to reduce the overall variance. Figure [7\.2](supervised-machine-learning---part-i.html#fig:fig6-2) shows what a random forest would look like with three trees.
Figure 7\.2: An example of random forests approach
Random forests add additional randomness to the model while growing the trees. Instead of searching for the most important feature (i.e., predictor) while splitting a node, the algorithm searches for the best feature among a random subset of features. That is, only a random subset of the features is taken into consideration by the algorithm for splitting a node. This creates a wide diversity among the trees, which generally results in a better model. For example, if there is a strong predictor among a set of predictors, a decision tree would typically rely on this particular predictor to make predictions and build trees. However, random forests force each split to consider only a subset of the predictors – which results in trees that utilize not only the strong predictor but also other predictors that are moderately correlated with the outcome variable.
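In the `randomForest` package, the size of this random subset is controlled by the `mtry` argument, which for classification defaults to the (rounded\-down) square root of the number of predictors. A tiny illustration with our nine predictors:
```
# With 9 predictors, the default number of variables tried at each split is
# floor(sqrt(9)) = 3, which matches the "No. of variables tried at each split"
# line in the randomForest() output shown below.
floor(sqrt(9))
```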
Random forest has nearly the same tuning parameters as a decision tree. Also, like decision trees, random forests can be used for both classification and regression problems. However, there are some differences between the two approaches. Unlike in decision trees, it is easier to control and prevent overfitting in random forests. This is because random forests create random subsets of the features and build much smaller trees using these subsets. Afterwards, it combines the subtrees. It should be noted that this procedure makes random forests computationally slower, depending on how many trees random forest builds. Therefore, it may not be effective for *real\-time* predictions.
The random forest algorithm is used in a lot of different fields, such as banking, the stock market, medicine, and e\-commerce. For example, random forests can be used to detect customers who will use the bank’s services more frequently than others and repay their debt in time. They can also be used to detect fraudulent customers who want to scam the bank. In educational testing, we can use random forests to analyze a student’s assessment history (e.g., test scores, response times, demographic variables, grade level, and so on) to identify whether the student has any learning difficulties. Similarly, we can use examinee\-related variables, test scores, and test administration date to identify whether an examinee is likely to re\-take the test (e.g., TOEFL or GRE) in the future.
7\.4 Random forests in R
------------------------
In R, the `randomForest` and `caret` packages can be used to apply the random forest algorithm to classification and regression problems. The use of the `randomForest()` function is similar to that of `rpart()`. The main elements that we need to define are:
* **formula**: A regression\-like formula defining the dependent variable and the predictors – it is the same as the one for `rpart()`.
* **data**: The dataset that we use to train the model.
* **importance**: If TRUE, then importance of the predictors is assessed in the model.
* **ntree**: Number of trees to grow in the model; we often start with a large number and then reduce it as we adjust the model based on the results. A large number for **ntree** can significantly increase the estimation time for the model.
There are also other elements that we can change depending on whether it is a classification or regression model (see `?randomForest` for more details). In the following example, we will focus on the same classification problem that we used before for decision trees. We initially set `ntree = 1000` to get 1000 trees in total but we will evaluate whether we need all of these trees to have an accurate model.
```
library("randomForest")
library("caret")
rf_fit1 <- randomForest(formula = science_perf ~ .,
data = train_dat,
importance = TRUE, ntree = 1000)
print(rf_fit1)
```
```
##
## Call:
## randomForest(formula = science_perf ~ ., data = train_dat, importance = TRUE, ntree = 1000)
## Type of random forest: classification
## Number of trees: 1000
## No. of variables tried at each split: 3
##
## OOB estimate of error rate: 7.58%
## Confusion matrix:
## High Low class.error
## High 9464 636 0.06297
## Low 619 5842 0.09581
```
In the output, we see the confusion matrix along with the classification error and the out\-of\-bag (OOB) error. The OOB error is a measure of the prediction error of random forests: for each training observation, the prediction error is computed using only the trees that did not include that observation in their bootstrap sample, and these errors are then averaged. The results show that the overall OOB error is around \\(7\.6\\%\\), while the classification error is around \\(6\\%\\) for the *high* category and around \\(10\\%\\) for the *low* category.
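These numbers can be reproduced directly from the printed confusion matrix:
```
# Overall OOB error rate: misclassified cases divided by all training cases.
(636 + 619) / (9464 + 636 + 619 + 5842)   # ~0.076
# Class-wise error rates (the class.error column).
636 / (9464 + 636)                        # "High" class, ~0.063
619 / (619 + 5842)                        # "Low" class, ~0.096
```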
Next, by checking the level error across the number of trees, we can determine the ideal number of trees for our model.
```
plot(rf_fit1)
```
The plot shows that the error level does not go down any further after roughly 50 trees. So, we can run our model again by using `ntree = 50` this time.
```
rf_fit2 <- randomForest(formula = science_perf ~ .,
data = train_dat,
importance = TRUE, ntree = 50)
print(rf_fit2)
```
```
##
## Call:
## randomForest(formula = science_perf ~ ., data = train_dat, importance = TRUE, ntree = 50)
## Type of random forest: classification
## Number of trees: 50
## No. of variables tried at each split: 3
##
## OOB estimate of error rate: 7.95%
## Confusion matrix:
## High Low class.error
## High 9459 641 0.06347
## Low 675 5786 0.10447
```
We can see the overall accuracy of the model (around \\(92\\%\\)) as follows:
```
sum(diag(rf_fit2$confusion)) / nrow(train_dat)
```
```
## [1] 0.9205
```
As we did for the decision trees, we can check the importance of the predictors in the model, using `importance()` and `varImpPlot()`. With `importance()`, we will first extract the importance measures, turn them into a data frame, save the row names as predictor names, and finally sort the data by MeanDecreaseGini (alternatively, you can see the basic output using only `importance(rf_fit2)`).
```
importance(rf_fit2) %>%
as.data.frame() %>%
mutate(Predictors = row.names(.)) %>%
arrange(desc(MeanDecreaseGini))
```
```
## High Low MeanDecreaseAccuracy MeanDecreaseGini Predictors
## math 28.659 33.636 39.132 3483.8 math
## reading 36.009 34.864 47.183 2748.0 reading
## EPIST 1.738 1.235 1.907 362.2 EPIST
## ENVAWARE 4.234 6.218 7.870 292.7 ENVAWARE
## ESCS 5.396 3.215 6.759 281.2 ESCS
## HOMEPOS 6.820 6.219 11.009 218.7 HOMEPOS
## WEALTH 6.796 9.105 10.888 197.7 WEALTH
## ICTRES 5.246 3.812 6.575 161.3 ICTRES
## HEDRES 7.454 1.510 5.714 133.5 HEDRES
```
```
varImpPlot(rf_fit2,
main = "Importance of Variables for Science Performance")
```
The output shows different importance measures for the predictors that we used in the model. `MeanDecreaseAccuracy` indicates how much the classification accuracy (or, for regression, the mean squared error) would deteriorate, on average across all trees, if the values of that variable were permuted, while `MeanDecreaseGini` is the total decrease in node impurities from splitting on the variable, averaged over all trees. In the output, math and reading are the two predictors that seem to influence the model performance substantially, whereas EPIST and HEDRES are the least important variables. `varImpPlot()` presents the same information visually.
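If we prefer a `ggplot2` version of this plot, a minimal sketch (reusing the same `importance(rf_fit2)` output; not part of the original analysis) could look like this:
```
# Bar chart of MeanDecreaseGini, with predictors ordered by importance.
importance(rf_fit2) %>%
  as.data.frame() %>%
  mutate(Predictors = row.names(.)) %>%
  ggplot(mapping = aes(x = reorder(Predictors, MeanDecreaseGini),
                       y = MeanDecreaseGini)) +
  geom_col() +
  coord_flip() +
  labs(x = NULL, y = "Mean Decrease in Gini") +
  theme_bw()
```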
Next, we check the confusion matrix to see the accuracy, sensitivity, and specificity of our model.
```
rf_pred <- predict(rf_fit2, test_dat) %>%
as.data.frame() %>%
mutate(science_perf = as.factor(`.`)) %>%
select(science_perf)
confusionMatrix(rf_pred$science_perf, test_dat$science_perf)
```
```
## Confusion Matrix and Statistics
##
## Reference
## Prediction High Low
## High 4058 274
## Low 270 2495
##
## Accuracy : 0.923
## 95% CI : (0.917, 0.929)
## No Information Rate : 0.61
## P-Value [Acc > NIR] : <2e-16
##
## Kappa : 0.839
##
## Mcnemar's Test P-Value : 0.898
##
## Sensitivity : 0.938
## Specificity : 0.901
## Pos Pred Value : 0.937
## Neg Pred Value : 0.902
## Prevalence : 0.610
## Detection Rate : 0.572
## Detection Prevalence : 0.610
## Balanced Accuracy : 0.919
##
## 'Positive' Class : High
##
```
The results show that the accuracy is quite high (\\(92\\%\\)). Similarly, sensitivity and specificity are also very high. This is not necessarily surprising because we already knew that the math and reading scores are highly correlated with the science performance. Also, our decision tree model yielded very similar results.
Finally, let’s visualize the classification results using `ggplot2`. First, we will create a new dataset called `rf_class` with the predicted and actual classifications (from the test data) based on the random forest model. Then, we will visualize the correct and incorrect classifications using a bar chart and a point plot with jittering.
```
rf_class <- data.frame(actual = test_dat$science_perf,
predicted = rf_pred$science_perf) %>%
mutate(Status = ifelse(actual == predicted, TRUE, FALSE))
ggplot(data = rf_class,
mapping = aes(x = predicted, fill = Status)) +
geom_bar(position = "dodge") +
labs(x = "Predicted Science Performance",
y = "Actual Science Performance") +
theme_bw()
```
```
ggplot(data = rf_class,
mapping = aes(x = predicted, y = actual,
color = Status, shape = Status)) +
geom_jitter(size = 2, alpha = 0.6) +
labs(x = "Predicted Science Performance",
y = "Actual Science Performance") +
theme_bw()
```
Like decision trees, random forests can also be combined with cross\-validation, using the `rfUtilities` package, which works with the objects returned from the `randomForest()` function. Below we show how cross\-validation would work for random forests (output is not shown). Using the `randomForest` object that we estimated earlier (i.e., `rf_fit2`), we can run cross\-validations as follows:
```
install.packages("rfUtilities")
library("rfUtilities")
rf_fit2_cv <- rf.crossValidation(
x = rf_fit2,
xdata = train_dat,
p=0.10, # Proportion of data to test (the rest is training)
n=10, # Number of cross validation samples
ntree = 50)
# Plot cross-validation versus model producers accuracy
par(mfrow=c(1,2))
plot(rf_fit2_cv, type = "cv", main = "CV producers accuracy")
plot(rf_fit2_cv, type = "model", main = "Model producers accuracy")
par(mfrow=c(1,1))
# Plot cross-validation versus model oob
par(mfrow=c(1,2))
plot(rf_fit2_cv, type = "cv", stat = "oob", main = "CV oob error")
plot(rf_fit2_cv, type = "model", stat = "oob", main = "Model oob error")
par(mfrow=c(1,1))
```
* `method = "class"` defines what type of decision tree we are building. `method = "class"` defines a classification tree and `method = "anova"` defines a regression tree.
* `control` is a list of control (i.e., tuning) elements for the decision tree algorithm. `minsplit` defines the minimum number of observations that must exist in a node (default \= 20\); `cp` is the complexity parameter to prune the subtrees that don’t improve the model fit (default \= 0\.01, if `cp` \= 0, then no pruning); `xval` is the number of cross\-validations (default \= 10, if `xval` \= 0, then no cross validation).
* `parms` is a list of optional parameters for the splitting function.`anova` splitting (i.e., regression trees) has no parameters. For `class` splitting (i.e., classification tree), the most important option is the split index – which is either `"gini"` for the Gini index or `"information"` for the Entropy index. Splitting based on `information` can be slightly slower compared to the Gini index (see the [vignette](https://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf) for more information).
We will start building our decision tree model `df_fit1` (standing for decision tree fit for model 1\) with no pruning (i.e., `cp = 0`) and no cross\-validation as we have a test dataset already (i.e., `xval = 0`). We will use the Gini index for the splitting.
```
dt_fit1 <- rpart(formula = science_perf ~ .,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0,
xval = 0),
parms = list(split = "gini"))
```
The estimated model is very likely to have too many nodes because we set `cp = 0`. Due to having many nodes, first we will examine the results graphically, before we attempt to print the output. Although the `rpart` package can draw decision tree plots, they are very basic. Therefore, we will use the `rpart.plot` function from the `rpart.plot` package to draw a nicer decision tree plot. Let’s see the results graphically using the default settings of the `rpart.plot` function.
```
rpart.plot(dt_fit1)
```
How does the model look like? It is **NOT** very interpretable, isn’t it? We definitely need to prune the trees; otherwise the model yields a very complex model with many nodes – which is very likely to overfit the data. In the following model, we use `cp = 0.005`. Remember that as we increase `cp`, the pruning for the model will also increase. The higher the cp value, the shorter the trees with possibly fewer predictors.
```
dt_fit2 <- rpart(formula = science_perf ~ .,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0.005,
xval = 0),
parms = list(split = "gini"))
rpart.plot(dt_fit2)
```
We could also estimate the same model with the Entropy as the split criterion, `split = "information"`, and the results would be similar (not necessarily the tree itself, but its classification performance).
```
dt_fit2 <- rpart(formula = science_perf ~ .,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0.005,
xval = 0),
parms = list(split = "information"))
```
Now our model is less complex compared compared to the previous model. In the above decision tree plot, each node shows:
* the predicted class (High or low)
* the predicted probability of the second class (i.e., “Low”)
* the percentage of observations in the node
Let’s play with the colors to make the trees even more distinct. Also, we will adjust which values should be shown in the nodes, using `extra = 8` (see other possible options [HERE](http://www.milbo.org/doc/prp.pdf)). Each node in the new plot shows:
* the predicted class (High or low)
* the predicted probability of the fitted class
```
rpart.plot(dt_fit2, extra = 8, box.palette = "RdBu", shadow.col = "gray")
```
An alternative way to prune the model is to use the `prune()` function from the `rpart` package. In the following example, we will use our initial complex model `dt_fit1` and prune it.
```
dt_fit1_prune <- prune(dt_fit1, cp = 0.005)
rpart.plot(dt_fit1_prune, extra = 8, box.palette = "RdBu", shadow.col = "gray")
```
which would yield the same model that we estimated. Now let’s print the output of our model using `printcp()`:
```
printcp(dt_fit2)
```
```
##
## Classification tree:
## rpart(formula = science_perf ~ ., data = train_dat, method = "class",
## parms = list(split = "gini"), control = rpart.control(minsplit = 20,
## cp = 0.005, xval = 0))
##
## Variables actually used in tree construction:
## [1] math reading
##
## Root node error: 6461/16561 = 0.39
##
## n= 16561
##
## CP nsplit rel error
## 1 0.746 0 1.00
## 2 0.015 1 0.25
## 3 0.015 3 0.22
## 4 0.008 4 0.21
## 5 0.005 6 0.19
```
In the output, `CP` refers to the complexity parameter, `nsplit` is the number of splits in the decision tree based on the complexity parameter, and `rel error` is the relative error (i.e., \\(1 \- R^2\\)) of the solution. This is the error for predictions of the data that were used to estimate the model. The section of `Variables actually used in tree construction` shows which variables have been used in the final model. In our example, only math and reading have been used. What happened to the other variables?
In addition to `printcp()`, we can use `summary()` to print out more detailed results with all splits.
```
summary(dt_fit2)
```
We don’t print the entire summary output here. Instead, we want to focus on a specific section in the output:
```
Variable importance
math reading ENVAWARE ESCS EPIST HOMEPOS
46 37 5 4 4 4
```
Similarly, `varImp()` from the `caret` package also gives us a similar output:
```
varImp(dt_fit2)
```
```
## Overall
## ENVAWARE 582.079
## EPIST 791.499
## ESCS 427.819
## HEDRES 5.288
## HOMEPOS 17.914
## math 5529.902
## reading 5752.549
## WEALTH 7.573
## ICTRES 0.000
```
Both of these show the importance of the variables for our estimated decision tree model. The larger the values are, the more crucial they are for the model. In our example, math and reading seem to be highly important for the decision tree model, whereas ICTRES is the least important variable. The variables that were not very important for the model are those that were not included in the final model. These variables are possibly have very low correlations with our outcome variable, `science_perf`.
We can use `rpart.rules` to print out the decision rules from the trees. By default, the output from this function shows the probability of the **second** class for each decision/split being made (i.e., the category “low” in our example) and what percent of the observations fall into this category.
```
rpart.rules(dt_fit2, cover = TRUE)
```
```
## science_perf cover
## 0.03 when math >= 478 & reading >= 499 53%
## 0.21 when math < 478 & reading >= 539 1%
## 0.34 when math >= 478 & reading is 468 to 499 6%
## 0.40 when math is 458 to 478 & reading is 507 to 539 2%
## 0.68 when math < 458 & reading is 507 to 539 2%
## 0.70 when math >= 478 & reading < 468 3%
## 0.95 when math < 478 & reading < 507 33%
```
Furthermore, we need to check the classification accuracy of the estimated decision tree with the **test** data. Otherwise, it is hard to justify whether or not the estimated decision tree would work accurately for prediction. Below we estimate the predicted classes (either high or low) from the test data by applying the estimated model.First we obtain model predictions using `predict()` and then turn the results into a data frame called `dt_pred`.
```
dt_pred <- predict(dt_fit2, test_dat) %>%
as.data.frame()
head(dt_pred)
```
```
## High Low
## 1 0.97465 0.02535
## 2 0.05406 0.94594
## 3 0.05406 0.94594
## 4 0.66243 0.33757
## 5 0.97465 0.02535
## 6 0.05406 0.94594
```
This dataset shows each observation’s (i.e., students from the test data) probability of falling into either *high* or *low* categories based on the decision rules that we estimated. We will turn these probabilities into binary classifications, depending on whether or not they are \>\= \\(50\\%\\). Then, we will compare these estimates with the actual classes in the test data (i.e., `test_dat$science_perf`) in order to create a confusion matrix.
```
dt_pred <- mutate(dt_pred,
science_perf = as.factor(ifelse(High >= 0.5, "High", "Low"))
) %>%
select(science_perf)
confusionMatrix(dt_pred$science_perf, test_dat$science_perf)
```
```
## Confusion Matrix and Statistics
##
## Reference
## Prediction High Low
## High 4076 316
## Low 252 2453
##
## Accuracy : 0.92
## 95% CI : (0.913, 0.926)
## No Information Rate : 0.61
## P-Value [Acc > NIR] : < 2e-16
##
## Kappa : 0.831
##
## Mcnemar's Test P-Value : 0.00821
##
## Sensitivity : 0.942
## Specificity : 0.886
## Pos Pred Value : 0.928
## Neg Pred Value : 0.907
## Prevalence : 0.610
## Detection Rate : 0.574
## Detection Prevalence : 0.619
## Balanced Accuracy : 0.914
##
## 'Positive' Class : High
##
```
The output shows that the overall accuracy is around \\(92\\%\\), sensitivit is \\(94\\%\\), and specificity is \\(89\\%\\). For only two variables, this is very good. However, sometimes we do not have predictors that are highly correlated with the outcome variables. In such cases, the model tuning might take much longer.
Let’s assume that we did **NOT** have reading and math in our dataset. We still want to predict `science_perf` using the remaining variables.
```
dt_fit3a <- rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES + EPIST +
HOMEPOS +ESCS,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0.001,
xval = 0),
parms = list(split = "gini"))
rpart.plot(dt_fit3a, extra = 8, box.palette = "RdBu", shadow.col = "gray")
```
Now, let’s change cp to 0\.005\.
```
dt_fit3b <- rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES + EPIST +
HOMEPOS + ESCS,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0.005,
xval = 0),
parms = list(split = "gini"))
rpart.plot(dt_fit3b, extra = 8, box.palette = "RdBu", shadow.col = "gray")
```
Since we also care about the accuracy, sensitivity, and specificity of these models, we can turn this experiment into a small function.
```
decision_check <- function(cp) {
require("rpart")
require("dplyr")
dt <- rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES + EPIST +
HOMEPOS + ESCS,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = cp,
xval = 0),
parms = list(split = "gini"))
dt_pred <- predict(dt, test_dat) %>%
as.data.frame() %>%
mutate(science_perf = as.factor(ifelse(High >= 0.5, "High", "Low"))) %>%
select(science_perf)
cm <- confusionMatrix(dt_pred$science_perf, test_dat$science_perf)
results <- data.frame(cp = cp,
Accuracy = round(cm$overall[1], 3),
Sensitivity = round(cm$byClass[1], 3),
Specificity = round(cm$byClass[2], 3))
return(results)
}
result <- NULL
for(i in seq(from=0.001, to=0.08, by = 0.005)) {
result <- rbind(result, decision_check(cp = i))
}
result <- result[order(result$Accuracy, result$Sensitivity, result$Specificity),]
result
```
```
## cp Accuracy Sensitivity Specificity
## Accuracy9 0.046 0.675 0.947 0.250
## Accuracy10 0.051 0.675 0.947 0.250
## Accuracy11 0.056 0.675 0.947 0.250
## Accuracy12 0.061 0.675 0.947 0.250
## Accuracy13 0.066 0.675 0.947 0.250
## Accuracy14 0.071 0.675 0.947 0.250
## Accuracy15 0.076 0.675 0.947 0.250
## Accuracy3 0.016 0.686 0.757 0.574
## Accuracy4 0.021 0.686 0.757 0.574
## Accuracy5 0.026 0.686 0.757 0.574
## Accuracy6 0.031 0.686 0.757 0.574
## Accuracy7 0.036 0.686 0.757 0.574
## Accuracy8 0.041 0.686 0.757 0.574
## Accuracy1 0.006 0.694 0.850 0.449
## Accuracy2 0.011 0.694 0.850 0.449
## Accuracy 0.001 0.705 0.835 0.502
```
We can also visulize the results using `ggplot2`. First, we wil transform the `result` dataset into a long format and then use this new dataset (called `result_long`) in `ggplot()`.
```
result_long <- melt(as.data.table(result),
id.vars = c("cp"),
measure = c("Accuracy", "Sensitivity", "Specificity"),
variable.name = "Index",
value.name = "Value")
ggplot(data = result_long,
mapping = aes(x = cp, y = Value)) +
geom_point(aes(color = Index), size = 3) +
labs(x = "Complexity Parameter", y = "Value") +
theme_bw()
```
In the plot, we see that there is a trade\-off between sensitivity and specificity. Depending on the situation, we may prefer higher sensitivity (e.g., correctly identifying those who have “high” science scores) or higher specificity (e.g., correctly identifying those who have “low” science scores). For example, if we want to know who is performing poorly in science (so that we can design additional instructional materials), we may want the model to identify “low” performers more accurately.
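If one of these rates matters more than the other, a simple lever (not used in the original workflow above) is the 0\.5 cutoff we applied to the predicted probability of the "High" class. The sketch below is only an illustration; it reuses `dt_fit3b` and `test_dat` from above, and lowering the cutoff trades specificity for sensitivity (and vice versa).
```
# A minimal sketch: vary the probability cutoff for the "High" class
cutoff_check <- function(fit, cutoff) {
  pred <- predict(fit, test_dat) %>%
    as.data.frame() %>%
    mutate(science_perf = as.factor(ifelse(High >= cutoff, "High", "Low"))) %>%
    select(science_perf)
  cm <- confusionMatrix(pred$science_perf, test_dat$science_perf)
  data.frame(cutoff = cutoff,
             Sensitivity = round(cm$byClass[1], 3),
             Specificity = round(cm$byClass[2], 3))
}

# Lower cutoffs favor sensitivity; higher cutoffs favor specificity
do.call(rbind, lapply(c(0.3, 0.5, 0.7), function(p) cutoff_check(dt_fit3b, p)))
```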
### 7\.2\.1 Cross\-validation
As you may remember, we set `xval = 0` in our decision tree models because we did not want to run any cross\-validation samples. However, cross\-validation (e.g., the *K*\-fold approach) is highly useful when we do not have a test or validation dataset, or when our dataset is too small to split into training and test data. A typical way to use cross\-validation in decision trees is to leave the cp (i.e., complexity parameter) unconstrained and perform cross\-validation. In the following example, we will assume that our dataset is not too big and thus we want to run 10 cross\-validation samples (i.e., folds) as we build our decision tree model. Note that we use `cp = 0` this time.
```
dt_fit4 <- rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES +
EPIST + HOMEPOS + ESCS,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0,
xval = 10),
parms = list(split = "gini"))
```
In the results, we can evaluate the cross\-validated error (i.e., X\-val Relative Error) and choose the complexity parameter that would give us an acceptable value. Then, we can use this cp value to prune the tree. We use the `plotcp()` function to visualize the cross\-validation results.
```
printcp(dt_fit4)
```
```
##
## Classification tree:
## rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES +
## EPIST + HOMEPOS + ESCS, data = train_dat, method = "class",
## parms = list(split = "gini"), control = rpart.control(minsplit = 20,
## cp = 0, xval = 10))
##
## Variables actually used in tree construction:
## [1] ENVAWARE EPIST ESCS HEDRES HOMEPOS ICTRES WEALTH
##
## Root node error: 6461/16561 = 0.39
##
## n= 16561
##
## CP nsplit rel error xerror xstd
## 1 7.7e-02 0 1.00 1.00 0.0097
## 2 4.4e-02 2 0.85 0.85 0.0094
## 3 1.2e-02 3 0.80 0.80 0.0092
## 4 5.6e-03 5 0.78 0.79 0.0092
## 5 2.6e-03 7 0.77 0.78 0.0092
## 6 2.3e-03 9 0.76 0.78 0.0092
## 7 1.9e-03 11 0.76 0.78 0.0092
## 8 1.9e-03 13 0.75 0.78 0.0092
## 9 1.7e-03 16 0.75 0.78 0.0092
## 10 1.5e-03 18 0.74 0.78 0.0092
## 11 1.5e-03 24 0.73 0.78 0.0092
## 12 1.4e-03 30 0.73 0.78 0.0092
## 13 1.2e-03 34 0.72 0.77 0.0091
## 14 1.1e-03 35 0.72 0.77 0.0091
## 15 1.1e-03 47 0.70 0.77 0.0091
## 16 8.5e-04 52 0.70 0.77 0.0091
## 17 8.3e-04 54 0.70 0.78 0.0092
## 18 7.7e-04 57 0.69 0.78 0.0092
## 19 7.2e-04 69 0.68 0.78 0.0092
## 20 6.7e-04 72 0.68 0.78 0.0092
## 21 6.2e-04 84 0.67 0.78 0.0092
## 22 5.8e-04 101 0.66 0.79 0.0092
## 23 5.7e-04 106 0.66 0.79 0.0092
## 24 5.4e-04 123 0.65 0.79 0.0092
## 25 5.2e-04 148 0.64 0.79 0.0092
## 26 4.9e-04 153 0.63 0.80 0.0092
## 27 4.6e-04 159 0.63 0.80 0.0092
## 28 4.3e-04 198 0.61 0.80 0.0092
## 29 4.1e-04 211 0.60 0.81 0.0092
## 30 4.0e-04 231 0.59 0.81 0.0093
## 31 3.9e-04 254 0.58 0.81 0.0093
## 32 3.6e-04 275 0.57 0.81 0.0093
## 33 3.3e-04 298 0.56 0.81 0.0093
## 34 3.1e-04 310 0.56 0.82 0.0093
## 35 2.7e-04 360 0.54 0.82 0.0093
## 36 2.6e-04 380 0.54 0.83 0.0093
## 37 2.5e-04 399 0.53 0.84 0.0093
## 38 2.3e-04 411 0.53 0.84 0.0094
## 39 2.2e-04 456 0.52 0.84 0.0094
## 40 2.2e-04 467 0.51 0.84 0.0094
## 41 2.1e-04 495 0.50 0.85 0.0094
## 42 1.9e-04 507 0.50 0.85 0.0094
## 43 1.9e-04 521 0.50 0.85 0.0094
## 44 1.7e-04 529 0.50 0.85 0.0094
## 45 1.5e-04 538 0.49 0.87 0.0094
## 46 1.3e-04 632 0.48 0.87 0.0094
## 47 1.2e-04 638 0.48 0.88 0.0095
## 48 1.0e-04 646 0.48 0.88 0.0095
## 49 9.3e-05 667 0.47 0.88 0.0095
## 50 7.7e-05 672 0.47 0.89 0.0095
## 51 6.9e-05 716 0.47 0.89 0.0095
## 52 5.8e-05 725 0.47 0.90 0.0095
## 53 5.2e-05 740 0.47 0.90 0.0095
## 54 3.9e-05 770 0.47 0.90 0.0095
## 55 2.2e-05 782 0.47 0.90 0.0095
## 56 0.0e+00 796 0.47 0.91 0.0095
```
```
plotcp(dt_fit4)
```
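Instead of re\-fitting the model from scratch with a new cp value, we could also prune the fully grown tree directly with the `prune()` function from `rpart`. A minimal sketch, assuming we keep the cp value suggested by the table above:
```
# Prune the fully grown tree back to the chosen complexity parameter
dt_pruned <- prune(dt_fit4, cp = 0.0039)
rpart.plot(dt_pruned, extra = 8, box.palette = "RdBu", shadow.col = "gray")
```
Re\-fitting with the same cp value, as we do below, yields an equivalent tree.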
Next, we can modify our model as follows:
```
dt_fit5 <- rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES +
EPIST + HOMEPOS + ESCS,
data = train_dat,
method = "class",
control = rpart.control(minsplit = 20,
cp = 0.0039,
xval = 0),
parms = list(split = "gini"))
printcp(dt_fit5)
```
```
##
## Classification tree:
## rpart(formula = science_perf ~ WEALTH + HEDRES + ENVAWARE + ICTRES +
## EPIST + HOMEPOS + ESCS, data = train_dat, method = "class",
## parms = list(split = "gini"), control = rpart.control(minsplit = 20,
## cp = 0.0039, xval = 0))
##
## Variables actually used in tree construction:
## [1] ENVAWARE EPIST ESCS HOMEPOS
##
## Root node error: 6461/16561 = 0.39
##
## n= 16561
##
## CP nsplit rel error
## 1 0.0772 0 1.00
## 2 0.0436 2 0.85
## 3 0.0119 3 0.80
## 4 0.0056 5 0.78
## 5 0.0039 7 0.77
```
```
rpart.plot(dt_fit5, extra = 8, box.palette = "RdBu", shadow.col = "gray")
```
Lastly, for the sake of brevity, we demonstrate a short regression tree example below where we predict math scores (a continuous variable) using the same set of variables. This time we use `method = "anova"` in the `rpart()` function to estimate a regression tree.
Let’s begin with cross\-validation and check how \\(R^2\\) changes depending on the number of splits.
```
rt_fit1 <- rpart(formula = math ~ WEALTH + HEDRES + ENVAWARE +
ICTRES + EPIST + HOMEPOS + ESCS,
data = train_dat,
method = "anova",
control = rpart.control(minsplit = 20,
cp = 0.001,
xval = 10),
parms = list(split = "gini"))
printcp(rt_fit1)
```
```
##
## Regression tree:
## rpart(formula = math ~ WEALTH + HEDRES + ENVAWARE + ICTRES +
## EPIST + HOMEPOS + ESCS, data = train_dat, method = "anova",
## parms = list(split = "gini"), control = rpart.control(minsplit = 20,
## cp = 0.001, xval = 10))
##
## Variables actually used in tree construction:
## [1] ENVAWARE EPIST ESCS HEDRES WEALTH
##
## Root node error: 1e+08/16561 = 6318
##
## n= 16561
##
## CP nsplit rel error xerror xstd
## 1 0.1122 0 1.00 1.00 0.0101
## 2 0.0382 1 0.89 0.89 0.0092
## 3 0.0353 2 0.85 0.86 0.0090
## 4 0.0167 3 0.81 0.82 0.0086
## 5 0.0078 4 0.80 0.80 0.0084
## 6 0.0070 5 0.79 0.79 0.0084
## 7 0.0064 6 0.78 0.79 0.0083
## 8 0.0041 7 0.78 0.78 0.0083
## 9 0.0033 8 0.77 0.78 0.0083
## 10 0.0030 9 0.77 0.78 0.0083
## 11 0.0029 10 0.77 0.78 0.0083
## 12 0.0025 11 0.76 0.77 0.0082
## 13 0.0021 12 0.76 0.77 0.0082
## 14 0.0021 13 0.76 0.77 0.0082
## 15 0.0020 14 0.76 0.77 0.0082
## 16 0.0018 15 0.75 0.77 0.0082
## 17 0.0017 17 0.75 0.77 0.0082
## 18 0.0017 18 0.75 0.77 0.0082
## 19 0.0017 19 0.75 0.77 0.0082
## 20 0.0016 20 0.75 0.77 0.0082
## 21 0.0015 21 0.74 0.77 0.0082
## 22 0.0013 22 0.74 0.77 0.0082
## 23 0.0012 23 0.74 0.76 0.0082
## 24 0.0012 25 0.74 0.76 0.0082
## 25 0.0011 27 0.74 0.76 0.0081
## 26 0.0011 28 0.74 0.76 0.0081
## 27 0.0011 29 0.73 0.76 0.0081
## 28 0.0011 30 0.73 0.76 0.0081
## 29 0.0010 31 0.73 0.76 0.0081
## 30 0.0010 32 0.73 0.76 0.0081
```
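The cp table above is easier to digest visually. As a small sketch (output not shown), `plotcp()` plots the cross\-validated error against cp, and `rsq.rpart()` plots the approximate \\(R^2\\) and the relative error against the number of splits, which is the pattern we set out to check.
```
# Visualize cross-validated error and approximate R-squared by tree size
plotcp(rt_fit1)
rsq.rpart(rt_fit1)
```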
Then, we can adjust our model based on the suggestions from the previous plot. Note that we use `extra = 100` in the `rpart.plot()` function to show percentages (*Note*: `rpart.plot` has different *extra* options depending on whether it is a classification or regression tree).
```
rt_fit2 <- rpart(formula = math ~ WEALTH + HEDRES + ENVAWARE +
ICTRES + EPIST + HOMEPOS + ESCS,
data = train_dat,
method = "anova",
control = rpart.control(minsplit = 20,
cp = 0.007,
xval = 0),
parms = list(split = "gini"))
printcp(rt_fit2)
```
```
##
## Regression tree:
## rpart(formula = math ~ WEALTH + HEDRES + ENVAWARE + ICTRES +
## EPIST + HOMEPOS + ESCS, data = train_dat, method = "anova",
## parms = list(split = "gini"), control = rpart.control(minsplit = 20,
## cp = 0.007, xval = 0))
##
## Variables actually used in tree construction:
## [1] ENVAWARE EPIST ESCS
##
## Root node error: 1e+08/16561 = 6318
##
## n= 16561
##
## CP nsplit rel error
## 1 0.1122 0 1.00
## 2 0.0382 1 0.89
## 3 0.0353 2 0.85
## 4 0.0167 3 0.81
## 5 0.0078 4 0.80
## 6 0.0070 5 0.79
## 7 0.0070 6 0.78
```
```
rpart.plot(rt_fit2, extra = 100, box.palette = "RdBu", shadow.col = "gray")
```
To evaluate the model accuracy, we cannot use the classification\-based indices anymore because we built a regression tree, not a classification tree. Two useful measures that we can use for evaluating regression trees are the mean absolute error (MAE) and the root mean square error (RMSE). The `modelr` package has several functions – such as `mae()` and `rmse()` – to evaluate regression\-based models. Using the training and (more importantly) test data, we can evaluate the accuracy of the decision tree model that we estimated above.
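Both measures summarize the prediction errors (observed minus predicted math scores): MAE is the average of the absolute errors and RMSE is the square root of the average squared error. The sketch below is only an illustration of what `modelr` computes; it should reproduce the test\-data values reported further down.
```
# Hand-computed MAE and RMSE for the test data (should match modelr's output)
pred_math <- predict(rt_fit2, test_dat)
errors <- test_dat$math - pred_math

mean(abs(errors))    # MAE
sqrt(mean(errors^2)) # RMSE
```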
```
# Training data
mae(model = rt_fit2, data = train_dat)
```
```
## [1] 56.49
```
```
rmse(model = rt_fit2, data = train_dat)
```
```
## [1] 70.32
```
```
# Test data
mae(model = rt_fit2, data = test_dat)
```
```
## [1] 56.66
```
```
rmse(model = rt_fit2, data = test_dat)
```
```
## [1] 70.42
```
We seem to have slightly less error with the training data than the test data. Is this finding surprising to you?
---
7\.3 Random Forests
-------------------
Decision trees can sometimes be non\-robust because a small change in the data may cause a significant change in the final estimated tree. Therefore, whenever a decision tree approach is not completely stable, an alternative method – such as **random forests** – can be more suitable for supervised ML applications. Unlike the decision tree approach, where there is a single solution from the same sample, a random forest builds multiple decision trees from multiple sub\-samples of the data and merges them together to get a more accurate and stable prediction.
The underlying mechanism of random forests is very similar to that of decision trees. However, random forests first build lots of bushy trees and then average them to reduce the overall variance. Figure [7\.2](supervised-machine-learning---part-i.html#fig:fig6-2) shows what a random forest with three trees would look like.
Figure 7\.2: An example of random forests approach
Random forests add additional randomness to the model while growing the trees. Instead of searching for the most important feature (i.e., predictor) when splitting a node, the algorithm searches for the best feature among a random subset of features. That is, only a random subset of the features is considered when splitting a node. This creates a wide diversity among the trees, which generally results in a better model. For example, if there is one strong predictor among a set of predictors, a decision tree would typically rely on this particular predictor to make predictions and build trees. However, random forests force each split to consider only a subset of the predictors – which results in trees that utilize not only the strong predictor but also other predictors that are moderately correlated with the outcome variable.
Random forests have nearly the same tuning parameters as a decision tree. Also, like decision trees, random forests can be used for both classification and regression problems. However, there are some differences between the two approaches. Compared to decision trees, it is easier to control and prevent overfitting in random forests, because random forests create random subsets of the features, build much smaller trees using these subsets, and then combine the subtrees. It should be noted that this procedure makes random forests computationally slower, depending on how many trees the forest builds. Therefore, they may not be effective for *real\-time* predictions.
The random forest algorithm is used in many different fields, such as banking, the stock market, medicine, and e\-commerce. For example, random forests can be used to detect customers who will use a bank’s services more frequently than others and repay their debt on time. They can also be used to detect fraudulent customers who want to scam the bank. In educational testing, we can use random forests to analyze a student’s assessment history (e.g., test scores, response times, demographic variables, grade level, and so on) to identify whether the student has any learning difficulties. Similarly, we can use examinee\-related variables, test scores, and test administration date to identify whether an examinee is likely to re\-take the test (e.g., TOEFL or GRE) in the future.
7\.4 Random forests in R
------------------------
In R, `randomForest` and `caret` packages can be used to apply the random forest algorithm to classification and regression problems. The use of the `randomForest()` function is similar to that of `rpart()`. The main elements that we need to define are:
* **formula**: A regression\-like formula defining the dependent variable and the predictors – it is the same as the one for `rpart()`.
* **data**: The dataset that we use to train the model.
* **importance**: If TRUE, then importance of the predictors is assessed in the model.
* **ntree**: Number of trees to grow in the model; we often start with a large number and then reduce it as we adjust the model based on the results. A large number for **ntree** can significantly increase the estimation time for the model.
There are also other elements that we can change depending on whether it is a classification or regression model (see `?randomForest` for more details). In the following example, we will focus on the same classification problem that we used before for decision trees. We initially set `ntree = 1000` to get 1000 trees in total but we will evaluate whether we need all of these trees to have an accurate model.
```
library("randomForest")
library("caret")
rf_fit1 <- randomForest(formula = science_perf ~ .,
data = train_dat,
importance = TRUE, ntree = 1000)
print(rf_fit1)
```
```
##
## Call:
## randomForest(formula = science_perf ~ ., data = train_dat, importance = TRUE, ntree = 1000)
## Type of random forest: classification
## Number of trees: 1000
## No. of variables tried at each split: 3
##
## OOB estimate of error rate: 7.58%
## Confusion matrix:
## High Low class.error
## High 9464 636 0.06297
## Low 619 5842 0.09581
```
In the output, we see the confusion matrix along with the classification error and the out\-of\-bag (OOB) error. OOB error is a way of measuring the prediction error of random forests: each training observation is predicted using only the trees that did not have that observation in their bootstrap sample, and these prediction errors are then averaged. The results show that the overall OOB error is around \\(7\.6\\%\\), while the classification error is around \\(6\\%\\) for the *High* category and around \\(10\\%\\) for the *Low* category.
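The line "No. of variables tried at each split: 3" in the output corresponds to the `mtry` argument, which is the size of the random predictor subset described in the previous section. We left it at its default (roughly the square root of the number of predictors for classification), but it can be set explicitly; a minimal sketch, with a smaller number of trees to keep it quick:
```
# Illustration only: consider 4 randomly chosen predictors at each split
rf_mtry4 <- randomForest(formula = science_perf ~ .,
                         data = train_dat,
                         importance = TRUE,
                         ntree = 100,
                         mtry = 4)
```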
Next, by checking the level error across the number of trees, we can determine the ideal number of trees for our model.
```
plot(rf_fit1)
```
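The same information is also available numerically in the `err.rate` matrix stored in the fitted object, whose "OOB" column holds the out\-of\-bag error after each additional tree; a short sketch:
```
# OOB error after each additional tree ("OOB" column of err.rate)
head(rf_fit1$err.rate)

# Number of trees with the lowest OOB error
which.min(rf_fit1$err.rate[, "OOB"])
```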
The plot shows that the error level does not go down any further after roughly 50 trees. So, we can run our model again by using `ntree = 50` this time.
```
rf_fit2 <- randomForest(formula = science_perf ~ .,
data = train_dat,
importance = TRUE, ntree = 50)
print(rf_fit2)
```
```
##
## Call:
## randomForest(formula = science_perf ~ ., data = train_dat, importance = TRUE, ntree = 50)
## Type of random forest: classification
## Number of trees: 50
## No. of variables tried at each split: 3
##
## OOB estimate of error rate: 7.95%
## Confusion matrix:
## High Low class.error
## High 9459 641 0.06347
## Low 675 5786 0.10447
```
We can see the overall accuracy of the model (\\(92\.05\\%\\)) as follows:
```
sum(diag(rf_fit2$confusion)) / nrow(train_dat)
```
```
## [1] 0.9205
```
As we did for the decision trees, we can check the importance of the predictors in the model using `importance()` and `varImpPlot()`. With `importance()`, we will first extract the importance measures, turn them into a data frame, save the row names as predictor names, and finally sort the data by `MeanDecreaseGini` (alternatively, you can see the basic output using only `importance(rf_fit2)`).
```
importance(rf_fit2) %>%
as.data.frame() %>%
mutate(Predictors = row.names(.)) %>%
arrange(desc(MeanDecreaseGini))
```
```
## High Low MeanDecreaseAccuracy MeanDecreaseGini Predictors
## math 28.659 33.636 39.132 3483.8 math
## reading 36.009 34.864 47.183 2748.0 reading
## EPIST 1.738 1.235 1.907 362.2 EPIST
## ENVAWARE 4.234 6.218 7.870 292.7 ENVAWARE
## ESCS 5.396 3.215 6.759 281.2 ESCS
## HOMEPOS 6.820 6.219 11.009 218.7 HOMEPOS
## WEALTH 6.796 9.105 10.888 197.7 WEALTH
## ICTRES 5.246 3.812 6.575 161.3 ICTRES
## HEDRES 7.454 1.510 5.714 133.5 HEDRES
```
```
varImpPlot(rf_fit2,
main = "Importance of Variables for Science Performance")
```
The output shows different importance measures for the predictors that we used in the model. `MeanDecreaseAccuracy` represents the decrease in classification accuracy (or the increase in mean squared error for regression) when a predictor’s values are permuted, and `MeanDecreaseGini` represents the total decrease in node impurities from splitting on the variable, averaged over all trees. In the output, math and reading are the two predictors that seem to influence the model performance substantially, whereas EPIST and HEDRES are the least important variables. `varImpPlot()` presents the same information visually.
Next, we check the confusion matrix to see the accuracy, sensitivity, and specificity of our model.
```
rf_pred <- predict(rf_fit2, test_dat) %>%
as.data.frame() %>%
mutate(science_perf = as.factor(`.`)) %>%
select(science_perf)
confusionMatrix(rf_pred$science_perf, test_dat$science_perf)
```
```
## Confusion Matrix and Statistics
##
## Reference
## Prediction High Low
## High 4058 274
## Low 270 2495
##
## Accuracy : 0.923
## 95% CI : (0.917, 0.929)
## No Information Rate : 0.61
## P-Value [Acc > NIR] : <2e-16
##
## Kappa : 0.839
##
## Mcnemar's Test P-Value : 0.898
##
## Sensitivity : 0.938
## Specificity : 0.901
## Pos Pred Value : 0.937
## Neg Pred Value : 0.902
## Prevalence : 0.610
## Detection Rate : 0.572
## Detection Prevalence : 0.610
## Balanced Accuracy : 0.919
##
## 'Positive' Class : High
##
```
The results show that the accuracy is quite high (\\(92\\%\\)). Similarly, sensitivity and specificity are also very high. This is not necessarily surprising because we already knew that the math and reading scores are highly correlated with the science performance. Also, our decision tree model yielded very similar results.
Finally, let’s visualize the classification results using `ggplot2`. First, we will create a new dataset called `rf_class` with the predicted and actual classifications (from the test data) based on the random forest model. Then, we will visualize the correct and incorrect classifications using a bar chart and a point plot with jittering.
```
rf_class <- data.frame(actual = test_dat$science_perf,
predicted = rf_pred$science_perf) %>%
mutate(Status = ifelse(actual == predicted, TRUE, FALSE))
ggplot(data = rf_class,
mapping = aes(x = predicted, fill = Status)) +
geom_bar(position = "dodge") +
labs(x = "Predicted Science Performance",
y = "Actual Science Performance") +
theme_bw()
```
```
ggplot(data = rf_class,
mapping = aes(x = predicted, y = actual,
color = Status, shape = Status)) +
geom_jitter(size = 2, alpha = 0.6) +
labs(x = "Predicted Science Performance",
y = "Actual Science Performance") +
theme_bw()
```
Like decision trees, random forests can also be cross\-validated, using the `rfUtilities` package, which works with the objects returned from the `randomForest()` function. Below we show how cross\-validation would work for random forests (output is not shown). Using the `randomForest` object that we estimated earlier (i.e., `rf_fit2`), we can run the cross\-validation as follows:
```
install.packages("rfUtilities")
library("rfUtilities")
rf_fit2_cv <- rf.crossValidation(
x = rf_fit2,
xdata = train_dat,
p=0.10, # Proportion of data to test (the rest is training)
n=10, # Number of cross validation samples
ntree = 50)
# Plot cross-validation versus model producers accuracy
par(mfrow=c(1,2))
plot(rf_fit2_cv, type = "cv", main = "CV producers accuracy")
plot(rf_fit2_cv, type = "model", main = "Model producers accuracy")
par(mfrow=c(1,1))
# Plot cross-validation versus model OOB error
par(mfrow=c(1,2))
plot(rf_fit2_cv, type = "cv", stat = "oob", main = "CV oob error")
plot(rf_fit2_cv, type = "model", stat = "oob", main = "Model oob error")
par(mfrow=c(1,1))
```
8 Supervised Machine Learning \- Part II
========================================
8\.1 Support Vector Machines
----------------------------
The support vector machine (SVM) is a family of related techniques developed in computer science in the 1980s. They can be used in either a classification or a regression framework, but are principally known for, and applied to, classification, where they are considered one of the best techniques because of their flexibility. Following James et al. (2013\), we will make the distinction here between maximal margin classifiers (basically a support vector classifier with a cost parameter of 0 and a separating hyperplane), support vector classifiers (an SVM with a linear kernel), and support vector machines (which employ non\-linear kernels).
### 8\.1\.1 Maximal Margin Classifier
#### 8\.1\.1\.1 Hyperplane
The hyperplane is a critical concept in SVM; therefore, we need to understand what exactly a hyperplane is in order to understand SVM. A **hyperplane** is a subspace whose dimension is one less than that of the ambient space. Specifically, in a *p*\-dimensional space, a **hyperplane** is a flat affine subspace of dimension *p \- 1*, where affine refers to the fact that the subspace need not pass through the origin.
We define a hyperplane as
\\\[
\\beta\_0 \+ \\beta\_1X\_1 \+ \\beta\_2X\_2 \+ \\dots \+ \\beta\_pX\_p \= 0
\\]
where \\(X\_1, X\_2, \\dots, X\_p\\) are predictors (or *features*). For any observation \\(X \= (X\_1, X\_2, \\dots, X\_p)^T\\) that *satisfies* the above equation, the observation falls directly onto the hyperplane. However, a value of \\(X\\) does not need to fall onto the hyperplane; it could fall on either side of the hyperplane such that either
\\\[
\\beta\_0 \+ \\beta\_1X\_1 \+ \\beta\_2X\_2 \+ \\cdots \+ \\beta\_pX\_p \> 0
\\]
or
\\\[
\\beta\_0 \+ \\beta\_1X\_1 \+ \\beta\_2X\_2 \+ \\cdots \+ \\beta\_pX\_p \< 0
\\]
occurs. In that situation, the value of \\(X\\) lies on one of the two sides of the hyperplane and the hyperplane acts to split the *p*\-dimensional space into two halves.
Figure [8\.1](supervised-machine-learning---part-ii.html#fig:hyper) shows the hyperplane \\(.5 \+ 1X\_1 \+ \-4X\_2 \= 0\\). If we plug values of \\(X\_1\\) and \\(X\_2\\) into this equation, we know, based on the sign alone, whether the point falls on one side of the hyperplane or directly onto it. In Figure [8\.1](supervised-machine-learning---part-ii.html#fig:hyper), all the points in the red region have negative signs (i.e., if we plug their values of \\(X\_1\\) and \\(X\_2\\) into the above equation, the sign will be negative), all the points in the blue region are positive, and any points with no sign lie on the black line (the hyperplane).
Figure 8\.1: The hyperplane, \\(.5 \+ 1X\_1 \+ \-4X\_2 \= 0\\), is black line, the red points occur in the region where \\(.5 \+ 1X\_1 \+ \-4X\_2 \> 0\\), while the blue points occur in the region where \\(.5 \+ 1X\_1 \+ \-4X\_2 \< 0\\).
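As a quick illustration of the sign rule (not part of the original text), we can plug a few arbitrary points into the equation of this hyperplane:
```
# Evaluate 0.5 + 1*X1 - 4*X2 for a few arbitrary points
hyperplane <- function(X1, X2) 0.5 + 1 * X1 - 4 * X2

hyperplane(X1 = 0, X2 = 0)     # 0.5 > 0: one side of the hyperplane
hyperplane(X1 = 0, X2 = 1)     # -3.5 < 0: the other side
hyperplane(X1 = 1.5, X2 = 0.5) # exactly 0: on the hyperplane
```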
We can apply this idea of a hyperplane to classifying observations. We learned earlier how important it is, when applying machine learning techniques, to split our data into training and testing data sets to avoid overfitting. We can split our *n \+ m* by *p* matrix of observations into an *n* by *p* matrix \\(\\mathbf{X}\\) of training observations, which fall into one of two classes \\(y\_1, \\dots, y\_n\\) where \\(y\_i \\in \\{\-1, 1\\}\\), and an *m* by *p* matrix \\(\\mathbf{X^\*}\\) of testing observations. Using just the training data, our goal is to develop a model that will correctly classify our testing data using just a hyperplane, and we will do this by creating a **separating hyperplane** (a hyperplane that separates our classes).
Let’s assume we have the training data in Figure [8\.2](supervised-machine-learning---part-ii.html#fig:hyperex) and that the blue points correspond to one class (labelled as \\(y \= 1\\)) and the red points correspond to the other class (\\(y \= \-1\\)). The separating hyperplane has the property that:
\\\[
\\beta\_0 \+ \\beta\_1x\_{i1} \+ \\beta\_2x\_{i2} \+ \\dots \+ \\beta\_px\_{ip} \> 0 \\quad \\text{if} \\quad y\_i \= 1
\\]
and
\\\[
\\beta\_0 \+ \\beta\_1x\_{i1} \+ \\beta\_2x\_{i2} \+ \\dots \+ \\beta\_px\_{ip} \< 0 \\quad \\text{if} \\quad y\_i \= \-1
\\]
Or more succinctly,
\\\[
y\_i(\\beta\_0 \+ \\beta\_1x\_{i1} \+ \\beta\_2x\_{i2} \+ \\dots \+ \\beta\_px\_{ip}) \> 0
\\]
Ideally, we would create a hyperplane that perfectly separates the classes based on \\(X\_1\\) and \\(X\_2\\). However, as Figure [8\.2](supervised-machine-learning---part-ii.html#fig:hyperex) makes clear, we can create many separating hyperplanes, three of which are shown. In fact, when the classes are perfectly separable, an infinite number of separating hyperplanes can often be created. We therefore need a criterion for selecting one of the many separating hyperplanes.
Figure 8\.2: Candidate hyperplanes to separate the two classes.
For any given hyperplane, we have two pieces of information available for each observation: 1\) the side of the hyperplane it lies on (represented by its sign) and 2\) the distance it is from the hyperplane. The natural criterion for selecting a separating hyperplane is to **maximize the distance** between the hyperplane and the training observations. Therefore, we compute the distance of each training observation from a candidate hyperplane. The minimal such distance from the observations to the hyperplane is known as the **margin**. We then select the hyperplane with the largest margin (the **maximal margin hyperplane**) and classify observations based on which side of this hyperplane they fall on (the **maximal margin classifier**). The hope is that a classifier with a large margin on the training data will also have a large margin on the test observations and subsequently classify well.
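Formally, following the standard formulation in James et al. (2013\), the maximal margin hyperplane is the solution to the optimization problem
\\\[
\\underset{\\beta\_0, \\beta\_1, \\dots, \\beta\_p, M}{\\text{maximize}} \\quad M \\quad \\text{subject to} \\quad \\sum\_{j \= 1}^p \\beta\_j^2 \= 1,
\\]
\\\[
y\_i(\\beta\_0 \+ \\beta\_1x\_{i1} \+ \\beta\_2x\_{i2} \+ \\dots \+ \\beta\_px\_{ip}) \\geq M \\quad \\text{for each } i \= 1, \\dots, n.
\\]
In words: every training observation must lie on the correct side of the hyperplane, at a distance of at least \\(M\\), and \\(M\\) (the margin) is made as large as possible.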
Figure [8\.3](supervised-machine-learning---part-ii.html#fig:mmc) depicts a maximal margin classifier. The red line corresponds to the maximal margin hyperplane and the distance between one of the dotted lines and the black line is the **margin**. The black and white points along the boundary of the margin are the **support vectors**. It is clear in Figure [8\.3](supervised-machine-learning---part-ii.html#fig:mmc) that the maximal margin hyperplane depends only on these support vectors. If they are moved, the maximal margin hyperplane moves; however, if any other observations are moved, they have no effect on this hyperplane *unless* they cross the boundary of the margin.
Figure 8\.3: Maximal margin hyperplane. Source: <https://tinyurl.com/y493pww8>
The problem in practice is that a separating hyperplane usually doesn’t exist. Even if a separating hyperplane existed, we may not want to use the maximal margin hyperplane, as it would perfectly classify all of the training observations and may be too sensitive to individual observations, and subsequently overfit.
Figure [8\.4](supervised-machine-learning---part-ii.html#fig:fig95) from James et al. (2013\) clearly illustrates this problem. The left panel shows the maximal margin hyperplane (solid) in a completely separable solution. The right panel shows that when a new observation is introduced, the maximal margin hyperplane (solid) shifts rather dramatically relative to its original location (dashed).
Figure 8\.4: The impact of adding one observations to the maximal margin hyperplane from James et al. (2013\).
### 8\.1\.2 Support Vector Classifier
Our hope for a hyperplane is that it would be relatively insensitive to individual observations, while still classifying training observations well. That is, we would like to have what is termed a **soft margin classifier** or a **support vector classifier**. Essentially, we are willing to allow some observations to be on the incorrect side of the margin (classified correctly) or even the incorrect side of the hyperplane (incorrectly classified) if our classifier, overall, performs well.
We do this by introducing a tuning parameter, C, which determines the number and the severity of violations to the margin/hyperplane we are willing to tolerate. As C increases, our **tolerance** for violations will increase and subsequently our margin will widen. C thus represents a **bias\-variance tradeoff**: when C is small, bias should be low but variance will likely be high, whereas when C is large, bias is likely high but variance is typically small. C will be selected, optimally, through cross\-validation (as we’ll see later).
The observations that lie on the margin or violate the margin are the only ones that will affect the hyperplane and the classifier (similar to the maximal margin classifier). These observations are the **support vectors** and only they will affect the support vector classifier. When C is large, there will be many support vectors, whereas when C is small, there will be fewer support vectors.
Because the support vector classifier depends only on the support vectors (which could be very few), it is quite **robust to observations that are far** from the hyperplane. In this respect, the technique is similar to logistic regression.
#### 8\.1\.2\.1 Example
In our example, we’ll try and classify whether someone scores at or above the mean on the science scale we created earlier. To do support vector classifiers (and SVMs) in R, we’ll use the `e1071` package (though the `caret` package could be used, too).
```
# check if e1071 is installed
# if not, install it
if (!("e1071" %in% installed.packages()[,"Package"])) {
install.packages("e1071")
library("e1071")
} else {
library("e1071")
}
```
The `svm` function in the `e1071` package requires that the outcome variable is a factor. So, we’ll do a mean split (at the OECD mean of 493\) on the `science` scale and convert it to a factor.
```
pisa[, sci_class := as.factor(ifelse(science >= 493, 1, -1))]
```
While I’m coding this variable as 1 and \-1 to be consistent with the notation above, the specific values don’t matter to the `svm` function. The only thing the `svm` function needs in order to perform classification rather than regression is that the outcome is a factor. If the outcome has just two values, 1 and \-1, but is not a factor, `svm` will perform regression.
We will use the following variables in our model:
| Label | Description |
| --- | --- |
| WEALTH | Family wealth (WLE) |
| HEDRES | Home educational resources (WLE) |
| ENVAWARE | Environmental Awareness (WLE) |
| ICTRES | ICT Resources (WLE) |
| EPIST | Epistemological beliefs (WLE) |
| HOMEPOS | Home possessions (WLE) |
| ESCS | Index of economic, social and cultural status (WLE) |
| reading | Reading score |
| math | Math score |
We’ll subset the variables to make things easier and, so that the model fitting can be performed in a reasonable amount of time in R, we’ll keep only the United States and Canada.
```
pisa_sub <- subset(pisa, CNT %in% c("Canada", "United States"), select = c(sci_class, WEALTH, HEDRES, ENVAWARE, ICTRES, EPIST, HOMEPOS, ESCS, reading, math))
```
To fit a support vector classifier, we use the `svm` function. Before we get started, let’s divide the data set into a training and a testing data set. We will use a 66/33 split, though other splits could be used (e.g., 50/50\).
```
# set a random seed
set.seed(442019)
# svm uses listwise deletion, so we should just drop
# the observations now
pisa_m <- na.omit(pisa_sub)
# select the rows that will go into the training data set.
train <- sample(1:nrow(pisa_m), 2/3 * nrow(pisa_m))
# subset the data based on the rows that were selected to be in training data set.
train_dat <- pisa_m[train, ]
test_dat <- pisa_m[-train, ]
```
To perform support vector classification, we pass the `svm` function the `kernel = "linear"` argument. We also need to specify our tolerance, which is represented by the `cost` argument. The `cost` parameter is essentially the inverse of the tolerance parameter, C, described above. When the `cost` value is low, the tolerance is high (i.e., the margin is wide and there are lots of support vectors) and when the `cost` value is high, the tolerance is low (i.e., narrower margin). By default `cost = 1` and we will tune this parameter via cross\-validation momentarily. For now, we’ll just fit the model.
```
svc_fit <- svm(sci_class ~., data = train_dat, kernel = "linear")
```
We can obtain basic information about our model using the `summary` function.
```
summary(svc_fit)
```
```
##
## Call:
## svm(formula = sci_class ~ ., data = train_dat, kernel = "linear")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 1
## gamma: 0.1111111
##
## Number of Support Vectors: 2782
##
## ( 1390 1392 )
##
##
## Number of Classes: 2
##
## Levels:
## -1 1
```
We see there are 2782 support vectors: 1390 in class \-1 and 1392 in class 1\. We can also plot our model, but we need to specify the two features we want to plot (because our model has nine features). Let’s look at the model with `math` on the y\-axis and `reading` on the x\-axis.
```
plot(svc_fit, data = train_dat, math ~ reading)
```
Figure 8\.5: Support vector classifier plot for all the training data.
In this figure, the red points correspond to observations that belong to class 1 (at/above the mean on science), while the black points correspond to observations that belong to class \-1 (below the mean on science); the Xs are the support vectors, while the Os are the non\-support\-vector observations; the upper triangle (purple) is for class 1, while the lower triangle (blue) is for class \-1\. While the decision boundary looks jagged, it’s just an artifact of the way it’s drawn with this function. We can see that many observations are misclassified (i.e., some red points are in the lower triangle and some black points are in the upper triangle). However, there are a lot of observations shown in this figure and it is difficult to discern the nature of the misclassification.
As was discussed in the section on data visualization, with this many points on a figure it is difficult to evaluate patterns, not to mention that the figure is extremely slow to render. Therefore, let’s take a random sample of 1,000 observations to get a better sense of our classifier. This is shown in Figure [8\.6](supervised-machine-learning---part-ii.html#fig:svcplotran).
```
set.seed(1)
ran_obs <- sample(1:nrow(train_dat), 1000)
plot(svc_fit, data = train_dat[ran_obs, ], math ~ reading)
```
Figure 8\.6: Support vector classifier plot for a random subsample (n \= 1000\) of training observations.
Notice that relatively few points cross the decision boundary (i.e., are misclassified). This looks like the classifier is doing pretty well.
Initially, when we fit the support vector classifier, we used the default cost parameter, but we really should select this parameter through tuning via cross\-validation, as we might be able to do an even better job of classifying. The `e1071` package includes a `tune` function which makes this easy and automatic. It performs the tuning via 10\-fold cross\-validation by default, which is probably a fine tradeoff (see James et al., 2013, for a comparison of *k*\-fold vs. leave\-one\-out cross\-validation). We need to provide the `tune` function with a range of cost values (which again correspond to our tolerance for violating the margin and hyperplane).
```
tune_svc <- tune(svm, sci_class ~., data = train_dat,
kernel="linear",
ranges = list(cost = c(.01, .1, 1, 5, 10)))
```
On my MacBook Pro (2\.6 GHz Intel Core i7 and 16 GB of RAM) it takes approximately 2 minutes to run this. Without the subsetting above, it would take quite a bit longer.
We can view the cross\-validation errors by using the `summary` function on this object.
```
summary(tune_svc)
```
```
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 1
##
## - best performance: 0.07316727
##
## - Detailed performance results:
## cost error dispersion
## 1 0.01 0.07342096 0.005135708
## 2 0.10 0.07354766 0.004985649
## 3 1.00 0.07316727 0.004952085
## 4 5.00 0.07329406 0.004879146
## 5 10.00 0.07335747 0.004887063
```
And then select the best model and view it.
```
best_svc <- tune_svc$best.model
summary(best_svc)
```
```
##
## Call:
## best.tune(method = svm, train.x = sci_class ~ ., data = train_dat,
## ranges = list(cost = c(0.01, 0.1, 1, 5, 10)), kernel = "linear")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 1
## gamma: 0.1111111
##
## Number of Support Vectors: 2782
##
## ( 1390 1392 )
##
##
## Number of Classes: 2
##
## Levels:
## -1 1
```
Next, we write a function to evaluate our classifier; its main argument takes a confusion matrix.
```
#' Evaluate classifier
#'
#' Evaluates a classifier (e.g. SVM, logistic regression)
#' @param tab a confusion matrix
eval_classifier <- function(tab, print = F){
n <- sum(tab)
TP <- tab[2,2]
FN <- tab[2,1]
FP <- tab[1,2]
TN <- tab[1,1]
classify.rate <- (TP + TN) / n
TP.rate <- TP / (TP + FN)
TN.rate <- TN / (TN + FP)
object <- data.frame(accuracy = classify.rate,
sensitivity = TP.rate,
specificity = TN.rate)
object
}
```
The confusion matrix is just a cross\-tabulation of the four possible outcomes (true positives, true negatives, false positives, and false negatives). A confusion matrix for our `best_svc` can be created by:
```
# to create a confusion matrix this order is important!
# observed values first and predict values second!
svc_cm_train <- table(train_dat$sci_class,
predict(best_svc))
svc_cm_train
```
```
##
## -1 1
## -1 5563 606
## 1 550 9053
```
The top\-left are the true negatives, the bottom\-left are the false negatives, the top\-right are the false positives, and the bottom\-right are the true positives. We can request the accuracy (the % of observations that were correctly classified), the sensitivity (the % of observations that were in class 1 that were correctly identified), and specificity (the % of observations that were in class \-1 that were correctly identified) using the `eval_classifier` function.
```
eval_classifier(svc_cm_train)
```
```
## accuracy sensitivity specificity
## 1 0.9267056 0.9427262 0.9017669
```
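To make the link to the confusion matrix explicit, the same three values can be reproduced by hand from the counts in the table above:
```
# Reproduce the indices directly from the confusion matrix counts
(9053 + 5563) / (9053 + 5563 + 606 + 550) # accuracy
9053 / (9053 + 550)                       # sensitivity (class 1)
5563 / (5563 + 606)                       # specificity (class -1)
```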
Performance is pretty good overall. We see that class \-1 (specificity) isn’t classified as well as class 1 (sensitivity). These statistics are likely overly optimistic as we are evaluating our model using the training data (the same data that we used to build our model). How well does the model perform on the testing data?
```
svc_cm_test <- table(test_dat$sci_class,
predict(best_svc, newdata = test_dat))
svc_cm_test
```
```
##
## -1 1
## -1 2780 281
## 1 278 4547
```
```
eval_classifier(svc_cm_test)
```
```
## accuracy sensitivity specificity
## 1 0.9291149 0.9423834 0.9081999
```
Still impressively high! This is a very good classifier indeed. This is likely because `math` and `reading` are so highly correlated with `science` scores.
We can extract the coefficients from our model that make up our decision boundary.
```
beta0 <- best_svc$rho
beta <- drop(t(best_svc$coefs) %*% as.matrix(train_dat[best_svc$index, -1]))
beta0
```
```
## [1] -1.220883
```
```
beta
```
```
## WEALTH HEDRES ENVAWARE ICTRES EPIST
## 0.04398688 -0.24398165 0.36167882 -0.09803825 0.04652237
## HOMEPOS ESCS reading math
## 0.22005477 -0.15065808 188.02960807 196.93421586
```
With more complicated SVMs that use non\-linear kernels, the coefficients no longer have a straightforward interpretation and are generally of little interest when applying these models.
#### 8\.1\.2\.2 Comparison to logistic regression
Support vector classifiers are quite similar to logistic regression. This has to do with them having similar loss functions (the functions used to estimate the parameters). In situations where the classes are well separated, SVMs (more generally) tend to do better than logistic regression, and when they are not well separated, logistic regression tends to do better (James et al., 2013\).
Let’s compare logistic regression to the support vector classifier. We’ll begin by fitting the model
```
lr_fit <- glm(sci_class ~. , data = train_dat, family = "binomial")
```
and then viewing the coefficients.
```
coef(lr_fit)
```
```
## (Intercept) WEALTH HEDRES ENVAWARE ICTRES
## -41.82682653 0.11666541 -0.26667828 0.30159987 -0.13594566
## EPIST HOMEPOS ESCS reading math
## 0.05053261 0.20699211 -0.24568642 0.03917470 0.04651408
```
How does it do relative to our best support vector classifier on the training and the testing data sets? For the training data set
```
lr_cm_train <- table(train_dat$sci_class,
round(predict(lr_fit, type = "response")))
lr_cm_train
```
```
##
## 0 1
## -1 5567 602
## 1 541 9062
```
```
eval_classifier(lr_cm_train)
```
```
## accuracy sensitivity specificity
## 1 0.9275298 0.9436634 0.9024153
```
and then for the testing data set.
```
lr_cm_test <- table(test_dat$sci_class,
round(predict(lr_fit, newdata = test_dat, type = "response")))
lr_cm_test
```
```
##
## 0 1
## -1 2780 281
## 1 275 4550
```
```
eval_classifier(lr_cm_test)
```
```
## accuracy sensitivity specificity
## 1 0.9294953 0.9430052 0.9081999
```
The two models are equivalent out to the hundredths place. Either model would be fine here.
#### 8\.1\.2\.3 Using Apache Spark for machine learning
Apache Spark is also capable of running support vector classifiers. It does this using the `ml_linear_svc` function. The amazing thing about this is that you can use it to run the entire data set (i.e., there is no need to subset out a portion of the countries). If we tried to do this with the `e1071` package it would be very impractical and take forever, but with Apache Spark it is feasible and reasonably quick (just a few minutes).
We’ll again use the `sparklyr` package to interface with Spark and use the `dplyr` package to simplify interacting with Spark.
```
library(sparklyr)
library(dplyr)
```
We first need to establish a connection with Spark and then copy a subsetted PISA data set to Spark.
```
sc <- spark_connect(master = "local")
spark_sub <- subset(pisa,
select = c(sci_class, WEALTH, HEDRES, ENVAWARE, ICTRES,
EPIST, HOMEPOS, ESCS, reading, math))
spark_sub <- na.omit(spark_sub) # can't handle missing data
pisa_tbl <- copy_to(sc, spark_sub, overwrite = TRUE)
```
Now, we’ll let Spark partition the data into a training and a test data set.
```
partition <- pisa_tbl %>%
sdf_partition(training = 2/3, test = 1/3, seed = 442019)
pisa_training <- partition$training
pisa_test <- partition$test
```
We are ready to run the classifier in Spark. Unlike the `svm` function, the regularization parameter here is called `reg_param`; it plays a role similar to `cost` and should, ideally, be selected through tuning like it was for `svm`. Below we simply keep its default value.
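If we did want to experiment with the amount of regularization, we could pass `reg_param` explicitly; the value below is only an illustration, not a tuned choice.
```
# Sketch only: a support vector classifier in Spark with explicit regularization
svc_spark_reg <- pisa_training %>%
  ml_linear_svc(sci_class ~ ., reg_param = 0.01)
```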
```
svc_spark <- pisa_training %>%
ml_linear_svc(sci_class ~ .)
```
We then use the `ml_predict` function to predict the classes.
```
svc_pred <- ml_predict(svc_spark, pisa_training) %>%
select(sci_class, predicted_label) %>%
collect()
```
Then print the confusion matrix and the criteria that we’ve been using to evaluate our models.
```
table(svc_pred)
```
```
## predicted_label
## sci_class -1 1
## -1 145111 12353
## 1 10753 121967
```
```
eval_classifier(table(svc_pred))
```
```
## accuracy sensitivity specificity
## 1 0.9204 0.919 0.9216
```
Again, this is really good. How does it look on the testing data?
```
svc_pred_test <- ml_predict(svc_spark, pisa_test) %>%
select(sci_class, predicted_label) %>%
collect()
```
```
table(svc_pred_test)
```
```
## predicted_label
## sci_class -1 1
## -1 72577 6199
## 1 5438 60953
```
```
eval_classifier(table(svc_pred_test))
```
```
## accuracy sensitivity specificity
## 1 0.9198 0.9181 0.9213
```
Pretty impressive. We can also use Apache Spark to fit a logistic regression using the `ml_logistic_regression` function.
```
spark_lr <- pisa_training %>%
ml_logistic_regression(sci_class ~ .)
```
And view the performance on the training and test data sets.
```
## Training data
svc_pred_lr <- ml_predict(spark_lr, pisa_training) %>%
select(sci_class, predicted_label) %>%
collect()
table(svc_pred_lr)
```
```
## predicted_label
## sci_class -1 1
## -1 146217 11247
## 1 11133 121587
```
```
eval_classifier(table(svc_pred_lr))
```
```
## accuracy sensitivity specificity
## 1 0.9229 0.9161 0.9286
```
```
## Test data
svc_pred_test_lr <- ml_predict(spark_lr, pisa_test) %>%
select(sci_class, predicted_label) %>%
collect()
table(svc_pred_test_lr)
```
```
## predicted_label
## sci_class -1 1
## -1 73098 5678
## 1 5646 60745
```
```
eval_classifier(table(svc_pred_test_lr))
```
```
## accuracy sensitivity specificity
## 1 0.922 0.915 0.9279
```
We could also run the logistic regression in R, as it’s pretty quick even with a data set this large (in fact, it’s slightly quicker).
Finally, it is quite common to evaluate these models using the area under the ROC curve (AUC). We can let Apache Spark do this for the test data.
```
# extract predictions
pred_svc <- ml_predict(svc_spark, pisa_test)
pred_lr <- ml_predict(spark_lr, pisa_test)
ml_binary_classification_evaluator(pred_svc)
```
```
## [1] 0.9795
```
```
ml_binary_classification_evaluator(pred_lr)
```
```
## [1] 0.9805
```
We want these values to be as close to 1 as possible. Both values are quite large and corroborate that these are both good classifiers.
### 8\.1\.3 Support Vector Machine
SVM is an extension of support vector classifiers using **kernels** that allow for a non\-linear boundary between the classes. Without getting into the weeds, to solve a support vector classifier problem all you need to know is the inner products of the observations. Assuming that \\(x\_i\\) and \\(x\_i'\\) are two observations and \\(p\\) is the number of predictors (features), their inner product is defined as:
\\\[
\\langle x\_i, x\_i'\\rangle \= \\begin{bmatrix}
x\_{i1} x\_{i2} \\dots x\_{ip}
\\end{bmatrix}
\\begin{bmatrix}
x\_{i1}' \\\\
x\_{i2}' \\\\
\\vdots \\\\
x\_{ip}'
\\end{bmatrix} \= x\_{i1}x\_{i1}' \+ x\_{i2}x\_{i2}' \+ \\dots x\_{ip}x\_{ip}'
\\]
More succinctly, \\(\\langle x\_i, x\_i'\\rangle \= \\sum\_{j \= 1}^p x\_{ij}x\_{ij}'\\). We can replace the inner product with a more general form, \\(K(x\_i, x\_i')\\), where \\(K\\) is a kernel (a function that quantifies the similarity of two observations). When,
\\\[
K(x\_i, x\_i') \= \\sum\_{j \= 1}^p x\_{ij}x\_{ij}'
\\]
we have the linear kernel, and this is the support vector classifier. However, we can use a more flexible kernel, such as:
\\\[
K(x\_i, x\_i') \= (1 \+ \\sum\_{j \= 1}^p x\_{ij}x\_{ij}')^d
\\]
which is known as a **polynomial kernel** of degree \\(d\\), and when \\(d \> 1\\) we have a much more flexible decision boundary than we do for support vector classifiers (when \\(d \= 1\\), we are back to the support vector classifier).
Another very common kernel is the **radial kernel**, which is given by:
\\\[
K(x\_i, x\_i') \= \\exp\\left(\-\\gamma \\sum\_{j \= 1}^p (x\_{ij} \- x\_{ij}')^2\\right)
\\]
where \\(\\gamma\\) is a positive constant. Note, both \\(d\\) and \\(\\gamma\\) are selected via tuning and cross\-validation.
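As a small numeric illustration of these formulas (not part of the original text), we can compute each kernel for two arbitrary observation vectors:
```
# Two arbitrary observations with p = 3 features
x       <- c(1, 0, 2)
x_prime <- c(2, 1, -1)

sum(x * x_prime)                 # linear kernel (inner product)
(1 + sum(x * x_prime))^3         # polynomial kernel with d = 3
exp(-0.5 * sum((x - x_prime)^2)) # radial kernel with gamma = 0.5
```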
Both of these kernels are worth considering when the decision boundary is non\-linear. Figure [8\.7](supervised-machine-learning---part-ii.html#fig:james2) from James, et al. (2013\) gives an example of a non\-linear boundary. We see that the classes are not linearly separated and if we tried to use a linear decision boundary, we would end up with a very poor classifier. Therefore, we need to use a more flexible kernel. In both cases, we should expect that an SVM would greatly outperform both a support vector classifier and logistic regression.
Figure 8\.7: Non\-linear decision boundary with a polynomial kernel (left) and radial kernel (right) from James et al., 2013\.
#### 8\.1\.3\.1 Examples
We will continue trying to build the best classifier of whether someone scored in the upper or lower half on the science scale and again use the `svm` function in the `e1071` package. For brevity, we'll consider only the radial kernel. By default, `gamma` is set to 1 over the number of features (1/9 here, as seen in the linear fit above); below we explicitly set both `gamma` and `cost` to 1\.
```
svm_fit <- svm(sci_class ~., data = train_dat,
cost = 1,
gamma = 1,
kernel = "radial")
```
Again, we can request some basic information about our model.
```
summary(svm_fit)
```
```
##
## Call:
## svm(formula = sci_class ~ ., data = train_dat, cost = 1, gamma = 1,
## kernel = "radial")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: radial
## cost: 1
## gamma: 1
##
## Number of Support Vectors: 6988
##
## ( 3676 3312 )
##
##
## Number of Classes: 2
##
## Levels:
## -1 1
```
This time we have 6988 support vectors: 3676 in class \-1 and 3312 in class 1\. That is quite a few more support vectors than the support vector classifier had. Let's visually inspect this model by plotting it against the math and reading features on the same subsample of test takers (Figure [8\.8](supervised-machine-learning---part-ii.html#fig:svmplot)).
```
plot(svm_fit, data = train_dat[ran_obs, ], math ~ reading)
```
Figure 8\.8: Support vector machine plot for a random subsample (n \= 1000\) of training observations.
We see that the decision boundary is now clearly no longer linear and we again see decent classification. Before we investigate the fit of the model, we should tune it.
```
tune_svm <- tune(svm, sci_class ~., data = train_dat,
kernel = "radial",
ranges = list(cost = c(.01, .1, 1, 5, 10),
gamma = c(0.5, 1, 2, 3, 4)))
```
We can see which model was selected:
```
summary(tune_svm)
```
```
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost gamma
## 0.1 0.5
##
## - best performance: 0.07583144
##
## - Detailed performance results:
## cost gamma error dispersion
## 1 0.01 0.5 0.17670590 0.010347886
## 2 0.10 0.5 0.07583144 0.008233504
## 3 1.00 0.5 0.07754307 0.009223528
## 4 5.00 0.5 0.08375680 0.008098257
## 5 10.00 0.5 0.08718054 0.008157393
## 6 0.01 1.0 0.36425229 0.011955406
## 7 0.10 1.0 0.13162657 0.007210614
## 8 1.00 1.0 0.08242504 0.008590571
## 9 5.00 1.0 0.09402859 0.009848512
## 10 10.00 1.0 0.10074908 0.007984562
## 11 0.01 2.0 0.39113500 0.011126599
## 12 0.10 2.0 0.31409966 0.010609909
## 13 1.00 2.0 0.11469785 0.006880824
## 14 5.00 2.0 0.12363760 0.006591525
## 15 10.00 2.0 0.12465198 0.006523243
## 16 0.01 3.0 0.39113500 0.011126599
## 17 0.10 3.0 0.38257549 0.012981991
## 18 1.00 3.0 0.17562831 0.007488136
## 19 5.00 3.0 0.17277475 0.006502038
## 20 10.00 3.0 0.17309173 0.006145790
## 21 0.01 4.0 0.39113500 0.011126599
## 22 0.10 4.0 0.39107163 0.011072976
## 23 1.00 4.0 0.23960159 0.011545434
## 24 5.00 4.0 0.22641364 0.008709051
## 25 10.00 4.0 0.22641360 0.008779341
```
And then select the best model and view it.
```
best_svm <- tune_svm$best.model
summary(best_svm)
```
```
##
## Call:
## best.tune(method = svm, train.x = sci_class ~ ., data = train_dat,
## ranges = list(cost = c(0.01, 0.1, 1, 5, 10), gamma = c(0.5,
## 1, 2, 3, 4)), kernel = "radial")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: radial
## cost: 0.1
## gamma: 0.5
##
## Number of Support Vectors: 6138
##
## ( 3095 3043 )
##
##
## Number of Classes: 2
##
## Levels:
## -1 1
```
Finally, we see how well this predicts on both the training observations
```
svm_cm_train <- table(train_dat$sci_class,
predict(best_svm))
svm_cm_train
```
```
##
## -1 1
## -1 5620 549
## 1 519 9084
```
```
eval_classifier(svm_cm_train)
```
```
## accuracy sensitivity specificity
## 1 0.9322851 0.9459544 0.9110066
```
and then on the testing observations.
```
svm_cm_test <- table(test_dat$sci_class,
predict(best_svm, newdata = test_dat))
svm_cm_test
```
```
##
## -1 1
## -1 2781 280
## 1 291 4534
```
```
eval_classifier(svm_cm_test)
```
```
## accuracy sensitivity specificity
## 1 0.9275932 0.9396891 0.9085266
```
Performance is very comparable to the support vector classifier and logistic regression, implying there isn't much to gain from using a non\-linear decision boundary here.
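To see the three test\-set results side by side, we can stack the `eval_classifier` output for each model. This is just a convenience sketch; it assumes the confusion matrices `svc_cm_test` and `lr_cm_test` from the support vector classifier and logistic regression sections above are still in the workspace.
```
# test-set performance of the three classifiers in one table
rbind(
  data.frame(model = "support vector classifier", eval_classifier(svc_cm_test)),
  data.frame(model = "logistic regression",       eval_classifier(lr_cm_test)),
  data.frame(model = "SVM (radial kernel)",       eval_classifier(svm_cm_test))
)
```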
### 8\.1\.4 Lab
For the lab, we’ll try to build the best classifier for the “Do you expect your child will go into a ?” item. Using the following variables (and any variables that you think might be relevant in the codebook) and **data for just Mexico**, try and build the best classifier. Do the following steps:
1. Split the data into a training and a testing data set. Rather than using a 66/33 split, try a 50/50 or a 75/25 split (a possible starting point is sketched after the variable table below).
2. Fit a decision tree **or** random forest
* Prune your model and plot your model (if using decision trees)
* Determine the ideal number of trees (if using random forests)
3. Fit a support vector machine
* Consider different kernels (e.g., linear and radial)
* Visually inspect your model by plotting it against a few features. Create a few different plots.
* Tune the parameters.
+ How many support vectors do you have?
+ Did you notice much difference in the error rates?
+ Does your model have a high tolerance?
* (OPTIONAL): When fitting the support vector classifier, you could try and fit it using Apache Spark
+ If you do this, use the `ml_binary_classification_evaluator` function to calculate AUC.
4. Run a logistic regression
* Examine the coefficients table
5. Evaluate the fit of your models using the `eval_classifier` function on the testing data.
* Which model(s) fits the best? Can you improve it?
6. Record your accuracy, sensitivity, and specificity for all the models (decision tree or random forest and SVM) to share.
The following table contains the list of variables you could consider (these were introduced earlier):
| Label | Description |
| --- | --- |
| DISCLISCI | Disciplinary climate in science classes (WLE) |
| TEACHSUP | Teacher support in a science classes of students choice (WLE) |
| IBTEACH | Inquiry\-based science teaching and learning practices (WLE) |
| TDTEACH | Teacher\-directed science instruction (WLE) |
| ENVAWARE | Environmental Awareness (WLE) |
| JOYSCIE | Enjoyment of science (WLE) |
| INTBRSCI | Interest in broad science topics (WLE) |
| INSTSCIE | Instrumental motivation (WLE) |
| SCIEEFF | Science self\-efficacy (WLE) |
| EPIST | Epistemological beliefs (WLE) |
| SCIEACT | Index science activities (WLE) |
| BSMJ | Student’s expected occupational status (SEI) |
| MISCED | Mother’s Education (ISCED) |
| FISCED | Father’s Education (ISCED) |
| OUTHOURS | Out\-of\-School Study Time per week (Sum) |
| SMINS | Learning time (minutes per week) \- |
| TMINS | Learning time (minutes per week) \- in total |
| BELONG | Subjective well\-being: Sense of Belonging to School (WLE) |
| ANXTEST | Personality: Test Anxiety (WLE) |
| MOTIVAT | Student Attitudes, Preferences and Self\-related beliefs: Achieving motivation (WLE) |
| COOPERATE | Collaboration and teamwork dispositions: Enjoy cooperation (WLE) |
| PERFEED | Perceived Feedback (WLE) |
| unfairteacher | Teacher Fairness (Sum) |
| HEDRES | Home educational resources (WLE) |
| HOMEPOS | Home possessions (WLE) |
| ICTRES | ICT Resources (WLE) |
| WEALTH | Family wealth (WLE) |
| ESCS | Index of economic, social and cultural status (WLE) |
| math | Students’ math scores |
| reading | Students’ reading scores |
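As a possible starting point for step 1 of the lab (a sketch only, not a prescribed solution), the code below subsets Mexico and performs a 75/25 split. The outcome item is deliberately left out of the `select()` call; look up its variable name in the codebook and add it, along with any other predictors you want to try.
```
# subset Mexico and a few candidate predictors, then do a 75/25 split
# NOTE: add the outcome item from the codebook to the select() call
lab_dat <- subset(pisa, CNT == "Mexico",
                  select = c(ENVAWARE, JOYSCIE, SCIEEFF, EPIST, BSMJ,
                             MISCED, FISCED, HEDRES, HOMEPOS, ICTRES,
                             WEALTH, ESCS, math, reading))
lab_dat <- na.omit(lab_dat)   # svm() uses listwise deletion anyway

set.seed(2019)
train_rows <- sample(1:nrow(lab_dat), floor(3/4 * nrow(lab_dat)))
lab_train  <- lab_dat[train_rows, ]
lab_test   <- lab_dat[-train_rows, ]
```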
### 8\.1\.1 Maximal Margin Classifier
#### 8\.1\.1\.1 Hyperplane
The concept of a hyperplane is a critical concept in SVM, therefore, we need to understand what exactly a hyperplane is to understand SVM. A **hyperplane** is a subspace whose dimension is one less than that of the ambient space. Specifically, in a *p*\-dimensional space, a **hyperplane** is a flat affline subspace of dimensional *p \- 1*, where affline refers to the fact that the subspace need not pass through the origin.
We define a hyperplane as
\\\[
\\beta\_0 \+ \\beta\_1X\_1 \+ \\beta\_2X\_2 \+ \\dots \+ \\beta\_pX\_p \= 0
\\]
Where \\(X\_1, X\_2, ..., X\_p\\) are predictors (or *features*). Therefore, for any observation of \\(X \= (X\_1, X\_2, \\dots, X\_p)^T\\) that *satisfies* the above equation, the observation falls directly onto the hyperplane. However, a value of \\(X\\) does not need to fall onto the hyperplane, but could fall on either side of the hyperplane such that either
\\\[
\\beta\_0 \+ \\beta\_1X\_1 \+ \\beta\_2X\_2 \+ \\cdots \+ \\beta\_pX\_p \> 0
\\]
or
\\\[
\\beta\_0 \+ \\beta\_1X\_1 \+ \\beta\_2X\_2 \+ \\cdots \+ \\beta\_pX\_p \< 0
\\]
occurs. In that situation, the value of \\(X\\) lies on one of the two sides of the hyperplane and the hyperplane acts to split the *p*\-dimensional space into two halves.
Figure [8\.1](supervised-machine-learning---part-ii.html#fig:hyper) shows the hyperplane \\(.5 \+ 1X\_1 \+ \-4X\_2 \= 0\\). If we plug a value of \\(X\_1\\) and \\(X\_2\\) into this equation, we know based on the sign alone if the points falls on one side of the hyperplane or if it falls directly onto the hyperplane. In Figure [8\.1](supervised-machine-learning---part-ii.html#fig:hyper) all the points in the red region will have negative signs (i.e., if we plug in the values of \\(X\_1\\) and \\(X\_2\\) into the above equation the sign will be negative), while all the points in the blue region would be positive, whereas any points that would have no sign are represented by the black line (the hyperplane).
Figure 8\.1: The hyperplane, \\(.5 \+ 1X\_1 \+ \-4X\_2 \= 0\\), is black line, the red points occur in the region where \\(.5 \+ 1X\_1 \+ \-4X\_2 \> 0\\), while the blue points occur in the region where \\(.5 \+ 1X\_1 \+ \-4X\_2 \< 0\\).
Wee can apply this idea of a hyperplane to classifying observations. We learned earlier how it important it is when applying machine learning techniques to split our data into training and testing data sets to avoid overfitting. We can split our *n \+ m* by *p* matrix of observations into an *n* by *p* \\(\\mathbf{X}\\) matrix of training observations, which fall into one of two classes for \\(Y \= y\_1, .., y\_n\\) where \\(Y\_i \\in {\-1, 1}\\) and an *m* by *p* matrix \\(\\mathbf{X^\*}\\) of testing observations. Using just the training data, our goal is develop a model that will correctly classify our testing data using just a hyperplane and we will do this by creating a **separating hyperplane** (a hyperplane that will separate our classes).
Let’s assume we have the training data in Figure [8\.2](supervised-machine-learning---part-ii.html#fig:hyperex) and that the blue points correspond to one class (labelled as \\(y \= 1\\)) and the red points correspond to the other class (\\(y \= \-1\\)). The separating hyper plane has the property that:
\\\[
\\beta\_0 \+ \\beta\_1x\_{i1} \+ \\beta\_2x\_{i2} \+ \\dots \+ \\beta\_px\_{p1} \> 0 \\quad \\text{if} \\quad y\_i \= 1
\\]
and
\\\[
\\beta\_0 \+ \\beta\_1x\_{i1} \+ \\beta\_2x\_{i2} \+ \\dots \+ \\beta\_px\_{p1} \< 0 \\quad \\text{if} \\quad y\_i \= \-1
\\]
Or more succintly,
\\\[
y\_i(\\beta\_0 \+ \\beta\_1x\_{i1} \+ \\beta\_2x\_{i2} \+ \\dots \+ \\beta\_px\_{p1}) \> 0
\\]
Ideally, we would create a hyperplane that perfectly separates the classes based on \\(X\_1\\) and \\(X\_2\\). However, as Figure [8\.2](supervised-machine-learning---part-ii.html#fig:hyperex) makes clear, we can create many separating hyperplanes of which 3 of these are shown. In fact, it’s often the case that an infinite number of separating hyperplanes could be created when the classes are perfectly separable. What we need to do is to develop some kind of a criterion for selecting one of the many separating hyperplanes.
Figure 8\.2: Candidate hyperplanes to separate the two classes.
For any given hyperplane, we have two pieces of information available for each observation: 1\) the side of the hyperplane it lies on (represented by its sign) and 2\) the distance it is from the hyperplane. The natural criterion for selecting a separating hyperplane is to **maximize the distance** it is from from the training observations. Therefore, we compute the distance that each training observation is from a candidate hyperplane. The minimal such distance from the observation to the hyperplane is known as the **margin**. Then we will select the hyperplane with the largest margin (the **maximal margin hyperplane**) and classify observations based on which side of this hyperplane they fall (**maximal margin classifier**). The hope is that a classifier with a large margin on the training data will also have a large margin on the test observations and subsequently classify well.
Figure [8\.3](supervised-machine-learning---part-ii.html#fig:mmc) depicts a maximal margin classifier. The red line corresponds to the maximal margin hyperplane and the distance between one of the dotted lines and the black line is the **margin**. The black and white points along the boundary of the margin are the **support vectors**. It is clear in Figure [8\.3](supervised-machine-learning---part-ii.html#fig:mmc) that the maximal margin hyperplane depends only on these two support vectors. If they are moved, the maximal margin hyperplane moves, however, if any other observations are moved they would have no effect on this hyperplane *unless* they crossed the boundary of the margin.
Figure 8\.3: Maximal margin hyperplane. Source: <https://tinyurl.com/y493pww8>
The problem in practice is that a separating hyperplane usually doesn’t exist. Even if a separating hyperplane existed, we may not want to use the maximal margin hyperplane as it would perfectly classify all of the observations and may be too sensitive to individual observations and subsequently overfitting.
Figure [8\.4](supervised-machine-learning---part-ii.html#fig:fig95) from James, et al. (2013\) clearly illustrates this problem. The left figure shows the maximal margin hyperplane (solid) in a completely separable solution. The figure on the right shows that when a new observation is introduced that the maximal margin hyperplane (solid) shifts rather dramatically relative to its original location (dashed).
Figure 8\.4: The impact of adding one observations to the maximal margin hyperplane from James et al. (2013\).
#### 8\.1\.1\.1 Hyperplane
The concept of a hyperplane is a critical concept in SVM, therefore, we need to understand what exactly a hyperplane is to understand SVM. A **hyperplane** is a subspace whose dimension is one less than that of the ambient space. Specifically, in a *p*\-dimensional space, a **hyperplane** is a flat affline subspace of dimensional *p \- 1*, where affline refers to the fact that the subspace need not pass through the origin.
We define a hyperplane as
\\\[
\\beta\_0 \+ \\beta\_1X\_1 \+ \\beta\_2X\_2 \+ \\dots \+ \\beta\_pX\_p \= 0
\\]
Where \\(X\_1, X\_2, ..., X\_p\\) are predictors (or *features*). Therefore, for any observation of \\(X \= (X\_1, X\_2, \\dots, X\_p)^T\\) that *satisfies* the above equation, the observation falls directly onto the hyperplane. However, a value of \\(X\\) does not need to fall onto the hyperplane, but could fall on either side of the hyperplane such that either
\\\[
\\beta\_0 \+ \\beta\_1X\_1 \+ \\beta\_2X\_2 \+ \\cdots \+ \\beta\_pX\_p \> 0
\\]
or
\\\[
\\beta\_0 \+ \\beta\_1X\_1 \+ \\beta\_2X\_2 \+ \\cdots \+ \\beta\_pX\_p \< 0
\\]
occurs. In that situation, the value of \\(X\\) lies on one of the two sides of the hyperplane and the hyperplane acts to split the *p*\-dimensional space into two halves.
Figure [8\.1](supervised-machine-learning---part-ii.html#fig:hyper) shows the hyperplane \\(.5 \+ 1X\_1 \+ \-4X\_2 \= 0\\). If we plug a value of \\(X\_1\\) and \\(X\_2\\) into this equation, we know based on the sign alone if the points falls on one side of the hyperplane or if it falls directly onto the hyperplane. In Figure [8\.1](supervised-machine-learning---part-ii.html#fig:hyper) all the points in the red region will have negative signs (i.e., if we plug in the values of \\(X\_1\\) and \\(X\_2\\) into the above equation the sign will be negative), while all the points in the blue region would be positive, whereas any points that would have no sign are represented by the black line (the hyperplane).
Figure 8\.1: The hyperplane, \\(.5 \+ 1X\_1 \+ \-4X\_2 \= 0\\), is black line, the red points occur in the region where \\(.5 \+ 1X\_1 \+ \-4X\_2 \> 0\\), while the blue points occur in the region where \\(.5 \+ 1X\_1 \+ \-4X\_2 \< 0\\).
Wee can apply this idea of a hyperplane to classifying observations. We learned earlier how it important it is when applying machine learning techniques to split our data into training and testing data sets to avoid overfitting. We can split our *n \+ m* by *p* matrix of observations into an *n* by *p* \\(\\mathbf{X}\\) matrix of training observations, which fall into one of two classes for \\(Y \= y\_1, .., y\_n\\) where \\(Y\_i \\in {\-1, 1}\\) and an *m* by *p* matrix \\(\\mathbf{X^\*}\\) of testing observations. Using just the training data, our goal is develop a model that will correctly classify our testing data using just a hyperplane and we will do this by creating a **separating hyperplane** (a hyperplane that will separate our classes).
Let’s assume we have the training data in Figure [8\.2](supervised-machine-learning---part-ii.html#fig:hyperex) and that the blue points correspond to one class (labelled as \\(y \= 1\\)) and the red points correspond to the other class (\\(y \= \-1\\)). The separating hyper plane has the property that:
\\\[
\\beta\_0 \+ \\beta\_1x\_{i1} \+ \\beta\_2x\_{i2} \+ \\dots \+ \\beta\_px\_{p1} \> 0 \\quad \\text{if} \\quad y\_i \= 1
\\]
and
\\\[
\\beta\_0 \+ \\beta\_1x\_{i1} \+ \\beta\_2x\_{i2} \+ \\dots \+ \\beta\_px\_{p1} \< 0 \\quad \\text{if} \\quad y\_i \= \-1
\\]
Or more succintly,
\\\[
y\_i(\\beta\_0 \+ \\beta\_1x\_{i1} \+ \\beta\_2x\_{i2} \+ \\dots \+ \\beta\_px\_{p1}) \> 0
\\]
Ideally, we would create a hyperplane that perfectly separates the classes based on \\(X\_1\\) and \\(X\_2\\). However, as Figure [8\.2](supervised-machine-learning---part-ii.html#fig:hyperex) makes clear, we can create many separating hyperplanes of which 3 of these are shown. In fact, it’s often the case that an infinite number of separating hyperplanes could be created when the classes are perfectly separable. What we need to do is to develop some kind of a criterion for selecting one of the many separating hyperplanes.
Figure 8\.2: Candidate hyperplanes to separate the two classes.
For any given hyperplane, we have two pieces of information available for each observation: 1\) the side of the hyperplane it lies on (represented by its sign) and 2\) the distance it is from the hyperplane. The natural criterion for selecting a separating hyperplane is to **maximize the distance** it is from from the training observations. Therefore, we compute the distance that each training observation is from a candidate hyperplane. The minimal such distance from the observation to the hyperplane is known as the **margin**. Then we will select the hyperplane with the largest margin (the **maximal margin hyperplane**) and classify observations based on which side of this hyperplane they fall (**maximal margin classifier**). The hope is that a classifier with a large margin on the training data will also have a large margin on the test observations and subsequently classify well.
Figure [8\.3](supervised-machine-learning---part-ii.html#fig:mmc) depicts a maximal margin classifier. The red line corresponds to the maximal margin hyperplane and the distance between one of the dotted lines and the black line is the **margin**. The black and white points along the boundary of the margin are the **support vectors**. It is clear in Figure [8\.3](supervised-machine-learning---part-ii.html#fig:mmc) that the maximal margin hyperplane depends only on these two support vectors. If they are moved, the maximal margin hyperplane moves, however, if any other observations are moved they would have no effect on this hyperplane *unless* they crossed the boundary of the margin.
Figure 8\.3: Maximal margin hyperplane. Source: <https://tinyurl.com/y493pww8>
The problem in practice is that a separating hyperplane usually doesn’t exist. Even if a separating hyperplane existed, we may not want to use the maximal margin hyperplane as it would perfectly classify all of the observations and may be too sensitive to individual observations and subsequently overfitting.
Figure [8\.4](supervised-machine-learning---part-ii.html#fig:fig95) from James, et al. (2013\) clearly illustrates this problem. The left figure shows the maximal margin hyperplane (solid) in a completely separable solution. The figure on the right shows that when a new observation is introduced that the maximal margin hyperplane (solid) shifts rather dramatically relative to its original location (dashed).
Figure 8\.4: The impact of adding one observations to the maximal margin hyperplane from James et al. (2013\).
### 8\.1\.2 Support Vector Classifier
Our hope for a hyperplane is that it would be relatively insensitive to individual observations, while still classifying training observations well. That is, we would like to have what is termed a **soft margin classifier** or a **support vector classifier**. Essentially, we are willing to allow some observations to be on the incorrect side of the margin (classified correctly) or even the incorrect side of the hyperplane (incorrectly classified) if our classifier, overall, performs well.
We do this by introducing a tuning parameter, C, which determines the number and the severity of violations to the margin/hyperplane we are willing to tolerate. As C increases, our **tolerance** for violations will increase and subsequently our margin will widen. C, thus, represents a **bias\-variance tradeoff**, when C is small bias should be low, but variance will likely be high, whereas when C is large, bias is likely high but our variance is typically small. C will be selected, optimally, through cross\-validation (as we’ll see later).
The observations that lie on the margin or that violate the margin are the only ones that affect the hyperplane and thus the classifier (similar to the maximal margin classifier). These observations are the **support vectors**, and only they affect the support vector classifier. When C is large, there will be many support vectors; when C is small, there will be fewer.
Because the support vector classifier depends only on the support vectors (which could be very few), it is quite **robust to observations that are far** from the hyperplane. In this respect, the technique is similar to logistic regression.
#### 8\.1\.2\.1 Example
In our example, we’ll try to classify whether someone scores at or above the mean on the science scale we created earlier. To fit support vector classifiers (and SVMs) in R, we’ll use the `e1071` package (though the `caret` package could be used, too).
```
# install e1071 if it is not already installed, then load it
if (!("e1071" %in% installed.packages()[, "Package"])) {
  install.packages("e1071")
}
library("e1071")
```
The `svm` function in the `e1071` package requires that the outcome variable is a factor. So, we’ll do a mean split (at the OECD mean of 493\) on the `science` scale and convert it to a factor.
```
pisa[, sci_class := as.factor(ifelse(science >= 493, 1, -1))]
```
While I’m coding this variable as 1 and \-1 to be consistent with the notation above, the particular labels don’t matter to the `svm` function. The only thing `svm` needs in order to perform classification rather than regression is that the outcome be a factor. If the outcome has just two values, 1 and \-1, but is not a factor, `svm` will perform regression.
We will use the following variables in our model:
| Label | Description |
| --- | --- |
| WEALTH | Family wealth (WLE) |
| HEDRES | Home educational resources (WLE) |
| ENVAWARE | Environmental Awareness (WLE) |
| ICTRES | ICT Resources (WLE) |
| EPIST | Epistemological beliefs (WLE) |
| HOMEPOS | Home possessions (WLE) |
| ESCS | Index of economic, social and cultural status (WLE) |
| reading | Reading score |
| math | Math score |
We’ll subset the variables to keep the model manageable and, so that the model fitting runs in a reasonable amount of time in R, we’ll keep only the United States and Canada.
```
pisa_sub <- subset(pisa, CNT %in% c("Canada", "United States"), select = c(sci_class, WEALTH, HEDRES, ENVAWARE, ICTRES, EPIST, HOMEPOS, ESCS, reading, math))
```
To fit a support vector classifier, we use the `svm` function. Before we get started, let’s divide the data set into a training and a testing data set. We will use a 66/33 split, though other splits could be used (e.g., 50/50\).
```
# set a random seed
set.seed(442019)
# svm uses listwise deletion, so we should just drop
# the observations now
pisa_m <- na.omit(pisa_sub)
# select the rows that will go into the training data set.
train <- sample(1:nrow(pisa_m), 2/3 * nrow(pisa_m))
# subset the data based on the rows that were selected to be in training data set.
train_dat <- pisa_m[train, ]
test_dat <- pisa_m[-train, ]
```
To perform support vector classification, we pass the `svm` function the `kernel = "linear"` argument. We also need to specify our tolerance, which is represented by the `cost` argument. The `cost` parameter is essentially the inverse of the tolerance parameter, C, described above. When the `cost` value is low, the tolerance is high (i.e., the margin is wide and there are lots of support vectors) and when the `cost` value is high, the tolerance is low (i.e., narrower margin). By default `cost = 1` and we will tune this parameter via cross\-validation momentarily. For now, we’ll just fit the model.
```
svc_fit <- svm(sci_class ~., data = train_dat, kernel = "linear")
```
We can obtain basic information about our model using the `summary` function.
```
summary(svc_fit)
```
```
##
## Call:
## svm(formula = sci_class ~ ., data = train_dat, kernel = "linear")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 1
## gamma: 0.1111111
##
## Number of Support Vectors: 2782
##
## ( 1390 1392 )
##
##
## Number of Classes: 2
##
## Levels:
## -1 1
```
We see there are 2782 support vectors: 1390 in class \-1 and 1392 in class 1\. We can also plot our model, but we need to specify the two features we want to plot (because our model has nine features). Let’s look at the model with `math` on the y\-axis and `reading` on the x\-axis.
```
plot(svc_fit, data = train_dat, math ~ reading)
```
Figure 8\.5: Support vector classifier plot for all the training data.
In this figure, the red points correspond to observations that belong to class 1 (at/above the mean on science), while the black points correspond to observations that belong to class \-1 (below the mean on science); the Xs are the support vectors, while the Os are the non\-support vector observations; the upper triangle (purple) is the region for class 1, while the lower triangle (blue) is the region for class \-1\. While the decision boundary looks jagged, that is just an artifact of the way it is drawn by this function. We can see that many observations are misclassified (i.e., some red points are in the lower triangle and some black points are in the upper triangle). However, there are a lot of observations shown in this figure and it is difficult to discern the nature of the misclassification.
As was discussed in the section on data visualization, with this many points on a figure it is difficult to evaluate patterns, not to mention that the figure is extremely slow to render. Therefore, let’s take a random sample of 1,000 observations to get a better sense of our classifier. This is shown in Figure [8\.6](supervised-machine-learning---part-ii.html#fig:svcplotran).
```
set.seed(1)
ran_obs <- sample(1:nrow(train_dat), 1000)
plot(svc_fit, data = train_dat[ran_obs, ], math ~ reading)
```
Figure 8\.6: Support vector classifier plot for a random subsample (n \= 1000\) of the training observations.
Notice that relatively few points cross the hyperplane (i.e., are misclassified). It looks like the classifier is doing pretty well.
Initially, when we fit the support vector classifier, we used the default cost parameter, but we really should select this parameter through tuning via cross\-validation, as we might be able to do an even better job of classifying. The `e1071` package includes a `tune` function which makes this easy and automatic. It performs the tuning via 10\-fold cross\-validation by default, which is probably a fine tradeoff (see James, et al. 2013 for a comparison of k\-fold vs. leave\-one\-out cross\-validation). We need to provide the `tune` function with a range of cost values (which again correspond to our tolerance for violating the margin and hyperplane).
```
tune_svc <- tune(svm, sci_class ~., data = train_dat,
kernel="linear",
ranges = list(cost = c(.01, .1, 1, 5, 10)))
```
On my MacBook Pro (2\.6 GHz Intel Core i7 and 16 GB RAM) it takes approximately 2 minutes to run this. Without subsetting the data, it would take quite a bit longer.
We can view the cross\-validation errors by using the `summary` function on this object.
```
summary(tune_svc)
```
```
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost
## 1
##
## - best performance: 0.07316727
##
## - Detailed performance results:
## cost error dispersion
## 1 0.01 0.07342096 0.005135708
## 2 0.10 0.07354766 0.004985649
## 3 1.00 0.07316727 0.004952085
## 4 5.00 0.07329406 0.004879146
## 5 10.00 0.07335747 0.004887063
```
And then select the best model and view it.
```
best_svc <- tune_svc$best.model
summary(best_svc)
```
```
##
## Call:
## best.tune(method = svm, train.x = sci_class ~ ., data = train_dat,
## ranges = list(cost = c(0.01, 0.1, 1, 5, 10)), kernel = "linear")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: linear
## cost: 1
## gamma: 0.1111111
##
## Number of Support Vectors: 2782
##
## ( 1390 1392 )
##
##
## Number of Classes: 2
##
## Levels:
## -1 1
```
Next, we write a function to evaluate our classifier; it takes a confusion matrix as its argument.
```
#' Evaluate classifier
#'
#' Evaluates a classifier (e.g. SVM, logistic regression)
#' @param tab a confusion matrix
eval_classifier <- function(tab, print = F){
n <- sum(tab)
TP <- tab[2,2]
FN <- tab[2,1]
FP <- tab[1,2]
TN <- tab[1,1]
classify.rate <- (TP + TN) / n
TP.rate <- TP / (TP + FN)
TN.rate <- TN / (TN + FP)
object <- data.frame(accuracy = classify.rate,
sensitivity = TP.rate,
specificity = TN.rate)
object
}
```
The confusion matrix is simply a cross\-tabulation of all possible outcomes (true positives, true negatives, false positives, and false negatives). A confusion matrix for our `best_svc` can be created by:
```
# to create a confusion matrix, this order is important!
# observed values first and predicted values second!
svc_cm_train <- table(train_dat$sci_class,
predict(best_svc))
svc_cm_train
```
```
##
## -1 1
## -1 5563 606
## 1 550 9053
```
The top\-left are the true negatives, the bottom\-left are the false negatives, the top\-right are the false positives, and the bottom\-right are the true positives. We can request the accuracy (the % of observations that were correctly classified), the sensitivity (the % of observations that were in class 1 that were correctly identified), and specificity (the % of observations that were in class \-1 that were correctly identified) using the `eval_classifier` function.
```
eval_classifier(svc_cm_train)
```
```
## accuracy sensitivity specificity
## 1 0.9267056 0.9427262 0.9017669
```
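As a quick sanity check (this sketch just redoes the arithmetic on the confusion matrix printed above), the same three quantities can be computed by hand:
```
# pull the four cells out of the confusion matrix printed above
TN <- svc_cm_train[1, 1]; FP <- svc_cm_train[1, 2]
FN <- svc_cm_train[2, 1]; TP <- svc_cm_train[2, 2]
(TP + TN) / sum(svc_cm_train)  # accuracy:    (9053 + 5563) / 15772
TP / (TP + FN)                 # sensitivity:  9053 / (9053 + 550)
TN / (TN + FP)                 # specificity:  5563 / (5563 + 606)
```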
Performance is pretty good overall. We see that class \-1 (specificity) isn’t classified as well as class 1 (sensitivity). These statistics are likely overly optimistic as we are evaluating our model using the training data (the same data that we used to build our model). How well does the model perform on the testing data?
```
svc_cm_test <- table(test_dat$sci_class,
predict(best_svc, newdata = test_dat))
svc_cm_test
```
```
##
## -1 1
## -1 2780 281
## 1 278 4547
```
```
eval_classifier(svc_cm_test)
```
```
## accuracy sensitivity specificity
## 1 0.9291149 0.9423834 0.9081999
```
Still impressively high! This is a very good classifier indeed. This is likely because `math` and `reading` are so highly correlated with `science` scores.
We can extract the coefficients from our model that make up our decision boundary.
```
beta0 <- best_svc$rho
beta <- drop(t(best_svc$coefs) %*% as.matrix(train_dat[best_svc$index, -1]))
beta0
```
```
## [1] -1.220883
```
```
beta
```
```
## WEALTH HEDRES ENVAWARE ICTRES EPIST
## 0.04398688 -0.24398165 0.36167882 -0.09803825 0.04652237
## HOMEPOS ESCS reading math
## 0.22005477 -0.15065808 188.02960807 196.93421586
```
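To be explicit about what these numbers represent (a sketch only; the exact values depend on `e1071`’s internal scaling and on the ordering of the factor levels, which sets the sign), a linear\-kernel fit corresponds to a linear decision rule: with \\(\\beta\\) playing the role of `beta` and \\(\\beta\_0\\) the role of `beta0` above, a new observation \\(x\\) is classified according to the sign of
\\\[
f(x) \= \\beta^\\top x \- \\beta\_0,
\\]
and the decision boundary is the set of points where \\(f(x) \= 0\\).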
With more complicated SVMs that use non\-linear kernels, the coefficients no longer have a simple interpretation and are generally of little interest when applying these models.
#### 8\.1\.2\.2 Comparison to logistic regression
Support vector classifiers are quite similar to logistic regression. This has to do with the two methods having similar loss functions (the functions used to estimate the parameters). In situations where the classes are well separated, SVMs (more generally) tend to do better than logistic regression; when they are not well separated, logistic regression tends to do better (James, et al., 2013\).
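To make the loss\-function connection concrete, here is one standard way to write the two per\-observation losses (a general formulation, not something specific to this chapter’s data). With \\(y\_i\\) coded as \\(\\pm 1\\) and \\(f(x\_i)\\) the linear predictor, both losses are small whenever \\(y\_i f(x\_i)\\) is large, i.e., when the observation is far on the correct side of the boundary:
\\\[
L\_{\\text{hinge}} \= \\max(0, 1 \- y\_i f(x\_i)), \\qquad
L\_{\\text{logistic}} \= \\log\\left(1 \+ e^{\-y\_i f(x\_i)}\\right)
\\]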
Let’s compare logistic regression to the support vector classifier. We’ll begin by fitting the model
```
lr_fit <- glm(sci_class ~. , data = train_dat, family = "binomial")
```
and then viewing the coefficients.
```
coef(lr_fit)
```
```
## (Intercept) WEALTH HEDRES ENVAWARE ICTRES
## -41.82682653 0.11666541 -0.26667828 0.30159987 -0.13594566
## EPIST HOMEPOS ESCS reading math
## 0.05053261 0.20699211 -0.24568642 0.03917470 0.04651408
```
How does it do relative to our best support vector classifier on the training and the testing data sets? For the training data set
```
lr_cm_train <- table(train_dat$sci_class,
round(predict(lr_fit, type = "response")))
lr_cm_train
```
```
##
## 0 1
## -1 5567 602
## 1 541 9062
```
```
eval_classifier(lr_cm_train)
```
```
## accuracy sensitivity specificity
## 1 0.9275298 0.9436634 0.9024153
```
and then for the testing data set.
```
lr_cm_test <- table(test_dat$sci_class,
round(predict(lr_fit, newdata = test_dat, type = "response")))
lr_cm_test
```
```
##
## 0 1
## -1 2780 281
## 1 275 4550
```
```
eval_classifier(lr_cm_test)
```
```
## accuracy sensitivity specificity
## 1 0.9294953 0.9430052 0.9081999
```
The two models are equivalent out to the hundredths place. Either model would be fine here.
#### 8\.1\.2\.3 Using Apache Spark for machine learning
Apache Spark is also capable of fitting support vector classifiers, using the `ml_linear_svc` function. The appealing thing about this is that you can fit the model to the entire data set (i.e., there is no need to subset out a portion of the countries). Doing this with the `e1071` package would be impractically slow, but with Apache Spark it is feasible and reasonably quick (just a few minutes).
We’ll again use the `sparklyr` package to interface with Spark and use the `dplyr` package to simplify interacting with Spark.
```
library(sparklyr)
library(dplyr)
```
We first need to establish a connection with Spark and then copy a subsetted PISA data set to Spark.
```
sc <- spark_connect(master = "local")
spark_sub <- subset(pisa,
select = c(sci_class, WEALTH, HEDRES, ENVAWARE, ICTRES,
EPIST, HOMEPOS, ESCS, reading, math))
spark_sub <- na.omit(spark_sub) # can't handle missing data
pisa_tbl <- copy_to(sc, spark_sub, overwrite = TRUE)
```
Now, we’ll let Spark partition the data into a training and a test data set.
```
partition <- pisa_tbl %>%
sdf_partition(training = 2/3, test = 1/3, seed = 442019)
pisa_training <- partition$training
pisa_test <- partition$test
```
We are ready to run the classifier in Spark. Unlike the `svm` function, which uses `cost`, the amount of regularization here is controlled by `reg_param`. Like `cost`, this parameter should be selected optimally. (The convergence tolerance itself defaults to 1e\-06\.)
```
svc_spark <- pisa_training %>%
ml_linear_svc(sci_class ~ .)
```
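As a hedged sketch of what tuning `reg_param` could look like (the candidate values below are illustrative and this loop was not run for this chapter), one could refit the model for a few values and compare test\-set AUC:
```
# illustrative only: compare a few regularization values by test-set AUC
reg_values <- c(0, 0.01, 0.1)
aucs <- sapply(reg_values, function(r) {
  fit <- ml_linear_svc(pisa_training, sci_class ~ ., reg_param = r)
  ml_binary_classification_evaluator(ml_predict(fit, pisa_test))
})
data.frame(reg_param = reg_values, auc = aucs)
```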
We then use the `ml_predict` function to predict the classes.
```
svc_pred <- ml_predict(svc_spark, pisa_training) %>%
select(sci_class, predicted_label) %>%
collect()
```
Then print the confusion matrix and the criteria that we’ve been using to evaluate our models.
```
table(svc_pred)
```
```
## predicted_label
## sci_class -1 1
## -1 145111 12353
## 1 10753 121967
```
```
eval_classifier(table(svc_pred))
```
```
## accuracy sensitivity specificity
## 1 0.9204 0.919 0.9216
```
Again, this is really good. How does it look on the testing data?
```
svc_pred_test <- ml_predict(svc_spark, pisa_test) %>%
select(sci_class, predicted_label) %>%
collect()
```
```
table(svc_pred_test)
```
```
## predicted_label
## sci_class -1 1
## -1 72577 6199
## 1 5438 60953
```
```
eval_classifier(table(svc_pred_test))
```
```
## accuracy sensitivity specificity
## 1 0.9198 0.9181 0.9213
```
Pretty impressive. We can also use Apache Spark to fit a logistic regression, using the `ml_logistic_regression` function.
```
spark_lr <- pisa_training %>%
ml_logistic_regression(sci_class ~ .)
```
And view the performance on the training and test data sets.
```
## Training data
svc_pred_lr <- ml_predict(spark_lr, pisa_training) %>%
select(sci_class, predicted_label) %>%
collect()
table(svc_pred_lr)
```
```
## predicted_label
## sci_class -1 1
## -1 146217 11247
## 1 11133 121587
```
```
eval_classifier(table(svc_pred_lr))
```
```
## accuracy sensitivity specificity
## 1 0.9229 0.9161 0.9286
```
```
## Test data
svc_pred_test_lr <- ml_predict(spark_lr, pisa_test) %>%
select(sci_class, predicted_label) %>%
collect()
table(svc_pred_test_lr)
```
```
## predicted_label
## sci_class -1 1
## -1 73098 5678
## 1 5646 60745
```
```
eval_classifier(table(svc_pred_test_lr))
```
```
## accuracy sensitivity specificity
## 1 0.922 0.915 0.9279
```
We could also run the logistic regression in R itself, as it’s pretty quick even with a data set this large (in fact, it’s slightly quicker).
Finally, it is quite common to evaluate these models using AUC (the area under the ROC curve). We can let Apache Spark compute this for the test data sets.
```
# extract predictions
pred_svc <- ml_predict(svc_spark, pisa_test)
pred_lr <- ml_predict(spark_lr, pisa_test)
ml_binary_classification_evaluator(pred_svc)
```
```
## [1] 0.9795
```
```
ml_binary_classification_evaluator(pred_lr)
```
```
## [1] 0.9805
```
We want these values to be as close to 1 as possible (an AUC of 0\.5 corresponds to random guessing). These values are quite large and corroborate that both are good classifiers.
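As an aside, AUC has a simple rank\-based (Mann\-Whitney) interpretation: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. It can therefore be computed directly in R from a vector of labels and predicted scores; this is a generic sketch, independent of the Spark workflow above:
```
# AUC via its rank-based (Mann-Whitney) form
auc <- function(labels, scores) {
  pos   <- labels == 1
  n_pos <- sum(pos)
  n_neg <- sum(!pos)
  r     <- rank(scores)
  (sum(r[pos]) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
}
# usage: auc(labels, scores), with labels coded 1 / -1 (or TRUE / FALSE)
```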
### 8\.1\.3 Support Vector Machine
The SVM is an extension of the support vector classifier that uses **kernels** to allow for a non\-linear boundary between the classes. Without getting into the weeds, to solve the support vector classifier problem all you need to know are the inner products of the observations. Assuming that \\(x\_i\\) and \\(x\_i'\\) are two observations and \\(p\\) is the number of predictors (features), their inner product is defined as:
\\\[
\\langle x\_i, x\_i'\\rangle \= \\begin{bmatrix}
x\_{i1} & x\_{i2} & \\dots & x\_{ip}
\\end{bmatrix}
\\begin{bmatrix}
x\_{i1}' \\\\
x\_{i2}' \\\\
\\vdots \\\\
x\_{ip}'
\\end{bmatrix} \= x\_{i1}x\_{i1}' \+ x\_{i2}x\_{i2}' \+ \\dots \+ x\_{ip}x\_{ip}'
\\]
More succinctly, \\(\\langle x\_i, x\_i'\\rangle \= \\sum\_{j \= 1}^p x\_{ij}x\_{ij}'\\). We can replace the inner product with a more general form, \\(K(x\_i, x\_i')\\), where \\(K\\) is a kernel (a function that quantifies the similarity of two observations). When
\\\[
K(x\_i, x\_i') \= \\sum\_{j \= 1}^p x\_{ij}x\_{ij}'
\\]
we have the linear kernel, and this is just the support vector classifier. However, we can use a more flexible kernel, such as:
\\\[
K(x\_i, x\_i') \= \\left(1 \+ \\sum\_{j \= 1}^p x\_{ij}x\_{ij}'\\right)^d
\\]
which is known as a **polynomial kernel** of degree \\(d\\). When \\(d \> 1\\) we have a much more flexible decision boundary than with the support vector classifier (when \\(d \= 1\\) we are back to the support vector classifier).
Another very common kernel is the **radial kernel**, which is given by:
\\\[
K(x\_i, x\_i') \= \\exp\\left(\-\\gamma \\sum\_{j \= 1}^p (x\_{ij} \- x\_{ij}')^2\\right)
\\]
where \\(\\gamma\\) is a positive constant. Note that both \\(d\\) and \\(\\gamma\\) are selected via tuning and cross\-validation.
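A small sketch of these kernels written as plain R functions (evaluated on two toy vectors, not on the PISA observations) may help make the definitions concrete:
```
# the three kernels above, as functions of two observation vectors x and xp
linear_kernel <- function(x, xp) sum(x * xp)
poly_kernel   <- function(x, xp, d) (1 + sum(x * xp))^d
radial_kernel <- function(x, xp, gamma) exp(-gamma * sum((x - xp)^2))

x1 <- c(1, 0, 2)
x2 <- c(0.5, 1, -1)
linear_kernel(x1, x2)
poly_kernel(x1, x2, d = 2)
radial_kernel(x1, x2, gamma = 0.5)
```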
Both of these kernels are worth considering when the decision boundary is non\-linear. Figure [8\.7](supervised-machine-learning---part-ii.html#fig:james2) from James, et al. (2013\) gives an example of a non\-linear boundary. We see that the classes are not linearly separated and if we tried to use a linear decision boundary, we would end up with a very poor classifier. Therefore, we need to use a more flexible kernel. In both cases, we should expect that an SVM would greatly outperform both a support vector classifier and logistic regression.
Figure 8\.7: Non\-linear decision boundary with a polynomial kernel (left) and radial kernel (right) from James et al., 2013\.
#### 8\.1\.3\.1 Examples
We will continue trying to build the best classifier of whether someone scored at/above or below the mean on the science scale, again using the `svm` function in the `e1071` package. For brevity, we’ll consider only the radial kernel. The default `gamma` in `svm` is 1 divided by the number of features (about 0\.11 here, as seen in the earlier summary output); below, we’ll explicitly set both `gamma` and `cost` to 1\.
```
svm_fit <- svm(sci_class ~., data = train_dat,
cost = 1,
gamma = 1,
kernel = "radial")
```
Again, we can request some basic information about our model.
```
summary(svm_fit)
```
```
##
## Call:
## svm(formula = sci_class ~ ., data = train_dat, cost = 1, gamma = 1,
## kernel = "radial")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: radial
## cost: 1
## gamma: 1
##
## Number of Support Vectors: 6988
##
## ( 3676 3312 )
##
##
## Number of Classes: 2
##
## Levels:
## -1 1
```
This time we see we have 6988 support vectors: 3676 in class \-1 and 3312 in class 1\. That is quite a few more support vectors than for the support vector classifier. Let’s visually inspect this model by plotting it against the math and reading features for the same subset of test takers (Figure [8\.8](supervised-machine-learning---part-ii.html#fig:svmplot)).
```
plot(svm_fit, data = train_dat[ran_obs, ], math ~ reading)
```
Figure 8\.8: Support vector machine plot for a random subsample (n \= 1000\) of the training observations.
We see that the decision boundary is now clearly no longer linear and we again see decent classification. Before we investigate the fit of the model, we should tune it.
```
tune_svm <- tune(svm, sci_class ~., data = train_dat,
kernel = "radial",
ranges = list(cost = c(.01, .1, 1, 5, 10),
gamma = c(0.5, 1, 2, 3, 4)))
```
We can see which model was selected:
```
summary(tune_svm)
```
```
##
## Parameter tuning of 'svm':
##
## - sampling method: 10-fold cross validation
##
## - best parameters:
## cost gamma
## 0.1 0.5
##
## - best performance: 0.07583144
##
## - Detailed performance results:
## cost gamma error dispersion
## 1 0.01 0.5 0.17670590 0.010347886
## 2 0.10 0.5 0.07583144 0.008233504
## 3 1.00 0.5 0.07754307 0.009223528
## 4 5.00 0.5 0.08375680 0.008098257
## 5 10.00 0.5 0.08718054 0.008157393
## 6 0.01 1.0 0.36425229 0.011955406
## 7 0.10 1.0 0.13162657 0.007210614
## 8 1.00 1.0 0.08242504 0.008590571
## 9 5.00 1.0 0.09402859 0.009848512
## 10 10.00 1.0 0.10074908 0.007984562
## 11 0.01 2.0 0.39113500 0.011126599
## 12 0.10 2.0 0.31409966 0.010609909
## 13 1.00 2.0 0.11469785 0.006880824
## 14 5.00 2.0 0.12363760 0.006591525
## 15 10.00 2.0 0.12465198 0.006523243
## 16 0.01 3.0 0.39113500 0.011126599
## 17 0.10 3.0 0.38257549 0.012981991
## 18 1.00 3.0 0.17562831 0.007488136
## 19 5.00 3.0 0.17277475 0.006502038
## 20 10.00 3.0 0.17309173 0.006145790
## 21 0.01 4.0 0.39113500 0.011126599
## 22 0.10 4.0 0.39107163 0.011072976
## 23 1.00 4.0 0.23960159 0.011545434
## 24 5.00 4.0 0.22641364 0.008709051
## 25 10.00 4.0 0.22641360 0.008779341
```
And then select the best model and view it.
```
best_svm <- tune_svm$best.model
summary(best_svm)
```
```
##
## Call:
## best.tune(method = svm, train.x = sci_class ~ ., data = train_dat,
## ranges = list(cost = c(0.01, 0.1, 1, 5, 10), gamma = c(0.5,
## 1, 2, 3, 4)), kernel = "radial")
##
##
## Parameters:
## SVM-Type: C-classification
## SVM-Kernel: radial
## cost: 0.1
## gamma: 0.5
##
## Number of Support Vectors: 6138
##
## ( 3095 3043 )
##
##
## Number of Classes: 2
##
## Levels:
## -1 1
```
Finally, we see how well this predicts on both the training observations
```
svm_cm_train <- table(train_dat$sci_class,
predict(best_svm))
svm_cm_train
```
```
##
## -1 1
## -1 5620 549
## 1 519 9084
```
```
eval_classifier(svm_cm_train)
```
```
## accuracy sensitivity specificity
## 1 0.9322851 0.9459544 0.9110066
```
and finally the testing observations.
```
svm_cm_test <- table(test_dat$sci_class,
predict(best_svm, newdata = test_dat))
svm_cm_test
```
```
##
## -1 1
## -1 2781 280
## 1 291 4534
```
```
eval_classifier(svm_cm_test)
```
```
## accuracy sensitivity specificity
## 1 0.9275932 0.9396891 0.9085266
```
Performance is very comparable to the support vector classifier and logistic regression, implying there isn’t much gain from using a non\-linear decision boundary.
### 8\.1\.4 Lab
For the lab, we’ll try to build the best classifier for the “Do you expect your child will go into a ?” item. Using the following variables (and any other variables in the codebook that you think might be relevant) and **data for just Mexico**, try to build the best classifier. Do the following steps:
1. Split the data into a training and a testing data set. Rather than using a 66/33 split, try a 50/50 or a 75/25 split.
2. Fit a decision tree **or** random forest
* Prune your model and plot your model (if using decision trees)
* Determine the ideal number of trees (if using random forests)
3. Fit a support vector machine
* Consider different kernels (e.g., linear and radial)
* Visually inspect your model by plotting it against a few features. Create a few different plots.
* Tune the parameters.
+ How many support vectors do you have?
+ Did you notice much difference in the error rates?
+ Does your model have a high tolerance?
* (OPTIONAL): When fitting the support vector classifier, you could try and fit it using Apache Spark
+ If you do this, use the `ml_binary_classification_evaluator` function to calculate AUC.
4. Run a logistic regression
* Examine the coefficients table
5. Evaluate the fit of your models using the `eval_classifier` function on the testing data.
* Which model(s) fits the best? Can you improve it?
6. Record your accuracy, sensitivity, and specificity for all the models (decision tree or random forest and SVM) to share.
The following table contains the list of variables you could consider (these were introduced earlier):
| Label | Description |
| --- | --- |
| DISCLISCI | Disciplinary climate in science classes (WLE) |
| TEACHSUP | Teacher support in a science classes of students choice (WLE) |
| IBTEACH | Inquiry\-based science teaching and learning practices (WLE) |
| TDTEACH | Teacher\-directed science instruction (WLE) |
| ENVAWARE | Environmental Awareness (WLE) |
| JOYSCIE | Enjoyment of science (WLE) |
| INTBRSCI | Interest in broad science topics (WLE) |
| INSTSCIE | Instrumental motivation (WLE) |
| SCIEEFF | Science self\-efficacy (WLE) |
| EPIST | Epistemological beliefs (WLE) |
| SCIEACT | Index science activities (WLE) |
| BSMJ | Student’s expected occupational status (SEI) |
| MISCED | Mother’s Education (ISCED) |
| FISCED | Father’s Education (ISCED) |
| OUTHOURS | Out\-of\-School Study Time per week (Sum) |
| SMINS | Learning time (minutes per week) \- |
| TMINS | Learning time (minutes per week) \- in total |
| BELONG | Subjective well\-being: Sense of Belonging to School (WLE) |
| ANXTEST | Personality: Test Anxiety (WLE) |
| MOTIVAT | Student Attitudes, Preferences and Self\-related beliefs: Achieving motivation (WLE) |
| COOPERATE | Collaboration and teamwork dispositions: Enjoy cooperation (WLE) |
| PERFEED | Perceived Feedback (WLE) |
| unfairteacher | Teacher Fairness (Sum) |
| HEDRES | Home educational resources (WLE) |
| HOMEPOS | Home possessions (WLE) |
| ICTRES | ICT Resources (WLE) |
| WEALTH | Family wealth (WLE) |
| ESCS | Index of economic, social and cultural status (WLE) |
| math | Students’ math scores |
| reading | Students’ reading scores |
2 Introduction
==============
2\.1 What is data analysis?
---------------------------
This is a course in “data analysis”. Although you have heard this expression many times, you probably don’t have a clear idea of what it means. Here is a description of data analysis written by Paul Velleman and David Hoaglin in their article “Data Analysis”, in *Perspectives on Contemporary Statistics*.
“As the link between statistics and diverse fields of application, data analysis confronts the challenge of turning data into useful knowledge. Data analysis combines an attitude and a process, supported by well\-chosen techniques. The attitude distills the scientist’s curiosity about regularity, pattern, and exception. The process iteratively peels off patterns so that we can look beneath them for more subtle (and often more interesting) patterns. The techniques make few assumptions about the data and deliberately accommodate the unexpected.”
Essentially, exploratory data analysis (abbreviated as EDA) can be viewed as numerical detective work. We are confronted with one or more batches of data and we are trying to uncover patterns in the numbers. The work resembles that of a detective, such as the famous Sherlock Holmes, who solves a mystery (like the identity of a murderer) from the different pieces of evidence that he collects. Our objective in data analysis is to summarize the general structure in the numbers. By doing this, we can describe in a relatively simple way what the data are telling us.
2\.2 What is data and where do we find it?
------------------------------------------
Data are simply numbers with a particular context. For example, the number 610 is data when you are told that 610 represents the number of people who immigrated to the United States from Austria in 1998\. Data don’t need to be a random sample from some hypothetical population. They are simply numbers that we care about and wish to organize and summarize in some effective way.
In this class, you will need to find your own datasets. Where do you find data? Actually, data is present everywhere – in newspapers, the Internet, and in textbooks. One convenient source of a wide variety of data is the well\-known almanac or book\-of\-facts. (I will be using the 2010 New York Times Almanac which is typical of a world almanac that is available for sale.) Many types of almanacs are available at the library. I recommend purchasing one of the inexpensive paperback almanacs sold at a bookstore. It will be convenient to access an assortment of datasets if you have an almanac readily available.
2\.3 Meet our first data
------------------------
This is a good time to introduce you to our first dataset. Browsing through my almanac, I find a section on immigration in the United States. In this section, a table displayed on page 310 shows the estimated number of U.S. immigrants in 2008 from various countries. The table gives the name of each country, the region of the world to which the country belongs, and the 2008 immigration count from that country. This data is contained in the dataset `immigrants` in the `LearnEDAfunctions` package. I have listed the first few rows of this data table.
```
library(LearnEDAfunctions)
head(immigrants)
```
```
## Country Region Count.1998 Count.2008
## 1 Austria Europe 610 1505
## 2 Belgium Europe 557 829
## 3 Czechoslovakia Europe 931 1650
## 4 Denmark Europe 447 551
## 5 France Europe 2961 5246
## 6 Germany Europe 6923 8456
```
Obviously, it is hard to understand general patterns in these immigration numbers by just looking at the table. We are interested in using graphs and summaries to better understand the general structure in these data.
It is usually helpful to list some general questions that we have about these data. Here are a few questions that quickly come to mind:
* What countries have contributed the most immigrants to the United States?
* What is a typical number of immigrants from a country?
* Are Asian countries contributing more or fewer immigrants than countries from Europe?
* The magazine *Time* had an issue that focused on the U.S./Mexico border. Are we getting an unusually large number of immigrants from Mexico?
* Which countries are contributing a large number of immigrants relative to their population size?
We probably won’t answer all of these questions in our preliminary data analysis, but it is always important to think of things that you wish to learn from your data.
2\.4 How does exploratory data analysis (EDA) differ from confirmatory data analysis (CDA)?
-------------------------------------------------------------------------------------------
So we can think of exploratory data analysis (EDA) simply as looking for patterns (and deviations from these patterns) in data. This type of data analysis is fundamentally different from the way that most of us learned statistics. Let’s illustrate the typical way we learned to analyze a single batch of data.
In this approach, we assume that the data represent a random sample drawn from a hypothetical normally distributed population. With this assumption, a “best” guess at the average of the population is the sample mean. We consider several inferential questions. We construct an interval that we are confident contains the population mean, and we make decisions about the value of the population mean by one or more hypothesis tests. This methodology, based on the t distribution, is well known and available in any statistical software package.
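For instance, a minimal illustration of this confirmatory approach in R (using simulated numbers, since no particular dataset is implied here) is the familiar one\-sample t procedure:
```
# simulate a small batch of data and carry out the usual t-based inference
set.seed(1)
y <- rnorm(25, mean = 10, sd = 2)  # hypothetical sample
t.test(y, mu = 9)  # confidence interval for the mean and a test of mu = 9
```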
This approach is called confirmatory data analysis or CDA. We analyze data by the use of probability models. We write down a family of probability models that could have generated the observed data, and we learn about the data by estimating the unknown parameters of these models.
2\.5 How is EDA different from CDA?
-----------------------------------
First, in EDA we make no assumptions about an underlying population. We don’t assume that the data represent independent observations or that the data come from a population with a prescribed shape. The data are simply viewed as numbers that we wish to explore.
Second, the goals of EDA are different from that of CDA. The goal of CDA is to learn about the underlying population – statistical inference is the goal of CDA. In contrast, there are no inferential goals in EDA. We are focusing our analysis on the data at hand, instead of worrying about characteristics of a hypothetical population.
I don’t want to give you the impression that EDA is in some sense better than CDA. Rather, data analysis typically consists of both EDA and CDA. In a typical data analysis, we will use EDA methods to discover basic patterns or structure in the data. Then we may later use inferential methods to make statements about underlying populations.
2\.6 John Tukey’s contribution
------------------------------
Exploratory data analysis will always be associated with John Tukey, who was one of the greatest statisticians in history. It would be wrong to say that Tukey invented EDA. Rather, Tukey was the first to organize a collection of methods and associated philosophy into what we call EDA. There is an orange text called *EDA* that describes this work. Some of the data analysis methods Tukey used were novel and he gave them interesting names, like stem\-and\-leaf, boxplot, resistant smooth, and rootogram. It is natural for students to focus on the particular data analysis methods developed by Tukey. But Tukey’s legacy in this area was not the EDA methods but rather the particular data analysis philosophy that underlies the development of these methods.
2\.7 Four principles of EDA (the four R's)
-------------------------------------------
Although we will discuss a variety of methods useful for exploring data, they all share certain characteristics that underlie the philosophy of EDA. We call these principles the four R's since each principle starts with the letter r.
* **Revelation:** In EDA, there is an emphasis on using graphs to find patterns or to display fits. Effective displays of data can communicate information in a way that is not possible with numbers alone. A good rule of thumb is to always graph your data before computing any summary statistic. There will be a lot of graphing in this course, and we’ll discuss guidelines for constructing effective graphs.
* **Resistance:** In EDA, we wish to describe the general pattern in the majority of the data. In this detective work, we don’t want our search to be unduly affected by a few unusual observations. So it is important that our exploratory methods are resistant, or insensitive, to outliers. When we look at a single batch of numbers, we’ll see that the median, or middle value (when the data are arranged in ascending order), is an example of a resistant measure. The mean, or arithmetic average, is nonresistant since it can be distorted by one or more extreme values. (A short numerical example follows this list.)
* **Reexpression:** We will see that the natural scale in which the data are presented is not always the best scale for displaying or summarizing the data. In many situations, we wish to reexpress the data to a new scale by taking a square root or a logarithm. In this class, we will talk about a useful class of reexpressions, called the power family, and give guidance on the “best” choice of power reexpression to simplify the data analysis.
* **Residual:** In a typical data analysis, we will find a general pattern, which we call the FIT. The description of the FIT may be very informative. But in EDA we wish to look for deviations in the data from the general pattern in the FIT. We look at the residuals, which are defined as the difference between the data and the FIT.
\\\[
RESIDUAL \= DATA \- FIT
\\]
In many situations, we will see that a careful investigation of the residuals may be more interesting than the fit.
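Here is the short numerical example promised in the Resistance bullet (the numbers are toy values chosen only for illustration): a single wild observation drags the mean a long way but barely affects the median.
```
x <- c(3, 4, 5, 6, 7)
c(mean = mean(x), median = median(x))          # both equal 5
x_out <- c(3, 4, 5, 6, 70)                     # replace 7 with an outlier
c(mean = mean(x_out), median = median(x_out))  # mean jumps to 17.6; median stays 5
```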
2\.8 An exploratory look at our data
------------------------------------
Let’s illustrate a few of the EDA principles in an analysis of our immigration data. We start with a graph of the immigration counts. A simple graph is a stripchart that represents each value by a dot over the appropriate place on a number line. The points have been randomly jittered in the vertical direction so one can see overlapping points.
```
library(tidyverse)
ggplot(immigrants,
aes(x = Count.2008, y = 1)) +
geom_jitter() + ylim(0, 2) +
theme(axis.title.y=element_blank(),
axis.text.y=element_blank(),
axis.ticks.y=element_blank())
```
Looking at this stripchart, we see that most of the immigration counts are clustered between 0 and 30,000 with only six countries appear to have large counts.
This is really not a very useful graphical display since most of the data is bunched up towards the value zero. In other words, this data is strongly right\-skewed. Due to this right\-skewness, all we have learned is that there are a few countries with large numbers of immigrants and Mexico, with 188,015 immigrants, stands out.
What we see in this stripchart is that the original data (counts of immigrants) is not the best scale for viewing in a graph. We can improve the presentation of these data by reexpressing the counts by taking logs. That is, we reexpress Austria’s count 1505 to log(1505\) \= 3\.18, reexpress Belgium’s count 557 to log(557\) \= 2\.75, and so on for all of the immigrant counts. (By the way, when we take logs in this class, they will all be log base 10\.)
```
immigrants <- mutate(immigrants,
log.Count = log10(Count.2008))
```
Here is a stripchart of the logarithms of the immigrant counts:
```
ggplot(immigrants,
aes(x = log.Count, y = 0)) +
geom_jitter() + ylim(-1, 1) +
theme(axis.title.y=element_blank(),
axis.text.y=element_blank(),
axis.ticks.y=element_blank())
```
This is a much better graphical display for viewing these data. The log counts are evenly spread out between 2\.50 and 5\.20 and one can see more interesting structure in the data. In particular, we see a clump of countries with log immigration counts around 3, and a second concentration of log counts around 4\. A typical log immigration count can be seen to be about 3\.75\. We still see Mexico’s large log count of 5\.27; but now we also see a small log count at 2\.58 that corresponds to Norway.
We can summarize these data by the typical value of 3\.75 – this is our FIT to these data. We can compute residuals by subtracting the FIT from each log count:
\\\[
RESIDUAL \= \\log COUNT \- FIT
\\]
\\\[
\= \\log COUNT \- 3\.75
\\]
```
immigrants <- mutate(immigrants,
Residual = log.Count - 3.75)
```
The stripchart below graphs the residuals of the data:
```
ggplot(immigrants,
aes(x = Residual, y = 0)) +
geom_jitter() + ylim(-1, 1) +
theme(axis.title.y=element_blank(),
axis.text.y=element_blank(),
axis.ticks.y=element_blank())
```
Now we can look at these data in more detail. We see that, on the log count scale, two countries have immigration counts that are 1 smaller than the average; also there is one country (Mexico) that is approximately 1\.5 larger than the average. This residual graph tells how close the log counts are to the average of 3\.75\.
This example illustrates a few of the EDA principles that we’ll be using throughout the course.
* It illustrates the use of graphical displays to see basic patterns in data.
* It shows that the scale of the original data may not be the best scale for viewing data and one can reexpress the data by a suitable transformation (here logs) to improve the presentation of the data.
* One can easily summarize the reexpressed data by a \`\`typical” value, and residuals can be computed that show the deviations of the data from this typical value.
**What have we learned?**
Let’s return to our questions about these data to see what we have learned in this brief data analysis:
* What countries have contributed the most immigrants to the United States?
ANSWER: Mexico is by far the leader in supplying immigrants to the U.S.
* What is a typical number of immigrants from a country?
ANSWER: On the log scale, a typical immigrant count is 3\.75\.
* Are Asian countries contributing more or less immigrants than countries from Europe?
ANSWER: We didn’t address this question in our brief analysis, but we’ll soon talk about how we can compare batches of data.
* The magazine *Time* had an issue that focused on the U.S./Mexico border. Are we getting an unusually large number of immigrants from Mexico?
ANSWER: Yes – this analysis confirms that Mexico is supplying many immigrants.
* Which countries are contributing a large number of immigrants relative to their population size?
ANSWER: In order to answer this question, we would need to collect the populations of the countries in the table. This would be an interesting study.
| Data Science |
bayesball.github.io | https://bayesball.github.io/EDA/introduction.html |
2 Introduction
==============
2\.1 What is data analysis?
---------------------------
This is a course in “data analysis”. Although you have heard this expression many times, you probably don’t have a clear idea of what it means. Here is a description of data analysis written by Paul Velleman and David Hoaglin in their article “Data Analysis”, in *Perspectives on Contemporary Statistics*.
“As the link between statistics and diverse fields of application, data analysis confronts the challenge of turning data into useful knowledge. Data analysis combines an attitude and a process, supported by well\-chosen techniques. The attitude distills the scientist’s curiosity about regularity, pattern, and exception. The process iteratively peels off patterns so that we can look beneath them for more subtle (and often more interesting) patterns. The techniques make few assumptions about the data and deliberately accommodate the unexpected.”
Essentially, exploratory data analysis (abbreviated EDA) can be viewed as numerical detective work. We are confronted with one or more batches of data, and we are trying to uncover patterns in the numbers. Data analysis resembles the work of a detective, such as the famous Sherlock Holmes, who solves a mystery (like the identity of a murderer) from the different pieces of evidence that he collects. Our objective in data analysis is to summarize the general structure in the numbers. By doing this, we can describe in a relatively simple way what the data are telling us.
2\.2 What is data and where do we find it?
------------------------------------------
Data are simply numbers with a particular context. For example, the number 610 is data when you are told that 610 represents the number of people who immigrated to the United States from Austria in 1998\. Data doesn’t need to be a random sample from some hypothetical population. It is simply numbers that we care about and wish to organize and summarize in some effective way.
In this class, you will need to find your own datasets. Where do you find data? Actually, data is present everywhere – in newspapers, the Internet, and in textbooks. One convenient source of a wide variety of data is the well\-known almanac or book\-of\-facts. (I will be using the 2010 New York Times Almanac which is typical of a world almanac that is available for sale.) Many types of almanacs are available at the library. I recommend purchasing one of the inexpensive paperback almanacs sold at a bookstore. It will be convenient to access an assortment of datasets if you have an almanac readily available.
2\.3 Meet our first data
------------------------
This is a good time to introduce our first dataset. Browsing through my almanac, there is a section on immigration in the United States. In this section, a table on page 310 shows the estimated number of U.S. immigrants in 2008 from various countries. The table gives the name of each country, the region of the world to which the country belongs, and the 2008 immigration count from that country. These data are contained in the dataset `immigrants` in the `LearnEDAfunctions` package. I have listed the first few rows of this data table.
```
library(LearnEDAfunctions)
head(immigrants)
```
```
## Country Region Count.1998 Count.2008
## 1 Austria Europe 610 1505
## 2 Belgium Europe 557 829
## 3 Czechoslovakia Europe 931 1650
## 4 Denmark Europe 447 551
## 5 France Europe 2961 5246
## 6 Germany Europe 6923 8456
```
Obviously, it is hard to understand general patterns in these immigration numbers by just looking at the table. We are interested in using graphs and summaries to better understand the general structure in these data.
It is usually helpful to list some general questions that we have about these data. Here are a few questions that quickly come to mind:
* What countries have contributed the most immigrants to the United States?
* What is a typical number of immigrants from a country?
* Are Asian countries contributing more or less immigrants than countries from Europe?
* The magazine *Time* had an issue that focused on the U.S./Mexico border. Are we getting an unusually large number of immigrants from Mexico?
* Which countries are contributing a large number of immigrants relative to their population size?
We probably won’t answer all of these questions in our preliminary data analysis, but it is always important to think of things that you wish to learn from your data.
2\.4 How does exploratory data analysis (EDA) differ from confirmatory data analysis (CDA)?
-------------------------------------------------------------------------------------------
So we can think of exploratory data analysis (EDA) simply as looking for patterns (and deviations from these patterns) in data. This type of data analysis is fundamentally different from the way that most of us learned statistics. Let’s illustrate the typical way we learned to analyze a single batch of data.
In this approach, we assume that the data represent a random sample drawn from a hypothetical normally distributed population. With this assumption, a “best” guess at the average of the population is the sample mean. We consider several inferential questions. We construct an interval that we are confident contains the population mean, and we make decisions about the value of the population mean by one or more hypothesis tests. This methodology, based on the t distribution, is well known and available in any statistical software package.
This approach is called confirmatory data analysis or CDA. We analyze data by the use of probability models. We write down a family of probability models that could have generated the observed data, and we learn about the data by estimating the unknown parameters of these models.
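For concreteness, here is a minimal sketch of that confirmatory workflow in R, applied to a few made\-up measurements (the numbers are hypothetical); `t.test()` returns both a confidence interval for the population mean and a test of a hypothesized value.
```
# a small batch of made-up measurements, treated as a random sample
x <- c(98, 104, 101, 97, 103, 99, 105, 100)

# 95% confidence interval for the population mean
t.test(x)$conf.int

# test of the hypothesis that the population mean equals 100
t.test(x, mu = 100)
```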
2\.5 How is EDA different from CDA?
-----------------------------------
First, in EDA we make no assumptions about an underlying population. We don’t assume that the data represent independent observations or that the data come from a population with a prescribed shape. The data are simply viewed as numbers that we wish to explore.
Second, the goals of EDA are different from those of CDA. The goal of CDA is to learn about the underlying population – statistical inference is the goal of CDA. In contrast, there are no inferential goals in EDA. We focus our analysis on the data at hand, instead of worrying about characteristics of a hypothetical population.
I don’t want to give you the impression that EDA is in some sense better than CDA. Rather, data analysis typically consists of both EDA and CDA. In a typical data analysis, we will use EDA methods to discover basic patterns or structure in the data. Then we may later use inferential methods to make statements about underlying populations.
2\.6 John Tukey’s contribution
------------------------------
Exploratory data analysis will always be associated with John Tukey, who was one of the greatest statisticians in history. It would be wrong to say that Tukey invented EDA. Rather, Tukey was the first to organize a collection of methods and an associated philosophy into what we call EDA. His orange\-covered text *Exploratory Data Analysis* describes this work. Some of the data analysis methods Tukey used were novel, and he gave them memorable names, like the stem\-and\-leaf display, the boxplot, the resistant smooth, and the rootogram. It is natural for students to focus on the particular data analysis methods developed by Tukey. But Tukey’s legacy in this area was not the EDA methods but rather the particular data analysis philosophy that underlies their development.
2\.7 Four principles of EDA (the four R’s)
-------------------------------------------
Although we will discuss a variety of methods useful for exploring data, they all share common characteristics that underlie the philosophy of EDA. We call these principles the four R’s since each principle starts with the letter r.
* **Revelation:** In EDA, there is an emphasis on using graphs to find patterns and to display fits. Effective displays of data can communicate information in a way that is not possible with numbers alone. A good rule of thumb is to always graph your data before computing any summary statistics. There will be a lot of graphing in this course, and we’ll discuss guidelines for constructing effective graphs.
* **Resistance:** In EDA, we wish to describe the general pattern in the majority of the data. In this detective work, we don’t want our search to be unusually affected by a few unusual observations. So it is important that our exploratory method is resistant or insensitive to outliers. When we look at a single batch of numbers, we’ll see that the median or middle value (when the data are arranged in ascending order) is an example of a resistant measure. The mean or arithmetic average is nonresistant since it can be distorted by one or more extreme values.
* **Reexpression:** We will see that the natural scale in which the data are presented is not always the best scale for displaying or summarizing the data. In many situations, we wish to reexpress the data on a new scale by taking a square root or a logarithm. In this class, we will talk about a useful class of reexpressions, called the power family, and give guidance on the “best” choice of power reexpression to simplify the data analysis.
* **Residual:** In a typical data analysis, we will find a general pattern, which we call the FIT. The description of the FIT may be very informative. But in EDA we wish to look for deviations in the data from the general pattern in the FIT. We look at the residuals, which are defined as the difference between the data and the FIT.
\\\[
RESIDUAL \= DATA \- FIT
\\]
In many situations, we will see that a careful investigation of the residuals may be more interesting than the fit.
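To make the resistance and residual ideas concrete before we turn to the immigration data, here is a minimal sketch with a few made\-up numbers: the median serves as a resistant FIT, and the residuals are what is left over.
```
# a small batch of made-up values (one of them extreme)
batch <- c(10, 12, 13, 15, 16, 18, 200)

# the mean is pulled toward the extreme value; the median is resistant
mean(batch)
median(batch)

# take the median as a simple FIT and look at what is left over
fit <- median(batch)
batch - fit
```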
2\.8 An exploratory look at our data
------------------------------------
Let’s illustrate a few of the EDA principles in an analysis of our immigration data. We start with a graph of the immigration counts. A simple graph is a stripchart that represents each value by a dot over the appropriate place on a number line. The points have been randomly jittered in the vertical direction so one can see overlapping points.
```
library(tidyverse)
ggplot(immigrants,
aes(x = Count.2008, y = 1)) +
geom_jitter() + ylim(0, 2) +
theme(axis.title.y=element_blank(),
axis.text.y=element_blank(),
axis.ticks.y=element_blank())
```
Looking at this stripchart, we see that most of the immigration counts are clustered between 0 and 30,000, with only six countries appearing to have large counts.
This is really not a very useful graphical display, since most of the data is bunched up towards the value zero. In other words, these data are strongly right\-skewed. Due to this right\-skewness, all we have learned is that there are a few countries with large numbers of immigrants, and that Mexico, with 188,015 immigrants, stands out.
What we see in this stripchart is that the original data (counts of immigrants) is not the best scale for viewing in a graph. We can improve the presentation of these data by reexpressing the counts by taking logs. That is, we reexpress Austria’s count 1505 to log(1505\) \= 3\.18, reexpress Belgium’s count 557 to log(557\) \= 2\.75, and so on for all of the immigrant counts. (By the way, when we take logs in this class, they will all be log base 10\.)
```
immigrants <- mutate(immigrants,
log.Count = log10(Count.2008))
```
Here is a stripchart of the logarithms of the immigrant counts:
```
ggplot(immigrants,
aes(x = log.Count, y = 0)) +
geom_jitter() + ylim(-1, 1) +
theme(axis.title.y=element_blank(),
axis.text.y=element_blank(),
axis.ticks.y=element_blank())
```
This is a much better graphical display for viewing these data. The log counts are spread out fairly evenly between about 2\.6 and 5\.3, and one can see more interesting structure in the data. In particular, we see a clump of countries with log immigration counts around 3, and a second concentration of log counts around 4\. A typical log immigration count can be seen to be about 3\.75\. We still see Mexico’s large log count of 5\.27; but now we also see a small log count of 2\.58 that corresponds to Norway.
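As a quick check on this eyeballed typical value, we can compute a resistant summary of the log counts (using the `log.Count` variable created above):
```
# median of the log counts -- compare with the eyeballed typical value of 3.75
median(immigrants$log.Count)
```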
We can summarize these data by the typical value of 3\.75 – this is our FIT to these data. We can compute residuals by subtracting the FIT from each log count:
\\\[
RESIDUAL \= \\log COUNT \- FIT
\\]
\\\[
\= \\log COUNT \- 3\.75
\\]
```
immigrants <- mutate(immigrants,
Residual = log.Count - 3.75)
```
The stripchart below graphs the residuals of the data:
```
ggplot(immigrants,
aes(x = Residual, y = 0)) +
geom_jitter() + ylim(-1, 1) +
theme(axis.title.y=element_blank(),
axis.text.y=element_blank(),
axis.ticks.y=element_blank())
```
Now we can look at these data in more detail. We see that, on the log count scale, two countries have counts that are about 1 smaller than the typical value; also, there is one country (Mexico) whose count is approximately 1\.5 larger than the typical value. This residual graph tells us how close the log counts are to the typical value of 3\.75\.
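If we want to name the countries behind these extreme residuals, a short sketch along the following lines would list them (the cutoff of 1 is just an illustrative choice):
```
library(dplyr)
# countries whose log counts sit roughly 1 or more units away from the FIT of 3.75
immigrants %>%
  filter(abs(Residual) >= 1) %>%
  select(Country, Count.2008, log.Count, Residual)
```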
This example illustrates a few of the EDA principles that we’ll be using throughout the course.
* It illustrates the use of graphical displays to see basic patterns in data.
* It shows that the scale of the original data may not be the best scale for viewing data and one can reexpress the data by a suitable transformation (here logs) to improve the presentation of the data.
* One can easily summarize the reexpressed data by a “typical” value, and residuals can be computed that show the deviations of the data from this typical value.
**What have we learned?**
Let’s return to our questions about these data to see what we have learned in this brief data analysis:
* What countries have contributed the most immigrants to the United States?
ANSWER: Mexico is by far the leader in supplying immigrants to the U.S.
* What is a typical number of immigrants from a country?
ANSWER: On the log scale, a typical immigrant count is 3\.75\.
* Are Asian countries contributing more or less immigrants than countries from Europe?
ANSWER: We didn’t address this question in our brief analysis, but we’ll soon talk about how we can compare batches of data.
* The magazine *Time* had an issue that focused on the U.S./Mexico border. Are we getting an unusually large number of immigrants from Mexico?
ANSWER: Yes – this analysis confirms that Mexico is supplying many immigrants.
* Which countries are contributing a large number of immigrants relative to their population size?
ANSWER: In order to answer this question, we would need to collect the populations of the countries in the table. This would be an interesting study.
| Data Science |
bayesball.github.io | https://bayesball.github.io/EDA/working-with-a-single-batch-displays.html |
3 Working with a Single Batch – Displays
========================================
3\.1 Meet the Data
------------------
**Data: ACT Average Composite Scores by State, 2006\-2007**
Source: ACT, Inc. from the World Almanac and Book of Facts 2008\.
One of the most important standardized tests given in the United States is the ACT exam. This test is used by many universities in deciding acceptance of prospective freshmen. The table below shows the ACT average composite score for 26 states. Here we are focusing only on the states where at least half of the high school graduates took this exam.
```
State ACT State ACT
Avg Avg
-------------------------------------
AL 20.3 MO 21.6
AR 20.5 MT 21.9
CO 20.4 NE 22.1
FL 19.9 NM 20.2
ID 21.4 ND 21.6
IL 20.5 OH 21.6
IA 22.3 OK 20.7
KS 21.9 SD 21.9
KY 20.7 TN 20.7
LA 20.1 UT 21.7
MI 21.5 WV 20.6
MN 22.5 WI 22.3
MS 18.9 WY 21.5
```
3\.2 The Basic Stemplot
-----------------------
Our first task in working with this batch of data is to organize it in some way so that we can see the distribution of ACT averages. A simple, yet effective display of a small amount of data is a stem and leaf diagram, or stemplot for short.
Here are the steps for drawing a stemplot.
* First, divide each data value into a stem and a leaf.
Here it is convenient to divide an ACT average, such as Alabama’s 20\.3 value, into a
```
stem of 20 and a leaf of 3.
```
(Note: here we are dividing at the decimal point, but this won’t usually be the case.)
* Next, we write down all of the possible stems.
```
18
19
20
21
22
```
* We record values by placing the leaf for each data item on its corresponding stem.
So Alabama’s 20\.3 value is recorded as
```
18
19
20 3
21
22
```
We next record Arkansas’s 20\.5 value as
```
18
19
20 35
21
22
```
* Continuing in this fashion, we record all 26 ACT averages.
```
1 | 2: represents 1.2
leaf unit: 0.1
18 9
19 9
20 1234556777
21 4556667999
22 1335
```
Note that we have indicated the unit for each leaf. This is important since we have thrown away the decimal point in creating the stemplot. If we look at the stemplot, the first value is
```
18 9
```
which we interpret as 189 × 0\.1 \= 18\.9, since the leaf unit is 0\.1\.
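If you would rather let R draw the display, base R’s `stem()` function produces a similar stem\-and\-leaf plot (its layout may differ a little from the hand\-drawn version). A minimal sketch, using the `act.scores.06.07` data frame from the `LearnEDAfunctions` package that is also used in the histogram code below:
```
library(LearnEDAfunctions)
# quick stem-and-leaf display of the 26 state ACT averages
stem(act.scores.06.07$ACT)
```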
The stemplot is a quick way of grouping the averages. It resembles the better\-known graphical display, the histogram. If you were to construct a histogram using the intervals 18\-19, 19\-20, and so on, you would obtain the following picture that resembles the stemplot above.
```
library(LearnEDAfunctions)
library(ggplot2)
ggplot(act.scores.06.07, aes(ACT)) +
geom_histogram(breaks = 17:24,
color="black", fill="white")
```
However, the stemplot has one strong advantage over a histogram. You can actually see the data values that fall in each interval. For example, we see that the last line contains the four largest ACT averages 22\.1, 22\.3, 22\.3, and 22\.5, corresponding respectively to the states Nebraska, Iowa, Wisconsin, and Minnesota. In a histogram, we would lose this information about individual states when we group the data items into classes.
3\.3 Looking at a Data Distribution
-----------------------------------
What do we look for when we display data using a graph like a stemplot?
1. First, we look at the **general shape** of the data. (The stemplot of the ACT averages has been redrawn below.)
```
18 9
19 9
20 1234556777
21 4556667999
22 1335
```
Generally, we distinguish between three basic data shapes – symmetric, skewed right and skewed left.
**Symmetric** is when the majority of the data is in the middle and the values drop off at the same rate at the low end and the high end. You can imagine dividing the data into two halves, where the left half is approximately a mirror image of the right half.
**Skewed right** is where the data values at the high end decrease at a much slower rate than the values at the low end. Conversely, **skewed left** is where the data values at the low end decrease at a slower rate than the values at the high end.
One can represent these three shapes by means of smoothed curves.
What can we say about our dataset? It is hard to tell (we will try alternative displays of these data to get a better look), but to me it appears somewhat skewed left. Most of the ACT averages fall in the 20\-21 range and the values at the low end decrease at a slower rate than averages at the high end.
2\. After we think about shape, we want to talk about a **typical** or average value. We will later describe different types of “averages”, but we notice the large number of ACT averages in the 21 line, so 21\.something would be a typical value.
3\. Next, we describe the **spread or variation** in the data values. We see that most of the ACT averages fall between 20 and 22\.1, with only a couple of states with averages below 20\. (A short numerical check of the typical value and spread appears after this list.)
4. Last, we discuss any **unusual data values** or any **distinctive features** of this distribution. Here we might talk about
* unusually high or low values
* any gaps in the data
* the presence of several clusters of observations
* granularity in the data – possibly all of the data values end with an even last digit
Here we don’t see anything particularly unusual. The two low ACT averages were already mentioned. Part of the reason why we don’t see more is that we could improve our graphical display. This motivates talking about variations of the basic stemplot.
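Before turning to those variations, here is a brief sketch that backs up the impressions about the typical value and spread with a few resistant numbers, using the same `act.scores.06.07` data:
```
library(LearnEDAfunctions)
# a resistant typical value and the extremes of the batch
median(act.scores.06.07$ACT)
range(act.scores.06.07$ACT)

# Tukey's five-number summary: minimum, lower hinge, median, upper hinge, maximum
fivenum(act.scores.06.07$ACT)
```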
3\.4 Stemplot variations – breaking the stem at different locations
-------------------------------------------------------------------
In our stemplot, we formed the stem and leaf by dividing at the decimal point. But other break points are possible.
To illustrate, we could break Alabama’s 20\.3 ACT average between the units and tens place:
```
2 | 03
```
Then we would jot down Alabama’s value on the stemplot
```
1
2 0
```
Note that we use the one\-digit leaf 0 – we drop off the last digit 3 in 03\. (We typically draw stemplots using single digits for leaves.)
By the way, it is better to drop and not round. Rounding takes more time than dropping. Also it is easier to retrieve the original data from the stemplot when you drop digits.
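In R terms, dropping a digit is truncation rather than rounding. A tiny illustrative sketch, using a hypothetical value with more digits than we want to display:
```
x <- 21.57   # a hypothetical value with more digits than we want to display

# dropping (truncating) keeps the leading digits unchanged: 21.5
floor(x * 10) / 10

# rounding can change the recorded leaf: 21.6
round(x, 1)
```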
With this breakpoint, we get the following display for the 26 ACT averages:
```
1 | 2: represents 12
leaf unit: 1
n: 26
1 89
2 000000000011111111112222
```
This is not a very effective display, since all the data is bunched up on only two lines.
Another possibility is to break an ACT average between the tenth and hundredth places. If we write Alabama’s value 20\.3 as 20\.30, we break as follows:
```
203 | 0
```
Here there are quite a few possible stems – from 189 to 225\. We could write down the corresponding display, but it would consist of 37 lines. Given that we have only 26 data values to show, it should be clear that the display would be stretched out too much.
3\.5 Stemplot variations – 2, 5, and 10 leaves per stem
-------------------------------------------------------
There is another choice in constructing a stemplot – the number of possible leaves on each stem. In our basic display (shown again)
```
18 9
19 9
20 1234556777
21 4556667999
22 1335
```
there are 10 possible leaves on each line (that is, 0, 1, 2, …, 9\), and so we call this display a stemplot with 10 leaves per stem.
One way of stretching out this display is to divide the ten possible leaves into a small group (0 through 4\) and a large group (5 through 9\). To draw this stemplot, we write down each stem twice
```
18*
18.
19*
19.
20*
20.
21*
21.
22*
22.
```
(the \* indicates the first line and . the second) and then record the leaves
```
1 | 2: represents 1.2
leaf unit: 0.1
n: 26
18. 9
19*
19. 9
20* 1234
20. 556777
21* 4
21. 556667999
22* 133
22. 5
```
We call this display a stemplot with 5 leaves per stem. I think this is a better graph than the 10\-leaves\-per\-stem display since we see more structure. Now we see
* two clusters of observations – one cluster on the 20 lines and a second on the 21\. line. It might be reasonable to say that there are two modes or humps in the data.
* the 18\.9 and 19\.9 values appear to be somewhat low, since there is a gap between these ACT averages and the next largest values
Another possible stemplot is to divide the 10 possible leaves in our basic display into 5 groups. We write the 18 line five times
```
18*
18t
18f
18s
18.
```
The 0, 1 leaves are written on the first line (\*), the 2, 3 leaves are put on the 2nd (t) line, the 4, 5 leaves on the (f) line, the 6, 7 leaves on the (s) line and the 8, 9 leaves on the (.) line. The use of the t, f, s labels is helpful, since TWO and THREE start with t, FOUR and FIVE start with f, and SIX and SEVEN start with s. (I guess this idea wouldn’t be helpful in drawing a stemplot with Chinese letters.)
We call this display a stemplot with 2 leaves per stem, since each line has two possible leaves. This stemplot for our data is shown below.
```
1 | 2: represents 1.2
leaf unit: 0.1
n: 26
18. | 9
19* |
t |
f |
s |
19. | 9
20* | 1
t | 23
f | 455
s | 6777
20. |
21* |
t |
f | 455
s | 6667
21. | 999
22* | 1
t | 33
f | 5
```
Which is a better display, the previous one with 5 leaves per stem, or this one with 2 leaves per stem? This last display looks too spread out to me. You do see the two clusters in this 2\-leaves\-per\-stem stemplot, but there are more gaps introduced since we stretched out the display.
3\.6 Guidance in Constructing a Stemplot
----------------------------------------
In constructing a stemplot, there are two choices to make:
* how to break between the stem and leaf
* how many leaves per stem to use
It is best to try a few different stemplots and use the one that you think best represents the data. You will get some experience hand\-drawing stemplots – every time you should try at least two displays. The second display is usually quick to draw once the first display is done.
3\.7 Making the Stemplot Resistant
----------------------------------
Let’s illustrate constructing stemplots using a second example. In our almanac (page 953\), the the weights of the heaviest fish caught are listed for various species. I have jotted down the weights in pounds for the first 25 fish in the table (from Albacore to Summer Flounder):
```
88 155 85 21 26 13 10 563 78 31 19 18 21 23 135
98 35 133 88 113 94 9 36 20 22
```
To construct a stemplot, we first might trying breaking between the unit and the tens digits. If we do, we will get 56 lines, which won’t fit on a single page. The problem is that there is a single large weight 563 (corresponding to a Giant Sea Bass) which is larger than all of the remaining observations. We would like our display to be resistant or not distorted by a single large value. So what we do is to draw a stemplot of the values with the large value listed on a separate line labelled “HI”.
```
0 9
1 0389
2 011236
3 156
4
5
6
7 8
8 588
9 48
10
11 3
12
13 35
14
15 5
HI 563,
Unit = 1
```
This graph shows two clusters of weights, one in the 10\-20 pound range, and a second in the 80’s. This display might spread out the data too much. So, as an alternative, let’s try breaking the data between the 10’s and 100’s places and using two leaves per stem.
```
0* 01111
0t 222222333
0f
0s 7
0. 88899
1* 1
1t 33
1f 5
HI 563,
Unit = 10
```
I like this display better than the previous one. I still see two basic clusters in the datasets separated by a gap. This means that many of the record fish weights are modest size (corresponding to small fish), a second group of weights correspond to moderate\-size fish, and we can’t ignore the large weight of the Giant Sea Bass.
3\.8 A Few Closing Comments
---------------------------
* Although we have focused our discussion on the stemplot, it is not always the best graphical display. A stemplot is most effective for a dataset with no more than 50 values. If you have a larger dataset, it is likely that you would need too many stemplot lines. For large datasets, it is better to use a histogram.
* It is instructive to experiment with different choices of breakpoint and leaves per stem to find the “best” stemplot. But it is handy to have a formula which gives a suggested stemplot for a given dataset.
Here is a useful rule\-of\-thumb. The stemplot should have L lines where
\\\[
L \= \[10 \\log10(n)],
\\]
where \\(\[ \\, ]\\) stands for the integer part of the argument.
How should we use this formula?
* Find the range R of the data and compute L.
* Divide R by L, and round the answer to 2, 5, or 10\.
* This answer gives you the breakpoint and the number of leaves per stem.
Let’s illustrate this rule for our ACT scores. We have n \= 26 scores and LO \= 18\.7, HI \= 23\.5\.
The range is equal to
```
(R <- 23.5 - 18.7)
```
```
## [1] 4.8
```
and
```
(L <- floor(10 * log10(26)))
```
```
## [1] 14
```
So the ratio of R over L is given by
```
R / L
```
```
## [1] 0.3428571
```
which I round to 0\.2 (it is closer to 0\.2 than to 0\.5\) So the distance between the smallest value in two consecutive lines of the stemplot should be 0\.2\.
This rule tells us to use the display where we break 18\.7 at the decimal point, and use two leaves per stem. This is the stemplot that started like
```
18*
18t
18f
18s 7
18. 9
```
Actually, we decided that stemplot with 5 leaves per stem seemed better, but at least this rule gives us a stemplot that is close to the best one.
| Data Science |
bayesball.github.io | https://bayesball.github.io/EDA/working-with-a-single-batch-displays.html |
3 Working with a Single Batch – Displays
========================================
3\.1 Meet the Data
------------------
**Data: ACT Average Composite Scores by State, 2006\-2007**
Source: ACT, Inc. from the World Almanac and Book of Facts 2008\.
One of the most important standardized tests given in the United States is the ACT exam. This test is used by many universities in deciding acceptance of prospective freshmen. The table below shows the ACT average composite score for 26 states. Here we are focusing only on the states where at least half of the high school graduates took this exam.
```
State ACT State ACT
Avg Avg
-------------------------------------
AL 20.3 MO 21.6
AR 20.5 MT 21.9
CO 20.4 NE 22.1
FL 19.9 NM 20.2
ID 21.4 ND 21.6
IL 20.5 OH 21.6
IA 22.3 OK 20.7
KS 21.9 SD 21.9
KY 20.7 TN 20.7
LA 20.1 UT 21.7
MI 21.5 WV 20.6
MN 22.5 WI 22.3
MS 18.9 WY 21.5
```
3\.2 The Basic Stemplot
-----------------------
Our first task in working with this batch of data is to organize it in some way so that we can see the distribution of ACT averages. A simple, yet effective display of a small amount of data is a stem and leaf diagram, or stemplot for short.
Here are the steps for drawing a stemplot.
* First, divide each data value into a stem and a leaf.
Here it is convenient to divide an ACT average, such as Alabama’s
20\.3 value, into a
```
stem of 20 and a leaf of 3.
```
(Note: here we are dividing at the decimal point, but this won’t usually be the case.)
* Next, we write down all of the possible stems.
```
18
19
20
21
22
```
* We record values by placing the leaf for each data item on its corresponding stem.
So Alabama’s 20\.3 value is recorded as
```
18
19
20 3
21
22
```
We next record Arkansas’s 20\.5 value as
```
18
19
20 35
21
22
```
* Continuing in this fashion, we record all 26 ACT averages.
```
1 | 2: represents 1.2
leaf unit: 0.1
18 9
19 9
20 1234556777
21 4556667999
22 1335
```
Note that we have indicated the unit for each leaf. This is important since we have thrown away the decimal point in creating the stemplot. If we look at the stemplot, the first value is
```
18 9
```
which we interpret as 189 × 0\.1 \= 18\.9, since the leaf unit is 0\.1\.
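By the way, you don’t have to draw this display by hand. The `aplpack` package (used later in these notes) produces stemplots directly; the sketch below assumes the `act.scores.06.07` data frame from `LearnEDAfunctions` with its `ACT` column, the same data used in the histogram code that follows. The automatic choice of leaf unit and lines per stem may differ slightly from the hand\-drawn version.
```
library(LearnEDAfunctions)
# Stemplot of the 26 state ACT averages: stems are the whole-number parts,
# leaves are the tenths digits (leaf unit 0.1)
aplpack::stem.leaf(act.scores.06.07$ACT, depth = FALSE)
```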
The stemplot is a quick way of grouping the averages. It resembles the better\-known graphical display, the histogram. If you were to construct a histogram using the intervals 18\-19, 19\-20, and so on, you would obtain the following picture that resembles the stemplot above.
```
library(LearnEDAfunctions)
library(ggplot2)
ggplot(act.scores.06.07, aes(ACT)) +
geom_histogram(breaks = 17:24,
color="black", fill="white")
```
However, the stemplot has one strong advantage over a histogram. You can actually see the data values that fall in each interval. For example, we see that the last line contains the four largest ACT averages 22\.1, 22\.3, 22\.3, 22\.5, corresponding respectively to the states Nebraska, Iowa, Wisconsin, and Minnesota. In a histogram, we would lose this information about individual states when we group the data items into classes.
3\.3 Looking at a Data Distribution
-----------------------------------
What do we look for when we display data using a graph like a stemplot?
1. First, we look at the **general shape** of the data. (The stemplot of the ACT averages has been redrawn below.)
```
18 9
19 9
20 1234556777
21 4556667999
22 1335
```
Generally, we distinguish between three basic data shapes – symmetric, skewed right and skewed left.
**Symmetric** is when the majority of the data is in the middle and the values drop off at the same rate at the low end and the high end. You can imagine dividing the data into two halves, where the left half is approximately a mirror image of the right half.
**Skewed right** is where the data values at the high end decrease at a much slower rate than the values at the low end. Conversely, **skewed left** is where the data values at the low end decrease at a slower rate than the values at the high end.
One can represent these three shapes by means of smoothed curves.
What can we say about our dataset? It is hard to tell (we will try alternative displays of these data to get a better look), but to me it appears somewhat skewed left. Most of the ACT averages fall in the 20\-21 range and the values at the low end decrease at a slower rate than averages at the high end.
2\. After we think about shape, we want to talk about a **typical** or average value. We will later describe different types of “averages”, but we notice the large number of ACT averages in the 21 line, so 21\.something would be a typical value.
3. Next, we describe the **spread or variation** in the data values. We see that most of the ACT averages fall between 20 and 22\.1, with only a couple of states with averages below 20\.
4. Last, we discuss any **unusual data values** or any **distinctive features** of this distribution. Here we might talk about
* unusually high or low values
* any gaps in the data
* the presence of several clusters of observations
* granularity in the data – possibly all of the data values end with an even last digit
Here we don’t see anything particularly unusual. The two low ACT averages were already mentioned. Part of the reason why we don’t see more is that we could improve our graphical display. This motivates talking about variations of the basic stemplot.
3\.4 Stemplot variations – breaking the stem at different locations
-------------------------------------------------------------------
In our stemplot, we formed the stem and leaf by dividing at the decimal point. But other break points are possible.
To illustrate, we could break Alabama’s 20\.3 ACT average between the units and tens place:
```
2 | 03
```
Then we would jot down Alabama’s value on the stemplot
```
1
2 0
```
Note that we use the one\-digit leaf 0 – we drop off the last digit 3 in 03\. (We typically draw stemplots using single digits for leaves.)
By the way, it is better to drop and not round. Rounding takes more time than dropping. Also it is easier to retrieve the original data from the stemplot when you drop digits.
With this breakpoint, we get the following display for the 26 ACT averages:
```
1 | 2: represents 12
leaf unit: 1
n: 26
1 89
2 000000000011111111112222
```
This is not a very effective display, since all the data is bunched up on only two lines.
Another possibility is to break an ACT average between the tenths and hundredths places. If we write Alabama’s value 20\.3 as 20\.30, we break as follows:
```
203 | 0
```
Here there are quite a few possible stems – from 189 to 225\. We could write down the corresponding display, but it would consist of 37 lines. Given that we have only 26 data values to show, it should be clear that the display would be stretched out too much.
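In `aplpack::stem.leaf` the breakpoint is controlled (I believe) by the `unit` argument, the value of a single leaf digit. Here is a rough sketch of the two alternative breakpoints above, again assuming the `act.scores.06.07$ACT` column; the calls may need tweaking to match the hand\-drawn displays exactly.
```
library(LearnEDAfunctions)
# Break between the units and tens digits (leaf unit 1):
# the 26 averages pile up on just the 1 and 2 stems
aplpack::stem.leaf(act.scores.06.07$ACT, unit = 1, m = 1, depth = FALSE)

# Break between the tenths and hundredths digits (leaf unit 0.01):
# the display is stretched over far too many stems
aplpack::stem.leaf(act.scores.06.07$ACT, unit = 0.01, m = 1, depth = FALSE)
```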
3\.5 Stemplot variations – 2, 5, and 10 leaves per stem
-------------------------------------------------------
There is another choice in constructing a stemplot – the number of possible leaves on each stem. In our basic display (shown again)
```
18 79
19 69
20 123345556777
21 024455566667788999
22 00011233355899
23 125
```
there are 10 possible leaves on each line (that is, 0, 1, 2, …, 9\), and so we call this display a stemplot with 10 leaves per stem.
One way of stretching out this display is to divide the ten possible leaves into a small group (0 through 4\) and a large group (5 through 9\). To draw this stemplot, we write down each stem twice
```
18*
18.
19*
19.
20*
20.
21*
21.
22*
22.
23*
23.
```
(the \* indicates the first line and . the second) and then record the leaves
```
1 | 2: represents 1.2
leaf unit: 0.1
n: 26
18. 9
19*
19. 9
20* 1234
20. 556777
21* 4
21. 556667999
22* 133
22. 5
```
We call this display a stemplot with 5 leaves per stem. I think this is a better graph than the 10\-leaves\-per\-stem display since we see more structure. Now we see
* two clusters of observations – one cluster in the 20\* line and a second in the 21\. line. It might be reasonable to say that there are two modes or humps in the data.
* the 18\.7 and 18\.9 values appear to be somewhat low, since there is a gap between these ACT averages and the next largest
Another possible stemplot is to divide the 10 possible leaves in our basic display into 5 groups. We write the 18 line five times
```
18*
18t
18f
18s
18.
```
The 0, 1 leaves are written on the first line (\*), the 2, 3 leaves are put on the 2nd (t) line, the 4, 5 leaves on the (f) line, the 6, 7 leaves on the (s) line and the 8, 9 leaves on the (.) line. The use of the t, f, s labels is helpful, since TWO and THREE start with t, FOUR and FIVE start with f, and SIX and SEVEN start with s. (I guess this idea wouldn’t be helpful in drawing a stemplot with Chinese letters.)
We call this display a stemplot with 2 leaves per stem, since each line has two possible leaves. This stemplot for our data is shown below.
```
1 | 2: represents 1.2
leaf unit: 0.1
n: 26
18. | 9
19* |
t |
f |
s |
19. | 9
20* | 1
t | 23
f | 455
s | 6777
20. |
21* |
t |
f | 455
s | 6667
21. | 999
22* | 1
t | 33
f | 5
```
Which is a better display, the previous one with 5 leaves per stem, or this one with 2 leaves per stem? This last display looks too spread out to me. You do see the two clusters in this 2\-leaves\-per\-stem stemplot, but there are more gaps introduced since we stretched out the display.
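The number of leaves per stem corresponds (I believe) to the `m` argument of `aplpack::stem.leaf`, the number of lines each stem is split into: `m = 1` gives 10 leaves per stem, `m = 2` gives 5 leaves per stem, and `m = 5` gives 2 leaves per stem. A sketch, under the same assumptions as before:
```
library(LearnEDAfunctions)
# Five leaves per stem: each stem appears on a * line and a . line
aplpack::stem.leaf(act.scores.06.07$ACT, unit = 0.1, m = 2, depth = FALSE)

# Two leaves per stem: each stem appears on *, t, f, s, and . lines
aplpack::stem.leaf(act.scores.06.07$ACT, unit = 0.1, m = 5, depth = FALSE)
```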
3\.6 Guidance in Constructing a Stemplot
----------------------------------------
In constructing a stemplot, there are two choices to make:
* how to break between the stem and leaf
* how many leaves per stem to use
It is best to try a few different stemplots and use the one that you think best represents the data. You will get some experience hand\-drawing stemplots – every time you should try at least two displays. The second display is usually quick to draw once the first display is done.
3\.7 Making the Stemplot Resistant
----------------------------------
Let’s illustrate constructing stemplots using a second example. In our almanac (page 953\), the weights of the heaviest fish caught are listed for various species. I have jotted down the weights in pounds for the first 25 fish in the table (from Albacore to Summer Flounder):
```
88 155 85 21 26 13 10 563 78 31 19 18 21 23 135
98 35 133 88 113 94 9 36 20 22
```
To construct a stemplot, we might first try breaking between the units and tens digits. If we do, we will get more than 50 lines, which won’t fit on a single page. The problem is that there is a single large weight 563 (corresponding to a Giant Sea Bass) which is larger than all of the remaining observations. We would like our display to be resistant or not distorted by a single large value. So what we do is to draw a stemplot of the values with the large value listed on a separate line labelled “HI”.
```
0 9
1 0389
2 011236
3 156
4
5
6
7 8
8 588
9 48
10
11 3
12
13 35
14
15 5
HI 563,
Unit = 1
```
This graph shows two clusters of weights, one in the 10\-20 pound range, and a second in the 80’s. This display might spread out the data too much. So, as an alternative, let’s try breaking the data between the 10’s and 100’s places and using two leaves per stem.
```
0* 01111
0t 222222333
0f
0s 7
0. 88899
1* 1
1t 33
1f 5
HI 563,
Unit = 10
```
I like this display better than the previous one. I still see two basic clusters in the dataset separated by a gap. This means that many of the record fish weights are of modest size (corresponding to small fish), a second group of weights corresponds to moderate\-size fish, and we can’t ignore the large weight of the Giant Sea Bass.
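The same displays can be reproduced in R. The vector below is just the 25 weights listed above typed in; by default `aplpack::stem.leaf` trims outlying values onto separate LO/HI lines, so the Giant Sea Bass should land on its own HI line. This is only a sketch – the `unit` and `m` choices may need adjusting.
```
library(aplpack)
# Record weights (in pounds) for the 25 species listed above
fish <- c(88, 155, 85, 21, 26, 13, 10, 563, 78, 31, 19, 18, 21, 23, 135,
          98, 35, 133, 88, 113, 94, 9, 36, 20, 22)

# Break between the units and tens digits (leaf unit 1)
stem.leaf(fish, unit = 1, m = 1, depth = FALSE)

# Break between the tens and hundreds digits, two leaves per stem (leaf unit 10)
stem.leaf(fish, unit = 10, m = 5, depth = FALSE)
```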
3\.8 A Few Closing Comments
---------------------------
* Although we have focused our discussion on the stemplot, it is not always the best graphical display. A stemplot is most effective for a dataset with no more than 50 values. If you have a larger dataset, it is likely that you would need too many stemplot lines. For large datasets, it is better to use a histogram.
* It is instructive to experiment with different choices of breakpoint and leaves per stem to find the “best” stemplot. But it is handy to have a formula which gives a suggested stemplot for a given dataset.
Here is a useful rule\-of\-thumb. The stemplot should have L lines where
\\\[
L \= \[10 \\log\_{10}(n)],
\\]
where \\(\[ \\, ]\\) stands for the integer part of the argument.
How should we use this formula?
* Find the range R of the data and compute L.
* Divide R by L, and round the answer to the nearest “nice” number of the form 2, 5, or 10 times a power of ten (for example, 0\.2, 0\.5, 1, 2, 5, or 10\).
* This rounded value is the distance between the smallest possible values on two consecutive stemplot lines, and it determines the breakpoint and the number of leaves per stem.
Let’s illustrate this rule for our ACT scores. We have n \= 26 scores and LO \= 18\.7, HI \= 23\.5\.
The range is equal to
```
(R <- 23.5 - 18.7)
```
```
## [1] 4.8
```
and
```
(L <- floor(10 * log10(26)))
```
```
## [1] 14
```
So the ratio of R over L is given by
```
R / L
```
```
## [1] 0.3428571
```
which I round to 0\.2 (it is closer to 0\.2 than to 0\.5\). So the distance between the smallest possible values on two consecutive lines of the stemplot should be 0\.2\.
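The whole calculation is easy to wrap in a small helper function. This is just a sketch – the function name is mine, and the last step simply picks the nearest “nice” width of the form 2, 5, or 10 times a power of ten.
```
library(LearnEDAfunctions)
# Suggested distance between consecutive stemplot lines for a batch x
suggest_line_width <- function(x) {
  L <- floor(10 * log10(length(x)))                 # suggested number of lines
  ratio <- diff(range(x)) / L                       # raw R / L
  nice <- as.vector(outer(c(2, 5, 10), 10^(-5:5)))  # candidate "nice" widths
  nice[which.min(abs(nice - ratio))]                # round to the nearest one
}
# For the ACT averages this should suggest 0.2
suggest_line_width(act.scores.06.07$ACT)
```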
This rule tells us to use the display where we break 18\.7 at the decimal point, and use two leaves per stem. This is the stemplot that started like
```
18*
18t
18f
18s 7
18. 9
```
Actually, we decided that the stemplot with 5 leaves per stem seemed better, but at least this rule gives us a stemplot that is close to the best one.
| Data Science |
bayesball.github.io | https://bayesball.github.io/EDA/single-batch-summaries.html |
4 Single Batch: Summaries
=========================
4\.1 Meet the Data
------------------
**Data: Percentage change in population 2000\-2009 for each state.**
**Source: The 2010 New York Times Almanac, page 277, and the U.S. Census Bureau website <http://www.census.gov>.**
These data (some of which are displayed in the following table) show the change in population (measured as a percentage) for all states in the United States between the years 2000 and 2009 (roughly between the 2000 and 2010 censuses). The data are interesting, since we would like to know which regions of the U.S. are growing fast and which are growing slowly. Specifically, we might want to know
* what is a typical growth rate for a state in the last 9 years?
* are there states whose growths are significantly different from the typical growth rate?
* do the states with large population growths correspond to particular regions of the U.S.?
In this topic, we’ll discuss simple ways of summarizing a dataset. These summaries and associated displays will help in answering some of these questions.
```
library(LearnEDAfunctions)
library(tidyverse)
select(pop.change, State, Pct.change) %>% head()
```
```
## State Pct.change
## 1 Alabama 5.9
## 2 Alaska 11.3
## 3 Arizona 28.6
## 4 Arkansas 8.1
## 5 California 9.1
## 6 Colorado 16.8
```
We begin by constructing a stemplot of the growth percentages. We break between the ones and tens places and use two leaves per stem. We have one unusual value – Nevada at the high end that we show on a separate HI line.
```
aplpack::stem.leaf(pop.change$Pct.change, depth=FALSE)
```
```
## 1 | 2: represents 12
## leaf unit: 1
## n: 51
## 0* | 000001
## t | 222333333
## f | 4445555
## s | 66677777
## 0. | 889
## 1* | 000111
## t | 233
## f |
## s | 666
## 1. | 89
## 2* | 0
## t |
## f | 4
## s |
## 2. | 8
## HI: 32.3
```
4\.2 Ranks and Depths
---------------------
To describe our summaries which we will call letter values, we have to first define a few terms. The rank of an observation is its order when data is arranged from lowest to highest. For example, if we have the following six test scores
\\\[
40, 43, 65, 77, 100, 66,
\\]
40 has rank 1, 43 has rank 2, 77 has rank 5, etc.
We can distinguish between two ranks – a downward rank (abbreviated drank) is the rank of an observation when the data are arranged from HI to LO. In contrast, the upward rank (abbreviated urank) of an observation is its rank when data are arranged from LO to HI.
In our test score example,
```
43 has upward rank 2 and downward rank 5.
```
If \\(n\\) is the number of data values, it should be clear that
```
drank + urank = n+1
```
The depth of an observation is the smaller of the two ranks. That is,
```
depth = minimum{drank, urank}.
```
The extreme observations, the smallest and the largest, will each have a depth of 1\. The table below gives the downward ranks, the upward ranks, and the depths for our test scores:
```
DATA 40 43 65 66 77 100
-----------------------------
URANK 1 2 3 4 5 6
DRANK 6 5 4 3 2 1
DEPTH 1 2 3 3 2 1
```
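These quantities are easy to compute directly in R. Here is a small sketch for the six test scores, using only base functions:
```
scores <- sort(c(40, 43, 65, 77, 100, 66))   # arrange from LO to HI
n <- length(scores)
urank <- rank(scores)                        # upward ranks: 1, 2, ..., n
drank <- n + 1 - urank                       # downward ranks
depth <- pmin(urank, drank)                  # depth = smaller of the two ranks
rbind(DATA = scores, URANK = urank, DRANK = drank, DEPTH = depth)
```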
4\.3 Letter Values: A Set of Summary Values
-------------------------------------------
We define our summaries, called letter values, using depths. The first letter value, the median (denoted by \\(M\\)), is the value that divides the data into a lower half and an upper half. The depth of the median is \\((n\+1\)/2\\), where \\(n\\) is the number of items in our batch.
```
Depth of median = (n + 1) / 2
```
The median divides the data into halves. We can continue by dividing each half (the lower half and the upper half) into halves. These summaries are called fourths (denoted by the letter \\(F\\)). We find them by computing their depths. The depth of a fourth is found by taking the integer part of the depth of the median, adding 1, and then dividing by 2:
```
Depth of fourth = ([Depth of median] + 1) / 2
```
Let’s compute the median and the fourths for the state growth percentages. Here
```
n = 51
```
and so
```
depth(M) = (51 + 1) / 2 = 26 and depth(F) = (26 + 1) / 2 = 13 1/2.
```
So the median \\(M\\) is the 26th smallest (or largest) observation. The fourths, called the lower fourth and the upper fourth, are the observations that have depth 13 1/2\. When we say a depth of 13 1/2, we mean that we wish to average the observations that have depths of 13 and 14\.
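The depth calculations are simple enough to check in R (a sketch, using the `pop.change` data frame loaded earlier):
```
library(LearnEDAfunctions)
n <- length(pop.change$Pct.change)     # 51 growth percentages
d_M <- (n + 1) / 2                     # depth of the median: 26
d_F <- (floor(d_M) + 1) / 2            # depth of the fourths: 13.5
c(n = n, depth_M = d_M, depth_F = d_F)
```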
4\.4 Counting In
----------------
To find the median and fourths for our example, it is useful to add some extra numbers to our display. On each line of the stemplot, we write (on the left) the number of observations found on that line and on the more extreme lines. We see that there are 6 observations on the first line (and above), and 15 observations on the second line and above. Looking from the bottom, we see there are 2 observations on the bottom line (and below), 3 observations on the third line from the bottom (and below), etc. We call this
```
counting in
```
We count in from both ends until we reach half of the data. We stop counting in at 22 at the top, since one additional line (with 8 observations) would put us over 50% of the data; likewise, we stop counting in at 18 from the bottom, since one additional line would include more than half the data. The (8) is not part of the counting in – it just tells us that there are 8 observations in this middle row, the row that contains the median.
```
aplpack::stem.leaf(pop.change$Pct.change, depth=TRUE)
```
```
## 1 | 2: represents 12
## leaf unit: 1
## n: 51
## 6 0* | 000001
## 15 t | 222333333
## 22 f | 4445555
## (8) s | 66677777
## 21 0. | 889
## 18 1* | 000111
## 12 t | 233
## f |
## 9 s | 666
## 6 1. | 89
## 4 2* | 0
## t |
## 3 f | 4
## s |
## 2 2. | 8
## HI: 32.3
```
Let’s find the median and fourths from the stemplot. The median has depth(\\(M\\)) \= 26, and we see that this corresponds to \\(M\\) \= 07, that is, 7 percent. Recall that depth(\\(F\\)) \= 13 1/2\. Counting from the lowest observation, the observations with depths of 13 and 14 are 03 and 03, so the lower fourth is \\(F\_L\\) \= (03 \+ 03\)/2 \= 3\. Counting from the largest observation, we see that the data values 11 and 11 have depths 13 and 14, so the upper fourth is \\(F\_U\\) \= (11 \+ 11\)/2 \= 11\. (Since the stemplot leaves drop the tenths digits, these values read from the display are only approximate; the exact values computed from the data appear in the next section.)
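We can check these values by sorting the data and picking off the observations at the appropriate depths (a sketch; because the stemplot drops digits, the exact fourths come out slightly larger than the 3 and 11 read off the display, matching the `fivenum` output in the next section):
```
library(LearnEDAfunctions)
x <- sort(pop.change$Pct.change)
n <- length(x)                          # 51
M <- x[(n + 1) / 2]                     # the observation at depth 26
F_L <- mean(x[c(13, 14)])               # average of depths 13 and 14 from below
F_U <- mean(rev(x)[c(13, 14)])          # average of depths 13 and 14 from above
c(M = M, F_L = F_L, F_U = F_U)
```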
4\.5 Five\-number Summary
-------------------------
We can summarize our batch of data using five numbers: the smallest observation (\\(LO\\)), the lower fourth \\(F\_L\\), the median \\(M\\), the upper fourth \\(F\_U\\), and the largest observation (\\(HI\\)). Collectively, these numbers are called the five\-number summary. Here the five\-number summary is
```
fivenum(pop.change$Pct.change)
```
```
## [1] 0.30 3.65 7.00 11.60 32.30
```
What have we learned? A typical growth percentage of a state is 7 percent; approximately half of the states have growth percentages smaller than 7% and half have larger growth percentages. Moreover, since 3, 7, 11 divide the data into quarters, one quarter of the states have growth percentages smaller than 3%, one quarter of the states have growth percentages between 3% and 7% one quarter of the states have growth percentages between 7% and 11%, and one quarter of the states have growths between 11% and 32%. The extreme value is interesting: looking back at the data table, we see that Nevada has gained 32% in population.
4\.6 Other Letter Values
------------------------
Sometimes we will find it useful to compute other letter values that divide the tail regions of the data into smaller regions. Suppose we divide the lower quarter and the upper quarter of the data into halves – the dividing points are called eighths. The depth of an eighth is given by the formula
```
Depth of eighth = ([Depth of fourth] + 1) / 2
```
In our example, we found depth(\\(F\\)) \= 13 1/2, so
```
Depth of eighth = ([13 1/2] + 1) / 2 = 7 .
```
The lower eighth and upper eighth have depths equal to 7\. We return to our stemplot and find the 7th smallest and 7th largest values, which are 2 and 16\. Approximately one eighth of the percentage increases in growth are smaller than 2%, and one eighth of the increases are larger than 16%.
For larger datasets, we will continue to divide the tail region to get other letter values as shown in the following table. Note that the depth of a letter value is found by using the depth of the previous letter value.
| Letter Value | Name | Depth |
| --- | --- | --- |
| \\(M\\) | Median | (\[\\(n\\)] \+ 1\) / 2 |
| \\(F\\) | Fourth | (\[depth(\\(M\\))] \+ 1\) / 2 |
| \\(E\\) | Eighth | (\[depth(\\(F\\))] \+ 1\) / 2 |
| \\(D\\) | Sixteenth | (\[depth(\\(E\\))] \+ 1\) / 2 |
| \\(C\\) | Thirty\-secondth | (\[depth(\\(D\\))] \+ 1\) / 2 |
| \\(B\\) | Sixty\-fourth | (\[depth(\\(C\\))] \+ 1\) / 2 |
| \\(A\\) | One hundred and twenty\-eighth | (\[depth(\\(B\\))] \+ 1\) / 2 |
We will find these letter values useful in assessing the symmetry of a batch of data.
The `lval` function computes the set of letter values along with the mids and spreads.
```
lval(pop.change$Pct.change)
```
```
## depth lo hi mids spreads
## M 26.0 7.00 7.00 7.000 0.00
## H 13.5 3.65 11.60 7.625 7.95
## E 7.0 2.10 16.80 9.450 14.70
## D 4.0 0.70 20.10 10.400 19.40
## C 2.5 0.50 26.65 13.575 26.15
## B 1.0 0.30 32.30 16.300 32.00
```
4\.7 Measures of Center
-----------------------
Now that we have defined letter values, what is a good measurement of the center of a batch? A common measure is the mean, denoted by \\(\\bar x\\), obtained by summing up the values and dividing by the number of observations. For exploratory work, we prefer the use of the median \\(M\\).
Why is the median preferable to the mean?
* The median has a simpler interpretation than the mean — \\(M\\) divides the data into a lower half and an upper half.
* Unlike the mean, the median \\(M\\) is resistant to extreme values. You are probably aware that a single large observation can have a significant impact on the value of \\(\\bar x\\). (Think of computing the mean salary for a company with 100 hourly workers and a president with a relatively large salary. The president’s salary will have a large impact on the mean salary.)
One criticism of the median is that it is dependent only on one or two middle values in the batch. An alternative resistant measure of center is the trimean, which is a weighted average of the median and the two fourths:
\\\[
\\text{trimean} \= (F\_L \+ 2 M \+ F\_U)/4.
\\]
The trimean is resistant (like the median \\(M\\)), since it cannot be distorted by a few large or small extreme values. But, by combining the fourths and the median, the trimean can reflect the lack of symmetry in the middle half of the data.
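For the population\-change data, the trimean can be computed from the five\-number summary (a sketch; recall that `fivenum` returns \\(LO\\), \\(F\_L\\), \\(M\\), \\(F\_U\\), \\(HI\\) in that order):
```
library(LearnEDAfunctions)
fn <- fivenum(pop.change$Pct.change)        # 0.30 3.65 7.00 11.60 32.30
trimean <- (fn[2] + 2 * fn[3] + fn[4]) / 4  # (3.65 + 2(7.00) + 11.60) / 4
trimean                                     # about 7.3, close to the median of 7
```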
4\.8 Measures of Spread
-----------------------
The usual measure of spread is the standard deviation \\(s\\) that is based on computing deviations from the mean. It suffers from the same lack\-of\-resistance problem as the mean – a single large value can distort the value of \\(s\\). So the standard deviation is not suitable for exploratory work.
For similar reasons, the range \\(R \= HI \- LO\\) is a poor measure of spread since it is based on only the two extreme values, and these two values may not reflect the general dispersion in the batch.
A better resistant measure of spread is the fourth\-spread, denoted \\(d\_F\\), that is defined as the distance between the lower and upper fourths:
\\\[
d\_F \= F\_U \- F\_L.
\\]
The fourth\-spread has a simple interpretation – it’s the width of the middle 50% of the data.
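In R this is one line, again taking the fourths from `fivenum` (a sketch):
```
library(LearnEDAfunctions)
fn <- fivenum(pop.change$Pct.change)
dF <- fn[4] - fn[2]    # fourth-spread: 11.60 - 3.65 = 7.95
dF
```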
4\.9 Identifying Outliers
-------------------------
John Tukey devised a rule\-of\-thumb for identifying extreme observations in a batch. This rule\-of\-thumb is not designed to formally label particular data items as outliers. Rather, this method sets apart a few unusual observations that may deserve further study.
The idea here is to set lower and upper fences in the data. If any of the observations fall beyond the fences, they are designated as possible outliers.
We first define a step which is equal to 1 1/2 times the fourth\-spread:
\\\[
STEP \= 1\.5 \\times (F\_U \- F\_L).
\\]
Then the lower fence is defined as one step smaller than the lower fourth, and the upper fence is defined as one step larger than the upper fourth:
\\\[
fence\_{lower} \= F\_L \- STEP, \\, \\, fence\_{upper} \= F\_U \+ STEP.
\\]
Any observations that fall beyond the fences are called \`\`outside”.
Tukey thought it was useful to have two sets of fences. The fences defined above can be called inner fences. To obtain outer fences, we got out two steps from the fourths:
\\\[
FENCE\_{lower} \= F\_L \- 2 \\times STEP, \\, \\, FENCE\_{upper} \= F\_U \+ 2 \\times STEP.
\\]
(We will call these outer fences FENCES.) Observations that fall beyond the outer fences can be regarded as \`\`really out”.
4\.10 A New Example
-------------------
For a second example, our almanac (The World Almanac 2001, page 237\) gives the average gestation (in days) for 43 species of animals. Here’s part of the data and associated stemplot:
```
head(gestation.periods)
```
```
## Animal Period
## 1 Ass 365
## 2 Baboon 187
## 3 Bear_black 219
## 4 Bear_grizzly 225
## 5 Bear_polar 240
## 6 Beaver 105
```
```
aplpack::stem.leaf(gestation.periods$Period)
```
```
## 1 | 2: represents 120
## leaf unit: 10
## n: 43
## 7 0* | 1123334
## 14 0. | 5666699
## 18 1* | 0001
## (4) 1. | 5568
## 21 2* | 0123344
## 14 2. | 5588
## 10 3* | 3
## 9 3. | 566
## 6 4* | 0
## 5 4. | 558
## HI: 645 660
```
Here the dataset looks somewhat right skewed. There are a large number of animals (the small variety) with short gestation periods under 100 days. Also we see a cluster of periods in the 200\-240 range. We note the two large values – each exceeding 600 days. We’re not surprised that these correspond to the two elephants in the table.
Let’s compute some letter values.
1. There are \\(n\\) \= 43 values, so the depth of the median is \\(d(M)\\) \= (43\+1\)/2 \= 22\. Looking at the stemplot, we see that the 22nd value is 18, so \\(M\\) \= 18\.
2. To find fourths, we compute the depth: \\(d(F)\\) \= (22\+1\)/2 \= 11 1/2\. The lower and upper fourths are found by averaging the 11th and 12th values at each end. Looking at the stemplot, we find
\\\[
F\_L \= (6 \+ 6\)/2 \= 6, \\, \\, F\_U \= (28\+28\)/2 \= 28 .
\\]
3. We can keep going to find additional letter values. The depth of the eighth is \\(d(E) \= (11\+1\)/2 \= 6\\). Looking at the stemplot, these values are
\\\[
E\_L \= 3, E\_U \= 40
\\]
4. We set our fences to look for outliers. The fourth spread is
\\\[
dF \= 28 \- 6 \= 22
\\]
and so a step is
\\\[
STEP \= 1\.5 (22\) \= 33 .
\\]
The inner fences are located at
\\\[
F\_L \- STEP \= 6 \- 33 \= \-27, \\, \\, F\_U \+ STEP \= 28 \+ 33 \= 61
\\]
and the outer fences at
\\\[
FL \- 2 \\times STEP \= 6 \- 2(33\) \= \-60, \\, \\, F\_U \+ 2 \\times STEP \= 61 \+ 33 \= 94\.
\\]
Do we have any outliers? Yes, the two elephant gestation periods are beyond the inner fence but within the outer fence at the high end. I think we would all agree that elephants are unusually large animals which likely goes together with their long gestation periods.
4\.11 Relationship with Normal Data
-----------------------------------
In introductory statistics, we spend a lot of time talking about the normal distribution. If we have a bunch of normally distributed data, what do the fourths look like? Also should we expect to find any outliers?
Consider the normal curve with mean \\(\\mu\\) and standard deviation \\(\\sigma\\) that represents a population of normal measurements. It is easy to check that 50% of the probability content of a normal curve falls between \\(\\mu \- 0\.6745 \\sigma\\) and \\(\\mu \+ 0\.6745 \\sigma\\) . So for normal measurements, \\(F\_L \= \\mu \- 0\.6745\\) and \\(F\_U \= \\mu \+ 0\.6745 \\sigma\\) and the fourth\-spread is \\(d\_F \= 2 (0\.6745\) \\sigma \= 1\.349 \\sigma\\).
As an aside, this relationship gives us an alternative estimate of the standard deviation \\(s\\). Solving \\(d\_F \= 1\.349 \\sigma\\) for \\(\\sigma\\) gives the relationship
\\\[
\\sigma \= d\_F / 1\.349\.
\\]
So a simple way of estimating a standard deviation divides the fourth spread by 1\.349\. This is called the F pseudosigma. Why is this better than the usual estimate of \\(\\sigma\\)? It’s better since, unlike the usual estimate, the F pseudosigma is resistant to extreme observations.
Continuing our discussion, how many outliers should we find for normal data? For normal data,
\\\[
STEP \= 1\.5 (1\.349 \\sigma ) \= 2\.0235 \\sigma
\\]
and the inner fences will be
\\\[
F\_L \- STEP \= \\mu \- 0\.6745 \\sigma \- 2\.0235 \\sigma \= \\mu \- 2\.6980 \\sigma
\\]
\\\[
F\_U \+ STEP \= \\mu \+ 0\.6745 \\sigma \+ 2\.0235\\sigma \= \\mu \+ 2\.6980 \\sigma.
\\]
The probability of being outside \\(( \\mu \- 2\.6980\\sigma , \\mu \+ 2\.6980 \\sigma )\\) for a normal curve is .007\. This means that only 0\.7 % of normally distributed data will be classified as outliers. So, it is pretty rare to see outliers for normal data.
COMMENT: There is a slight flaw in the above argument. The normal curve represents the distribution for a large sample of normal data and 0\.7% of this large sample will be outlying. If we take a small sample, then we will generally see a higher fraction of outliers. In fact, it has been established that the fraction of outliers for a normal sample of size \\(n\\) is approximately
```
.00698 + .4 / n
```
For example, if we take a sample of size \\(n\\) \= 20, then the proportion of outliers will be
```
.00698 + .4/20 =.027
```
If we take repeated samples of size 20, then approximately 2\.7 % of all these observations will be outlying.
I checked this result in a simulation. I took repeated samples of size 20 from a normal distribution. In 1000 samples, I found a total of 327 outliers. The fraction of outliers was 327/20000 \= 0\.016, which is a bit smaller than the result above. But this fraction is larger than the fraction 0\.00698 from a “large” normal sample.
| Data Science |
bayesball.github.io | https://bayesball.github.io/EDA/single-batch-summaries.html |
4 Single Batch: Summaries
=========================
4\.1 Meet the Data
------------------
**Data: Percentage change in population 2000\-2009 for each state.**
**Source: The 2010 New York Times Almanac, page 277, and the U.S. Census Bureau website <http://www.census.gov>.**
These data (part of which are displayed in the following table) show the percentage change in population for each state in the United States between the years 2000 and 2009 (roughly between the 2000 and 2010 censuses). The data are interesting because we would like to know which regions of the U.S. are growing quickly and which are growing slowly. Specifically, we might want to know
* what is a typical growth rate for a state in the last 9 years?
* are there states whose growths are significantly different from the typical growth rate?
* do the states with large population growths correspond to particular regions of the U.S.?
In this topic, we’ll discuss simple ways of summarizing a dataset. These summaries and associated displays will help in answering some of these questions.
```
library(LearnEDAfunctions)
library(tidyverse)
select(pop.change, State, Pct.change) %>% head()
```
```
## State Pct.change
## 1 Alabama 5.9
## 2 Alaska 11.3
## 3 Arizona 28.6
## 4 Arkansas 8.1
## 5 California 9.1
## 6 Colorado 16.8
```
We begin by constructing a stemplot of the growth percentages. We break between the ones and tens places and split each stem into five lines, so that each line holds two possible leaf digits. There is one unusual value – Nevada at the high end – which we show on a separate HI line.
```
aplpack::stem.leaf(pop.change$Pct.change, depth=FALSE)
```
```
## 1 | 2: represents 12
## leaf unit: 1
## n: 51
## 0* | 000001
## t | 222333333
## f | 4445555
## s | 66677777
## 0. | 889
## 1* | 000111
## t | 233
## f |
## s | 666
## 1. | 89
## 2* | 0
## t |
## f | 4
## s |
## 2. | 8
## HI: 32.3
```
4\.2 Ranks and Depths
---------------------
To describe our summaries, which we will call letter values, we first have to define a few terms. The rank of an observation is its position when the data are arranged from lowest to highest. For example, if we have the following six test scores
\\\[
40, 43, 65, 77, 100, 66,
\\]
40 has rank 1, 43 has rank 2, 77 has rank 5, etc.
We can distinguish between two ranks – a downward rank (abbreviated drank) is the rank of an observation when the data are arranged from HI to LO. In contrast, the upward rank (abbreviated urank) of an observation is its rank when data are arranged from LO to HI.
In our test score example,
```
43 has upward rank 2 and downward rank 5.
```
If \\(n\\) is the number of data values, it should be clear that
```
drank + urank = n+1
```
The depth of an observation is the smaller of the two ranks. That is,
```
depth = minimum{drank, urank}.
```
The extreme observations, the smallest and the largest, will each have a depth of 1\. The table below gives the downward ranks, the upward ranks, and the depths for our test scores:
```
DATA 40 43 65 66 77 100
-----------------------------
URANK 1 2 3 4 5 6
DRANK 6 5 4 3 2 1
DEPTH 1 2 3 3 2 1
```
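As a quick check, here is a small R sketch (not part of the text) that reproduces this table using base R only.
```
# Upward ranks, downward ranks, and depths for the six test scores
scores <- c(40, 43, 65, 66, 77, 100)
urank <- rank(scores)                  # upward rank: 1 = smallest
drank <- length(scores) + 1 - urank    # downward rank: drank + urank = n + 1
depth <- pmin(urank, drank)            # depth = smaller of the two ranks
data.frame(scores, urank, drank, depth)
```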
4\.3 Letter Values: A Set of Summary Values
-------------------------------------------
We define our summaries, called letter values, using depths. The first letter value, the median (denoted by \\(M\\)), is the value that divides the data into a lower half and an upper half. The depth of the median is \\((n\+1\)/2\\), where \\(n\\) is the number of items in our batch.
```
Depth of median = (n + 1) / 2
```
The median divides the data into halves. We can continue by dividing each half (the lower half and the upper half) into halves. These summaries are called fourths (denoted by the letter \\(F\\)). We find them by computing their depths. The depth of a fourth is found by taking the integer part of the depth of the median, adding 1, and then dividing by 2:
```
Depth of fourth = ([Depth of median] + 1) / 2
```
Let’s compute the median and the fourths for the state growth percentages. Here
```
n = 51
```
and so
```
depth(M) = (51 + 1) / 2 = 26 and depth(F) = (26 + 1) / 2 = 13 1/2.
```
So the median \\(M\\) is the 26th smallest (or largest) observation. The fourths, called the lower fourth and the upper fourth, are the observations that have depth 13 1/2\. When we say a depth of 13 1/2, we mean that we wish to average the observations that have depths of 13 and 14\.
4\.4 Counting In
----------------
To find the median and fourths for our example, it is useful to add some extra numbers to our display. On each line of the stemplot, we write (on the left) the number of observations found on that line and on more extreme lines. We see that there are 6 observations on the first line (and above), and 15 observations on the second line and above. Looking from the bottom, we see that there are 2 observations on the bottom line (and below), 3 observations on the next\-to\-next\-to\-bottom line (and below), etc. We call this
```
counting in
```
We count in from both ends until we reach half of the data. We stop counting in at 22 at the top, since the next line of 8 would put us over 50% of the data; likewise, we stop counting in at 21 from the bottom, since the next line would include more than half the data. The (8\) shown beside the middle line is not counting in – it just tells us that there are 8 observations in that row.
```
aplpack::stem.leaf(pop.change$Pct.change, depth=TRUE)
```
```
## 1 | 2: represents 12
## leaf unit: 1
## n: 51
## 6 0* | 000001
## 15 t | 222333333
## 22 f | 4445555
## (8) s | 66677777
## 21 0. | 889
## 18 1* | 000111
## 12 t | 233
## f |
## 9 s | 666
## 6 1. | 89
## 4 2* | 0
## t |
## 3 f | 4
## s |
## 2 2. | 8
## HI: 32.3
```
Let’s find the median and fourths from the stemplot. The median has depth(\\(M\\)) \= 26, and we see that this corresponds to \\(M\\) \= 07\. Recall that depth(\\(F\\)) \= 13 1/2\. Counting from the lowest observation, the observations with depths of 13 and 14 are 03 and 03, so the lower fourth is \\(F\_L\\) \= (03 \+ 03\)/2 \= 3\. Counting from the largest observation, we see that the data values 11 and 11 have depths 13 and 14, so the upper fourth is \\(F\_U\\) \= (11 \+ 11\)/2 \= 11\.
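As a check on this counting, here is a small R sketch (not from the text) that carries out the same depth arithmetic on the exact data; the stemplot values 3 and 11 are just truncated versions of the exact fourths, which appear in the five\-number summary below.
```
# Depth arithmetic for the median and fourths, using the exact data
x <- sort(pop.change$Pct.change)
n <- length(x)                                        # n = 51
d_M <- (n + 1) / 2                                    # depth of median = 26
d_F <- (floor(d_M) + 1) / 2                           # depth of fourth = 13.5
M <- x[d_M]                                           # observation at depth 26
F_L <- mean(x[c(floor(d_F), ceiling(d_F))])           # average of depths 13 and 14
F_U <- mean(x[n + 1 - c(floor(d_F), ceiling(d_F))])   # counted from the top
c(M = M, F_L = F_L, F_U = F_U)
```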
4\.5 Five\-number Summary
-------------------------
We can summarize our batch of data using five numbers: the smallest observation (\\(LO\\)), the lower fourth \\(F\_L\\), the median \\(M\\), the upper fourth \\(F\_U\\), and the largest observation (\\(HI\\)). Collectively, these numbers are called the five\-number summary. Here the five\-number summary is
```
fivenum(pop.change$Pct.change)
```
```
## [1] 0.30 3.65 7.00 11.60 32.30
```
What have we learned? A typical growth percentage for a state is 7 percent; approximately half of the states have growth percentages smaller than 7% and half have larger growth percentages. Moreover, since 3, 7, and 11 divide the data into quarters, one quarter of the states have growth percentages smaller than 3%, one quarter have growth percentages between 3% and 7%, one quarter have growth percentages between 7% and 11%, and one quarter have growths between 11% and 32%. The extreme value is interesting: looking back at the data table, we see that Nevada gained 32% in population.
4\.6 Other Letter Values
------------------------
Sometimes we will find it useful to compute other letter values that divide the tail regions of the data into smaller regions. Suppose we divide the lower quarter and the upper quarter of the data into halves – the dividing points are called eighths. The depth of an eighth is given by the formula
```
Depth of eighth = ([Depth of fourth] + 1) / 2
```
In our example, we found depth(\\(F\\)) \= 13 1/2, so
```
Depth of eighth = ([13 1/2] + 1) / 2 = 7 .
```
The lower eighth and upper eighth have depths equal to 7\. We return to our stemplot and find the 7th smallest and 7th largest values, which are 2 and 16\. Approximately one eighth of the percentage increases in growth are smaller than 2%, and one eighth of the increases are larger than 16%.
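If we want the exact eighths rather than the truncated stemplot values of 2 and 16, a short sketch (not from the text) reads them off the ordered data; the results should agree with the E row of the `lval` output shown below.
```
# The eighths are the observations at depth 7 from each end
x <- sort(pop.change$Pct.change)
c(E_L = x[7], E_U = x[length(x) + 1 - 7])
```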
For larger datasets, we will continue to divide the tail region to get other letter values as shown in the following table. Note that the depth of a letter value is found by using the depth of the previous letter value.
| Letter Value | Name | Depth |
| --- | --- | --- |
| \\(M\\) | Median | (\[\\(n\\)] \+ 1\) / 2 |
| \\(F\\) | Fourth | (\[depth(\\(M\\))] \+ 1\) / 2 |
| \\(E\\) | Eighth | (\[depth(\\(F\\))] \+ 1\) / 2 |
| \\(D\\) | Sixteenth | (\[depth(\\(E\\))] \+ 1\) / 2 |
| \\(C\\) | Thirty\-second | (\[depth(\\(D\\))] \+ 1\) / 2 |
| \\(B\\) | Sixty\-fourth | (\[depth(\\(C\\))] \+ 1\) / 2 |
| \\(A\\) | One hundred and twenty\-eighth | (\[depth(\\(B\\))] \+ 1\) / 2 |
We will find these letter values useful in assessing the symmetry of a batch of data.
The `lval` function computes the set of letter values along with the mids and spreads; note that the fourths appear in the row labeled `H`.
```
lval(pop.change$Pct.change)
```
```
## depth lo hi mids spreads
## M 26.0 7.00 7.00 7.000 0.00
## H 13.5 3.65 11.60 7.625 7.95
## E 7.0 2.10 16.80 9.450 14.70
## D 4.0 0.70 20.10 10.400 19.40
## C 2.5 0.50 26.65 13.575 26.15
## B 1.0 0.30 32.30 16.300 32.00
```
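The recursive depth rule in this table is easy to code; the sketch below (mine, not the book’s) returns the depths of the first five letter values, which should match the depth column of the `lval` output above.
```
# Depths of the letter values M, F, E, D, C by the recursive rule
letter_depths <- function(n, labels = c("M", "F", "E", "D", "C")) {
  depths <- numeric(length(labels))
  d <- (n + 1) / 2                   # depth of the median
  for (i in seq_along(labels)) {
    depths[i] <- d
    d <- (floor(d) + 1) / 2          # ([previous depth] + 1) / 2
  }
  setNames(depths, labels)
}
letter_depths(51)                    # 26.0 13.5 7.0 4.0 2.5
```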
4\.7 Measures of Center
-----------------------
Now that we have defined letter values, what is a good measure of the center of a batch? A common measure is the mean, denoted by \\(\\bar x\\), which is obtained by summing the values and dividing by the number of observations. For exploratory work, we prefer the median \\(M\\).
Why is the median preferable to the mean?
* The median has a simpler interpretation than the mean — \\(M\\) divides the data into a lower half and an upper half.
* Unlike the mean, the median \\(M\\) is resistant to extreme values. You are probably aware that a single large observation can have a significant impact on the value of \\(\\bar x\\). (Think of computing the mean salary for a company with 100 hourly workers and a president with a relatively large salary. The president’s salary will have a large impact on the mean salary.)
One criticism of the median is that it depends on only one or two middle values in the batch. An alternative resistant measure of center is the trimean, which is a weighted average of the median and the two fourths.
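In its usual form (Tukey’s trimean), the median receives twice the weight of each fourth:
\\\[
trimean \= (F\_L \+ 2 M \+ F\_U) / 4 .
\\\]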
The trimean is resistant (like the median \\(M\\)), since it cannot be distorted by a few large or small extreme values. But, by combining the fourths with the median, the trimean can reflect a lack of symmetry in the middle half of the data.
4\.8 Measures of Spread
-----------------------
The usual measure of spread is the standard deviation \\(s\\) that is based on computing deviations from the mean. It suffers from the same lack\-of\-resistance problem as the mean – a single large value can distort the value of \\(s\\). So the standard deviation is not suitable for exploratory work.
For similar reasons, the range \\(R \= HI \- LO\\) is a poor measure of spread since it is based on only the two extreme values, and these two values may not reflect the general dispersion in the batch.
A better resistant measure of spread is the fourth\-spread, denoted \\(d\_F\\), which is defined as the distance between the lower and upper fourths.
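In symbols, using the fourths \\(F\_L\\) and \\(F\_U\\) defined earlier, this is simply
\\\[
d\_F \= F\_U \- F\_L .
\\\]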
The fourth\-spread has a simple interpretation – it’s the width of the middle 50% of the data.
4\.9 Identifying Outliers
-------------------------
John Tukey devised a rule\-of\-thumb for identifying extreme observations in a batch. This rule\-of\-thumb is not designed to formally label particular data items as outliers. Rather, this method sets apart a few unusual observations that may deserve further study.
The idea here is to set lower and upper fences in the data. If any of the observations fall beyond the fences, they are designated as possible outliers.
We first define a step which is equal to 1 1/2 times the fourth\-spread:
\\\[
STEP \= 1\.5 \\times (F\_U \- F\_L).
\\]
Then the lower fence is defined as one step below the lower fourth, and the upper fence is defined as one step above the upper fourth:
\\\[
fence\_{lower} \= F\_L \- STEP, \\, \\, fence\_{upper} \= F\_U \+ STEP.
\\]
Any observations that fall beyond the fences are called “outside”.
Tukey thought it was useful to have two sets of fences. The fences defined above can be called inner fences. To obtain outer fences, we go out two steps from the fourths:
\\\[
FENCE\_{lower} \= F\_L \- 2 \\times STEP, \\, \\, FENCE\_{upper} \= F\_U \+ 2 \\times STEP.
\\]
(We will call these outer fences FENCES.) Observations that fall beyond the outer fences can be regarded as “really out”.
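Here is a short R sketch (mine, not the book’s code) of the fence rule applied to the state growth percentages; it reuses the fourths from the `fivenum` output shown earlier.
```
# Tukey's fence rule for the population-change data
x <- pop.change$Pct.change
fourths <- fivenum(x)[c(2, 4)]         # F_L and F_U (3.65 and 11.60)
step <- 1.5 * diff(fourths)            # STEP = 1.5 * (F_U - F_L)
inner <- fourths + c(-1, 1) * step     # inner fences
outer <- fourths + c(-2, 2) * step     # outer fences
x[x < inner[1] | x > inner[2]]         # the "outside" observations
```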
4\.10 A New Example
-------------------
For a second example, our almanac (The World Almanac 2001, page 237\) gives the average gestation (in days) for 43 species of animals. Here’s part of the data and associated stemplot:
```
head(gestation.periods)
```
```
## Animal Period
## 1 Ass 365
## 2 Baboon 187
## 3 Bear_black 219
## 4 Bear_grizzly 225
## 5 Bear_polar 240
## 6 Beaver 105
```
```
aplpack::stem.leaf(gestation.periods$Period)
```
```
## 1 | 2: represents 120
## leaf unit: 10
## n: 43
## 7 0* | 1123334
## 14 0. | 5666699
## 18 1* | 0001
## (4) 1. | 5568
## 21 2* | 0123344
## 14 2. | 5588
## 10 3* | 3
## 9 3. | 566
## 6 4* | 0
## 5 4. | 558
## HI: 645 660
```
Here the dataset looks somewhat right\-skewed. A large number of animals (the smaller species) have short gestation periods under 100 days, and we see a cluster of periods in the 200\-240 day range. We also note the two large values – each exceeding 600 days. We’re not surprised that these correspond to the two elephants in the table.
Let’s compute some letter values.
1. There are \\(n\\) \= 43 values, so the depth of the median is \\(d(M)\\) \= (43\+1\)/2 \= 22\. Looking at the stemplot, we see that the 22nd value is 18, so \\(M\\) \= 18\. (We are working in the stemplot’s display units here; since the leaf unit is 10, this corresponds to a median of roughly 180 days.)
2. To find fourths, we compute the depth: \\(d(F)\\) \= (22\+1\)/2 \= 11 1/2\. The lower and upper fourths are found by averaging the 11th and 12th values at each end. Looking at the stemplot, we find
\\\[
F\_L \= (6 \+ 6\)/2 \= 6, \\, \\, F\_U \= (28\+28\)/2 \= 28 .
\\]
3. We can keep going to find additional letter values. The depth of the eighth is \\(d(E) \= (11\+1\)/2 \= 6\\). Looking at the stemplot, these values are
\\\[
E\_L \= 3, E\_U \= 40
\\]
4. We set our fences to look for outliers. The fourth spread is
\\\[
dF \= 28 \- 6 \= 22
\\]
and so a step is
\\\[
STEP \= 1\.5 (22\) \= 33 .
\\]
The inner fences are located at
\\\[
F\_L \- STEP \= 6 \- 33 \= \-27, \\, \\, F\_U \+ STEP \= 28 \+ 33 \= 61
\\]
and the outer fences at
\\\[
F\_L \- 2 \\times STEP \= 6 \- 2(33\) \= \-60, \\, \\, F\_U \+ 2 \\times STEP \= 28 \+ 2(33\) \= 94\.
\\]
Do we have any outliers? Yes, the two elephant gestation periods fall beyond the inner fence but within the outer fence at the high end. I think we would all agree that elephants are unusually large animals, which likely goes together with their long gestation periods.
4\.11 Relationship with Normal Data
-----------------------------------
In introductory statistics, we spend a lot of time talking about the normal distribution. If we have a bunch of normally distributed data, what do the fourths look like? Also should we expect to find any outliers?
Consider the normal curve with mean \\(\\mu\\) and standard deviation \\(\\sigma\\) that represents a population of normal measurements. It is easy to check that 50% of the probability content of a normal curve falls between \\(\\mu \- 0\.6745 \\sigma\\) and \\(\\mu \+ 0\.6745 \\sigma\\). So for normal measurements, \\(F\_L \= \\mu \- 0\.6745 \\sigma\\) and \\(F\_U \= \\mu \+ 0\.6745 \\sigma\\), and the fourth\-spread is \\(d\_F \= 2 (0\.6745\) \\sigma \= 1\.349 \\sigma\\).
As an aside, this relationship gives us an alternative way to estimate the standard deviation \\(\\sigma\\). Solving \\(d\_F \= 1\.349 \\sigma\\) for \\(\\sigma\\) gives the relationship
\\\[
\\sigma \= d\_F / 1\.349\.
\\]
So a simple way of estimating a standard deviation divides the fourth spread by 1\.349\. This is called the F pseudosigma. Why is this better than the usual estimate of \\(\\sigma\\)? It’s better since, unlike the usual estimate, the F pseudosigma is resistant to extreme observations.
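A quick check in R (a sketch, not from the text) computes the F pseudosigma for the state growth data and compares it with the ordinary standard deviation.
```
# F pseudosigma versus the usual standard deviation
x <- pop.change$Pct.change
d_F <- diff(fivenum(x)[c(2, 4)])   # fourth-spread F_U - F_L
d_F / 1.349                        # F pseudosigma (resistant)
sd(x)                              # usual estimate (not resistant)
```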
Continuing our discussion, how many outliers should we find for normal data? For normal data,
\\\[
STEP \= 1\.5 (1\.349 \\sigma ) \= 2\.0235 \\sigma
\\]
and the inner fences will be
\\\[
F\_L \- STEP \= \\mu \- 0\.6745 \\sigma \- 2\.0235 \\sigma \= \\mu \- 2\.6980 \\sigma
\\]
\\\[
F\_U \+ STEP \= \\mu \+ 0\.6745 \\sigma \+ 2\.0235\\sigma \= \\mu \+ 2\.6980 \\sigma.
\\]
The probability of being outside \\(( \\mu \- 2\.6980\\sigma , \\mu \+ 2\.6980 \\sigma )\\) for a normal curve is .007\. This means that only 0\.7 % of normally distributed data will be classified as outliers. So, it is pretty rare to see outliers for normal data.
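This tail probability is easy to verify in R (a quick sketch, not from the text).
```
# Probability beyond mu +/- 2.698 sigma for a normal curve
2 * pnorm(-2.698)    # approximately 0.007
```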
COMMENT: There is a slight flaw in the above argument. The normal curve represents the distribution for a large sample of normal data and 0\.7% of this large sample will be outlying. If we take a small sample, then we will generally see a higher fraction of outliers. In fact, it has been established that the fraction of outliers for a normal sample of size \\(n\\) is approximately
```
.00698 + .4 / n
```
For example, if we take a sample of size \\(n\\) \= 20, then the proportion of outliers will be
```
.00698 + .4/20 =.027
```
If we take repeated samples of size 20, then approximately 2\.7 % of all these observations will be outlying.
I checked this result in a simulation. I took repeated samples of size 20 from a normal distribution. In 1000 samples, I found a total of 327 outliers. The fraction of outliers was 327/20000 \= 0\.016, which is a bit smaller than the result above. But this fraction is larger than the fraction 0\.00698 from a “large” normal sample.
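A sketch of such a simulation (my code, not the author’s) looks like this; the exact fraction will vary with the random seed.
```
# Fraction of Tukey-rule outliers in repeated normal samples of size 20
set.seed(1)
count_outliers <- function(n) {
  x <- rnorm(n)
  fourths <- fivenum(x)[c(2, 4)]
  step <- 1.5 * diff(fourths)
  sum(x < fourths[1] - step | x > fourths[2] + step)
}
mean(replicate(1000, count_outliers(20))) / 20   # compare with 0.016 above
```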
4\.1 Meet the Data
------------------
**Data: Percentage change in population 2000\-2009 for each state.**
**Source: The 2010 New York Times Almanac, page 277, and the U.S. Census Bureau website <http://www.census.gov>.**
This data (some that is displayed in the following table) shows the change in population (measured in terms of a percentage) for all states in the United States between the years 2000 and 2009 (roughly between the 2000 and 2010 census). This is interesting data, since we are interested in the regions of the U.S. which are growing fast and slow. Specifically, we might want to know
* what is a typical growth rate for a state in the last 9 years?
* are there states whose growths are significantly different from the typical growth rate?
* do the states with large population growths correspond to particular regions of the U.S.?
In this topic, we’ll discuss simple ways of summarizing a dataset. These summaries and associated displays will help in answering some of these questions.
```
library(LearnEDAfunctions)
library(tidyverse)
select(pop.change, State, Pct.change) %>% head()
```
```
## State Pct.change
## 1 Alabama 5.9
## 2 Alaska 11.3
## 3 Arizona 28.6
## 4 Arkansas 8.1
## 5 California 9.1
## 6 Colorado 16.8
```
We begin by constructing a stemplot of the growth percentages. We break between the ones and tens places and use two leaves per stem. We have one unusual value – Nevada at the high end that we show on a separate HI line.
```
aplpack::stem.leaf(pop.change$Pct.change, depth=FALSE)
```
```
## 1 | 2: represents 12
## leaf unit: 1
## n: 51
## 0* | 000001
## t | 222333333
## f | 4445555
## s | 66677777
## 0. | 889
## 1* | 000111
## t | 233
## f |
## s | 666
## 1. | 89
## 2* | 0
## t |
## f | 4
## s |
## 2. | 8
## HI: 32.3
```
4\.2 Ranks and Depths
---------------------
To describe our summaries which we will call letter values, we have to first define a few terms. The rank of an observation is its order when data is arranged from lowest to highest. For example, if we have the following six test scores
\\\[
40, 43, 65, 77, 100, 66,
\\]
40 has rank 1, 43 has rank 2, 77 has rank 5, etc.
We can distinguish between two ranks – a downward rank (abbreviated drank) is the rank of an observation when the data are arranged from HI to LO. In contrast, the upward rank (abbreviated urank) of an observation is its rank when data are arranged from LO to HI.
In our test score example,
```
43 has upward rank 2 and downward rank 5.
```
If \\(n\\) is the number of data values, it should be clear that
```
drank + urank = n+1
```
The depth of an observation is the smaller of the two ranks. That is,
```
depth = minimum{drank, urank}.
```
The extreme observations, the smallest and the largest, will each have a depth of 1\. The table below gives the downward ranks, the upward ranks, and the depths for our test scores:
```
DATA 40 43 65 66 77 100
-----------------------------
URANK 1 2 3 4 5 6
DRANK 6 5 4 3 2 1
DEPTH 1 2 3 3 2 1
```
4\.3 Letter Values: A Set of Summary Values
-------------------------------------------
We define our summaries, called letter values, using depths. The first letter value, the median (denoted by \\(M\\)), is the value that divides the data into a lower half and an upper half. The depth of the median is \\((n\+1\)/2\\), where \\(n\\) is the number of items in our batch.
```
Depth of median = (n + 1) / 2
```
The median divides the data into halves. We can continue by dividing each half (the lower half and the upper half) into halves. These summaries are called fourths (denoted by the letter \\(F\\)). We find them by computing their depths. The depth of a fourth is found by taking the integer part of the depth of the median, adding 1, and then dividing by 2:
```
Depth of fourth = ([Depth of median] + 1) / 2
```
Let’s compute the median and the fourths for the state growth percentages. Here
```
n = 51
```
and so
```
depth(M) = (51 + 1) / 2 = 26 and depth(F) = (26 + 1) / 2 = 13 1/2.
```
So the median \\(M\\) is the 26th smallest (or largest) observation. The fourths, called the lower fourth and the upper fourth, are the observations that have depth 13 1/2\. When we say a depth of 13 1/2, we mean that we wish to average the observations that have depths of 13 and 14\.
4\.4 Counting In
----------------
To find the median and fourths for our example, it its useful to add some extra numbers to our display. On each line of the stemplot, we write (on the left) the number of observations found on that line and more extreme lines. We see that there are 6 observations on the first line (and above), 15 observations are on the second line and above. Looking from the bottom, we see there are 2 observations on the bottom line (and below), there are 3 observations on the next\-to\-next\-to\-bottom line and below, etc. We call this
```
counting in
```
We count in from both ends until we reach half of the data. We stop counting in at 22 at the top since one additional line of 8 would put us over 50% of the data; likewise we stop at counting in 18 from the bottom since one additional line would include more than half the data. The (8\) on the fifth line is not counting in – it just tells us that there are 8 observations in this middle row.
```
aplpack::stem.leaf(pop.change$Pct.change, depth=TRUE)
```
```
## 1 | 2: represents 12
## leaf unit: 1
## n: 51
## 6 0* | 000001
## 15 t | 222333333
## 22 f | 4445555
## (8) s | 66677777
## 21 0. | 889
## 18 1* | 000111
## 12 t | 233
## f |
## 9 s | 666
## 6 1. | 89
## 4 2* | 0
## t |
## 3 f | 4
## s |
## 2 2. | 8
## HI: 32.3
```
Let’s find the median and fourths from the stemplot. The median has depth(\\(M\\)) \= 26, and we see that this corresponds to \\(M\\) \= 07\. Recall that depth(\\(F\\)) \= 13 1/2\. Counting from the lowest observation, the observations with depths of 13 and 14 are 03 and 03, so the lower fourth is \\(F\_L\\) \= (03 \+ 03\)/2 \= 3\. Counting from the largest observation, we see that the data values 11 and 11 have depths 13 and 14, so the upper fourth is \\(F\_U\\) \= (11 \+ 11\)/2 \= 11\.
4\.5 Five\-number Summary
-------------------------
We can summarize our batch of data using five numbers: the smallest observation (\\(LO\\)), the lower fourth \\(F\_L\\), the median \\(M\\), the upper fourth \\(F\_U\\), and the largest observation (\\(HI\\)). Collectively, these numbers are called the five\-number summary. Here the five\-number summary is
```
fivenum(pop.change$Pct.change)
```
```
## [1] 0.30 3.65 7.00 11.60 32.30
```
What have we learned? A typical growth percentage of a state is 7 percent; approximately half of the states have growth percentages smaller than 7% and half have larger growth percentages. Moreover, since 3, 7, 11 divide the data into quarters, one quarter of the states have growth percentages smaller than 3%, one quarter of the states have growth percentages between 3% and 7% one quarter of the states have growth percentages between 7% and 11%, and one quarter of the states have growths between 11% and 32%. The extreme value is interesting: looking back at the data table, we see that Nevada has gained 32% in population.
4\.6 Other Letter Values
------------------------
Sometimes we will find it useful to compute other letter values that divide the tail regions of the data into smaller regions. Suppose we divide the lower quarter and the upper quarter of the data into halves – the dividing points are called eighths. The depth of an eighth is given by the formula
```
Depth of eighth = ([Depth of fourth] + 1) / 2
```
In our example, we found depth(\\(F\\)) \= 13 1/2, so
```
Depth of eighth = ([13 1/2] + 1) / 2 = 7 .
```
The lower eighth and upper eighth have depths equal to 7\. We return to our stemplot and find the 7th smallest and 7th largest values, which are 2 and 16\. Approximately one eighth of the percentage increases in growth are smaller than 2%, and one eighth of the increases are larger than 16%.
For larger datasets, we will continue to divide the tail region to get other letter values as shown in the following table. Note that the depth of a letter value is found by using the depth of the previous letter value.
| Letter Value | Name | Depth |
| --- | --- | --- |
| \\(M\\) | Median | (\[\\(n\\)] \+ 1\) / 2 |
| \\(F\\) | Fourth | (\[depth(\\(M\\))] \+ 1\) / 2 |
| \\(E\\) | Eighth | (\[depth(\\(F\\))] \+ 1\) / 2 |
| \\(D\\) | Sixteenth | (\[depth(\\(E\\))] \+ 1\) / 2 |
| \\(C\\) | Thirty\-secondth | (\[depth(\\(D\\))] \+ 1\) / 2 |
| \\(B\\) | Sixty\-fourth | (\[depth(\\(C\\))] \+ 1\) / 2 |
| \\(A\\) | One hundred and twenty\-eighth | (\[depth(\\(B\\))] \+ 1\) / 2 |
We will find these letter values useful in assessing the symmetry of a batch of data.
The `lval` function computes the set of letter values along with the mids and differences.
```
lval(pop.change$Pct.change)
```
```
## depth lo hi mids spreads
## M 26.0 7.00 7.00 7.000 0.00
## H 13.5 3.65 11.60 7.625 7.95
## E 7.0 2.10 16.80 9.450 14.70
## D 4.0 0.70 20.10 10.400 19.40
## C 2.5 0.50 26.65 13.575 26.15
## B 1.0 0.30 32.30 16.300 32.00
```
4\.7 Measures of Center
-----------------------
Now that we have defined letter values, what is a good measurement of the center of a batch? A common measure is the mean, denoted by \\(\\bar x\\), obtained by summing up the values and dividing by the number of observations. For exploratory work, we prefer the use of the median \\(M\\).
Why is the median preferable to the mean?
* The median has a simpler interpretation than the mean — \\(M\\) divides the data into a lower half and an upper half.
* Unlike the mean, the median \\(M\\) is resistant to extreme values. You are probably aware that a single large observation can have a significant impact on the value of . (Think of computing the mean salary for a company with 100 hourly workers and a president with a relatively large salary. The president’s salary will have a large impact on the mean salary.)
One criticism of the median is that it is dependent only on a single or two middle values in the batch. An alternative resistant measure of center is the tri\-mean, which is a weighted average of the median and the two fourths:
The trimean is resistant (like the median \\(M\\)), since it cannot be distorted by a few large or small extreme values. But, by combining the fourths and the median, the tri\-mean can reflect the lack of symmetry in the middle half of the data.
4\.8 Measures of Spread
-----------------------
The usual measure of spread is the standard deviation \\(s\\) that is based on computing deviations from the mean. It suffers from the same lack\-of\-resistance problem as the mean – a single large value can distort the value of \\(s\\). So the standard deviation is not suitable for exploratory work.
For similar reasons, the range \\(R \= HI \- LO\\) is a poor measure of spread since it is based on only the two extreme values, and these two values may not reflect the general dispersion in the batch.
A better resistant measure of spread is the fourth\-spread, denoted \\(dF\\), that is defined by the distance between the lower and upper fourths:
The fourth\-spread has a simple interpretation – it’s the width of the middle 50% of the data.
4\.9 Identifying Outliers
-------------------------
John Tukey devised a rule\-of\-thumb for identifying extreme observations in a batch. This rule\-of\-thumb is not designed to formally label particular data items as outliers. Rather this method sets apart a few unusually observations that may deserve further study.
The idea here is to set lower and upper fences in the data. If any of the observations fall beyond the fences, they are designated as possible outliers.
We first define a step which is equal to 1 1/2 times the fourth\-spread:
\\\[
STEP \= 1\.5 \\times (F\_U \- F\_L).
\\]
Then the lower fence is defined by one step smaller than the lower quartile, and the upper fence is defined as one step larger than the upper quartile:
\\\[
fence\_{lower} \= F\_L \- STEP, \\, \\, fence\_{upper} \= F\_U \+ STEP.
\\]
Any observations that fall beyond the fences are called \`\`outside”.
Tukey thought it was useful to have two sets of fences. The fences defined above can be called inner fences. To obtain outer fences, we got out two steps from the fourths:
\\\[
FENCE\_{lower} \= F\_L \- 2 \\times STEP, \\, \\, FENCE\_{upper} \= F\_U \+ 2 \\times STEP.
\\]
(We will call these outer fences FENCES.) Observations that fall beyond the outer fences can be regarded as \`\`really out”.
4\.10 A New Example
-------------------
For a second example, our almanac (The World Almanac 2001, page 237\) gives the average gestation (in days) for 43 species of animals. Here’s part of the data and associated stemplot:
```
head(gestation.periods)
```
```
## Animal Period
## 1 Ass 365
## 2 Baboon 187
## 3 Bear_black 219
## 4 Bear_grizzly 225
## 5 Bear_polar 240
## 6 Beaver 105
```
```
aplpack::stem.leaf(gestation.periods$Period)
```
```
## 1 | 2: represents 120
## leaf unit: 10
## n: 43
## 7 0* | 1123334
## 14 0. | 5666699
## 18 1* | 0001
## (4) 1. | 5568
## 21 2* | 0123344
## 14 2. | 5588
## 10 3* | 3
## 9 3. | 566
## 6 4* | 0
## 5 4. | 558
## HI: 645 660
```
Here the dataset looks somewhat right skewed. There are a large number of animals (the small variety) with short gestation periods under 100 days. Also we see a cluster of periods in the 200\-240 range. We note the two large values – each exceeding 600 days. We’re not surprised that these correspond to the two elephants in the table.
Let’s compute some letter values.
1. There are \\(n\\) \= 43 values, so the depth of the median is \\(d(M)\\) \= (43\+1\)/2 \= 22\. Looking at the stemplot, we see that the 22nd value is 18, so \\(M\\) \= 18\.
2. To find fourths, we compute the depth: \\(d(F)\\) \= (22\+1\)/2 \= 11 1/2\. The lower and upper fourths are found by averaging the 11th and 12th values at each end. Looking at the stemplot, we find
\\\[
F\_L \= (6 \+ 6\)/2 \= 6, \\, \\, F\_U \= (28\+28\)/2 \= 28 .
\\]
3. We can keep going to find additional letter values. The depth of the eighth is \\(d(E) \= (11\+1\)/2 \= 6\\). Looking at the stemplot, these values are
\\\[
E\_L \= 3, E\_U \= 40
\\]
4. We set our fences to look for outliers. The fourth spread is
\\\[
dF \= 28 \- 6 \= 22
\\]
and so a step is
\\\[
STEP \= 1\.5 (22\) \= 33 .
\\]
The inner fences are located at
\\\[
F\_L \- STEP \= 6 \- 33 \= \-27, \\, \\, F\_U \+ STEP \= 28 \+ 33 \= 61
\\]
and the outer fences at
\\\[
FL \- 2 \\times STEP \= 6 \- 2(33\) \= \-60, \\, \\, F\_U \+ 2 \\times STEP \= 61 \+ 33 \= 94\.
\\]
Do we have any outliers? Yes, the two elephant gestation periods are beyond the inner fence but within the outer fence at the high end. I think we would all agree that elephants are unusually large animals which likely goes together with their long gestation periods.
4\.11 Relationship with Normal Data
-----------------------------------
In introductory statistics, we spend a lot of time talking about the normal distribution. If we have a bunch of normally distributed data, what do the fourths look like? Also should we expect to find any outliers?
Consider the normal curve with mean \\(\\mu\\) and standard deviation \\(\\sigma\\) that represents a population of normal measurements. It is easy to check that 50% of the probability content of a normal curve falls between \\(\\mu \- 0\.6745 \\sigma\\) and \\(\\mu \+ 0\.6745 \\sigma\\) . So for normal measurements, \\(F\_L \= \\mu \- 0\.6745\\) and \\(F\_U \= \\mu \+ 0\.6745 \\sigma\\) and the fourth\-spread is \\(d\_F \= 2 (0\.6745\) \\sigma \= 1\.349 \\sigma\\).
As an aside, this relationship gives us an alternative estimate of the standard deviation \\(s\\). Solving \\(d\_F \= 1\.349 \\sigma\\) for \\(\\sigma\\) gives the relationship
\\\[
\\sigma \= d\_F / 1\.349\.
\\]
So a simple way of estimating a standard deviation divides the fourth spread by 1\.349\. This is called the F pseudosigma. Why is this better than the usual estimate of \\(\\sigma\\)? It’s better since, unlike the usual estimate, the F pseudosigma is resistant to extreme observations.
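As a quick illustration, the sketch below compares the F pseudosigma with the usual standard deviation for the gestation data. This is only a rough check: `IQR()` is used as a convenient stand\-in for the fourth spread, so the value is approximate.
```
# F pseudosigma versus the usual standard deviation for the gestation periods
periods <- gestation.periods$Period
c(f_pseudosigma = IQR(periods) / 1.349,
  usual_sd = sd(periods))
```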
Continuing our discussion, how many outliers should we find for normal data? For normal data,
\\\[
STEP \= 1\.5 (1\.349 \\sigma ) \= 2\.0235 \\sigma
\\]
and the inner fences will be
\\\[
F\_L \- STEP \= \\mu \- 0\.6745 \\sigma \- 2\.0235 \\sigma \= \\mu \- 2\.6980 \\sigma
\\]
\\\[
F\_U \+ STEP \= \\mu \+ 0\.6745 \\sigma \+ 2\.0235\\sigma \= \\mu \+ 2\.6980 \\sigma.
\\]
The probability of being outside \\(( \\mu \- 2\.6980\\sigma , \\mu \+ 2\.6980 \\sigma )\\) for a normal curve is .007\. This means that only 0\.7 % of normally distributed data will be classified as outliers. So, it is pretty rare to see outliers for normal data.
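This figure is easy to verify numerically in R; the short sketch below just evaluates the normal tail probability using the constant 2.698 derived above.
```
# Probability that a normal observation falls outside mu +/- 2.698 sigma
2 * pnorm(-2.698)   # roughly 0.007
```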
COMMENT: There is a slight flaw in the above argument. The normal curve represents the distribution for a large sample of normal data and 0\.7% of this large sample will be outlying. If we take a small sample, then we will generally see a higher fraction of outliers. In fact, it has been established that the fraction of outliers for a normal sample of size \\(n\\) is approximately
```
.00698 + .4 / n
```
For example, if we take a sample of size \\(n\\) \= 20, then the proportion of outliers will be
```
.00698 + .4/20 = .027
```
If we take repeated samples of size 20, then approximately 2\.7 % of all these observations will be outlying.
I checked this result in a simulation. I took repeated samples of size 20 from a normal distribution. In 1000 samples, I found a total of 327 outliers. The fraction of outliers was 327/20000 \= 0\.016, which is a bit smaller than the result above. But this fraction is larger than the fraction 0\.00698 from a “large” normal sample.
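For readers who want to try this kind of experiment themselves, here is a minimal sketch of such a simulation. It uses base R only, with `quantile()` standing in for Tukey's fourths and an arbitrary random seed, so the resulting fraction will only roughly match the one reported above.
```
# Draw 1000 samples of size 20 from a standard normal and count the
# observations that fall outside the inner fences of their own sample
set.seed(1)   # arbitrary seed, chosen only for reproducibility
count_outliers <- function(x) {
  fourths <- quantile(x, c(0.25, 0.75))       # approximate fourths
  step <- 1.5 * (fourths[[2]] - fourths[[1]])
  sum(x < fourths[[1]] - step | x > fourths[[2]] + step)
}
n_out <- replicate(1000, count_outliers(rnorm(20)))
sum(n_out) / (1000 * 20)   # overall fraction of outlying observations
```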
5 Boxplots
==========
5\.1 The Data:
--------------
In this topic, we start discussing how to compare batches of data effectively. Our dataset is taken from the 2001 Boston Marathon race. On the `www.bostonmarathon.org` website, one can obtain results for participants of different genders, ages, and home countries. Here we focus on the time\-to\-completion for women runners. We take a sample of women of ages 20, 30, 40, 50, and 60\. In the display below, we show the data and then construct parallel stemplots of the times (in minutes) for all the runners in our study. The unit in our stemplot is one minute, so the shortest time among all 20\-year\-old women in our sample had a finish time of 150 minutes, which is equivalent to 2 1/2 hours.
```
Official times (minutes) of some women runners in the 2001 Boston Marathon.
Age=20
244 213 274 240 225 269 214 223 271 237
232 229 209 272 230 229 203 236 222 239
233 150
age=30
194 207 259 287 319 252 237 330 236 210
226 213 241 235 194 216 272 227 278 211
219 259 237 234 205
age=40
286 256 247 166 275 284 239 235 163 214
227 346 210 223 238 221 271 224 248 231
314 224 258 244 262
age=50
281 287 222 251 253 302 235 231 254 253
262 231 230 284 326 349 269 327 258 270
260 279 263 245 271
age=60
219 338 278 315 278 258 274 233 280 270
271
PARALLEL STEMPLOTS
One unit = 1 minute.
AGE=20 AGE=30 AGE=40 AGE=50 AGE=60
15 0 15 15 15 15
16 16 16 36 16 16
17 17 17 17 17
18 18 18 18 18
19 19 44 19 19 19
20 39 20 57 20 20 20
21 34 21 01369 21 04 21 21 9
22 23599 22 67 22 13447 22 2 22
23 023679 23 45677 23 1589 23 0115 23 3
24 04 24 1 24 478 24 5 24
25 25 299 25 68 25 13348 25 8
26 9 26 26 2 26 0239 26
27 124 27 28 27 15 27 019 27 01488
28 28 7 28 46 28 147 28 0
29 29 29 29 29
30 30 30 30 2 30
31 31 9 31 4 31 31 5
32 32 32 32 67 32
33 33 0 33 33 33 8
34 34 6 34 34 9 34
```
We are interested in graphically comparing the batches of times from the five age groups. An effective display is based on a boxplot, which is a graph of a five\-number summary with outliers indicated.
5\.2 Constructing A Single Boxplot
----------------------------------
Let’s first illustrate the construction of a single boxplot for the times of the 20\-year old women. There are \\(n\\) \= 22 runners. So the location of the median is (22\+1\)/2 \= 11 1/2, and the location of the fourths is (11\+1\)/2 \= 6\. From the stemplot, we find
\\\[
LO \= 150, F\_L \= 222, M \= 231, F\_U \= 240, HI \= 274 .
\\]
Do we have any outliers? Here the fourth spread is \\(d\_F \= 240 \- 222 \= 18\\) and a step is 1\.5 (18\) \= 27\. The inner fences are at
\\\[
222 \- 27 \= 195 \\,\\, {\\rm and} \\,\\, 240 \+ 27 \= 267
\\]
Looking at the stemplot, we see one time (150\) beyond the lower fence and four times (269, 271, 272, 274\) beyond the upper fence. Certainly the low outlier is interesting since that corresponds to a very fast marathon runner.
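These fence calculations are easy to reproduce in R. The sketch below assumes the `boston.marathon` data frame from the LearnEDAfunctions package, and `quantile()` again stands in for the fourths, so the cutoffs may differ slightly from the hand calculation.
```
# Inner fences and flagged outliers for the age-20 finishing times
library(LearnEDAfunctions)

times20 <- boston.marathon$time[boston.marathon$age == 20]
fourths <- quantile(times20, c(0.25, 0.75))   # close to F_L = 222, F_U = 240
step <- 1.5 * (fourths[[2]] - fourths[[1]])
fences <- c(lower = fourths[[1]] - step, upper = fourths[[2]] + step)
fences
times20[times20 < fences["lower"] | times20 > fences["upper"]]   # outliers
```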
To draw a boxplot:
1. Draw a number line with tick marks covering the range of the data.
2. Draw a box where the lines of the box correspond to the locations of the fourths and the median. (See diagram.)
3. Indicate the outliers using separate plotting points.
4. To complete the box, draw lines out from the box to the most extreme values that are not outliers. (These points are called adjacent values.)
Of course, we don’t need our labels, and so the finished boxplot is simply an unlabeled version of this display.
Here is a software generated boxplot display using the `ggplot2` package in R.
```
library(LearnEDAfunctions)
library(tidyverse)
ggplot(filter(boston.marathon, age == 20),
aes(x = 1, y = time)) + xlim(0, 2) +
geom_boxplot() + coord_flip() +
theme(axis.title.y=element_blank(),
axis.text.y=element_blank(),
axis.ticks.y=element_blank())
```
5\.3 Interpreting a Boxplot
---------------------------
Before we use boxplots to compare batches, let us spend some time interpreting a boxplot for a single batch. The figure below shows the histogram and corresponding boxplot for two datasets. The first dataset (left side) is symmetric with long tails on both sides.
If we look at the corresponding boxplot of this symmetric dataset, we see
* the location of the median (red line) is roughly half\-way across the box (the location of the fourths)
* the lengths of the right and left whiskers (the lines extending from the box) are about the same – this means that the width of the lower quarter of the data is equal to the width of the upper quarter
Let’s contrast this boxplot of the symmetric batch with the boxplot of the batch on the right. From the histogram, we see that this data is skewed right – most of the data is in the 0\-4 range and the values tail off towards large values. If we look at the corresponding boxplot, we see
* the length of the box from the median to the upper fourth is longer than the length from the lower fourth to the median – this indicates skewness in the middle half of the data
* the length of the right whisker is significantly longer than the length of the left whisker – this shows right skewness in the tail portion of the data
After some practice looking at boxplots, you’ll see that a boxplot is pretty informative about the shape of a batch.
5\.4 Boxplots to Compare Batches
--------------------------------
Now we are ready to use boxplots to compare the batches of running times for the different age groups. For each batch, we compute (1\) the five\-number summary, (2\) the fences, and (3\) indicate any outliers. Below, we have summarized our calculations for the five age groups, and then we use the calculations to construct boxplots for the batches. We display all of the boxplots on a single plot using one scale.
```
Age = 20
Depth Lower Upper
N= 22
M 11.5 231.000
F 6.0 222.000 240.000
STEP = 27
FENCES = 195, 267
OUTLIERS: 150, 269, 271, 272, 274
Age = 30
Depth Lower Upper
N= 25
M 13.0 235.000
F 7.0 213.000 259.000
STEP = 69
FENCES: 144, 328
OUTLIERS: 330
Age = 40
Depth Lower Upper
N= 25
M 13.0 239.000
F 7.0 224.000 262.000
STEP = 57
FENCES = 167, 319
OUTLIERS: 163, 166, 346
Age = 50
Depth Lower Upper
N= 25
M 13.0 262.000
H 7.0 251.000 281.000
STEP = 45
FENCES: 206, 326
OUTLIERS: 327, 349
Age = 60
Depth Lower Upper
N= 11
M 6.0 274.000
H 3.5 264.000 279.000
STEP = 22.5
FENCES: 241.5, 301.5
OUTLIERS: 219, 233, 315, 338
```
```
ggplot(boston.marathon, aes(x = factor(age), y = time)) +
geom_boxplot() + coord_flip() +
xlab("Age") + ylab("Time")
```
What do we see in this display of boxplots?
* It is easier to interpret this display when the boxplots are sorted by the medians of the groups. Here this sorting occurs naturally, since the 20 year\-olds generally have smaller times than the 30 year\-olds, and the 30 year\-olds have smaller times than the 40 year\-olds, and so on.
* We notice a number of outlying points. In each age group, there are one or two unusually large times. Since we give special recognition to short times, we also notice the 20\-year\-old woman who ran the race in 150 minutes.
* If we focus on the three middle age groups, we notice that each group has about the same spread. (The spreads of the times for the 20 year\-olds and the 60 year\-olds are a bit smaller.) The lengths of the boxes for the three groups are about the same, indicating they have similar fourth spreads.
5\.5 Comparisons using Medians
------------------------------
When batches have similar spreads, it is easy to make comparisons. Let’s illustrate this for the three middle age groups that have similar spreads. The medians and fourth spreads for these batches are
```
Median Fourth-Spread
age30 235 min 46 min
age40 239 min 38 min
age50 262 min 30 min
```
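Such a table can be computed directly in R; the sketch below assumes `boston.marathon` from LearnEDAfunctions is loaded and uses `IQR()` as a stand\-in for the fourth spread, so the spreads may differ a little from the hand values.
```
# Median and (approximate) fourth spread of finishing time by age group
library(LearnEDAfunctions)
library(dplyr)

boston.marathon %>%
  group_by(age) %>%
  summarize(Median = median(time),
            Fourth_Spread = IQR(time))
```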
Since the times for the 30\-year\-old and 40\-year\-old groups have approximately the same spread, the batch of 40\-year\-old times can be obtained by adding 4 minutes (the difference in medians) to the batch of 30\-year\-old times. In other words,
\\\[
age40 \= age30 \+ 4
\\]
which means that the 40\-year\-olds run, on average, 4 minutes longer than the 30\-year\-old runners.
Similarly, comparing the two older groups, we can say that
\\\[
age50 \= age40 \+ 23
\\]
which means that the batch of 50\-year\-old times can be found by adding 23 minutes to the 40\-year\-old times.
Do older women runners run slower than younger women in the Boston Marathon? Looking back at our boxplot display and comparing medians of the five groups, we see that women of ages 20, 30, and 40 have (approximately) the same median completion time. The median time for the 50 year\-old runners seems significantly higher than the times for the 20\-40 year\-olds, and the runners of age 60 have a significantly higher median than the 50\-year\-olds. So it appears that the best times for women marathoners are in a broad range between 20 and 40 years, and the times don’t appear to deteriorate until after age 40\.
This is a nice illustration, since the batches of data had similar spreads and this facilitated comparisons by comparing medians. We will see in our next example that batches can have varying spreads, and this motivates a reexpression, or change in the scale of the data, so that the reexpressed batches have similar spreads.
6 Spread Level Plot
===================
6\.1 Let’s Meet the Data
------------------------
The following table displays the population densities of each state in the U.S. for the years 1960, 1970, 1980, 1990, 2000, 2008\. Here population density is defined to be the number of people who live in the state per square mile, counting only land area. We know that the U.S. population has been increasing substantially in this last century and since the land area has remained roughly constant, this means that the population densities have been increasing. Our goal here is to effectively compare the six batches of densities to investigate the rate of change in this time period.
```
library(LearnEDAfunctions)
library(tidyverse)
pop.densities %>% select(-State) %>% head()
```
```
## Abbrev y1960 y1970 y1980 y1990 y2000 y2008
## 1 AL 64.37 67.88 76.74 79.62 87.6 91.87
## 2 AK 0.40 0.53 0.70 0.96 1.1 1.20
## 3 AZ 11.46 15.62 23.92 32.26 45.2 57.20
## 4 AR 34.30 36.94 43.91 45.15 51.3 54.84
## 5 CA 100.77 128.05 151.76 191.15 217.2 235.68
## 6 CO 16.91 21.30 27.86 31.76 41.5 47.62
```
6\.2 Comparing Batches in the Raw Scale
---------------------------------------
We begin by comparing the six batches of population densities (1960, 1970, 1980, 1990, 2000, 2008\) using parallel boxplots. Recall our basic construction process:
* we construct stem\-and\-leaf displays for each batch
* we find 5\-number summaries of each
* we set fences for each batch and identify outliers
* we construct parallel boxplots, ordering by the median value.
I had R construct the boxplots – here’s the display:
```
pop.densities %>% select(-State, -Abbrev) %>%
  gather(Year, Time) -> stacked.data  # stack the six yearly columns; values land in a column named Time
ggplot(stacked.data, aes(Year, Time)) +
geom_boxplot() + coord_flip()
```
6\.3 What’s Wrong?
------------------
It’s hard to compare the six batches of densities. Why?
* Each batch is strongly right\-skewed. The length of the right tail is longer than the length of the left tail (look at the lengths of the whiskers) and there are a number of outliers at the high end.
* The batches have different spreads. If we look at the length of the boxes (the fourth spread), we see that the 2008 data is more spread out than the 1980 data which is more spread out than the 1960 data.
* Looking further, we see a tendency for the batches of larger densities to have larger spreads. This is obvious if you compare the 2008 densities (large median and large spread) with the 1960 densities (smaller median and smaller spread).
* In other words, we see a dependence between spread and level in this boxplot display.
It is difficult to compare these batches since they have unequal spreads. This is a common problem. If we have several batches that contain positive values (like counts or amounts), the batches with larger values will tend also to have larger spreads.
6\.4 The Spread vs Level Plot
-----------------------------
We can see the relationship between the averages and spreads by use of a spread vs. level plot. For each batch, we compute the median \\(M\\) and the fourth spread \\(d\_F\\). Then we construct a scatterplot of the (\\(\\log M, \\log d\_F\\)) values. (In this class, we will generally take logs to base 10\. Actually, it doesn’t make much difference in most settings which base we use, but a base\-10 log is easier to interpret.)
The work for the spread vs level plot for our example is shown in the table below the graph. For each batch, we list the median and the fourth spread along with the corresponding logs. Then we plot \\(\\log M\\) against \\(\\log d\_F\\).
```
(S <- spread_level_plot(stacked.data, Time, Year))
```
```
## # A tibble: 6 × 5
## Year M df log.M log.df
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 y1960 67.7 92.8 1.83 1.97
## 2 y1970 72.4 112. 1.86 2.05
## 3 y1980 81.0 130. 1.91 2.11
## 4 y1990 79.6 150. 1.90 2.18
## 5 y2000 88.6 161. 1.95 2.21
## 6 y2008 98.4 171. 1.99 2.23
```
Clearly there is a positive association in the graph, indicating that batches with small medians tend also to have small values of \\(d\_F\\) (small spreads).
6\.5 Reexpression
-----------------
We can correct the dependence between spread and level by reexpressing the data to a different scale. We focus on a special case of reexpressions, called power transformations, that have the form
\\\[
data^p,
\\]
where \\(p\\) is the power of the transformation. If \\(p \= 1\\), we just have the raw or original scale. The idea here is to choose a value of \\(p\\) not equal to 1 that might help in removing the dependence between spread and level.
Here is a simple algorithm to find the power \\(p\\).
1. Find the medians and fourth\-spreads for all batches and compute
\\(\\log M\\) and \\(\\log d\_F\\).
2. Construct a scatterplot of \\(\\log M\\) against \\(\\log d\_F\\).
(We have already done the first two steps.)
3. Fit a line by eye that goes through the points. (There can be some danger in fitting a least\-squares line – we’ll explain this problem later.)
4. If \\(b\\) is the slope of the best line, then the power of the reexpression will be
\\\[
p \= 1 \- b.
\\]
In the spread versus level plot of \\((\\log M, \\log d\_F)\\), a best line is drawn on top.
The slope of the line (as shown in the plot) is b \= 1\.7\. So the power of the reexpression is
\\\[
p \= 1 \- b \= 1 \- 1\.7 \= \-0\.7 ,
\\]
which we round to the nearby convenient power \\(\-0\.5\\).
So this method suggests that we should reexpress the density data by taking a \\(\-0\.5\\) power.
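Here is a small sketch of that calculation in R, using the table `S` returned by `spread_level_plot()` above. A least\-squares line is fit purely for convenience; as noted above, a line fit by eye or a resistant line is generally preferred.
```
# Slope of log d_F versus log M, and the implied power p = 1 - b
b <- coef(lm(log.df ~ log.M, data = S))[["log.M"]]
c(slope = b, power = 1 - b)
```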
6\.6 Reanalysis of the data in new scale
----------------------------------------
Let’s check if this method works. We redo our comparison of the 1960, 1970, 1980, 1990, 2000, 2008 batches using the data
```
stacked.data %>%
mutate(Reexpressed = Time ^ (-0.5)) ->
stacked.data
ggplot(stacked.data, aes(Year, Reexpressed)) +
geom_boxplot() + coord_flip()
```
Actually this display doesn’t look much better than our original picture. There is still right skewness in each batch and we can’t help but notice the number of high outliers.
But there is one improvement – the spreads of the middle half of each batch are roughly equal, and we have largely removed the dependence between level and spread.
We can check out this point by performing a spread vs. level plot for the reexpressed data. In the table, we’ve listed the median \\(M\\) and the fourth spread \\(d\_F\\) for the reexpressed data and computed the logs of \\(M\\) and \\(d\_F\\). We look at a scatterplot of log \\(M\\) against log \\(d\_F\\) for the reexpressed data.
```
(S <- spread_level_plot(stacked.data, Reexpressed, Year))
```
```
## # A tibble: 6 × 5
## Year M df log.M log.df
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 y1960 0.122 0.120 -0.915 -0.922
## 2 y1970 0.117 0.117 -0.930 -0.933
## 3 y1980 0.111 0.108 -0.954 -0.966
## 4 y1990 0.112 0.103 -0.951 -0.989
## 5 y2000 0.106 0.0851 -0.974 -1.07
## 6 y2008 0.101 0.0809 -0.997 -1.09
```
Actually, we still see a positive relationship between level and spread, which indicates that a further reexpression may be needed to remove or, at least, decrease the relationship between level and spread. (One can show that reexpressing the data by a 0\.1 power seems to do a better job in reducing the dependence between spread and level.)
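For instance, one could check that suggestion with a short sketch, assuming `stacked.data` and `spread_level_plot()` from above are still available.
```
# Re-check the spread vs level relationship after a 0.1 power reexpression
stacked.data %>%
  mutate(Reexpressed01 = Time ^ 0.1) %>%
  spread_level_plot(Reexpressed01, Year)
```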
6\.7 Wrap Up
------------
What have we learned?
* Batches are harder to compare when they have unequal spreads.
* The goal is to reexpress the data by a power transformation so that the reexpressed data has equal spreads across batches.
* We have introduced a simple plot (spread vs. level) which is designed to find the correct choice of the power p to accomplish our goal.
* However, this method may not work, and so we should reanalyze our data using the reexpression to see if the dependence between spread and level is reduced.
6\.1 Let’s Meet the Data
------------------------
The following table displays the population densities of each state in the U.S. for the years 1960, 1970, 1980, 1990, 2000, 2008\. Here population density is defined to be the number of people who live in the state per square mile, counting only land area. We know that the U.S. population has been increasing substantially in this last century and since the land area has remained roughly constant, this means that the population densities have been increasing. Our goal here is to effectively compare the six batches of densities to investigate the rate of change in this time period.
```
library(LearnEDAfunctions)
library(tidyverse)
pop.densities %>% select(-State) %>% head()
```
```
## Abbrev y1960 y1970 y1980 y1990 y2000 y2008
## 1 AL 64.37 67.88 76.74 79.62 87.6 91.87
## 2 AK 0.40 0.53 0.70 0.96 1.1 1.20
## 3 AZ 11.46 15.62 23.92 32.26 45.2 57.20
## 4 AR 34.30 36.94 43.91 45.15 51.3 54.84
## 5 CA 100.77 128.05 151.76 191.15 217.2 235.68
## 6 CO 16.91 21.30 27.86 31.76 41.5 47.62
```
6\.2 Comparing Batches in the Raw Scale
---------------------------------------
We begin by comparing the six batches of population densities (1960, 1970, 1980, 1990, 2000, 2008\) using parallel boxplots. Recall our basic construction process:
* we construct stem and leafs for each batch
* we find 5\-number summaries of each
* we set fences for each batch and identify outliers
* we construct parallel boxplots, ordering by the median value.
I had R construct the boxplots – here’s the display:
```
pop.densities %>% select(-State, -Abbrev) %>%
gather(Year, Time) -> stacked.data
ggplot(stacked.data, aes(Year, Time)) +
geom_boxplot() + coord_flip()
```
6\.3 What’s Wrong?
------------------
It’s hard to compare the six batches of densities. Why?
* Each batch is strongly right\-skewed. The length of the right tail is longer than the length of the left tail (look at the lengths of the whiskers) and there are a number of outliers at the high end.
* The batches have different spreads. If we look at the length of the boxes (the fourth spread), we see that the 2008 data is more spread out than the 1980 data which is more spread out than the 1960 data.
* Looking further, we see a tendency for the batches of larger densities to have larger spreads. This is obvious if you compare the 2008 densities (large median and large spread) with the 1960 densities (smaller median and smaller spread).
* In other words, we see a dependence between spread and level in this boxplot display.
It is difficult to compare these batches since they have unequal spreads. This is a common problem. If we have several batches that contain positive values (like counts or amounts), the batches with larger values will tend also to have larger spreads.
6\.4 The Spread vs Level Plot
-----------------------------
We can see the relationship between the averages and spreads by use of a spread vs. level plot. For each batch, we compute the median \\(M\\) and the fourth spread \\(d\_F\\). Then we construct a scatterplot of the (\\(\\log M, \\log d\_F\\)) values. (In this class, we will generally take logs to the base 10 power. Actually, it doesn’t make any difference in most settings what type of log we take, but it is easier to interpret a base 10 log.)
The work for the spread vs level plot for our example is shown in the table below the graph. For each batch, we list the median and the fourth spreads and the corresponding logs. Then we plot log M against log df.
```
(S <- spread_level_plot(stacked.data, Time, Year))
```
```
## # A tibble: 6 × 5
## Year M df log.M log.df
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 y1960 67.7 92.8 1.83 1.97
## 2 y1970 72.4 112. 1.86 2.05
## 3 y1980 81.0 130. 1.91 2.11
## 4 y1990 79.6 150. 1.90 2.18
## 5 y2000 88.6 161. 1.95 2.21
## 6 y2008 98.4 171. 1.99 2.23
```
Clearly there is a positive association in the graph, indicating that batches with small medians tend also to have small dfs (spreads).
6\.5 Reexpression
-----------------
We can correct the dependence between spread and level by reexpressing the data to a different scale. We focus on a special case of reexpressions, called power transformations, that have the form
\\\[
data^p,
\\]
where \\(p\\) is the power of the transformation. If \\(p \= 1\\), we just have the raw or original scale. The idea here is to choose a value of \\(p\\) not equal to 1 that might help in removing the dependence between spread and level.
Here is a simple algorithm to find the power \\(p\\).
1. Find the medians and fourth\-spreads for all batches and compute
\\(\\log M\\) and \\(\\log d\_F\\).
2. Construct a scatterplot of \\(\\log M\\) against \\(\\log d\_F\\).
(we’re already done the first two steps)
3. Fit a line by eye that goes through the points. (There can be some danger in fitting a least\-squares line – we’ll explain this problem later.)
4. If \\(b\\) is the slope of the best line, then the power of the reexpression will be
\\\[
p \= 1 \- b.
\\]
In the spread versus plot of \\((\\log M, \\log d\_F)\\), a best line is drawn on top.
The slope of the line (as shown in the plot) is b \= 1\.7\. So the power of the reexpression is
\\\[
p \= 1 \- b \= 1 \- 1\.7 \= \-0\.7 ,
\\]
which is approximately \\(\-0\.5\\).
So this method suggests that we should reexpress the density data by taking a \\(\-0\.5\\) power.
6\.6 Reanalysis of the data in new scale
----------------------------------------
Let’s check if this method works. We redo our comparison of the 1960, 1970, 1980, 1990, 2000, 2008 batches using the data
```
stacked.data %>%
mutate(Reexpressed = Time ^ (-0.5)) ->
stacked.data
ggplot(stacked.data, aes(Year, Reexpressed)) +
geom_boxplot() + coord_flip()
```
Actually this display doesn’t look much better than our original picture. There is still right skewness in each batch and we can’t help but notice the number of high outliers.
But there is one improvement – the spreads of the middle half of each batch are roughly equal and we have removed the dependence between level and spread.
We can check out this point by performing a spread vs. level plot for the reexpressed data. In the table, we’ve listed the median \\(M\\) and the fourth spread \\(d\_F\\) for the rexpressed data and computed the logs of \\(M\\) and \\(d\_F\\). We look at a scatterplot of log \\(M\\) against log \\(d\_F\\) for the reexpressed data.
```
(S <- spread_level_plot(stacked.data, Reexpressed, Year))
```
```
## # A tibble: 6 × 5
## Year M df log.M log.df
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 y1960 0.122 0.120 -0.915 -0.922
## 2 y1970 0.117 0.117 -0.930 -0.933
## 3 y1980 0.111 0.108 -0.954 -0.966
## 4 y1990 0.112 0.103 -0.951 -0.989
## 5 y2000 0.106 0.0851 -0.974 -1.07
## 6 y2008 0.101 0.0809 -0.997 -1.09
```
Actually, we still see a positive relationship between level and spread, which indicates that a further reexpression may be needed to remove or, at least, decrease the relationship between level and spread. (One can show that reexpressing the data by a 0\.1 power seems to do a better job in reducing the dependence between spread and level.)
6\.7 Wrap Up
------------
What have we learned?
* Batches are harder to compare when they have unequal spreads.
* The goal is to reexpress the data by a power transformation so that the reexpressed data has equal spreads across batches.
* We have introduced a simple plot (spread vs. level) which is designed to find the correct choice of the power p to accomplish our goal.
* However, this method may not work, and so we should reanalyze our data using the reexpression to see if the dependence between spread and level is reduced.
| Data Science |
bayesball.github.io | https://bayesball.github.io/EDA/spread-level-plot.html |
6 Spread Level Plot
===================
6\.1 Let’s Meet the Data
------------------------
The following table displays the population densities of each state in the U.S. for the years 1960, 1970, 1980, 1990, 2000, 2008\. Here population density is defined to be the number of people who live in the state per square mile, counting only land area. We know that the U.S. population has been increasing substantially in this last century and since the land area has remained roughly constant, this means that the population densities have been increasing. Our goal here is to effectively compare the six batches of densities to investigate the rate of change in this time period.
```
library(LearnEDAfunctions)
library(tidyverse)
pop.densities %>% select(-State) %>% head()
```
```
## Abbrev y1960 y1970 y1980 y1990 y2000 y2008
## 1 AL 64.37 67.88 76.74 79.62 87.6 91.87
## 2 AK 0.40 0.53 0.70 0.96 1.1 1.20
## 3 AZ 11.46 15.62 23.92 32.26 45.2 57.20
## 4 AR 34.30 36.94 43.91 45.15 51.3 54.84
## 5 CA 100.77 128.05 151.76 191.15 217.2 235.68
## 6 CO 16.91 21.30 27.86 31.76 41.5 47.62
```
6\.2 Comparing Batches in the Raw Scale
---------------------------------------
We begin by comparing the six batches of population densities (1960, 1970, 1980, 1990, 2000, 2008\) using parallel boxplots. Recall our basic construction process:
* we construct stem and leafs for each batch
* we find 5\-number summaries of each
* we set fences for each batch and identify outliers
* we construct parallel boxplots, ordering by the median value.
I had R construct the boxplots – here’s the display:
```
pop.densities %>% select(-State, -Abbrev) %>%
gather(Year, Time) -> stacked.data
ggplot(stacked.data, aes(Year, Time)) +
geom_boxplot() + coord_flip()
```
6\.3 What’s Wrong?
------------------
It’s hard to compare the six batches of densities. Why?
* Each batch is strongly right\-skewed. The length of the right tail is longer than the length of the left tail (look at the lengths of the whiskers) and there are a number of outliers at the high end.
* The batches have different spreads. If we look at the length of the boxes (the fourth spread), we see that the 2008 data is more spread out than the 1980 data which is more spread out than the 1960 data.
* Looking further, we see a tendency for the batches of larger densities to have larger spreads. This is obvious if you compare the 2008 densities (large median and large spread) with the 1960 densities (smaller median and smaller spread).
* In other words, we see a dependence between spread and level in this boxplot display.
It is difficult to compare these batches since they have unequal spreads. This is a common problem. If we have several batches that contain positive values (like counts or amounts), the batches with larger values will tend also to have larger spreads.
6\.4 The Spread vs Level Plot
-----------------------------
We can see the relationship between the averages and spreads by use of a spread vs. level plot. For each batch, we compute the median \\(M\\) and the fourth spread \\(d\_F\\). Then we construct a scatterplot of the (\\(\\log M, \\log d\_F\\)) values. (In this class, we will generally take logs to the base 10 power. Actually, it doesn’t make any difference in most settings what type of log we take, but it is easier to interpret a base 10 log.)
The work for the spread vs level plot for our example is shown in the table below. For each batch, we list the median and the fourth spread and the corresponding logs. Then we plot \\(\\log M\\) against \\(\\log d\_F\\).
```
(S <- spread_level_plot(stacked.data, Time, Year))
```
```
## # A tibble: 6 × 5
## Year M df log.M log.df
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 y1960 67.7 92.8 1.83 1.97
## 2 y1970 72.4 112. 1.86 2.05
## 3 y1980 81.0 130. 1.91 2.11
## 4 y1990 79.6 150. 1.90 2.18
## 5 y2000 88.6 161. 1.95 2.21
## 6 y2008 98.4 171. 1.99 2.23
```
Clearly there is a positive association in the graph, indicating that batches with small medians tend also to have small values of \\(d\_F\\) (small spreads).
6\.5 Reexpression
-----------------
We can correct the dependence between spread and level by reexpressing the data to a different scale. We focus on a special case of reexpressions, called power transformations, that have the form
\\\[
data^p,
\\]
where \\(p\\) is the power of the transformation. If \\(p \= 1\\), we just have the raw or original scale. The idea here is to choose a value of \\(p\\) not equal to 1 that might help in removing the dependence between spread and level.
Here is a simple algorithm to find the power \\(p\\).
1. Find the medians and fourth\-spreads for all batches and compute
\\(\\log M\\) and \\(\\log d\_F\\).
2. Construct a scatterplot of \\(\\log M\\) against \\(\\log d\_F\\).
(we’ve already done the first two steps)
3. Fit a line by eye that goes through the points. (There can be some danger in fitting a least\-squares line – we’ll explain this problem later.)
4. If \\(b\\) is the slope of the best line, then the power of the reexpression will be
\\\[
p \= 1 \- b.
\\]
In the spread versus level plot of \\((\\log M, \\log d\_F)\\), a best line is drawn on top.
The slope of the line (as shown in the plot) is \\(b \= 1\.7\\). So the power of the reexpression is
\\\[
p \= 1 \- b \= 1 \- 1\.7 \= \-0\.7 ,
\\]
which, rounding to the nearest multiple of one\-half, we take to be approximately \\(\-0\.5\\).
So this method suggests that we should reexpress the density data by taking a \\(\-0\.5\\) power.
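Here is a minimal sketch of how the \\(p \= 1 \- b\\) rule can be computed directly from `stacked.data` (columns `Year` and `Time` as above). The quartiles stand in for the fourths, and a least\-squares slope stands in for a line fit by eye (the algorithm above cautions about least squares), so this is only a rough check; `sl.table` is an illustrative name.
```
stacked.data %>%
  group_by(Year) %>%
  summarize(M = median(Time),
            dF = diff(quantile(Time, c(0.25, 0.75)))) %>%
  mutate(log.M = log10(M),
         log.df = log10(dF)) ->
  sl.table

# slope b of a least-squares line through (log.M, log.df), then p = 1 - b
b <- unname(coef(lm(log.df ~ log.M, data = sl.table))[2])
round(c(slope = b, power = 1 - b), 2)
```
The slope should come out close to the \\(b \= 1\.7\\) used above, though the exact value depends on how the line and the fourths are computed.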
6\.6 Reanalysis of the data in new scale
----------------------------------------
Let’s check if this method works. We redo our comparison of the 1960, 1970, 1980, 1990, 2000, 2008 batches using the data
```
stacked.data %>%
mutate(Reexpressed = Time ^ (-0.5)) ->
stacked.data
ggplot(stacked.data, aes(Year, Reexpressed)) +
geom_boxplot() + coord_flip()
```
Actually this display doesn’t look much better than our original picture. There is still right skewness in each batch and we can’t help but notice the number of high outliers.
But there is one improvement – the spreads of the middle half of each batch are roughly equal and we have removed the dependence between level and spread.
We can check out this point by performing a spread vs. level plot for the reexpressed data. In the table, we’ve listed the median \\(M\\) and the fourth spread \\(d\_F\\) for the reexpressed data and computed the logs of \\(M\\) and \\(d\_F\\). We look at a scatterplot of \\(\\log M\\) against \\(\\log d\_F\\) for the reexpressed data.
```
(S <- spread_level_plot(stacked.data, Reexpressed, Year))
```
```
## # A tibble: 6 × 5
## Year M df log.M log.df
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 y1960 0.122 0.120 -0.915 -0.922
## 2 y1970 0.117 0.117 -0.930 -0.933
## 3 y1980 0.111 0.108 -0.954 -0.966
## 4 y1990 0.112 0.103 -0.951 -0.989
## 5 y2000 0.106 0.0851 -0.974 -1.07
## 6 y2008 0.101 0.0809 -0.997 -1.09
```
Actually, we still see a positive relationship between level and spread, which indicates that a further reexpression may be needed to remove or, at least, decrease the relationship between level and spread. (One can show that reexpressing the data by a 0\.1 power seems to do a better job in reducing the dependence between spread and level.)
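To check the suggestion in parentheses, one could reexpress with a 0\.1 power and rerun the spread vs. level plot. This is just a sketch, and `Reexpressed2` is an illustrative column name.
```
stacked.data %>%
  mutate(Reexpressed2 = Time ^ 0.1) ->
  stacked.data
spread_level_plot(stacked.data, Reexpressed2, Year)
```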
6\.7 Wrap Up
------------
What have we learned?
* Batches are harder to compare when they have unequal spreads.
* The goal is to reexpress the data by a power transformation so that the reexpressed data has equal spreads across batches.
* We have introduced a simple plot (spread vs. level) which is designed to find the correct choice of the power p to accomplish our goal.
* However, this method may not work, and so we should reanalyze our data using the reexpression to see if the dependence between spread and level is reduced.
| Data Science |
bayesball.github.io | https://bayesball.github.io/EDA/comparing-batches-iii.html |
7 Comparing Batches III
=======================
7\.1 A Case Study where the Spread vs Level Plot Works
------------------------------------------------------
**Baseball data: Team homerun numbers for years 1900, …, 2000\.**
Background: In baseball, the most dramatic play is the home run, where the batter hits the pitch over the outfield fence. In a current typical baseball game, you may see 1\-3 home runs hit. Home runs were not always so common. Here we compare the quantity of home runs hit over the years. Specifically, we look at the total numbers of home runs hit by all teams in the Major League for the years 1900, 1910, …, 2000\.
To compare these 11 batches, we use parallel boxplots. In the `LearnEDA` package, the data is available in the data frame `homeruns.2000`.
The boxplot function is used to construct parallel boxplots of the team home runs by year.
```
library(LearnEDAfunctions)
library(tidyverse)
ggplot(homeruns.2000, aes(factor(YEARS), HOMERUNS)) +
geom_boxplot() + coord_flip() +
xlab("Year")
```
Looking at this graph, what do we see?
* We see that there were a small number of home runs hit by teams in the early years (1900\-1920\) compared with the later years (1930\-2000\).
* Also the spreads of the batches are not the same. The batches with the small home run numbers are also the ones with the smallest spreads.
* So there appears to be a dependence between spread and level here.
To construct a spread vs. level plot, we apply the R function `spread_level_plot()`. This function outputs a table of the medians, dfs, log10(medians), and log10(dfs) for all years. Also it constructs a spread versus level graph with a “best line” superimposed.
```
spread_level_plot(homeruns.2000, HOMERUNS, YEARS)
```
```
## # A tibble: 11 × 5
## YEARS M df log.M log.df
## <int> <dbl> <dbl> <dbl> <dbl>
## 1 1900 31 8 1.49 0.903
## 2 1910 22.5 16.8 1.35 1.22
## 3 1920 34.5 20.2 1.54 1.31
## 4 1930 84 54.8 1.92 1.74
## 5 1940 92 42.5 1.96 1.63
## 6 1950 129 58 2.11 1.76
## 7 1960 126. 22 2.10 1.34
## 8 1970 137 52.8 2.14 1.72
## 9 1980 114. 42 2.06 1.62
## 10 1990 126. 40.8 2.10 1.61
## 11 2000 181 47.5 2.26 1.68
```
We see a positive association in the graph, indicating a dependence between level (measured by \\(\\log M\\)) and spread (measured by \\(\\log d\_F)\\).
We fit a line to this graph. We use a resistant procedure called a “resistant line” to fit this line (we’ll talk more about this procedure later in this course). The slope of this line is \\(b \= .64\\). So, by using our rule\-of\-thumb, we should reexpress our home run data by a power transformation with power \\(p \= 1 \- b \= 1 \- .64 \= .36\\), which is approximately \\(p \= .5\\). In other words, this method suggests taking a root transformation.
On R, we create a new variable roots that will contain the square roots of the home run numbers.
```
homeruns.2000 %>% mutate(roots = sqrt(HOMERUNS)) ->
homeruns.2000
```
We construct a parallel boxplot display of the batches of root home run numbers.
```
ggplot(homeruns.2000, aes(factor(YEARS), roots)) +
geom_boxplot() + coord_flip() +
xlab("Year")
```
This plot looks better – the spreads of the batches are more similar. The batches with the smallest home run counts (1900\-1920\) have spreads that are similar in size to the spreads of the batches with large counts.
If you perform a spread vs. level plot of this reexpressed data (that is, the root data), you won’t see much of a relationship between \\(\\log M\\) and \\(\\log d\_F\\). (Remember, \\(M\\) and \\(d\_F\\) are computed using the root home run data.)
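As a quick check of this claim, one could run the same `spread_level_plot()` call on the `roots` variable created above; this is only a sketch, and the output should show little trend between the log medians and log fourth spreads.
```
# spread vs. level summary for the root home run data
spread_level_plot(homeruns.2000, roots, YEARS)
```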
7\.2 A Case Study where the Spread vs Level Plot Doesn’t Work
-------------------------------------------------------------
**Music Data: Time in seconds of tracks of the Fab Four**
Background: The Beatles were a famous pop group that played in the 60’s and 70’s. The style of the Beatles’ music changed over their career – this change in style is reflected by the length of their songs.
We look at six Beatles’ albums: The BBC Tapes, Rubber Soul, Revolver, Magical Mystery Tour, Sgt. Pepper, and The White Album. For each album, we measure the time (in seconds) for all of the songs. The data is stored in the `LearnEDA` data frame `beatles`.
```
ggplot(beatles, aes(album, time)) +
geom_boxplot() + coord_flip() +
xlab("Album") +
ylab("Time (Seconds)")
```
Here are parallel boxplots of the times of the songs on the six albums. We see differences in the average song lengths – Magical Mystery Tour and The White Album tend to have longer songs than Rubber Soul and Revolver. But we also see differences in the spreads of the batches and so we try our spread vs. level plot to suggest a possible reexpression of the data.
The table below shows the medians, fourth\-spreads, and logs for the six batches, followed by a spread vs level plot.
```
spread_level_plot(beatles, time, album)
```
```
## # A tibble: 6 × 5
## album M df log.M log.df
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 BBC_tapes 136 36 2.13 1.56
## 2 Magical_Mystery_Tour 187 61 2.27 1.79
## 3 Revolver 150. 29.2 2.18 1.47
## 4 Rubber_Soul 149 23.2 2.17 1.37
## 5 Sargent_Pepper 163 49 2.21 1.69
## 6 The_White_Album 176 62.8 2.25 1.80
```
We see a positive association in this plot – a resistant fit to this graph gives a slope of 3\.1 which suggests a power of \\(p \= 1 \- 3\.1 \= \-2\.1\\) which is approximately equal to \\(\-2\\).
We try out this reexpression – we transform the time data to \\((time)^{\-2}\\). The first graph shows parallel boxplots of \\((time)^{\-2}\\); the second graph does a spread vs. level plot for this reexpressed data.
```
beatles %>%
mutate(New = 10 * (time) ^ (-2)) ->
beatles
ggplot(beatles, aes(album, New)) +
geom_boxplot() +
coord_flip() + xlab("ALBUM") +
ylab("Reexpressed Data")
```
```
spread_level_plot(beatles, New, album)
```
```
## # A tibble: 6 × 5
## album M df log.M log.df
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 BBC_tapes 0.000541 0.000274 -3.27 -3.56
## 2 Magical_Mystery_Tour 0.000286 0.000153 -3.54 -3.81
## 3 Revolver 0.000442 0.000193 -3.36 -3.72
## 4 Rubber_Soul 0.000450 0.000135 -3.35 -3.87
## 5 Sargent_Pepper 0.000376 0.000176 -3.42 -3.75
## 6 The_White_Album 0.000323 0.000240 -3.49 -3.62
```
Are we successful in this case in reducing the dependence between spread and level?
1. First, look at the spread vs level plot for the reexpressed data \\((time)^{\-2}\\). I don’t see much of a relationship between \\(\\log M\\) and \\(\\log d\_F\\) in this plot, suggesting that we have removed the trend between spread and level.
2. Next, look at the boxplot display of the reexpressed data – we do see some differences in spreads between the batches. Are the batches of the reexpressed data more similar in spread than the batches of the raw data? Let’s compare the spreads (\\(d\_F\\)s) side by side.
```
dF (raw) dF (reexpressed)
38.00 0.287
69.00 0.176
35.50 0.227
25.25 0.147
50.50 0.183
74.00 0.272
```
I see a slight improvement using this reexpression. The spreads of the raw data range from 25\.25 to 74 – the largest spread is 74/25\.25 \= 2\.9 times the smallest. Looking at the reexpressed data, the spreads range from .147 to .287 – the ratio is 2\.0\. Actually, this is a small improvement – it probably doesn’t make any sense in this case to reexpress the times.
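Here is a sketch of the largest\-to\-smallest spread comparison, assuming that `spread_level_plot()` returns the summary tibble shown in its printed output (with a `df` column). The numbers may differ slightly from the hand table above, since the fourths there were computed by hand.
```
S.raw <- spread_level_plot(beatles, time, album)
S.new <- spread_level_plot(beatles, New, album)
# ratio of the largest to the smallest fourth spread, raw and reexpressed
round(c(raw = max(S.raw$df) / min(S.raw$df),
        new = max(S.new$df) / min(S.new$df)), 1)
```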
7\.3 Some final comments about spread vs. level plots
-----------------------------------------------------
1. In practice, one chooses a power transformation where \\(p\\) is a multiple of one\-half, like \\(p \= 1/2, p \= 0, p \= \-1/2\\), etc. (We’ll see soon that the \\(p \= 0\\) power corresponds to taking a log.)
2. If the spread versus level plot suggests that you should take a power of p, you should check the effectiveness of the reexpression by
    + Constructing parallel boxplots of the data in the new scale
    + Making a spread versus level plot using the reexpressed data
3. Sometimes a certain form of reexpression is routinely made. For example, population counts are typically reexpressed using logs.
7\.4 Why does the spread vs. level plot work?
---------------------------------------------
Using some analytic work (not shown here), we can see when the spread versus level plot method is going to work. Generally the method will work when the fourth spread in the new scale is small relative to the median in the new scale.
Let’s return to our music example. Here is the table of the medians and fourth\-spreads of the reexpressed song lengths:
```
M df
0.541 0.287
0.286 0.176
0.442 0.227
0.450 0.147
0.376 0.183
0.323 0.272
```
If we divide the fourth\-spreads by the corresponding medians, we get the values
```
0.5305 0.6154 0.5136 0.3267 0.4867 0.8421
```
These aren’t really that small (I was hoping for ratios that were 0\.2 or smaller). This explains why the spread vs level plot doesn’t work very well for this example.
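This \\(d\_F / M\\) check can be sketched directly from the spread vs. level summary of the reexpressed times, again assuming the function returns the tibble with `M` and `df` columns; the ratios are scale\-free, so any rescaling of the reexpressed data does not change them, and they should roughly match the values listed above.
```
S.new <- spread_level_plot(beatles, New, album)
# fourth spread relative to the median for each album
round(S.new$df / S.new$M, 4)
```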
| Data Science |
bayesball.github.io | https://bayesball.github.io/EDA/transformations.html |
8 Transformations
=================
In this lecture, we talk about transforming or reexpressing data. In the last two lectures, we have illustrated taking power transformations in order to reduce the dependence between spread and level. Here we talk more about the reasons why transformations can be helpful and more formally define the class of power transformations.
8\.1 Why do we reexpress data?
-----------------------------
Simply, we transform to make data easier to understand.
Here is a simple case in point. As you probably know, I’m a baseball fan and I’m interested in baseball data. Suppose we look at the number of home runs hit by all major league players in the year 1961\. (I chose 1961 since it was a famous year for hitting home runs – in particular, Roger Maris set the season record by hitting 61 home runs that season.) Below I have displayed these home run numbers using a histogram.
```
library(LearnEDAfunctions)
library(tidyverse)
ggplot(homeruns.61, aes(HR)) +
geom_histogram(color = "black", fill = "white")
```
This data has strong right\-skewness and is hard to analyze. All we can tell from the histogram is that most of the home run numbers are close to zero with a few large values. It is hard to distinguish the small values and I would have a hard time specifying an average home run number.
Generally, why is data hard to interpret?
* Strong asymmetry such as displayed in the above histogram.
* A large number of outliers. These outliers can distort standard measures of average and spread, such as the mean and standard deviation. They also create difficulties simply in graphing the data.
* Batches with different averages and widely differing spreads. We saw this problem in the previous two lectures.
* Large and systematic residuals after fitting a model. We’ll talk more about patterns in residuals when we get to the plotting section of the class.
To make data easier to interpret, we want to choose a transformation which will change the shape of the data.
8\.2 The shape of the data
--------------------------
The shape of the data is what you get when you draw a smooth curve over the histogram.
```
ggplot(homeruns.61) +
geom_histogram(aes(x = HR, y = ..density..)) +
geom_density(aes(x = HR),
color = "red")
```
If we remove all of the labeling information and bars, we can focus on the shape:
We are interested in finding a reexpression that can change this undesirable strong right\-skewed shape.
8\.3 Trivial reexpressions
--------------------------
There are some reexpressions that we could try that would change the values, but would have no impact on the shape of the distribution – we call these trivial reexpressions. For example, in baseball a home run counts as four bases. We could reexpress home runs to bases:
\\\[
Bases \= 4 (Home runs)
\\]
If we constructed a histogram of all the bases numbers for all players, what would it look like? The shape would be identical to the right\-skewed one shown above. In other words, multiplying the data by a positive constant will have no impact on the shape of the data. Similarly, it should be clear that if we add any constant to the data, the shape of the data wouldn’t change. To summarize, if \\(X\\) is our raw data and we reexpress \\(X\\) by the linear transformation
\\\[
Y \= a X \+ b, \\, {\\rm where} \\, a \> 0,
\\]
the shape of the \\(Y\\) data will be identical to the shape of the \\(X\\) data.
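As a sketch of this point, the histogram of `Bases = 4 * HR` (using the `homeruns.61` data loaded above) has exactly the same shape as the histogram of `HR`; only the horizontal scale changes.
```
homeruns.61 %>%
  mutate(Bases = 4 * HR) %>%
  ggplot(aes(Bases)) +
  geom_histogram(color = "black", fill = "white")
```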
8\.4 Nontrivial expressions: the power family
---------------------------------------------
We are interested in finding reexpressions that can change the shape of the data so that it is easier to analyze. A convenient family of transformations is the power family. The basic form of this transformation is given by
\\\[
T\_p(X) \= X^p.
\\]
If we choose \\(p \= 1\\), then \\(T\_p(X) \= X\\), the raw data. If we choose any value of \\(p\\) not equal to 1, then \\(T\_p(X)\\) will have a shape that is different from that of the raw data \\(X\\).
Let’s illustrate this fact using our home run data. In the figure below, we show the raw data (\\(p \= 1\\)) and power transformations using (\\(p \= .5\\)) and (\\(p \= .001\\)). We make one small adjustment to the basic power transformation to account for the large number of zeros in the data. We first change \\(X\\) to \\(X \+ .5\\) (add .5 to each observation), and then take powers of \\(X \+ .5\\).
If we look from the raw data to the roots (\\(p \= .5\\)) to the \\(p \= .001\\) reexpression, we see distinctive changes in the data shape. The large amount of data close to zero in the raw plot gets squeezed out in the \\(p \= .5\\) and \\(p \= .001\\) graphs. In fact, in the \\(p \= .001\\) plot, we are starting to see much more structure in the large data values.
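Here is one way the three displays could be produced from `homeruns.61` (a sketch: the shift by 0\.5 and the powers 1, .5, and .001 are as described above, and the faceted layout is just a convenient choice).
```
homeruns.61 %>%
  mutate(`p = 1` = HR + 0.5,
         `p = 0.5` = (HR + 0.5) ^ 0.5,
         `p = 0.001` = (HR + 0.5) ^ 0.001) %>%
  pivot_longer(cols = starts_with("p ="),
               names_to = "power", values_to = "reexpressed") %>%
  ggplot(aes(reexpressed)) +
  geom_histogram(color = "black", fill = "white") +
  facet_wrap(~ power, scales = "free_x")
```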
8\.5 Some properties of the power family
----------------------------------------
Above we presented the “basic form” of the power transformation. An alternative form of this power transformation is the “matched form” –
\\\[
T\_p(X) \= \\frac{X^p \- 1}{p}.
\\]
Suppose we graph this function (for fixed \\(p\\)) as a function of \\(x\\).
Note that for any value of the power \\(p\\),
* the graph goes through the point (1, 0\) – that is \\(T\_p(1\) \= 0\\)
* the derivative of the graph at \\(x \= 1\\) is equal to 1 – that is, \\(T\_p'(1\) \= 1\\)
Below we have graphed the function \\(T\_p(x)\\) for the powers \\(p \= 1\.5, 1, .5, 0, \-.5\\).
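The following sketch draws curves like the ones described, using the matched form with \\(p \= 0\\) handled as the natural log; the plotting range and colors are arbitrary choices.
```
matched <- function(x, p) if (p == 0) log(x) else (x ^ p - 1) / p
curves <- expand.grid(x = seq(0.2, 3, by = 0.05),
                      p = c(1.5, 1, 0.5, 0, -0.5))
curves$Tp <- mapply(matched, curves$x, curves$p)
ggplot(curves, aes(x, Tp, color = factor(p), group = p)) +
  geom_line() +
  labs(color = "power p")
```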
What do we notice in the graphs?
* we confirm that all of the graphs go through (1, 0\) and have the same slope at that point
* all of the curves are increasing in \\(x\\)
(This is an important property – when we apply this matched transformation to our raw data, we will preserve the order in the data.)
* with respect to concavity:
    + if \\(p\\) \> 1, the graph is concave up
    + if \\(p\\) \< 1, the graph is concave down
    + if \\(p\\) \= 1, we have a linear function
* as \\(p\\) moves away from 1, the curve becomes more curved, which means that the concavity increases
What impact does this concavity have on changing the shape of our data?
* If \\(p\\) \> 1, then the graph is concave up, which means that the transformation will expand the scale more for large \\(x\\) than for small \\(x\\).
* Similarly, if the power \\(p \< 1\\) (graph is concave down), the transformation will expand the scale more for small \\(x\\) than for large \\(x\\).
Note that if \\(p \< 0\\), the coefficient of \\(x^p\\) will be negative. This might seem odd (we don’t usually have data with negative sign), but this is necessary to make all of these matched transformations increasing in \\(x\\).
8\.6 The log transformation
---------------------------
The log transformation actually is a special case of a power transformation. Consider the matched form of the power family. Fix \\(x\\) and let the power \\(p\\) approach zero. Then it is a straightforward calculation to show that
\\\[
T\_p(x) \= \\frac{x^p \- 1}{p} \\rightarrow ln(x),
\\]
where \\(\\ln()\\) is the natural log function. So a log function essentially is a power transformation for \\(p \= 0\\).
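A quick numerical check of this limit at, say, \\(x \= 2\\) (any positive \\(x\\) would do):
```
p <- c(0.5, 0.1, 0.01, 0.001)
x <- 2
# (x^p - 1)/p approaches log(x) = 0.693... as p goes to 0
cbind(p = p, matched = (x ^ p - 1) / p, log.x = log(x))
```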
Does it matter what type of log we take when we reexpress? No. The only difference between log(base e) and log (base anything else) is a scalar multiple, which is a trivial reexpression. In this case, we will generally take log base 10\. We do so for ease of interpretation and communication.
8\.7 Ladders of powers
----------------------
John Tukey used a ladder motif to describe different values \\(p\\) of the power transformation. We view the powers as rungs of a ladder.
The raw data can be represented as a \\(p \= 1\\) reexpression. Each move down the ladder (by a multiple of .5\) is called one step and represents one unit change in the curvature of the transformation. If we wish to reexpress our data, we might change our data by taking roots, which is one step away from the raw data. If we wish to make a more drastic transformation, we might take another step, trying logs, which is the \\(p \= 0\\) transformation.
In the next lecture, we’ll get some experience taking power transformations with the objective of making a data set more symmetric.
| Data Science |
bayesball.github.io | https://bayesball.github.io/EDA/reexpressing-for-symmetry.html |
9 Reexpressing for Symmetry
===========================
9\.1 Data for the day
---------------------
There is great variation in the care of infants across different countries. One way of measuring the quality of care of newborns is the infant mortality rate, defined as the number of infants who die for each 1000 births. The dataset `mortality.rates` in the `LearnEDAfunctions` package gives the mortality rates (years 2005 to 2010\) for 62 countries (Afghanistan through Ghana). (The data come from page 493 of the 2010 New York Times Almanac.) Part of the dataset is displayed below.
```
library(LearnEDAfunctions)
library(tidyverse)
head(mortality.rates)
```
```
## Country Rate
## 1 Afghanistan 157
## 2 Albania 16
## 3 Algeria 31
## 4 Angola 118
## 5 Argentina 13
## 6 Armenia 25
```
Here is a stemplot of the raw data:
```
aplpack::stem.leaf(mortality.rates$Rate)
```
```
## 1 | 2: represents 12
## leaf unit: 1
## n: 62
## 17 0 | 34444445556667899
## 26 1 | 000233679
## (7) 2 | 0123356
## 29 3 | 01556
## 24 4 | 35568
## 19 5 | 14
## 17 6 | 27
## 15 7 | 3799
## 11 8 | 0557
## 7 9 | 8
## 6 10 | 06
## 4 11 | 38
## 12 |
## 2 13 | 0
## HI: 157
```
We see right\-skewness in these data. Most of the mortality rates (corresponding to the more developed countries) are in the 0\-30 range, and we notice several large rates (130, 157\).
9\.2 Why is this data hard to interpret?
----------------------------------------
When data is right or left skewed, then it is more difficult to analyze. Why?
* Most of the data is bunched up at one end of the distribution. This makes it hard to distinguish data values within the bunch.
* The presence of outliers distorts the graphical display. Because there are gaps at the high end, only a small part of the display contains a majority of the data.
* It is difficult to talk about an “average” value, since it is not well\-defined. The median and mean will be different values.
* It is harder to interpret a measure of spread like a standard deviation or fourth\-spread when the data is skewed.
It is desirable to reexpress the data to make it more symmetric. We’ll accomplish this by a suitable choice of power transformation.
9\.3 Checking for symmetry by looking at midsummaries
-----------------------------------------------------
In checking for symmetry, it is useful to have some tools for detecting symmetry of a batch of numbers. One useful method looks at the sequence of midsummaries.
First, a midsummary (or mid for short) is the average of a matching pair of letter values, the lower and the upper value at a given depth. The first midsummary is the median \\(M\\). The next midsummary is the average of the fourths – we call this the midfourth:
\\\[
midfourth \= \\frac{F\_U \+ F\_L}{2}.
\\]
Likewise, the mideighth is the average of the lower and upper eighths, and so on. The R function `lval` from the `LearnEDAfunctions` package (illustrated here for the infant mortality data) shows the letter values and the corresponding mids:
```
(letter.values <- lval(mortality.rates$Rate))
```
```
## depth lo hi mids spreads
## M 31.5 24 24.0 24.00 0.0
## H 16.0 9 67.0 38.00 58.0
## E 8.5 5 86.0 45.50 81.0
## D 4.5 4 109.5 56.75 105.5
## C 2.5 4 124.0 64.00 120.0
## B 1.0 3 157.0 80.00 154.0
```
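As a rough cross\-check of these numbers (a sketch of our own; `quantile()` only approximates Tukey’s fourths, so it need not reproduce `lval()` exactly), the midfourth is simply the average of the two fourths:
```
rate <- mortality.rates$Rate
median(rate)                               # the median, M
fourths <- quantile(rate, c(0.25, 0.75))   # approximate lower and upper fourths
mean(fourths)                              # the midfourth, close to the 38 above
```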
We can detect symmetry, or lack of symmetry, of a batch by looking at the sequence of midsummaries:
```
select(letter.values, mids)
```
```
## mids
## M 24.00
## H 38.00
## E 45.50
## D 56.75
## C 64.00
## B 80.00
```
If this sequence
* is increasing (like it is here), then this indicates right skewness
* is decreasing, we have left skewness
* doesn’t show any trend, then we have approximate symmetry
It is helpful to plot the midsummaries as a function of the letter value (Median is 1, Fourth is 2, etc). Clearly there is a positive trend in the plot, suggesting right skewness in the data.
```
letter.values %>% mutate(LV = 1:6) %>%
ggplot(aes(LV, mids)) +
geom_point() + ggtitle("Raw Data")
```
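Beyond eyeballing the plot, a crude numerical summary of that trend (our own sketch, not a standard EDA tool) is the slope of a least\-squares line through the mids; a clearly positive slope is consistent with right skewness and a negative slope with left skewness.
```
mids <- letter.values$mids
lv <- seq_along(mids)
coef(lm(mids ~ lv))["lv"]   # positive here, consistent with right skewness
```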
9\.4 Reexpressing to achieve approximate symmetry
-------------------------------------------------
When we have skewness, we move along the ladder of powers to look for a reexpression that will make the data set roughly symmetric. If we have right skewness (which is pretty common), we move down the ladder of powers in our search for a good reexpression. Since the raw data is the \\(p \= 1\\) transformation, we first take one step down the ladder, which corresponds to taking roots (\\(p \= 1/2\\)).
We take roots of the data. Here is a stemplot, letter value display, and plot of the midsummaries for the roots:
```
roots <- sqrt(mortality.rates$Rate)
aplpack::stem.leaf(roots)
```
```
## 1 | 2: represents 1.2
## leaf unit: 0.1
## n: 62
## 1 1 | 7
## 15 2 | 00000022244468
## 23 3 | 00111466
## (8) 4 | 01345677
## (6) 5 | 004599
## 25 6 | 057779
## 19 7 | 138
## 16 8 | 157889
## 10 9 | 2238
## 6 10 | 0268
## 2 11 | 4
## 1 12 | 5
```
```
(root.lv <- lval(roots))
```
```
## depth lo hi mids spreads
## M 31.5 4.897916 4.897916 4.897916 0.000000
## H 16.0 3.000000 8.185353 5.592676 5.185353
## E 8.5 2.236068 9.273462 5.754765 7.037394
## D 4.5 2.000000 10.462888 6.231444 8.462888
## C 2.5 2.000000 11.132267 6.566134 9.132267
## B 1.0 1.732051 12.529964 7.131007 10.797913
```
```
root.lv %>% mutate(LV = 1:6) %>%
ggplot(aes(LV, mids)) +
geom_point() + ggtitle("Root Data")
```
Things have improved. Comparing the stemplot of the roots with the stemplot of the raw mortality rates, the roots look less skewed, suggesting that we are moving in the right direction on the ladder of powers. But the data set is not symmetric – this is confirmed by the plot of the midsummaries which shows a clear positive trend.
If we take another step down the ladder of powers, we arrive at logs \\((p \= 0\)\\). We display the stemplot, letter values, and plot of mids for the log mortality rates.
```
logs <- log(mortality.rates$Rate)
aplpack::stem.leaf(logs)
```
```
## 1 | 2: represents 1.2
## leaf unit: 0.1
## n: 62
## 7 1* | 0333333
## 14 1. | 6667779
## 21 2* | 0113334
## 27 2. | 557899
## (8) 3* | 00112244
## 27 3. | 5557888899
## 17 4* | 1223333444
## 7 4. | 566778
## 1 5* | 0
```
```
(logs.lv <- lval(logs))
```
```
## depth lo hi mids spreads
## M 31.5 3.177185 3.177185 3.177185 0.000000
## H 16.0 2.197225 4.204693 3.200959 2.007468
## E 8.5 1.609438 4.454280 3.031859 2.844842
## D 4.5 1.386294 4.695413 3.040854 3.309119
## C 2.5 1.386294 4.819110 3.102702 3.432815
## B 1.0 1.098612 5.056246 3.077429 3.957634
```
```
logs.lv %>% mutate(LV = 1:6) %>%
ggplot(aes(LV, mids)) +
geom_point() + ggtitle("Log Data")
```
Things look a bit better. The stemplot looks pretty symmetric to me. Looking at the plot of the mids, there is a decreasing trend for the first three points, and then the plot looks pretty constant. This means that there is some skewness in the middle portion of the logs, but there is little skewness in the tails (the tails are the extreme portions of the data).
If logs are a good reexpression, then it wouldn’t make any sense to go further down the ladder of powers. But let’s check by trying the \\(p \= \-1/2\\) reexpression, which corresponds to reciprocal roots (\\(1 / \\sqrt{mortality \\, rate}\\)). Actually we take the reexpression
\\\[
\- \\frac{1}{\\sqrt{mortality \\, rate}}.
\\]
We do this since we want all of our power transformations to be increasing functions of our raw data.
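A quick numerical check of this point (illustrative only): without the minus sign, the reciprocal\-root reexpression reverses the order of the data; with it, the order is preserved.
```
x <- c(4, 9, 25, 100)   # increasing raw values
1 / sqrt(x)             # decreasing: the order is reversed
-1 / sqrt(x)            # increasing: the order of the raw data is preserved
```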
Below we show the stemplot, the letter\-value display, and the graph of the mids for the reciprocal roots.
```
recroots <- - 1 / sqrt(mortality.rates$Rate)
aplpack::stem.leaf(recroots)
```
```
## 1 | 2: represents 0.12
## leaf unit: 0.01
## n: 62
## 1 -5. | 7
## 7 -5* | 000000
## -4. |
## 13 -4* | 444000
## 15 -3. | 75
## 20 -3* | 33111
## 24 -2. | 8775
## (8) -2* | 42211000
## 30 -1. | 9876665
## 23 -1* | 444443221111100000
## 5 -0. | 99987
```
```
(recroots.lv <- lval(recroots))
```
```
## depth lo hi mids spreads
## M 31.5 -0.2042572 -0.20425721 -0.2042572 0.0000000
## H 16.0 -0.3333333 -0.12216944 -0.2277514 0.2111639
## E 8.5 -0.4472136 -0.10783824 -0.2775259 0.3393754
## D 4.5 -0.5000000 -0.09560034 -0.2978002 0.4043997
## C 2.5 -0.5000000 -0.08988163 -0.2949408 0.4101184
## B 1.0 -0.5773503 -0.07980869 -0.3285795 0.4975416
```
```
recroots.lv %>% mutate(LV = 1:6) %>%
ggplot(aes(LV, mids)) +
geom_point() + ggtitle("Reciprocal Roots")
```
Looking at the stemplot, the distribution of the reciprocal roots looks left\-skewed. There is a negative trend in the midsummaries that confirms this left\-skewness. (Actually the graph of the mids of the reciprocal roots looks similar to the graph of the mids of the logs. But I’m combining all of the information that we get from a visual scan of the stemplot and the midsummaries.)
So this analysis suggests that we should take the log of the mortality rates to achieve approximate symmetry.
9\.5 Hinkley’s quick method
---------------------------
David Hinkley suggested a simple measure of asymmetry of a batch. This measure can be used together with the family of power transformations to suggest an appropriate reexpression.
He suggested looking at the statistic
\\\[
d \= \\frac{\\bar X \- M}{measure \\, of \\, scale},
\\]
where \\(\\bar X\\) is the mean, \\(M\\) is the median, and the denominator is any measure of scale of the batch. In the following, we will use the fourth\-spread as our scale measure.
To interpret d …
* if d \> 0, this indicates that the mean is larger than the median which reflects right\-skewness of the batch
* if d \< 0, this indicates left\-skewness
* if d is approximately 0, then the batch appears roughly symmetric
For our batch of mortality rates, we can compute
\\\[
\\bar X \= 39\.80645, M \= 24, d\_F \= F\_U \- F\_L \= 67 \- 9 \= 58
\\]
\\\[
d \= \\frac{39\.80645 \- 24}{58} \= 0\.2725,
\\]
which indicates right\-skewness in the batch.
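Here is a minimal sketch of that computation in R (our own code, not the `hinkley()` function used below; the fourths 9 and 67 are taken from the `lval()` output above):
```
rate <- mortality.rates$Rate
d <- (mean(rate) - median(rate)) / (67 - 9)   # fourth-spread d_F = 58
d                                             # about 0.27, indicating right skewness
```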
As before, we move down the ladder of powers to suggest possible reexpressions. We use Hinkley’s statistic to measure the skewness in the reexpressed batch. We choose the value of the power p so that the value of the skewness measure d is approximately equal to 0\.
Using the `hinkley` function, we compute Hinkley’s measure for the roots, logs, and reciprocal roots.
```
hinkley(roots)
```
```
## h
## 0.1332907
```
```
hinkley(logs)
```
```
## h
## -0.02235429
```
```
hinkley(recroots)
```
```
## h
## -0.1950111
```
Looking at the values of d, the “correct” reexpression appears to be between \\(p \= .5\\) (roots) and \\(p \= 0\\) (logs), although the value closest to 0 corresponds to the log reexpression.
In practice, one uses Hinkley’s method together with other methods such as the midsummary approach to assess symmetry and find an appropriate choice of power transformation.
| Data Science |
bayesball.github.io | https://bayesball.github.io/EDA/reexpressing-for-symmetry.html |
9 Reexpressing for Symmetry
===========================
9\.1 Data for the day
---------------------
There is a great variation in the care of infants across different countries. One way of measuring the quality of care of newborns is by the infant mortality rate, which is defined to be the number who die for each 1000 births. The dataset `mortality.rate` in the `LearnEDA` package gives the mortality rates (years 2005 to 2010\) for 62 countries (Afghanistan through Ghana). (Data comes from page 493 from the 2010 New York Times Almanac.) Part of the dataset is displayed below.
```
library(LearnEDAfunctions)
library(tidyverse)
head(mortality.rates)
```
```
## Country Rate
## 1 Afghanistan 157
## 2 Albania 16
## 3 Algeria 31
## 4 Angola 118
## 5 Argentina 13
## 6 Armenia 25
```
Here is a stemplot of the raw data:
```
aplpack::stem.leaf(mortality.rates$Rate)
```
```
## 1 | 2: represents 12
## leaf unit: 1
## n: 62
## 17 0 | 34444445556667899
## 26 1 | 000233679
## (7) 2 | 0123356
## 29 3 | 01556
## 24 4 | 35568
## 19 5 | 14
## 17 6 | 27
## 15 7 | 3799
## 11 8 | 0557
## 7 9 | 8
## 6 10 | 06
## 4 11 | 38
## 12 |
## 2 13 | 0
## HI: 157
```
We see right\-skewness in these data. Most of the mortality rates (corresponding to the more developed countries) are in the 0\-30 range, and we notice several large rates (130, 157\).
9\.2 Why is this data hard to interpret?
----------------------------------------
When data is right or left skewed, then it is more difficult to analyze. Why?
* Most of the data is bunched up at one end of the distribution. This makes it hard to distinguish data values within the bunch.
* The presence of outliers distorts the graphical display. Because there are gaps at the high end, only a small part of the display contains a majority of the data.
* It is difficult to talk about an \`\`average” value, since it is not well\-defined. The median and mean will be different values.
* It is harder to interpret a measure of spread like a standard deviation or fourth\-spread when the data is skewed.
It is desirable to reexpress the data to make it more symmetric. We’ll accomplish this by a suitable choice of power transformation.
9\.3 Checking for symmetry by looking at midsummaries
-----------------------------------------------------
In checking for symmetry, it is useful to have some tools for detecting symmetry of a batch of numbers. One useful method looks at the sequence of midsummaries.
First, a midsummary (or mid for short) is the average of the two letter values. The first midsummary is the median \\(M\\). The next midsummary is the average of the fourths – we call this the midfourth:
\\\[
midfourth \= \\frac{F\_U \+ F\_L}{2}.
\\]
Likewise, the mideighth is the average of the lower and upper eights, and so on. The R function `lval` from the `LearnEDAfunctions` package (illustrated here for the infant mortality data) shows the letter values and the corresponding mids:
```
(letter.values <- lval(mortality.rates$Rate))
```
```
## depth lo hi mids spreads
## M 31.5 24 24.0 24.00 0.0
## H 16.0 9 67.0 38.00 58.0
## E 8.5 5 86.0 45.50 81.0
## D 4.5 4 109.5 56.75 105.5
## C 2.5 4 124.0 64.00 120.0
## B 1.0 3 157.0 80.00 154.0
```
We can detect symmetry, or lack of symmetry, of a batch by looking at the sequence of midsummaries:
```
select(letter.values, mids)
```
```
## mids
## M 24.00
## H 38.00
## E 45.50
## D 56.75
## C 64.00
## B 80.00
```
If this sequence
* is increasing (like it is here), then this indicates right skewness
* is decreasing, we have left skewness
* doesn’t show any trend, then we have approximate symmetry
It is helpful to plot the midsummaries as a function of the letter value (Median is 1, Fourth is 2, etc). Clearly there is a positive trend in the plot, suggesting right skewness in the data.
```
letter.values %>% mutate(LV = 1:6) %>%
ggplot(aes(LV, mids)) +
geom_point() + ggtitle("Raw Data")
```
9\.4 Reexpressing to achieve approximate symmetry
-------------------------------------------------
When we have skewness, then we move along the ladder of powers (of a power transformation) to look for a reexpression that will make the data set roughly symmetric. If we have right skewness (which is pretty common), then we move down the ladder of powers in our search for a good reexpression. Since the raw data is the \\(p \= 1\\) transformation, we first take on step down on the ladder which corresponds to taking roots (\\(p \= 1/2\\)).
We take roots of the data. Here is a stemplot, letter value display, and plot of the midsummaries for the roots:
```
roots <- sqrt(mortality.rates$Rate)
aplpack::stem.leaf(roots)
```
```
## 1 | 2: represents 1.2
## leaf unit: 0.1
## n: 62
## 1 1 | 7
## 15 2 | 00000022244468
## 23 3 | 00111466
## (8) 4 | 01345677
## (6) 5 | 004599
## 25 6 | 057779
## 19 7 | 138
## 16 8 | 157889
## 10 9 | 2238
## 6 10 | 0268
## 2 11 | 4
## 1 12 | 5
```
```
(root.lv <- lval(roots))
```
```
## depth lo hi mids spreads
## M 31.5 4.897916 4.897916 4.897916 0.000000
## H 16.0 3.000000 8.185353 5.592676 5.185353
## E 8.5 2.236068 9.273462 5.754765 7.037394
## D 4.5 2.000000 10.462888 6.231444 8.462888
## C 2.5 2.000000 11.132267 6.566134 9.132267
## B 1.0 1.732051 12.529964 7.131007 10.797913
```
```
root.lv %>% mutate(LV = 1:6) %>%
ggplot(aes(LV, mids)) +
geom_point() + ggtitle("Root Data")
```
Things have improved. Comparing the stemplot of the roots with the stemplot of the raw mortality rates, the roots look less skewed, suggesting that we are moving in the right direction on the ladder of powers. But the data set is not symmetric – this is confirmed by the plot of the midsummaries which shows a clear positive trend.
If we take another step down the ladder of powers, we arrive at logs \\((p \= 0\)\\). We display the stemplot, letter values, and plot of mids for the log mortality rates.
```
logs <- log(mortality.rates$Rate)
aplpack::stem.leaf(logs)
```
```
## 1 | 2: represents 1.2
## leaf unit: 0.1
## n: 62
## 7 1* | 0333333
## 14 1. | 6667779
## 21 2* | 0113334
## 27 2. | 557899
## (8) 3* | 00112244
## 27 3. | 5557888899
## 17 4* | 1223333444
## 7 4. | 566778
## 1 5* | 0
```
```
(logs.lv <- lval(logs))
```
```
## depth lo hi mids spreads
## M 31.5 3.177185 3.177185 3.177185 0.000000
## H 16.0 2.197225 4.204693 3.200959 2.007468
## E 8.5 1.609438 4.454280 3.031859 2.844842
## D 4.5 1.386294 4.695413 3.040854 3.309119
## C 2.5 1.386294 4.819110 3.102702 3.432815
## B 1.0 1.098612 5.056246 3.077429 3.957634
```
```
logs.lv %>% mutate(LV = 1:6) %>%
ggplot(aes(LV, mids)) +
geom_point() + ggtitle("Log Data")
```
Things look a bit better. The stemplot looks pretty symmetric to me. Looking at the plot of the mids, there is a decreasing trend for the first three points, and then the plot looks pretty constant. This means that there is some skewness in the middle portion of the logs, but there is little skewness in the tails (the tails are the extreme portions of the data).
If logs are a good reexpression, then it wouldn’t make any sense to go further down the ladder of powers. But let’s check and try taking a \\(p \= \-1/2\\) rexpression which corresponds to reciprocal roots (\\(1 / \\sqrt{mortality \\, rate}\\) ). Actually we take the reexpression
\\\[
\- \\frac{1}{\\sqrt{mortality \\, rate}}.
\\]
We do this since we want all of our power transformations to be increasing functions of our raw data.
Below we show the stemplot, the letter\-value display, and the graph of the mids for the reciprocal roots.
```
recroots <- - 1 / sqrt(mortality.rates$Rate)
aplpack::stem.leaf(recroots)
```
```
## 1 | 2: represents 0.12
## leaf unit: 0.01
## n: 62
## 1 -5. | 7
## 7 -5* | 000000
## -4. |
## 13 -4* | 444000
## 15 -3. | 75
## 20 -3* | 33111
## 24 -2. | 8775
## (8) -2* | 42211000
## 30 -1. | 9876665
## 23 -1* | 444443221111100000
## 5 -0. | 99987
```
```
(recroots.lv <- lval(recroots))
```
```
## depth lo hi mids spreads
## M 31.5 -0.2042572 -0.20425721 -0.2042572 0.0000000
## H 16.0 -0.3333333 -0.12216944 -0.2277514 0.2111639
## E 8.5 -0.4472136 -0.10783824 -0.2775259 0.3393754
## D 4.5 -0.5000000 -0.09560034 -0.2978002 0.4043997
## C 2.5 -0.5000000 -0.08988163 -0.2949408 0.4101184
## B 1.0 -0.5773503 -0.07980869 -0.3285795 0.4975416
```
```
recroots.lv %>% mutate(LV = 1:6) %>%
ggplot(aes(LV, mids)) +
geom_point() + ggtitle("Reciprocal Roots")
```
Looking at the stemplot, the distribution of the reciprocal roots looks left\-skewed. There is a negative trend in the midsummaries that confirms this left\-skewness. (Actually the graph of the mids of the reciprocal roots looks similar to the graph of the mids of the logs. But I’m combining all of the information that we get from a visual scan of the stemplot and the midsummaries.)
So this analysis suggests that we should take the log of the mortality rates to achieve approximate symmetry.
9\.5 Hinkley’s quick method
---------------------------
David Hinkley suggested a simple measure of asymmetry of a batch. This measure can be used together with the family of power transformations to suggest an appropriate reexpression.
He suggested looking at the statistic
\\\[
d \= \\frac{\\bar X \- M}{measure \\, of \\, scale},
\\]
where \\(\\bar X\\) is the mean, \\(M\\) is the median, and the denominator is any measure of scale of the batch. In the following, we will use the fourth\-spread as our scale measure.
To interpret d …
* if d \> 0, this indicates that the mean is larger than the median which reflects right\-skewness of the batch
* if d \< 0, this indicates left\-skewness
* if d is approximately 0, then the batch appears roughly symmetric
For our batch of mortality rates, we can compute
\\\[
\\bar X \= 39\.80645, M \= 24, d\_F \= F\_U \- F\_L \= 67 ??? 9 \= 58
\\]
\\\[
d \= \\frac{39\.80645 \- 24}{58} \= 0\.2725,
\\]
which indicates right\-skewness in the batch.
As before, we move down the ladder of powers to suggest possible reexpressions. We use Hinkley’s statistic to measure the skewness in the reexpressed batch. We choose the value of the power p so that the value of the skewness measure d is approximately equal to 0\.
Using the `hinkley` function, we compute Hinkley’s measure for the roots, logs, and reciprocal roots.
```
hinkley(roots)
```
```
## h
## 0.1332907
```
```
hinkley(logs)
```
```
## h
## -0.02235429
```
```
hinkley(recroots)
```
```
## h
## -0.1950111
```
Looking at the values of d, the \`\`correct” reexpression appears to be between \\(p \= .5\\) (roots) and \\(p \= 0\\) (logs), although the value closest to 0 corresponds to the log reexpression.
In practice, one uses Hinkley’s method together with other methods such as the midsummary approach to assess symmetry and find an appropriate choice of power transformation.
| Data Science |
bayesball.github.io | https://bayesball.github.io/EDA/reexpressing-for-symmetry-ii.html |
10 Reexpressing for Symmetry II
===============================
10\.1 Data for the day
----------------------
Where are the farms in the United States? The 2001 New York Times Almanac gives the number of farms (in 1000’s) for each of the 50 states in 1999 – the data is shown below. We will use this example to illustrate methods for determining the symmetry of a batch and to decide on appropriate reexpressions to make the batch more symmetric.
The dataset `farms` in the `LearnEDAfunctions` package contains this data. The first few rows are displayed below.
```
library(LearnEDAfunctions)
head(farms)
```
```
## state count
## 1 Al 48
## 2 Als 1
## 3 Ar 8
## 4 Ark 49
## 5 Ca 89
## 6 Col 29
```
Here is a stemplot of the data.
```
aplpack::stem.leaf(farms$count,
unit=10, m=5, trim.outliers=FALSE)
```
```
## 1 | 2: represents 120
## leaf unit: 10
## n: 50
## 16 0* | 0000000000001111
## (9) t | 222223333
## (12) f | 444444555555
## 13 s | 6677
## 9 0. | 8888999
## 2 1* | 1
## t |
## f |
## s |
## 1. |
## 2* |
## 1 t | 2
```
What do we see? Obviously we note the big outlier at the high end (displayed as 22 in the stemplot since the leaf unit is 10; the actual count is 227\) – looking at the data, we see that this corresponds to the number of farms in Texas. Otherwise, we see some right skewness in the data. Next we look at the sequence of midsummaries shown in the letter value display below.
```
lval(farms$count)
```
```
## depth lo hi mids spreads
## M 25.5 39.5 39.5 39.5 0
## H 13.0 10.0 65.0 37.5 55
## E 7.0 6.0 84.0 45.0 78
## D 4.0 3.0 91.0 47.0 88
## C 2.5 2.0 104.0 53.0 102
## B 1.0 1.0 227.0 114.0 226
```
The median (39\.5\) is larger than the mid\-fourth (37\.5\) – this indicates some left\-skewness in the middle half of the data. Then the midsummaries increase from the mid\-fourth to the mid\-extremes – this tells us that the outside half of the data is right\-skewed.
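As a quick check, the mid\-fourth can be recomputed directly from the letter\-value display; the sketch below assumes that `lval` returns a data frame with the `lo` and `hi` columns shown above.
```
# recompute the mid-fourth = average of the lower and upper fourths
# (the H row is the second row of the letter-value display)
lv <- lval(farms$count)
(lv$lo[2] + lv$hi[2]) / 2
```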
10\.2 Symmetry plot
-------------------
We now introduce a new plot to learn about the symmetry of a batch – not surprisingly, this is called a symmetry plot.
To make a symmetry plot …
* First order the n data values – call the ordered values \\(y\_{(1\)}, ..., y\_{(n)}\\).
* If \\(M\\) denotes the median then you plot the points
\\\[
u\_i \= y\_{(n\+1\-i)} \- M \\, \\, ({\\rm vertical})
\\]
against
\\\[
v\_i \= M \- y\_{(i)} \\, \\, ({\\rm horizontal})
\\]
for \\(i \= 1, ..., n/2\\) (or \\((n\+1\)/2\\) if \\(n\\) is odd).
* Add the line \\(u \= v\\) to the graph.
Here is the symmetry plot for the farm numbers. Since we have 50 numbers, we will be plotting 25 \\((u, v)\\) points.
```
symplot(farms$count)
```
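To connect `symplot` back to the definition, here is a minimal sketch that computes the \\((u\_i, v\_i)\\) pairs directly and plots them with `ggplot2`. It is meant only to mirror the construction above; `symplot` itself may use slightly different plotting conventions.
```
# build the symmetry plot directly from the definition
library(ggplot2)
y <- sort(farms$count)
n <- length(y)
M <- median(y)
i <- 1:(n / 2)                            # n = 50 here; use (n + 1) / 2 when n is odd
sym_df <- data.frame(v = M - y[i],        # lower half: distance below the median
                     u = y[n + 1 - i] - M) # upper half: distance above the median
ggplot(sym_df, aes(v, u)) +
  geom_point() +
  geom_abline(slope = 1, intercept = 0)   # reference line u = v
```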
How do you interpret a symmetry plot? Some guidelines follow:
1. If the points fall close to the line \\(u \= v\\), then the data is nearly symmetric.
2. If the data is left\-skewed, then the points fall below the line.
3. Likewise, if the data is right\-skewed, the points fall above the line.
4. The plot is nondecreasing – the points close to the origin correspond to values of the data close to the median \\(M\\) and the points on the far right correspond to the extremes.
Let’s return to the symmetry plot for the farm numbers.
If we look from left to right, we first see a number of points under the line \\(u \= v\\), and then we see points above the line. This tells us that there is left\-skewness in the middle of the batch and right\-skewness in the tail portion of the batch. These statements are consistent with what we saw in the sequence of midsummaries.
To remove the right\-skewness that we see, we go down the ladder of powers and try power transformations \\(p\\) that are smaller than \\(p \= 1\\).
10\.3 Roots and logs
--------------------
We first try roots (\\(p \= .5\\)). Below, we show a stemplot, the letter value display and the symmetry plot.
```
roots <- sqrt(farms$count)
aplpack::stem.leaf(roots)
```
```
## 1 | 2: represents 1.2
## leaf unit: 0.1
## n: 50
## 5 1 | 00777
## 11 2 | 044668
## 14 3 | 014
## 17 4 | 005
## 24 5 | 0023457
## (6) 6 | 234579
## 20 7 | 0002466
## 13 8 | 00889
## 8 9 | 014558
## 2 10 | 4
## 11 |
## 12 |
## 13 |
## 14 |
## 1 15 | 0
```
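The letter\-value display for the roots is not reproduced on this page; it can be generated with the same `lval` function used earlier (output omitted here).
```
# letter values and midsummaries for the root-reexpressed counts
lval(roots)
```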
```
symplot(roots)
```
What do we see in the displays?
* **Stemplot** Here the roots look pretty uniformly distributed from about 1 to 10\. Looking more carefully, I see some left\-skewness in this 1\-10 region. Also there is one outlier at the high end.
* **Letter\-value display** Actually, there is only a small trend, if any, in the sequence of midsummaries. There is a drop from the median to the mid\-fourth, indicating some left skewness in the middle half of the data. Also, there is an increase from mid\-C to mid\-extreme, showing some right\-skewness in the tails of the batch of roots.
* **Symmetry plot** Practically all the points fall under the line \\(u \= v\\) indicating left\-skewness in the middle portion of the data. The only point above the line is at the far right, which is a reflection of the single outlier.
Let’s continue down the ladder of powers and try the \\(p \= 0\\) power (logs). Again, we show the stemplot, the letter\-value display and the symmetry plot.
```
logs <- log(farms$count)
aplpack::stem.leaf(logs)
```
```
## 1 | 2: represents 1.2
## leaf unit: 0.1
## n: 50
## 2 0* | 00
## 0. |
## 6 1* | 0003
## 10 1. | 7799
## 14 2* | 0134
## 16 2. | 77
## 24 3* | 02233444
## (10) 3. | 6677888999
## 16 4* | 00011333344
## 5 4. | 5557
## 1 5* | 4
```
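As with the roots, the letter\-value display for the logs is referred to below but not reproduced; the call is the same (output omitted here).
```
# letter values and midsummaries for the log-reexpressed counts
lval(logs)
```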
```
symplot(logs)
```
Here it should be clear that the logs of the farm numbers are left\-skewed. The shape of the stemplot is left\-skewed. The midsummaries are steadily decreasing and all the points in the symmetry plot are under the line, again reflecting the left\-skewness.
10\.4 Hinkley’s method
----------------------
A simple method of finding a suitable transformation, discussed in the last lecture, is based on Hinkley’s \\(d\\) statistic, which uses the difference between the mean and the median.
Here are the values of the Hinkley statistic for the raw, root and log data.
```
hinkley(farms$count)
```
```
## h
## 0.08566038
```
```
hinkley(roots)
```
```
## h
## -0.0724257
```
```
hinkley(logs)
```
```
## h
## -0.2407449
```
Since the raw data (\\(p \= 1\\)) has a positive value of \\(d\\) and roots (\\(p \= .5\\)) has a negative \\(d\\) value, this might suggest choosing a power reexpression (\\(p\\)) between .5 and 1\. In this example, there is not a strong reason to reexpress, and roots might be a little more symmetric than the raw data.
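If one wanted to examine an in\-between power, the same measure can be applied directly to a reexpressed batch. For example, the call below computes Hinkley’s statistic for a \\(p \= 0\.75\\) reexpression (its value is not shown here).
```
# Hinkley's measure for an intermediate power reexpression (p = 0.75)
hinkley(farms$count ^ 0.75)
```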
10\.5 Matched transformations
-----------------------------
We are interested in comparing the effects of taking different reexpressions, such as taking logs and roots. But when we take a reexpression such as log, we mess up the scale of the raw data (logs are much smaller than the raw data), and so it is difficult to make a comparison.
It would be helpful if the raw and reexpressed data were roughly on the same scale so we can easily compare the two datasets. We can accomplish this by means of a procedure called matching.
In the following, we denote our raw data by \\(x\\)
and our reexpressed data by \\(y \= T(x)\\).
Now we know that we can apply a further linear transformation, which we write as
\\\[
z \= a \+ b y \= a \+ b T(x)
\\]
The change from \\(y\\) to \\(z\\) is a trivial transformation and won’t change the shape of the data.
We want to choose the constants \\(a\\) and \\(b\\) so that the \\(z\\) batch resembles the \\(x\\) batch. We can accomplish this in many ways. We describe two of them.
10\.6 Matching Method 1
-----------------------
Here we choose two values in the raw scale (\\(x\\)) that will be the same in the new scale (\\(z\\)). Choose two points \\(x\_1\\) and \\(x\_2\\) in the original scale – the corresponding points in the new scale will be \\(z\_1\\) and \\(z\_2\\). We wish to find values of the constants \\(a\\), \\(b\\) so that
\\\[
z\_1\= a \+ b T(x\_1\) \= x\_1
\\]
\\\[
z\_2 \= a \+ b T(x\_2\) \= x\_2
\\]
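Solving these two equations for \\(a\\) and \\(b\\) gives
\\\[
b \= \\frac{x\_1 \- x\_2}{T(x\_1\) \- T(x\_2\)}, \\qquad a \= x\_1 \- b \\, T(x\_1\).
\\]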
10\.7 Matching Method 2
-----------------------
Here we choose one point that will be the same in the raw (\\(x\\)) and new (\\(z\\)) scales. Also, by placing a condition on the derivative, we ensure that data close to the chosen value will be approximately the same in the two scales. Choose a point \\(x\_0\\) such that
\\\[
z\_0 \= a \+ b T(x\_0\) \= x\_0
\\]
and we require that the derivative of \\(z\\) with respect to \\(x\\) at \\(x\_0\\) is equal to 1:
\\\[
\\frac{dz}{dx}\|\_{x\_0} \= \\frac{d\[a\+bT(x)]}{dx}\|\_{x\_0} \= b \\frac{d T(x)}{dx}\|\_{x\=x\_0} \= 1\.
\\]
If we solve for \\(a\\) and \\(b\\) from the two equations, we get the solution:
\\\[
z \= x\_0 \+ \\frac{T(x) \- T(x\_0\)}{T'(x\_0\)}.
\\]
In usual practice, we choose \\(x\_0\\) to be some central value in the raw data such as the median.
In the case of power functions, where the power \\(p\\) is not zero,
\\\[
T(x) \= x^p,
\\]
and the matching reexpression is
\\\[
z \= x\_0 \+ \\frac{x^p \- x\_0^p}{p x\_0^{p\-1}}.
\\]
In the case of the log (base 10\) reexpression where \\(T(x) \= log(x)\\), the matching transformation has the form
\\\[
z \= x\_0 \+ \\frac{\\log(x) \-\\log(x\_0\)}{\\log(e)/x\_0}.
\\]
Let’s illustrate matching for our farm numbers example. We use the second matching method and let \\(x\_0 \= 39\.5\\), the median of the raw data. Using the above equations, we calculate the matching root and log transformations in R:
```
matched.roots <- 39.5 + (sqrt(farms$count) -
sqrt(39.5)) / (.5*39.5 ^ (-.5))
matched.logs <- 39.5 + (log10(farms$count) -
log10(39.5)) / (log10(exp(1)) / 39.5)
```
These calculations can also be done using the function `mtrans` in the `LearnEDAfunctions` package:
```
raw <- farms$count
matched.roots <- mtrans(raw, 0.5)
matched.logs <- mtrans(raw, 0)
```
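As an illustration of what such a matching function computes, here is a minimal sketch that implements the matching formulas above for a general power \\(p\\), treating \\(p \= 0\\) as the log case. This is a hypothetical helper for illustration only, not the actual source of `mtrans`, whose conventions may differ.
```
# sketch of a matched power reexpression following the formulas above
# (illustrative only -- not the actual mtrans() implementation)
matched_power <- function(x, p, x0 = median(x)) {
  if (p == 0) {
    # log case: equivalent to the base-10 formula in the text,
    # since the base of the logarithm cancels
    x0 + (log(x) - log(x0)) * x0
  } else {
    x0 + (x ^ p - x0 ^ p) / (p * x0 ^ (p - 1))
  }
}
head(matched_power(farms$count, 0.5))
```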
To compare the raw, matched roots, and matched logs, we use parallel boxplots shown below.
```
boxplot(data.frame(raw, matched.roots, matched.logs))
```
Note that this matching has given the three batches the same median (\\(x\_0\\)) and the batches have similar spreads. We can focus on the shapes of the batches, as indicated by the position of the median within the box and the lengths of the whiskers.
What do we see in this boxplot display?
* Looking at the raw (farms) data, the middle 50% of the data looks pretty symmetric and there is right skewness in the tails.
* The roots are more symmetric in the tails, but this is offset by some left skewness in the middle 50%.
* The logs look left skewed both in the middle 50% and the tails.
| Data Science |
bayesball.github.io | https://bayesball.github.io/EDA/introduction-to-plotting.html |
12 Introduction to Plotting
===========================
In this lecture, we introduce plotting of two\-variable data and make some general comments about what we want to learn when we plot data.
12\.1 Meet the data
-------------------
Here is some data from one of the most famous track and field events, the Boston Marathon. This 26 mile race is run on Patriot’s Day every April. It receives a lot of attention in the media and runners from all over the world compete. The table below (from the 2001 ESPN Information Please Sports Almanac) gives the winning time in minutes of the men’s marathon for each year from 1950 to 2000\. One interesting note is that the race has not always been the same length over the years. The length of the race was 26 miles, 385 yards through 1927\-52 and all the years since 1957; it was (only) 25 miles, 958 yards in the years 1953\-56\.
This dataset is stored in `boston.marathon.wtimes` in the `LearnEDAfunctions` package.
```
library(LearnEDAfunctions)
library(tidyverse)
head(boston.marathon.wtimes)
```
```
## year minutes
## 1 1897 175
## 2 1898 162
## 3 1899 174
## 4 1900 159
## 5 1901 149
## 6 1902 163
```
12\.2 Graph the data
--------------------
We are interested in how the race times change over the years. An obvious graph to make is a plot of TIME (vertical) against YEAR (horizontal) shown below. (This kind of graph is called a time\-series plot, but we generally won’t give graphs special names.)
```
ggplot(boston.marathon.wtimes,
aes(year, minutes)) +
geom_point() +
xlab("YEAR") + ylab("TIME")
```
Looking at this graph, an obvious pattern is that the times are generally decreasing from 1950 to 2000\. This means that the best runners are getting faster. We would notice this for practically all track\-and\-field events – athletes are improving due to better equipment, better training, etc.
When we see a pattern, we want to describe it in some detailed way. It is sort of obvious that runners are getting faster, but what is the rate of this change?
12\.3 Describing the fit
------------------------
To find the rate at which the runners are getting faster, we want to fit a curve to the data. The simplest type of curve that we can fit is a line and we focus on fitting lines in this class. The pattern in the above scatterplot looks approximately linear (straight\-line), so it seems reasonable to fit a line. The figure below shows the scatterplot with a line fit on top.
```
ggplot(boston.marathon.wtimes,
aes(year, minutes)) +
geom_point() +
geom_smooth(method = "lm", se = FALSE) +
xlab("YEAR") + ylab("TIME")
```
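The slope quoted in the next paragraph can be recovered by fitting the line explicitly with `lm`. Here is a minimal sketch, under the assumption that the fit is restricted to the 1950\-2000 races discussed in the text; the exact coefficients depend on which years are included.
```
# fit a straight line to the 1950-2000 winning times,
# centering the year at 1975 as in the FIT formula used later
marathon_recent <- subset(boston.marathon.wtimes,
                          year >= 1950 & year <= 2000)
fit <- lm(minutes ~ I(year - 1975), data = marathon_recent)
coef(fit)  # intercept = fitted time at 1975, slope = change in minutes per year
```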
What is the interpretation of this line fit? The slope is \\(m \= \-.3256\\), so this means that the winning time in the marathon has generally been decreasing at a rate of .33 minutes (or about 20 seconds) per year. That’s a pretty large rate of decrease. One wonders if the winning time will continue to decrease at this rate for future years.
12\.4 Looking at the residuals
------------------------------
We’ve summarized the basic pattern in our scatterplot by using a line. But now we wish to look deeper. Is there any structure in these data beyond what we see in the general decrease in the winning time over years? To look for deeper structure, we look at the residuals. A residual is the difference between the observed winning time and its fit – that is,
\\\[
RESIDUAL \= ACTUAL TIME \- FIT .
\\]
Here the fit is the line
\\\[
FIT \= \-.3256 (YEAR \- 1975\) \+ 134\.83
\\]
We construct a residual plot by graphing the residuals (vertical) against the time (horizontal). We add a reference line at residual \= 0 – points close to this horizontal line are well\-predicted using this fit.
```
boston.marathon.wtimes %>%
mutate(FIT = -.3256 *
(boston.marathon.wtimes$year - 1975) + 134.83,
Residual = minutes - FIT) %>%
ggplot(aes(year, Residual)) +
geom_point() +
geom_hline(yintercept = 0, color = "red") +
xlab("YEAR")
```
What do we see in this residual plot?
1. First, note that there is no general trend, up or down, in this plot. We have removed the trend by subtracting the fit from the times. Since there is no trend, our eye won’t be focusing on the general pattern that we saw earlier in our plot of time against year, and we can look for deeper structure.
2. Although there is no general trend, I do notice a pattern in these residuals. The residuals for years 1950 until the early 1990’s seem to be equally scattered on both sides of zero. However, the last six residuals are all positive. This means that the linear fit underestimates the winning time for these recent years.
3. Remember our earlier remark that the distance for the Boston Marathon was slightly smaller for the years 1953\-1956? Looking at the residual plot, note that the residuals for these four years are all negative. This is what we would expect given the shorter length of the race for these years (a quick numerical check follows this list).
4. Another pattern that I notice in this plot is that there is a change in the variability of the residuals across time. The residuals from 1950 to 1980 generally fall between \-5 and 5 minutes. But the residuals for the years 1980 through 2000 fall in a much tighter band about 0, suggesting that there is smaller variation in these winning times.
5. Another thing we look for in this plot is any unusually large residuals (positive or negative). There are a couple of extreme values, say 1950 and 1951 (positive) and 1956 (negative). But the 1950 and 1951 outliers are explained by the larger variation in the best winning times for these early years, and the negative residual in 1956 is explained by the shorter distance of the race.
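As a quick numerical check of the third point, we can pull out the residuals for the short\-course years directly; this sketch simply reuses the FIT formula from the plot above (the values themselves are not reproduced here).
```
# residuals for the short-course years 1953-1956, using the same line fit
boston.marathon.wtimes %>%
  mutate(FIT = -.3256 * (year - 1975) + 134.83,
         Residual = minutes - FIT) %>%
  filter(year %in% 1953:1956)
```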
12\.5 An alternative fit
------------------------
The patterns in the residual plot suggest that maybe a line fit is not the best for these data. Later we will talk about a useful method, called a resistant smooth, for fitting a smooth curve through time series data.
We won’t talk about the details of this smooth yet, but here is the result of a smooth applied to our marathon data (approximated below with a loess smoother).
```
ggplot(boston.marathon.wtimes,
aes(year, minutes)) +
geom_point() +
geom_smooth(method = "loess", span = 0.5,
se = FALSE) +
xlab("YEAR") + ylab("TIME")
```
This smooth effectively shows some of the patterns in the graph that we observed earlier. There is a general decrease in the best winning time in the race over years. But …
* there is a dip in the graph in the mid\-50’s – we explained earlier this was due to the shorter running distance
* there is a pretty steady decrease in the times between 1960 and 1980, although one might say that there is a leveling\-off about 1970
* the times since 1980 have stayed pretty constant, suggesting that maybe there is a leveling off in performance in this race
| Data Science |
bayesball.github.io | https://bayesball.github.io/EDA/resistant-line.html |
13 Resistant Line
=================
In this chapter, we start to explore paired data where you suspect a relationship between \\(x\\) and \\(y\\). The focus here is on how to fit a line to data in a “resistant” fashion, so the fit is relatively insensitive to extreme points.
13\.1 Meet the data
-------------------
Our data today is taken from the 2001 New York Times Almanac, p. 287\. A table is shown which gives the median sales prices (in thousands of dollars) of existing single\-family homes for selected metropolitan areas for the years 1985, 1990, 1995, 1999, and 2000\. We will look only at the years 1985 and 2000 and delete the cities for which either the 1985 or the 2000 median house price is missing.
This dataset is available as `home.prices` in the `LearnEDAfunctions` package:
```
library(LearnEDAfunctions)
library(tidyverse)
head(home.prices)
```
```
## City y1985 y2000
## 1 Atlanta 66.2 125.4
## 2 Baltimore 72.6 145.2
## 3 Chicago 81.1 166.7
## 4 Cincinnati 60.2 124.0
## 5 Cleveland 64.4 121.3
## 6 Denver 84.3 181.5
```
We start by plotting these data on a scatterplot where the \\(x\\) variable is the 1985 price and the \\(y\\) variable is the 2000 price. We get the following figure.
```
ggplot(home.prices, aes(y1985, y2000)) +
geom_point()
```
We see a positive trend in this graph which makes sense – cities that had high house prices in 1985 tended also to have high prices in 2000\. We want to describe this relationship using a simple function like a line.
But this is not the best graph of these data. Why? Well, most of the points fall in the lower left portion of the figure. This happens since both the set of 1985 house prices and the 2000 house prices are right skewed. We can improve this plot by reexpressing both the x and y variables by a power transformation. We’ll talk more later about reexpressing variables in graphs, but take my word that a good reexpression to use in this case is a log.
If we take logs of both sets of house prices, we get a new scatterplot of the log (1985 prices) and log (2000 prices). Looking at the figure, note that the points are more evenly spread out – from left to right and from down to up.
```
home.prices %>%
mutate(log.1985 = log10(y1985),
log.2000 = log10(y2000)) -> home.prices
ggplot(home.prices, aes(log.1985, log.2000)) +
geom_point()
```
Since there appears to be a linear relationship between log (2000 price) and log (1985 price), it seems reasonable to fit a line. We describe a simple way of fitting a line that is not sensitive to outlying points – we call this procedure a resistant line.
13\.2 Three summary points
--------------------------
The first step to fitting a line
* divides the data into three groups and then
* finds a summary point in each group
We divide the data by the \\(x\\)\-values – the lower third of the \\(x\\)\-values form the first group, the middle third of the \\(x\\)\-values form the 2nd group, and the upper third of the \\(x\\)\-values make up the third group. This works fine if we have, say 15 points – then an equal number will be in each group. If we have 16 points, it’s reasonable to make the group sizes 5, 6, 5; if we have 17 points, then a symmetric way to go assigns 6, 5, 6 to the groups.
Here we have 21 cities, so there will be 7 cities in each group.
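As a small aside, this grouping rule is easy to code. Here is a minimal sketch (the helper name `three_group_sizes` is my own):
```
# symmetric group sizes (left, middle, right) for n points
three_group_sizes <- function(n) {
  outer <- floor((n + 1) / 3)
  c(outer, n - 2 * outer, outer)
}
three_group_sizes(16)   # 5 6 5
three_group_sizes(17)   # 6 5 6
three_group_sizes(21)   # 7 7 7
```
These sizes match the rule described above.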
Our summary point for a group will be
```
(median x value, median y value).
```
In the below table, the data has been sorted by 1985 price.
```
home.prices %>%
select(City, log.1985, log.2000) %>%
arrange(log.1985)
```
```
## City log.1985 log.2000
## 1 Detroit 1.713491 2.137354
## 2 Tampa 1.766413 2.016197
## 3 Cincinnati 1.779596 2.093422
## 4 Kansas_City 1.788168 2.077004
## 5 Cleveland 1.808886 2.083861
## 6 St._Louis 1.817565 1.999565
## 7 Atlanta 1.820858 2.098298
## 8 Milwaukee 1.829304 2.139564
## 9 Baltimore 1.860937 2.161967
## 10 Philadelphia 1.869232 2.065953
## 11 Phoenix 1.873902 2.116940
## 12 Minneapolis 1.876218 2.148294
## 13 Houston 1.895423 2.024486
## 14 Miami 1.905796 2.140508
## 15 Chicago 1.909021 2.221936
## 16 Denver 1.925828 2.258877
## 17 Wash._D.C. 1.987219 2.249198
## 18 San_Diego 2.031004 2.400365
## 19 Los_Angeles 2.097604 2.307282
## 20 New_York_City 2.127105 2.345374
## 21 San_Francisco 2.161667 2.621799
```
The first group consists of the seven cities with the smallest 1985 house prices. The median of the log 1985 prices for this group is 1\.79 and the median of the log 2000 prices is 2\.08 – so the left summary point is
\\\[
(x\_L, y\_L) \= (1\.79, 2\.08\).
\\]
In a similar fashion, we find summary points for the center and right groups:
\\\[
(x\_C, y\_C) \= (1\.87, 2\.14\), \\, \\, (x\_R, y\_R) \= (2\.03, 2\.31\).
\\]
(Note that a summary point may or may not be an actual data point.)
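If you want to check these summary points in R, here is a minimal sketch (the object name `summary_pts` and the group labels are my own); rounding its output to two decimals should reproduce the values above.
```
# three summary points: (median x, median y) within each third of the data
summary_pts <- home.prices %>%
  arrange(log.1985) %>%
  mutate(Group = factor(rep(c("L", "C", "R"), each = 7),
                        levels = c("L", "C", "R"))) %>%
  group_by(Group) %>%
  summarize(x = median(log.1985), y = median(log.2000))
summary_pts
```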
In the figure below, we’ve drawn vertical lines showing the division of the points into three groups, and the summary points are marked in red.
```
ggplot(home.prices, aes(log.1985, log.2000)) +
geom_point() +
geom_vline(xintercept = c(1.825, 1.91)) +
geom_point(data = data.frame(x=c(1.79, 1.87, 2.03),
y=c(2.08, 2.14, 2.31)),
aes(x, y), size = 3, color="red",
shape = 4, stroke = 2)
```
13\.3 Fitting a line to three points
------------------------------------
We all know how to fit a line that goes through two points. How about a line that goes through (approximately) three points?
We write the line in the form
\\\[
y \= a\_0 \+ b\_0 (x \- x\_C) ,
\\]
where \\(b\_0\\) is the slope and \\(a\_0\\) is the value of \\(y\\) when \\(x\\) is equal to the middle summary point \\(x\_C\\).
We find the slope of this line by using the left and right summary points:
\\\[
b\_0 \= \\frac{y\_R \- y\_L}{x\_R \- x\_L} .
\\]
Using the rounded summary points, this gives \\(b\_0 \= (2\.31 \- 2\.08\)/(2\.03 \- 1\.79\) \\approx .958\\). Actually, it is better to work with the summary points in R computed to higher precision. Here the slope would be
```
(b0 <- 0.920)
```
```
## [1] 0.92
```
To find the intercept, we first note that
\\\[
a\_0 \= y \- b\_0 (x \- x\_C),
\\]
and then define \\(a\_0\\) to be the mean of the {\\(y \- b\_0(x \- x\_C)\\)}, averaged over the three summary points:
\\\[
a\_0 \= \\frac{1}{3} \\left(\[y\_L \- b\_0 (x\_L \- x\_C)] \+ y\_C \+ \[y\_R \- b\_0 (x\_R \- x\_C) ] \\right).
\\]
Here the intercept turns out to be
```
(a0 <- 2.155)
```
```
## [1] 2.155
```
So the three\-group line is
\\\[
y \= .920 (x \- 1\.87\) \+ 2\.155
\\]
This line is graphed on the scatterplot below.
```
ggplot(home.prices, aes(log.1985, log.2000)) +
geom_point() +
geom_abline(slope = 0.920,
intercept = -0.920 * 1.87 + 2.155)
```
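As a check on the higher\-precision values quoted above, here is a small sketch that applies the two formulas to the `summary_pts` object created in the earlier sketch (the names `xs` and `ys` are my own); it should return roughly \\(b\_0 \= 0\.920\\) and \\(a\_0 \= 2\.155\\).
```
# slope from the outer summary points; intercept averaged over all three
xs <- summary_pts$x    # in the order L, C, R
ys <- summary_pts$y
b0 <- (ys[3] - ys[1]) / (xs[3] - xs[1])
a0 <- mean(ys - b0 * (xs - xs[2]))
round(c(b0 = b0, a0 = a0), 3)
```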
13\.4 Improving the line by fitting a line to the residuals
-----------------------------------------------------------
Is this the best line through the points? To check, we examine the residuals which are the vertical deviations from the points to the line. A little more formally, we define a residual as
\\\[
RESIDUAL \= DATA \- FIT,
\\]
where \\(DATA\\) is the \\(y\\) value and \\(FIT\\) is the predicted value of \\(y\\) from the line fit:
\\\[
FIT \= a\_0 \+ b\_0 (x \- x\_C).
\\]
Let’s find the residual for Detroit. Its \\(y\\) value (log 2000 house price) is \\(DATA \= 2\.14\\) and its predicted value from the line is
\\\[
FIT \= .920 (1\.71 \- 1\.87\) \+ 2\.155 \= 2\.01
\\]
so Detroit’s residual is
\\\[
RESIDUAL \= 2\.14 \- 2\.01 \= .13 .
\\]
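If you want to verify this arithmetic in R, using Detroit’s log values from the table above and the rounded slope and intercept, a quick check is:
```
# Detroit: fit and residual under the three-group line
fit_detroit <- .920 * (1.713491 - 1.873902) + 2.155
c(FIT = fit_detroit, RESIDUAL = 2.137354 - fit_detroit)
```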
The following R code uses the function `rline` to fit a single iteration of the resistant line. Outputs of this function are the intercept and slope, value of \\(x\_C\\), and the residuals. We create
a data frame that shows the fits and residuals for all cities.
```
myfit <- rline(log.2000 ~ log.1985, home.prices)
home.prices <-
mutate(home.prices,
FIT = myfit$a + myfit$b * (log.1985 - myfit$xC),
RESIDUAL = log.2000 - FIT)
select(home.prices, City, log.1985, log.2000, FIT, RESIDUAL)
```
```
## City log.1985 log.2000 FIT RESIDUAL
## 1 Atlanta 1.820858 2.098298 2.106212 -0.0079142188
## 2 Baltimore 1.860937 2.161967 2.143086 0.0188805053
## 3 Chicago 1.909021 2.221936 2.187326 0.0346095764
## 4 Cincinnati 1.779596 2.093422 2.068249 0.0251725826
## 5 Cleveland 1.808886 2.083861 2.095197 -0.0113360002
## 6 Denver 1.925828 2.258877 2.202789 0.0560875782
## 7 Detroit 1.713491 2.137354 2.007428 0.1299258047
## 8 Houston 1.895423 2.024486 2.174815 -0.1503292285
## 9 Kansas_City 1.788168 2.077004 2.076136 0.0008686638
## 10 Los_Angeles 2.097604 2.307282 2.360832 -0.0535502542
## 11 Miami 1.905796 2.140508 2.184359 -0.0438508423
## 12 Milwaukee 1.829304 2.139564 2.113982 0.0255819655
## 13 Minneapolis 1.876218 2.148294 2.157146 -0.0088515042
## 14 New_York_City 2.127105 2.345374 2.387974 -0.0426004858
## 15 Philadelphia 1.869232 2.065953 2.150718 -0.0847650389
## 16 Phoenix 1.873902 2.116940 2.155015 -0.0380748953
## 17 St._Louis 1.817565 1.999565 2.103182 -0.1036168913
## 18 San_Diego 2.031004 2.400365 2.299557 0.1008083642
## 19 San_Francisco 2.161667 2.621799 2.419774 0.2020256651
## 20 Tampa 1.766413 2.016197 2.056119 -0.0399221336
## 21 Wash._D.C. 1.987219 2.249198 2.259272 -0.0100741031
```
We graph the residuals on the vertical axis against the log 1985 prices below.
```
ggplot(home.prices, aes(log.1985, RESIDUAL)) +
geom_point() +
geom_hline(yintercept = 0, color = "red")
```
To see if we have fit a good line, we look for a pattern in the residuals. If there is some pattern – say, the residual plot seems to be increasing, then this tells us that we can improve our line fit.
We try to improve our line fit by fitting a line to the \\((x, RESIDUAL)\\) data. We use the same 3\-group method to fit our line. Some of the calculations are summarized in the table above.
* We first find three summary points. We already know the summary \\(x\\) values are 1\.79, 1\.87, 2\.03\. Looking at the residuals in each group, we find the summary residual values are respectively 0, \-.03, .03\. So our 3 summary points are
\\\[
(1\.79, 0\), (1\.87, \-.03\), (2\.03, .03\)
\\]
* We find the slope \\(d0\\) and the intercept \\(g0\\) as we did before. The slope is
\\\[ d0 \= (.03 \- 0\) / (2\.03 \- 1\.79\) \= .125\\]
and the intercept is
\\\[
g0 \= 1/3\[(0 \- .125 (1\.79 \- 1\.87\)) \+ (\-.03\) \+ (.03 \- .125 (2\.03 \- 1\.87\))] \= \-.003
\\]
So our line fit to the \\((x, RESIDUAL)\\) data is
\\\[
RESID \= \-.003 \+ .125 (x \- 1\.87\)
\\]
* Our new fit to the \\((x, y)\\) data has the form
\\\[
y \= a\_1 \+ b\_1 (x \- x\_C),
\\]
where we find the slope \\(b\_1\\) and the intercept \\(a\_1\\) are found by adding the slopes and intercepts from the two fits:
\\\[
b\_1 \= b\_0 \+ d\_0, a\_1 \= a\_0 \+ g\_0\.
\\]
Here, using the values from the rounded summary points (\\(b\_0 \\approx .958\\) and \\(a\_0 \\approx 2\.151\\)) rather than the higher\-precision R values,
\\\[ b\_1 \= .958 \+ .125 \= 1\.083, a\_1 \= 2\.151 \- .003 \= 2\.148\.\\]
So our new fit to the data is
\\\[
y \= 1\.083 (x \- 1\.87\) \+ 2\.15
\\]
Now we can continue this procedure as follows:
* Find the residuals from this fit.
* Find three summary points of \\((x, RESID)\\) and fit a 3\-group line.
* Update the slope and intercept of the fit to the \\((x, y)\\) data.
In practice, we do this in R and continue this procedure until there is little change in the adjustments to the slope and intercept; one round of this update is sketched below.
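Here is a sketch of one such update carried out with the precise residuals from the table above (the names `res_pts`, `d0`, and `g0` are my own); the updated slope and intercept should essentially match the second\-iteration values in the table that follows.
```
# summary points of (x, RESIDUAL), then one update of the slope and intercept
home.prices %>%
  arrange(log.1985) %>%
  mutate(Group = factor(rep(c("L", "C", "R"), each = 7),
                        levels = c("L", "C", "R"))) %>%
  group_by(Group) %>%
  summarize(x = median(log.1985), r = median(RESIDUAL)) -> res_pts
d0 <- (res_pts$r[3] - res_pts$r[1]) / (res_pts$x[3] - res_pts$x[1])
g0 <- mean(res_pts$r - d0 * (res_pts$x - res_pts$x[2]))
c(slope = myfit$b + d0, intercept = myfit$a + g0)
```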
For our example, I had the function `rline` do ten iterations of this procedure with the following results (SLOPE is the current estimate of the slope of the resistant line and INTERCEPT is the current estimate of the intercept).
```
Results <- data.frame(Iteration=NULL, Slope=NULL, Intercept=NULL)
for(iterations in 1:10){
fit <- rline(log.2000 ~ log.1985, home.prices,
iter=iterations)
Results <- rbind(Results,
data.frame(Iteration=iterations,
Slope=fit$b, Intercept=fit$a))
}
Results
```
```
## Iteration Slope Intercept
## 1 1 0.9200503 2.155015
## 2 2 1.0951636 2.147055
## 3 3 1.2067010 2.149614
## 4 4 1.2777029 2.151248
## 5 5 1.3194267 2.152652
## 6 6 1.3439454 2.153477
## 7 7 1.3583537 2.153962
## 8 8 1.3668206 2.154247
## 9 9 1.3717961 2.154415
## 10 10 1.3747200 2.154513
```
Note that after ten iterations, the procedure has essentially converged and the resistant line has equation
\\\[
y \= 1\.3747 (x \- 1\.87\) \+ 2\.1545
\\]
This is typically the case, although there exist some examples where the procedure doesn’t converge.
13\.5 Comparison with a Least\-Squares Fit
------------------------------------------
We have just described a resistant method of fitting a line. We should explain why this is preferable to the popular least\-squares fit that you learned in your first stats course.
The least\-squares fit to these data is given by
```
lm(log.2000 ~ I(log.1985 - 1.87), data=home.prices)
```
```
##
## Call:
## lm(formula = log.2000 ~ I(log.1985 - 1.87), data = home.prices)
##
## Coefficients:
## (Intercept) I(log.1985 - 1.87)
## 2.148 1.044
```
If you compare the single\-iteration resistant line with the least\-squares line, they look pretty close. The slope of that resistant line is .92, which is a little bit smaller than the least\-squares slope of 1\.04\.
The big difference between the two fits is how they react to outliers. To illustrate this, notice that San Francisco has a median house price of 418\.6 (thousand dollars). Suppose instead that the median price was 1000, so log median price \= 3\.00 (instead of 2\.62\). What effect would this change have on our line fits?
We refit these data (with the unusually large price) using the resistant and least\-squares methods.
```
home.prices %>%
mutate(log.2000a = log.2000,
log.2000a = replace(log.2000a, y2000 == 418.6,
3.00)) -> home.prices
rline(log.2000a ~ log.1985,
home.prices, 5)[c("a", "b", "xC")]
```
```
## $a
## [1] 2.152652
##
## $b
## [1] 1.319427
##
## $xC
## [1] 1.873902
```
```
lm(log.2000a ~ I(log.1985 - 1.874), data = home.prices)
```
```
##
## Call:
## lm(formula = log.2000a ~ I(log.1985 - 1.874), data = home.prices)
##
## Coefficients:
## (Intercept) I(log.1985 - 1.874)
## 2.162 1.384
```
The resistant line is
\\\[
y \= 1\.319 (x \- 1\.874\) \+ 2\.153
\\]
which is identical to the five\-iteration fit that we found earlier. The change in the largest house price had no effect on the resistant fit since it is based on computing median values of \\(x\\) and \\(y\\) in each group.
In contrast, the least\-squares fit with the large house price is
\\\[
y \= 1\.384 (x \- 1\.874\) \+ 2\.162
\\]
which is different from the earlier least\-squares fit – the slope has increased from 1\.04 to 1\.38\. So a single extreme observation can have a big effect on the least\-squares fit. The least\-squares line suffers from the same lack\-of\-resistance problem as our familiar measure of center, the mean.
13\.6 Interpreting the fit
--------------------------
Remember we initially reexpressed the house price data to logs – can we express our “best line” in terms of the original house price data?
Our resistant line fit was
\\\[
y \= 1\.3747 (x \- 1\.87\) \+ 2\.1545
\\]
or
\\\[
y \= 1\.3747 x \-0\.416189
\\]
which means
\\\[
\\log ({\\rm house \\, price \\, in \\, 2000}) \= 1\.3747 \\log(
{\\rm house \\, price \\, in \\, 1985}) \-0\.416189\.
\\]
If we raise 10 to the power of each side (undoing the logs), we get the equivalent equation
\\\[
{\\rm house \\, price \\, in \\, 2000} \= \[{\\rm house \\, price \\, in \\, 1985}]^{1\.3747} \\times 10^{\- 0\.416}.
\\]
So a linear fit to the \\((\\log x, \\log y)\\) data is the same as a power\-type (multiplicative) fit in the \\((x, y)\\) data.
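To use the fit on the original dollar scale, we can simply undo the log10 reexpression. Here is a minimal sketch (the function name `predict_2000` is my own; the coefficients are the converged resistant values from above):
```
# back-transform the resistant fit to the original scale (thousands of dollars)
predict_2000 <- function(price_1985) {
  10 ^ (1.3747 * (log10(price_1985) - 1.87) + 2.1545)
}
predict_2000(66.2)   # Atlanta's 1985 price: about 122, versus the actual 125.4
```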
| Data Science |
## 9 9 1.3717961 2.154415
## 10 10 1.3747200 2.154513
```
Note that after ten iterations, the procedure has essentially converged and the resistant line has equation
\\\[
y \= 1\.3747 (x \- 1\.87\) \+ 2\.1545
\\]
This is typically the case, although there exist some examples where the procedure doesn’t converge.
13\.5 Comparison with a Least\-Squares Fit
------------------------------------------
We have just described a resistant method of fitting a line. We should explain why this is preferable to the popular least\-squares fit that you learned in your first stats course.
The least\-squares fit to these data is given by
```
lm(log.2000 ~ I(log.1985 - 1.87), data=home.prices)
```
```
##
## Call:
## lm(formula = log.2000 ~ I(log.1985 - 1.87), data = home.prices)
##
## Coefficients:
## (Intercept) I(log.1985 - 1.87)
## 2.148 1.044
```
If you compare the resistant line with the least\-squares line, they look pretty close. The slope of the resistant line is .88 which is a little bit smaller than the least\-squares slope of 1\.04\.
The big difference between the two fits is how they react to outliers. To illustrate this, notice that San Francisco has an median house price of 418\.6 (thousand dollars). Suppose instead that the median price was 1000, so log median price \= 3\.00 (instead of 2\.62\). What effect would this change have on our line fits?
We refit these data (with the unusally large price) using the resistant and least\-squares methods.
```
home.prices %>%
mutate(log.2000a = log.2000,
log.2000a = replace(log.2000a, y2000 == 418.6,
3.00)) -> home.prices
rline(log.2000a ~ log.1985,
home.prices, 5)[c("a", "b", "xC")]
```
```
## $a
## [1] 2.152652
##
## $b
## [1] 1.319427
##
## $xC
## [1] 1.873902
```
```
lm(log.2000a ~ I(log.1985 - 1.874), data = home.prices)
```
```
##
## Call:
## lm(formula = log.2000a ~ I(log.1985 - 1.874), data = home.prices)
##
## Coefficients:
## (Intercept) I(log.1985 - 1.874)
## 2.162 1.384
```
The resistant line is
\\\[
y \= 1\.319 (x \- 1\.874\) \+ 2\.153
\\]
which is identical to the line that we found earlier. The change in the largest house price had no effect on the fits since the resistant line is based on computing median values of \\(x\\) and \\(y\\) in each group.
In contrast, the least\-squares fit with the large house price is
\\\[
y \= \- 0\.408 \+ 1\.38 x
\\]
which is different from the earlier least\-squares fit – the slope has increased from 1\.04 to 1\.37\. So a single extreme observation can have a big effect on the least\-squares fit. The least\-squares line suffers from the same lack\-of\-resistance problem as our familiar measure of center, the mean.
13\.6 Interpreting the fit
--------------------------
Remember we initially reexpressed the house price data to logs – can we express our \`\`best line” in terms of the original house price data?
Our resistant line fit was
\\\[
y \= 1\.3747 (x \- 1\.87\) \+ 2\.1545
\\]
or
\\\[
y \= 1\.3747 x \-0\.416189
\\]
which means
\\\[
\\log ({\\rm house \\, price \\, in \\, 2000}) \= 1\.3747 \\log(
{\\rm house \\, price \\, in \\, 1985}) \-0\.416189\.
\\]
If we take each side to the 10th power, we get the equivalent equation
\\\[
{\\rm house \\, price \\, in \\, 2000} \= \[{\\rm house \\, price \\, in \\, 2000}]^{1\.3747} \\times 10^{\- 0\.416}.
\\]
So a linear fit to the \\((\\log x, \\log y)\\) data is the same as an exponential\-type fit in the \\((x, y)\\) data.
13\.1 Meet the data
-------------------
Our data today is taken from the 2001 New York Times Almanac, p. 287, which gives a table of the median sales prices (in thousands of dollars) of existing single\-family homes for selected metropolitan areas for the years 1985, 1990, 1995, 1999, and 2000\. We will look only at the years 1985 and 2000 and delete the cities for which either the 1985 or the 2000 median house price is missing.
This dataset is available as `home.prices` in the `LearnEDAfunctions` package:
```
library(LearnEDAfunctions)
library(tidyverse)
head(home.prices)
```
```
## City y1985 y2000
## 1 Atlanta 66.2 125.4
## 2 Baltimore 72.6 145.2
## 3 Chicago 81.1 166.7
## 4 Cincinnati 60.2 124.0
## 5 Cleveland 64.4 121.3
## 6 Denver 84.3 181.5
```
We start by plotting these data on a scatterplot where the \\(x\\) variable is the 1985 price and the \\(y\\) variable is the 2000 price. We get the following figure.
```
ggplot(home.prices, aes(y1985, y2000)) +
geom_point()
```
We see a positive trend in this graph which makes sense – cities that had high house prices in 1985 tended also to have high prices in 2000\. We want to describe this relationship using a simple function like a line.
But this is not the best graph of these data. Why? Well, most of the points fall in the lower left portion of the figure. This happens since both the 1985 house prices and the 2000 house prices are right\-skewed. We can improve this plot by reexpressing both the \\(x\\) and \\(y\\) variables by a power transformation. We’ll talk more later about reexpressing variables in graphs, but take my word that a good reexpression to use in this case is a log.
If we take logs of both sets of house prices, we get a new scatterplot of the log (1985 prices) and log (2000 prices). Looking at the figure, note that the points are more evenly spread out – both from left to right and from bottom to top.
```
home.prices %>%
mutate(log.1985 = log10(y1985),
log.2000 = log10(y2000)) -> home.prices
ggplot(home.prices, aes(log.1985, log.2000)) +
geom_point()
```
Since there appears to be a linear relationship between log (2000 price) and log (1985 price), it seems reasonable to fit a line. We describe a simple way of fitting a line that is not sensitive to outlying points – we call this procedure a resistant line.
13\.2 Three summary points
--------------------------
The first step to fitting a line
* divides the data into three groups and then
* finds a summary point in each group
We divide the data by the \\(x\\)\-values – the lower third of the \\(x\\)\-values forms the first group, the middle third forms the second group, and the upper third makes up the third group. This works fine if we have, say, 15 points – then an equal number will be in each group. If we have 16 points, it’s reasonable to make the group sizes 5, 6, 5; if we have 17 points, then a symmetric way to go assigns 6, 5, 6 to the groups.
Here we have 21 cities, so there will be 7 cities in each group.
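If it helps to see this group\-size rule as code, here is a small sketch (the helper name `group_sizes` is mine, not part of any package):
```
# group sizes for the three-group division: equal thirds when possible,
# otherwise place the extra point(s) symmetrically
group_sizes <- function(n) {
  k <- n %/% 3
  if (n %% 3 == 0) c(k, k, k)
  else if (n %% 3 == 1) c(k, k + 1, k)   # extra point goes to the middle group
  else c(k + 1, k, k + 1)                # extra points go to the outer groups
}
sapply(c(15, 16, 17, 21), group_sizes)
```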
Our summary point for a group will be
```
(median x value, median y value).
```
In the table below, the data have been sorted by 1985 price.
```
home.prices %>%
select(City, log.1985, log.2000) %>%
arrange(log.1985)
```
```
## City log.1985 log.2000
## 1 Detroit 1.713491 2.137354
## 2 Tampa 1.766413 2.016197
## 3 Cincinnati 1.779596 2.093422
## 4 Kansas_City 1.788168 2.077004
## 5 Cleveland 1.808886 2.083861
## 6 St._Louis 1.817565 1.999565
## 7 Atlanta 1.820858 2.098298
## 8 Milwaukee 1.829304 2.139564
## 9 Baltimore 1.860937 2.161967
## 10 Philadelphia 1.869232 2.065953
## 11 Phoenix 1.873902 2.116940
## 12 Minneapolis 1.876218 2.148294
## 13 Houston 1.895423 2.024486
## 14 Miami 1.905796 2.140508
## 15 Chicago 1.909021 2.221936
## 16 Denver 1.925828 2.258877
## 17 Wash._D.C. 1.987219 2.249198
## 18 San_Diego 2.031004 2.400365
## 19 Los_Angeles 2.097604 2.307282
## 20 New_York_City 2.127105 2.345374
## 21 San_Francisco 2.161667 2.621799
```
The first group consists of the seven cities with the smallest 1985 house prices. The median of the log 1985 prices for this group is 1\.79 and the median of the log 2000 prices is 2\.08 – so the left summary point is
\\\[
(x\_L, y\_L) \= (1\.79, 2\.08\).
\\]
In a similar fashion, we find summary points for the center and right groups:
\\\[
(x\_C, y\_C) \= (1\.87, 2\.14\), \\, \\, (x\_R, y\_R) \= (2\.03, 2\.31\).
\\]
(Note that a summary point may or may not be an actual data point.)
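Here is a small sketch that computes the three summary points directly in R, assuming the 7\-7\-7 split of the sorted `log.1985` values described above (the name `summary.points` is just for illustration):
```
# summary point = (median x, median y) within each third of the sorted x-values
summary.points <- home.prices %>%
  arrange(log.1985) %>%
  mutate(group = factor(rep(c("L", "C", "R"), each = 7),
                        levels = c("L", "C", "R"))) %>%
  group_by(group) %>%
  summarize(x = median(log.1985), y = median(log.2000))
summary.points
```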
In the figure below, we’ve drawn vertical lines showing the division of the points into three groups, and the summary points are marked in red.
```
ggplot(home.prices, aes(log.1985, log.2000)) +
geom_point() +
geom_vline(xintercept = c(1.825, 1.91)) +
geom_point(data = data.frame(x=c(1.79, 1.87, 2.03),
y=c(2.08, 2.14, 2.31)),
aes(x, y), size = 3, color="red",
shape = 4, stroke = 2)
```
13\.3 Fitting a line to three points
------------------------------------
We all know how to fit a line that goes through two points. How about a line that goes through (approximately) three points?
We write the line in the form
\\\[
y \= a\_0 \+ b\_0 (x \- x\_C) ,
\\]
where \\(b\_0\\) is the slope and \\(a\_0\\) is the value of \\(y\\) when \\(x\\) is equal to the middle summary point \\(x\_C\\).
We find the slope of this line by using the left and right summary points:
\\\[
b\_0 \= \\frac{y\_R \- y\_L}{x\_R \- x\_L} .
\\]
Using the rounded summary points above, the slope would be (2\.31 \- 2\.08\) / (2\.03 \- 1\.79\) \= .958\. Actually, it is better to work with the summary points in R computed to higher precision. With the full\-precision summary points, the slope is
```
(b0 <- 0.920)
```
```
## [1] 0.92
```
To find the intercept, we first note that
\\\[
a\_0 \= y \- b\_0 (x \- x\_C),
\\]
and then define \\(a\_0\\) to be the mean of the {\\(y \- b\_0(x \- x\_C)\\)}, averaged over the three summary points:
\\\[
a\_0 \= \\frac{1}{3} \\left(\[y\_L \- b\_0 (x\_L \- x\_C)] \+ y\_C \+ \[y\_R \- b\_0 (x\_R \- x\_C) ] \\right).
\\]
Here the intercept turns out to be
```
(a0 <- 2.155)
```
```
## [1] 2.155
```
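To see where these higher\-precision values come from, here is a sketch that computes \\(b\_0\\) and \\(a\_0\\) from the `summary.points` data frame built in Section 13\.2 – an illustration of the arithmetic, not the internal code of `rline`:
```
# slope from the left and right summary points; intercept averaged over all three
xC <- summary.points$x[2]
b0 <- (summary.points$y[3] - summary.points$y[1]) /
  (summary.points$x[3] - summary.points$x[1])
a0 <- mean(summary.points$y - b0 * (summary.points$x - xC))
round(c(b0 = b0, a0 = a0), 3)   # roughly 0.920 and 2.155
```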
So the three\-group line is
\\\[
y \= .920 (x \- 1\.87\) \+ 2\.155
\\]
This line is graphed on the scatterplot below.
```
ggplot(home.prices, aes(log.1985, log.2000)) +
geom_point() +
geom_abline(slope = 0.920,
intercept = -0.920 * 1.87 + 2.155)
```
13\.4 Improving the line by fitting a line to the residuals
-----------------------------------------------------------
Is this the best line through the points? To check, we examine the residuals which are the vertical deviations from the points to the line. A little more formally, we define a residual as
\\\[
RESIDUAL \= DATA \- FIT,
\\]
where \\(DATA\\) is the \\(y\\) value and \\(FIT\\) is the predicted value of \\(y\\) from the line fit:
\\\[
FIT \= a\_0 \+ b\_0 (x \- x\_C).
\\]
Let’s find the residual for Detroit. Its \\(y\\) value (log 2000 house price) is \\(DATA \= 2\.14\\) and its predicted value from the line is
\\\[
FIT \= .920 (1\.71 \- 1\.87\) \+ 2\.155 \= 2\.01
\\]
so Detroit’s residual is
\\\[
RESIDUAL \= 2\.14 \- 2\.01 \= .13 .
\\]
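As a quick check of this arithmetic in R (using the rounded values from the text):
```
# Detroit: fitted value and residual from the three-group line
fit_detroit <- 0.920 * (1.71 - 1.87) + 2.155
resid_detroit <- 2.14 - fit_detroit
round(c(FIT = fit_detroit, RESIDUAL = resid_detroit), 2)   # about 2.01 and 0.13
```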
The following R code uses the function `rline` to fit a single iteration of the resistant line. The outputs of this function are the intercept, the slope, the value of \\(x\_C\\), and the residuals. We then create a data frame that shows the fits and residuals for all cities.
```
myfit <- rline(log.2000 ~ log.1985, home.prices)
home.prices <-
mutate(home.prices,
FIT = myfit$a + myfit$b * (log.1985 - myfit$xC),
RESIDUAL = log.2000 - FIT)
select(home.prices, City, log.1985, log.2000, FIT, RESIDUAL)
```
```
## City log.1985 log.2000 FIT RESIDUAL
## 1 Atlanta 1.820858 2.098298 2.106212 -0.0079142188
## 2 Baltimore 1.860937 2.161967 2.143086 0.0188805053
## 3 Chicago 1.909021 2.221936 2.187326 0.0346095764
## 4 Cincinnati 1.779596 2.093422 2.068249 0.0251725826
## 5 Cleveland 1.808886 2.083861 2.095197 -0.0113360002
## 6 Denver 1.925828 2.258877 2.202789 0.0560875782
## 7 Detroit 1.713491 2.137354 2.007428 0.1299258047
## 8 Houston 1.895423 2.024486 2.174815 -0.1503292285
## 9 Kansas_City 1.788168 2.077004 2.076136 0.0008686638
## 10 Los_Angeles 2.097604 2.307282 2.360832 -0.0535502542
## 11 Miami 1.905796 2.140508 2.184359 -0.0438508423
## 12 Milwaukee 1.829304 2.139564 2.113982 0.0255819655
## 13 Minneapolis 1.876218 2.148294 2.157146 -0.0088515042
## 14 New_York_City 2.127105 2.345374 2.387974 -0.0426004858
## 15 Philadelphia 1.869232 2.065953 2.150718 -0.0847650389
## 16 Phoenix 1.873902 2.116940 2.155015 -0.0380748953
## 17 St._Louis 1.817565 1.999565 2.103182 -0.1036168913
## 18 San_Diego 2.031004 2.400365 2.299557 0.1008083642
## 19 San_Francisco 2.161667 2.621799 2.419774 0.2020256651
## 20 Tampa 1.766413 2.016197 2.056119 -0.0399221336
## 21 Wash._D.C. 1.987219 2.249198 2.259272 -0.0100741031
```
We graph the residuals on the vertical axis against the log 1985 prices below.
```
ggplot(home.prices, aes(log.1985, RESIDUAL)) +
geom_point() +
geom_hline(yintercept = 0, color = "red")
```
To see if we have fit a good line, we look for a pattern in the residuals. If there is some pattern – say, the residuals tend to increase from left to right – then this tells us that we can improve our line fit.
We try to improve our line fit by fitting a line to the \\((x, RESIDUAL)\\) data, using the same 3\-group method as before. The residuals needed for this calculation are shown in the table above.
* We first find three summary points. We already know the summary \\(x\\) values are 1\.79, 1\.87, 2\.03\. Looking at the residuals in each group, we find the summary residual values are approximately 0, \-.03, and .03\. So our 3 summary points are
\\\[
(1\.79, 0\), (1\.87, \-.03\), (2\.03, .03\)
\\]
* We find the slope \\(d\_0\\) and the intercept \\(g\_0\\) as we did before. The slope is
\\\[ d\_0 \= (.03 \- 0\) / (2\.03 \- 1\.79\) \= .125\\]
and the intercept is
\\\[
g\_0 \= \\frac{1}{3} \\left(\[0 \- .125 (1\.79 \- 1\.87\)] \+ (\-.03\) \+ \[.03 \- .125 (2\.03 \- 1\.87\)] \\right) \= \-.003
\\]
So our line fit to the \\((x, RESIDUAL)\\) data is
\\\[
RESID \= \-.003 \+ .125 (x \- 1\.87\)
\\]
* Our new fit to the \\((x, y)\\) data has the form
\\\[
y \= a\_1 \+ b\_1 (x \- x\_C),
\\]
where the slope \\(b\_1\\) and the intercept \\(a\_1\\) are found by adding the slopes and intercepts from the two fits:
\\\[
b\_1 \= b\_0 \+ d\_0, a\_1 \= a\_0 \+ g\_0\.
\\]
Here, working with the rounded summary points (which give \\(b\_0 \= .958\\) and \\(a\_0 \= 2\.151\\)),
\\\[ b\_1 \= .958 \+ .125 \= 1\.083, \\quad a\_1 \= 2\.151 \- .003 \= 2\.148\.\\]
So our new fit to the data is
\\\[
y \= 1\.083 (x \- 1\.87\) \+ 2\.15
\\]
Now we can continue this procedure as follows:
* Find the residuals from this fit.
* Find three summary points of \\((x, RESID)\\) and fit a 3\-group line.
* Update the slope and intercept of the fit to the \\((x, y)\\) data.
In practice, we do this in R, and continue this procedure until there is little change in the adjustments to the slope and intercept.
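Here is a sketch of a single polishing step done by hand at full precision, using the `RESIDUAL` column computed above; its result should essentially match iteration 2 in the table below.
```
# fit a three-group line to (x, RESIDUAL), then add it to the current fit
sorted <- arrange(home.prices, log.1985)
xL <- median(sorted$log.1985[1:7]);   rL <- median(sorted$RESIDUAL[1:7])
xC <- median(sorted$log.1985[8:14]);  rC <- median(sorted$RESIDUAL[8:14])
xR <- median(sorted$log.1985[15:21]); rR <- median(sorted$RESIDUAL[15:21])
d0 <- (rR - rL) / (xR - xL)
g0 <- mean(c(rL - d0 * (xL - xC), rC, rR - d0 * (xR - xC)))
c(slope = myfit$b + d0, intercept = myfit$a + g0)
```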
For our example, I had the function `rline` do ten iterations of this procedure with the following results (Slope is the current estimate of the slope of the resistant line and Intercept is the current estimate of the intercept).
```
Results <- data.frame(Iteration=NULL, Slope=NULL, Intercept=NULL)
for(iterations in 1:10){
fit <- rline(log.2000 ~ log.1985, home.prices,
iter=iterations)
Results <- rbind(Results,
data.frame(Iteration=iterations,
Slope=fit$b, Intercept=fit$a))
}
Results
```
```
## Iteration Slope Intercept
## 1 1 0.9200503 2.155015
## 2 2 1.0951636 2.147055
## 3 3 1.2067010 2.149614
## 4 4 1.2777029 2.151248
## 5 5 1.3194267 2.152652
## 6 6 1.3439454 2.153477
## 7 7 1.3583537 2.153962
## 8 8 1.3668206 2.154247
## 9 9 1.3717961 2.154415
## 10 10 1.3747200 2.154513
```
Note that after ten iterations, the procedure has essentially converged and the resistant line has equation
\\\[
y \= 1\.3747 (x \- 1\.87\) \+ 2\.1545
\\]
This is typically the case, although there exist some examples where the procedure doesn’t converge.
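One quick numerical check of this convergence is to look at how much the estimated slope changes from one iteration to the next:
```
# successive changes in the slope estimate shrink toward zero
round(diff(Results$Slope), 4)
```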
13\.5 Comparison with a Least\-Squares Fit
------------------------------------------
We have just described a resistant method of fitting a line. We should explain why this is preferable to the popular least\-squares fit that you learned in your first stats course.
The least\-squares fit to these data is given by
```
lm(log.2000 ~ I(log.1985 - 1.87), data=home.prices)
```
```
##
## Call:
## lm(formula = log.2000 ~ I(log.1985 - 1.87), data = home.prices)
##
## Coefficients:
## (Intercept) I(log.1985 - 1.87)
## 2.148 1.044
```
If you compare the resistant line with the least\-squares line, the intercepts (both taken at \\(x \= 1\.87\\)) are very close – 2\.154 versus 2\.148\. The slopes differ more: the resistant slope of 1\.37 is noticeably larger than the least\-squares slope of 1\.04\.
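To see the two fits together, here is a sketch that overlays both lines on the scatterplot, converting each centered fit to slope\-intercept form:
```
ggplot(home.prices, aes(log.1985, log.2000)) +
  geom_point() +
  geom_abline(slope = 1.3747,
              intercept = 2.1545 - 1.3747 * 1.87,   # resistant line
              color = "red") +
  geom_abline(slope = 1.044,
              intercept = 2.148 - 1.044 * 1.87,     # least-squares line
              color = "blue")
```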
The big difference between the two fits is how they react to outliers. To illustrate this, notice that San Francisco has a 2000 median house price of 418\.6 (thousand dollars). Suppose instead that the median price was 1000, so the log median price would be 3\.00 (instead of 2\.62\). What effect would this change have on our line fits?
We refit these data (with the unusually large price) using the resistant and least\-squares methods.
```
home.prices %>%
mutate(log.2000a = log.2000,
log.2000a = replace(log.2000a, y2000 == 418.6,
3.00)) -> home.prices
rline(log.2000a ~ log.1985,
home.prices, 5)[c("a", "b", "xC")]
```
```
## $a
## [1] 2.152652
##
## $b
## [1] 1.319427
##
## $xC
## [1] 1.873902
```
```
lm(log.2000a ~ I(log.1985 - 1.874), data = home.prices)
```
```
##
## Call:
## lm(formula = log.2000a ~ I(log.1985 - 1.874), data = home.prices)
##
## Coefficients:
## (Intercept) I(log.1985 - 1.874)
## 2.162 1.384
```
The resistant line is
\\\[
y \= 1\.319 (x \- 1\.874\) \+ 2\.153
\\]
which is identical to the five\-iteration fit to the original data (compare iteration 5 in the table above). The change in the largest house price had no effect on the fit, since the resistant line is based on computing median values of \\(x\\) and \\(y\\) in each group.
In contrast, the least\-squares fit with the large house price is
\\\[
y \= 1\.384 (x \- 1\.874\) \+ 2\.162
\\]
which is different from the earlier least\-squares fit – the slope has increased from 1\.04 to 1\.38\. So a single extreme observation can have a big effect on the least\-squares fit. The least\-squares line suffers from the same lack\-of\-resistance problem as our familiar measure of center, the mean.
13\.6 Interpreting the fit
--------------------------
Remember we initially reexpressed the house price data to logs – can we express our “best line” in terms of the original house price data?
Our resistant line fit was
\\\[
y \= 1\.3747 (x \- 1\.87\) \+ 2\.1545
\\]
or
\\\[
y \= 1\.3747 x \-0\.416189
\\]
which means
\\\[
\\log ({\\rm house \\, price \\, in \\, 2000}) \= 1\.3747 \\log(
{\\rm house \\, price \\, in \\, 1985}) \-0\.416189\.
\\]
If we raise 10 to the power of each side (undoing the base\-10 logs), we get the equivalent equation
\\\[
{\\rm house \\, price \\, in \\, 2000} \= \[{\\rm house \\, price \\, in \\, 1985}]^{1\.3747} \\times 10^{\- 0\.416}.
\\]
So a linear fit to the \\((\\log x, \\log y)\\) data is the same as an exponential\-type fit in the \\((x, y)\\) data.
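As a small illustration of this back\-transformed fit (the function name `predict_2000` is mine):
```
# predicted 2000 price (in thousands of dollars) from a 1985 price
predict_2000 <- function(price_1985) 10^(1.3747 * log10(price_1985) - 0.416189)
predict_2000(100)   # a 1985 median price of 100 predicts roughly 215 in 2000
```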